Communications, Signal Processing, and Systems: Proceedings of the 8th International Conference on Communications, Signal Processing, and Systems (Lecture Notes in Electrical Engineering, 571). ISBN 9811394083, 9789811394089

This book brings together papers from the 2019 International Conference on Communications, Signal Processing, and Systems.


English, 2719 [2720] pages, 2020



Table of contents:
Contents
Flame Detection Method Based on Feature Recognition
Abstract
1 Introduction
2 The Process of Flame Identification
3 Color Feature Recognition of Flame
4 Dynamic Feature Recognition of Flame
4.1 Irregularity
4.2 Similarity
4.3 Stability
5 Conclusion
Acknowledgements
References
Small Cell Deployment Based on Energy Efficiency in Heterogeneous Networks
Abstract
1 Introduction
2 System Model
3 Optimal Small Cell Deployment Scheme
3.1 Single Cell
3.2 Multiple Cells
4 Simulation Results
4.1 Single Cell
5 Conclusion
References
Research on Knowledge Mining Algorithm of Spacecraft Fault Diagnosis System
Abstract
1 Introduction
2 Background
3 Introduction of Expert System
4 Requirement Analysis
5 Overall Design
5.1 Analog Telemetry Information Mining
5.2 Digital Telemetry Information Mining
6 Experimental Results
7 Conclusion
References
Performance Analysis of SSK in AF Relay over Transmit Correlated Fading Channels
1 Introduction
2 System Model
3 BER Performance Analysis
4 Simulation Results
5 Conclusion
References
The JSCC Algorithm Based on Unequal Error Protection for H.264
Abstract
1 Introduction
2 H.264 Data Segmentation
3 Design and Implementation of Unequal Error Protection Scheme
4 Performance Evaluation
4.1 Compare the Function Between UEP and EEP
4.2 Performance Analysis of UEP with Different Bit Rate
5 Conclusion
Acknowledgements
References
Mean-Field Power Allocation for UDN
Abstract
1 Introduction
2 Problem Description of Dynamic Stochastic Game
3 Mean-Field Solution to the Problem
4 Simulation Results
5 Conclusion
Acknowledgements
References
Design of Gas Turbine State Data Acquisition Instrument Based on EEMD
Abstract
1 Introduction
2 Hardware Design
2.1 Temperature Signal Measuring Circuit
2.2 Speed Signal Measuring Circuit
2.3 Vibration Sensor Protection Circuit
2.4 Data Transmission Circuit
3 Software Design
3.1 EEMD Algorithm
3.2 Vibration Analysis
4 Conclusions
References
Cramér–Rao Bound Analysis for Joint Estimation of Target Position and Velocity in Hybrid Active and Passive Radar Networks
Abstract
1 Introduction
2 Signal Model
2.1 LFM Signal Model in Active Radar Networks
2.2 FM Signal Model in FM-Based Passive Radar Networks
3 Joint Cramér–Rao Lower Bound
3.1 Non-coherent FIM for LFM-Based Active Radar Networks
3.2 Non-coherent FIM for FM-Based Passive Radar Networks
3.3 Non-coherent CRLB for Hybrid Radar Networks
4 Simulation Results and Analysis
5 Conclusion
Acknowledgements
References
A Hinged Fiber Grating Sensor for Hull Roll and Pitch Motion Measurement
Abstract
1 Introduction
2 Theory
2.1 FBG Basic Sensing Principle
2.2 Temperature Compensation Method of FBG
2.3 Arc Hinge Flexibility Theory
3 Sensor Structure
3.1 Structure Description
3.2 Ansys Analysis
4 Sensor Test Experiment
5 Conclusions
Acknowledgements
References
Natural Scene Mongolian Text Detection Based on Convolutional Neural Network and MSER
Abstract
1 Introduction
2 Related Work
2.1 Generating Candidate Connected Areas
2.2 Training Text Classifier
3 Experimental Results and Analysis
3.1 Data Sets and Evaluation Criteria
3.2 Experimental Results and Analysis
4 Conclusion
Acknowledgements
References
Coverage Probability Analysis of D2D Communication Based on Stochastic Geometry Model
Abstract
1 Introduction
2 System Model
3 Performance Analysis of Downlink
3.1 Coverage Probability of Cellular Links
3.2 Coverage Probability of D2D Links
4 Simulation Analysis
5 Conclusions
Acknowledgements
Appendix A: Proof of Theorem 1
Appendix B: Proof of Theorem 2
Appendix C: Proof of Theorem 3
References
Study of Fault Pattern Recognition for Spacecraft Based on DTW Algorithm
Abstract
1 Introduction
2 Principle of DTW Algorithm
3 Test Data Analysis Method Based on DTW Algorithms
3.1 Data Analysis Workflow
3.2 Analysis of Test Results
3.3 Threshold Determination Method
4 Fault Recognition Method Based on DTW Algorithms
4.1 Workflow of Fault Recognition System
4.2 Analysis of Test Results
5 System Performance Optimization
6 Conclusion
References
A Joint TDOA/AOA Three-Dimensional Localization Algorithm for Spacecraft Internal
Abstract
1 Introduction
2 Localization Algorithm
3 Localization Scenario
4 Conclusion
References
A Study on Lunar Surface Environment Long-Term Unmanned Monitoring System by Using Wireless Sensor Network
Abstract
1 Introduction
2 Design of Lunar Surface Environment Detection System
2.1 The Impact of the Lunar Environment
2.2 System Design
2.3 Energy Balance Routing Technology
3 Feasibility and Advantage Analysis
4 Concluding Remarks
References
A Study on Automatic Power Control Method Applied in Astronaut Extravehicular Activity
Abstract
1 Introduction
2 Communication System of EVA
3 Automatic Power Control Method for the Backward Signal
3.1 Open-Loop Automatic Power Control
3.2 Closed-Loop Automatic Power Control
3.2.1 Outer-Loop Automatic Power Control
3.2.2 Inner-Loop Automatic Power Control
4 Conclusion
References
Design of EVA Communications Method for Anti-multipath and Full-Range Coverage
Abstract
1 Introduction
2 EVA Communications Method
2.1 Design of Antenna Array
2.2 DS-CDMA
2.3 Time Diversity
2.4 Space Diversity
3 Conclusion
References
High Accurate and Efficient Image Retrieval Method Using Semantics for Visual Indoor Positioning
Abstract
1 Introduction
2 System Model
2.1 Visual Indoor Positioning System Overview
2.2 SCBIR Method Overview
3 Proposed Method
3.1 Semantic Segmentation Network Framework
3.2 Precise Semantic Segmentation
3.3 Efficient Image Retrieval
4 Implementation and Performance Analysis
4.1 Experiment Environment
4.2 Experiment Results
5 Conclusion
Acknowledgements
References
Massive MIMO Channel Estimation via Generalized Approximate Message Passing
1 Introduction
2 System Model and Channel Characteristics
3 Parameters Learning Through Generalized Approximate Message Passing Based EM
3.1 EM-based Sparse Signal Learning
3.2 E-Step
3.3 GAMP for Posterior Statistics
3.4 M-Step
4 Simulations Results
5 Conclusion
References
Study of Key Technological Performance Parameters of Carbon-Fiber Infrared Heating Cage
Abstract
1 Introduction
2 Structural Design of Carbon-Fiber Heating Cage and Layout of Heat-Flow Meter Used in Testing
2.1 Structural Design of Carbon-Fiber Heating Cage
2.2 Layout of Heat-Flow Meter Used in Testing
3 Testing and Analysis of the Performance in a Thermal-Vacuum Environment
3.1 Test Preparation
3.2 Testing
3.2.1 Measurement of Heat-Flow Uniformity in Carbon-Fiber Infrared Cage
3.2.2 Testing the Heating Capacity of Carbon-Fiber Heating Cage
3.2.3 Comparison of Heating Capability with Traditional Nickel-Chromium Alloy Heating Cage
3.3 Test Results and Analysis of Heat-Flow Uniformity
3.3.1 Calculation of Heat Flux
3.3.2 Calculation of Heat-Flow Uniformity
3.4 Comparison and Analysis of Two Kinds of Heating Cage
3.4.1 Test Results and Analysis
4 Simulation Analysis of Heat-Flow Uniformity
4.1 Monte Carlo Method
4.2 Parameter Setting
4.3 Simulation Results
4.4 Comparison of Test and Simulation Analysis Data
5 Conclusion
References
Research on Switching Power Supply Based on Soft Switching Technology
Abstract
1 Introduction
2 The Soft Switching Realization of Switching Power Supply
2.1 Section Heading (“H1”)
2.2 TMS320F2812 Control Hardware Implementation
2.3 Software Implementation of TMS320F2812 Control
2.4 System Simulation Model
3 Simulation Results and Analysis
4 Conclusion
Acknowledgements
References
Grid Adaptive DOA Estimation Method in Monostatic MIMO Radar Using Sparse Bayesian Learning
1 Introduction
2 System Model
2.1 MIMO Radar Signal Model
2.2 Traditional On-Grid Model
2.3 The Proposed Off-Grid Model
3 Grid Adaptive DOA Estimation Method
3.1 Sparse Bayesian Model
3.2 The Proposed GADE Algorithm
4 Numerical Simulations
4.1 Spatial Spectrum
4.2 Robustness Against Measurement Noise
4.3 Sensitivity to Initial Grid Granularity
5 Conclusion
References
Global Deep Feature Representation for Person Re-Identification
Abstract
1 Introduction
2 The Proposed Method
3 Experiments
3.1 Datasets
3.2 Comparison of Four Backbone Networks on GDCN
3.3 Comparison with Prior Methods
4 Conclusion
Acknowledgements
References
Hybrid Precoding Based on Phase Extraction for Partially-Connected mmWave MIMO Systems
1 Introduction
2 System Model
2.1 System Model
2.2 Channel Model
3 Hybrid Precoding for the Partially-Connected Structure Based on Phase Extraction (HPP-PE)
3.1 Analog Precoder Design of HPP-PE
3.2 Digital Precoder Design of HPP-PE
3.3 Complexity Evaluation
4 Simulation Results
5 Conclusion
References
Research on the Fusion of Warning Radar and Secondary Radar Intelligence Information
Abstract
1 Introduction
2 Point Fusion of Warning Radar and Secondary Radar
2.1 Point-and-Shoot Fusion Structure of Warning Radar and Secondary Radar
2.2 Point Fusion Process and Algorithm
3 Track Fusion of Warning Radar and Secondary Radar
3.1 Track Fusion Structure of Warning Radar and Secondary Radar
3.2 Track Fusion Process and Algorithm
4 Simulation and Performance Analysis
5 Conclusion
References
Antenna Array Design for Directional Modulation
1 Introduction
2 Review of Planar Antenna Array Based Beamforming
3 DM Design for the Uniform Planar Antenna Array
4 Design Examples
5 Conclusions
References
Capturing the Sparsity for Massive MIMO Channel with Approximate Message Passing
1 Introduction
2 System Model
3 Learning Sparse Virtual Channel Model Parameters Through DL Training
3.1 Problem Formulation
3.2 Expectation Step
3.3 Deriving the Posterior Statistics with AMP
3.4 Maximization Step
4 Simulations Results
5 Conclusion
References
An On-Line EMC Test System for Liquid Flow Meters
Abstract
1 Introduction
2 Design of Compact Liquid Flowrate Standard Facility
2.1 Design of Surge Tank
2.2 Design of Water Tank
3 Analysis of Hydraulic Resistance
4 Experimental Research of on-Line EMC Test System
References
Research on Kinematic Simulation for Space Mirrors Positioning 6DOF Robot
Abstract
1 The Position Analysis of 6DOF Parallel Robot
1.1 6DOF Parallel Robot Model
1.2 Inverse Solution of the Position of 6DOF Parallel Robot
1.3 Positive Solution of the Position of 6DOF Parallel Robot
2 Simulation of 6DOF Parallel Robot Based on OpenGL
3 Simulation Results
4 Conclusion
References
A Dictionary Learning-Based Off-Grid DOA Estimation Method Using Khatri-Rao Product
1 Introduction
2 Covariance-Based Model for Sparse DOA Estimation
2.1 Array Model
2.2 Covariance-Based Sparse Representation Model
3 Dictionary Learning-Based Off-Grid DOA Estimation Method
4 Simulation and Analysis
5 Conclusion
References
Radar Adaptive Sidelobe Cancellation Technique Based on Spatial Filtering
Abstract
1 Introduction
2 Radar Sidelobe Cancellation Technology
3 Adaptive Cancellation Algorithm
3.1 Least Mean Square Algorithm
3.2 Sampling Matrix Inversion Algorithm
3.3 Conjugate Gradient Algorithm
3.4 Normalized Least Mean Square Algorithm
4 Simulation and Performance Analysis
5 Conclusion
References
On the Spectral Efficiency of Multiuser Massive MIMO with Zero-Forcing Precoding
1 Introduction
2 System Model
3 Spectral Efficiency Analysis of System
3.1 Spectral Efficiency Analysis in Ricean Fading
3.2 Exact Expression of the Spectral Efficiency
3.3 Tight Bounds on the Spectral Efficiency
4 Numerical Results
5 Conclusion
References
A Signal Sorting Algorithm Based on LOF De-Noised Clustering
Abstract
1 Introduction
2 Data Standardization
3 Outlier Removal Algorithm
3.1 Isolated Point Removal Based on Euclidean Distance
4 K-Means Clustering Algorithm Based on Data Field
4.1 Data Field
4.2 Determination of the Initial Cluster Center
4.3 K-Means Algorithm
5 Simulation Analysis
6 Conclusion
Acknowledgements
References
Design of a Small-Angle Reflector for Shadowless Illumination
Abstract
1 Introduction
2 Design and Simulation
3 Conclusion
References
Anti-interference Communication Algorithm Based on Wideband Spectrum Sensing
Abstract
1 Introduction
2 System Model
3 Proposed Algorithm
3.1 Wideband Spectrum Sensing Algorithm Based on Compressed Sensing
3.2 Spectrum Decision Algorithm Based on Frequency Domain Entropy
4 Simulation Results
4.1 Chirp Interference Signal Sparse Representation
4.2 Signal Reconstruction Performance by Different Algorithms
4.3 Comparison of Detection Performance
5 Conclusions
Acknowledgements
References
A Multi-task Dynamic Compressed Sensing Algorithm for Streaming Signals Eliminating Blocking Effects
Abstract
1 Introduction
2 SMT-SBL Algorithm
3 Experimental Results
4 Summary
References
Thunderstorm Recognition Algorithm Research Based on Simulated Airborne Weather Radar Reflectivity Volume Scan Data
Abstract
1 Introduction
2 Airborne Weather Radar Scan Mode
3 Thunderstorm Recognition Algorithm
4 Thunderstorm Identification Case Analysis
5 Conclusion
Acknowledgements
References
FPGA-Based Fall Detection System
Abstract
1 Introduction
2 Overview of the System
3 Fall Detection Algorithm
3.1 Background Generation
3.2 Moving Object Segmentation
3.3 Fall Detection
4 Implementation on FPGA
5 Results and Discussions
5.1 Accuracy of System
5.2 Processing Frame Rate
5.3 System Alarm Time
6 Conclusions
References
Artificial Intelligence and Game Theory Based Security Strategies and Application Cases for Internet of Vehicles
Abstract
1 Introduction
2 Literature Survey
2.1 Structure of IoV
2.2 Attack Classification in IoV
2.3 Countermeasures for IoV Security
2.4 Artificial Intelligence and Game Theory Based Security Strategies for IoV
2.5 Case Study of Artificial Intelligence and Game Theory Based Security Strategies for IoV
3 Conclusions
Acknowledgements
References
The Effect of Integration Stage on Multimodal Deep Learning in Genomic Studies
1 Introduction
2 Data
3 Methods
4 Experimental Results
5 Validation
6 Discussions and Conclusions
References
An Advanced Aerospace High Precision Spread Spectrum Ranging System Technology
Abstract
1 Introduction
2 Basic Principle of Pseudo-Code Ranging
3 Optimization Design of Pseudo Code Ranging System
3.1 Conventional Pseudo-Code Ranging System
3.2 Advanced High Precision Pseudo-Code Ranging System
3.2.1 System Structure
3.2.2 Frequency Flow
3.2.3 Error Analysis
4 Test Verification
5 Conclusion
References
Weight-Assignment Last-Position Elimination-Based Learning Automata
Abstract
1 Introduction
2 Related Work
3 Proposed Learning Automata
4 Simulation Results
5 Conclusion
Acknowledgements
References
Nonlinear Multi-system Interactive Positioning Algorithms
Abstract
1 Introduction
2 System Modeling
3 Nonlinear Multi-system Interactive Positioning Algorithm
3.1 Multiple System Interaction
3.2 Multiple System Parallel Filtering
3.3 System Probability Update
3.4 System Fusion Output
4 Analysis of Simulation Experiment
5 Conclusion Analysis
References
Bandwidth Enhancement of Waveguide Slot Antenna Array for Satellite Communication
Abstract
1 Introduction
2 Antenna Design and Fabrication
3 Results and Discussion
4 Conclusion
References
Design of an Enhanced Turbulence Detection Process Considering Aircraft Response
Abstract
1 Introduction
2 The Estimation of the Vertical Load Factor
3 Enhanced Turbulence Detection Process Based on Vertical Load Factor
3.1 Turbulence Model and Its Power Spectral Density Function
3.2 Predicting Vertical Load Factor
4 Numerical Examples
4.1 Analysis of Examples
4.2 Application Analysis
5 Conclusion
Acknowledgements
References
Rain-Drop Size Distribution Case Study in Chengdu Based on 2DVD Observations
Abstract
1 Introduction
2 Instruments and Data
2.1 Instrument Introduction
2.2 Data Processing
3 Characteristics of Three Precipitation Raindrop Spectra
3.1 Raindrop Size Distribution
3.2 Total Particle Density Characteristics
3.3 Median Volume Diameter Feature
4 Analysis of Related Parameters
5 Conclusion
Acknowledgements
References
Analysis of the Influence on DPD with Memory Effect in Frequency Hopping Communication System
Abstract
1 Introduction
2 Theory Analysis
2.1 Basic Theory of DPD
2.2 Basic Principle of Frequency Hopping Communication
2.3 The Influence of Frequency Hopping on DPD with Memory Effect
3 Experiment Analysis
3.1 Experiment Scheme
3.2 Generation of the Frequency Hopping Signal
3.3 Analysis of the Test Data
3.3.1 Analysis of the Test Data Before and After Single Hop
3.3.2 Analysis of the Whole Frequency Band
3.3.3 Analysis of the Time Domain Signal
4 Conclusion
References
FPGA-Based Implementation of Reconfigurable Floating-Point FIR Digital Filter
Abstract
1 Introduction
2 Methods
2.1 Filtering Processing Module
2.2 Data Rearrangement Module
2.3 Overlap-Add Module
3 Result Analysis
4 Conclusion
Acknowledgements
References
High Precision Spatiotemporal Datum Design Based on Ground Observation Position
Abstract
1 Introduction
2 Definition
3 Calculation of the Observation Position of Celestial Bodies on the Ground Station
4 Simulation
5 Conclusion
References
Study on Two Types of Sensor Antennas for an Intelligent Health Monitoring System
Abstract
1 Introduction
2 Two Types of In-Body Sensor Antenna
3 In-Body Antenna Transmission Characteristic
4 Conclusion
Acknowledgements
References
A Fiber Bragg Grating Acceleration Sensor for Measuring Bow Slamming Load
Abstract
1 Introduction
2 Theory
2.1 Basic Fiber Grating Sensor Theory
2.2 Temperature Characteristic
2.3 Axial Strain
3 Sensor Structure Analysis
3.1 The Structure of the Sensor
3.2 Working Principle of Sensor
4 Vibration Testing of Sensors
5 Conclusion
Acknowledgements
References
Improving Indoor Random Position Device-Free People Recognition Resolution Using the Composite Method of WiFi and Chirp
Abstract
1 Introduction
2 Composite Preamble Scheme
3 Indoor Device-Free People Recognition
3.1 Scenario and Calculation Setting
3.2 Signal Setting
3.3 Result of the Experiment
4 Conclusions
Acknowledgements
References
Optimal Design of an S-Band Low Noise Amplifier
Abstract
1 Introduction
2 Design and Analysis
2.1 Performance Indexes
2.2 Design Proposal
3 Realization and Measurement
4 Conclusion
Acknowledgements
References
A Triangular Centroid Location Method Based on Kalman Filter
Abstract
1 Introduction
2 RSSI
2.1 RSSI and 802.11 Protocol
2.2 Measurement Process and Ranging Principle of RSSI Value
3 Location Algorithm
3.1 Principle of Improved Triangular Centroid Algorithm
3.2 Algorithm Implementation Steps
4 Kalman Filter
4.1 Principle of Kalman Filter
4.2 Algorithm Implement Steps
5 Experimental Results and Analysis
6 Conclusion
Acknowledgements
References
Research on Spatial Network Routing Model Based on Price Game
Abstract
1 Introduction
2 Definition of System Model
2.1 Spatial Network Topology Structure
2.2 Routing Node Storage Resource Allocation Scheme
2.3 Price Game Model
2.4 “Selfish” Node Penalty Mechanism
3 Routing Model Design
3.1 Routing Game Model Equilibrium Analysis
3.2 Routing Algorithm Design
4 Simulation Analysis
4.1 Transmission Success Rates Analysis
4.2 Transmission Delay Analysis
4.3 Network Spending Ratio Analysis
5 Conclusion
References
The TDOA and FDOA Algorithm of Communication Signal Based on Fine Classification and Combination
Abstract
1 Introduction
2 The TDOA and FDOA Estimation Accuracy of Communication Signal
3 The Fine Classification and Combination Estimation Algorithm
3.1 The TDOA and FDOA Estimation Model
3.2 The Fine Classification and Combination Estimation Algorithm
4 Simulation and Analysis Results
5 Summary
References
An Adaptive DFT-Based Channel Estimation Method for MIMO-OFDM
Abstract
1 Introduction
2 DFT Channel Estimation Principle
2.1 LS Channel Estimation
2.2 DFT-Based LS Channel Estimation
3 Proposed Methods
4 Simulation Results and Analysis
References
A Novel Gradient L0-Norm Regularization Image Restoration Method Based on Non-local Total Variation
Abstract
1 Introduction
2 Models and Methods
2.1 The Non-local TV Model
2.2 Image Division Using L0-Norm of Image Gradient
2.3 The Proposed Model
3 Experimental Results
4 Conclusions
Acknowledgements
References
Study on Interference from 5G System to Earth Exploration Satellite Service System in High Frequency
Abstract
1 Introduction
2 System Modeling and Analytic Procedure
2.1 The Interference Model
2.2 Propagation Model
2.3 Method of Deterministic Calculation
3 Simulation Experiment and Result Analysis
3.1 Parameters of Simulation Experiment
3.1.1 Parameters of 5G Base Station
3.1.2 Parameters of Earth Station
3.2 Simulation
4 Conclusions
Acknowledgements
References
Sparse Planar Antenna Array Design for Directional Modulation
1 Introduction
2 Review of Planar Antenna Array Based Beamforming
3 Sparse Planar Antenna Array Design for DM
4 Design Examples
5 Conclusions
References
Research on the Linear Interpolation of Equal-Interval Fractional Delay Filter
Abstract
1 Introduction
2 Ideal Digital Fractional Delay Filter
3 Design of the Equal-Interval Delay Filter
3.1 Ideal Equal-Interval FD Filter
3.2 Equal-Interval FD Filter
3.3 Interpolation of the Equal-Interval Delay Filter
4 Simulation and Verification
4.1 Simulation of Equal-Interval Delay Filter
4.2 Simulation of the EIFD Filter Interpolation Algorithm
5 Conclusion
References
Single-Channel Grayscale Processing Algorithm for Transmission Tissue Images Based on Heterogeneity Detection
Abstract
1 Introduction
2 Grayscale Processing and Experimental Preparation
3 Experiments
3.1 Experimental Device
3.2 Experimental Process
4 Analysis of Experimental Results
5 Conclusions
Acknowledgements
References
Handwriting Numerals Recognition Using Convolutional Neural Network Implemented on NVIDIA’s Jetson Nano
Abstract
1 Introduction
2 Related Work
3 Handwritten Hindi Digits Recognition
4 Results and Discussion
References
Implementation of Image Recognition on Embedded Systems
Abstract
1 Introduction
2 Technical Background
2.1 ImageNet Dataset
2.2 Jetson Nano Embedded Development Board
2.3 Convolutional Neural Networks
3 Method
3.1 Convolutional Neural Network Classification Model
3.2 Convolutional Neural Network Model Construction
4 Result
5 Conclusion
References
A Precise 3-D Wireless Localization Technique Using Smart Antenna
Abstract
1 Introduction
2 3D Estimation of an Object’s Location by AOA Measurement
3 The Position Estimation
4 Consider the Influence of Distance
5 Simulations and Results Analysis
6 Conclusion
Acknowledgements
References
A Two-Phase Fault Diagnosis Algorithm Based on Convolutional Neural Network for Heterogeneous Wireless
Abstract
1 Introduction
2 CNN-Based Diagnosis Model
2.1 The First Stage: Monitoring Phase
2.1.1 Feature Selection
2.1.2 Diagnosis of Abnormal Symptoms
2.2 Second Stage: Diagnosis Stage
2.2.1 Base Station Selection
2.2.2 Fault Diagnosis Model
3 Simulation and Performance Evaluation
3.1 Simulation Environment
3.2 Performance Analysis
4 Conclusions
Acknowledgements
References
A Wireless Power Transfer System with Switching Circuit of Power Grid and Solar Energy
Abstract
1 Introduction
2 Wireless Power Charging System
3 Electricity Grid Charging Mode
4 Solar Energy Charging Mode
5 Charging Modes Switching Circuit
6 Experiments
7 Conclusion
Acknowledgements
References
A Fiber Bragg Grating Stress Sensor for Hull Local Strength Measurement
Abstract
1 Introduction
2 The Basic Principle of Fiber Grating Sensor
3 Short Base Stress Sensor Structure
4 Sensor Experiment Process
5 Conclusion
Acknowledgements
References
Direct Wave Parameters Estimation of Passive Bistatic Radar Based on Uncooperative Phased Array Radar
1 Introduction
2 Signal Model
3 Proposed Method
4 Experimental Results
5 Conclusion
References
Noncooperative Radar Illuminator Based Bistatic Receiving System
Abstract
1 Introduction
2 Framework of the Bistatic Receiving System
3 Experimental Scheme, Hardware Setup and Results
3.1 Deinterleaving of Direct Path Signals
3.2 Coherent Processing Results
4 Conclusions
References
Research on Simulation Technology for Remote Sensing Image Quality
Abstract
1 Introduction
2 System Construction Ideas
3 System Scheme and Composition
3.1 Geometric Simulation Subsystem
3.2 Radiation Simulation Subsystem
3.3 Compression and Decompression Subsystem
3.4 Ground Processing and Subjective Evaluation Subsystem
4 Applications
5 Subsequent Improvement and Consideration
6 Conclusion
References
Distributed Measurement of Micro-vibration and Analysis of the Influence on Imaging Quality
Abstract
1 Introduction
2 Requirements of Micro-vibration Based on Imaging Quality
2.1 Division of Micro-vibration Frequency
2.2 Characteristics of Optical Remote Sensing Camera Imaging
2.3 Calculation of the Effect of Micro-vibration on Imaging Quality
2.4 Requirements of Imaging Quality for Micro-vibration
3 Micro-vibration Measurement Based on Optical System
4 Results and Analysis of Micro-vibration Test
4.1 Results of Micro-vibration Measurement
4.2 Effect of Micro-vibration on Image Quality
5 Conclusion
References
Analysis and Verification of the Effect of Space Debris on the Output Power Decline of Solar Array
Abstract
1 Introduction
2 Analysis of Output Power Decline Process of the Solar Array
2.1 Method for Calculating Output Power of Solar Array
2.2 Shunting Principle of Solar Array
2.3 Analysis of the Decline Process of Output Power
2.4 Summary
3 Simulation Validation
3.1 Attitude Variation Mechanism of Space Debris Impact Disturbing
3.2 Mechanics Analysis of Space Debris Impact Disturbance
4 Impact Effect of Space Debris on the Solar Array
5 Conclusions
References
A New Nonlinear Method for Calculating the Error of Passive Location
Abstract
1 Introduction
2 Analysis of GDOP Based on Linear Method
3 GDOP Based on UT
4 Conclusions
Acknowledgements
References
A Static Method for Stack Overflow Detection Based on SPARC V8 Architecture
Abstract
1 Introduction
2 Introduction of SPARC V8
2.1 Register Windows
2.2 Stack Management
3 Static Method of Stack Overflow Detection
3.1 Principle Introduction
3.2 Analysis of Function Stack Usage
3.3 Analysis of Function Call Relationships
3.4 Implementation of Stack Overflow Detection Algorithm
4 Case Verification and Result Analysis
5 Conclusion
References
Enhanced Double Threshold Based Energy Detection
Abstract
1 Introduction
2 System Model
2.1 General Model
2.2 Energy Detection (Single Threshold Based Detection)
2.3 Double Threshold
3 Proposed Algorithm
4 Simulations and Results
5 Conclusion
Acknowledgements
References
Self-generating Topology Coloring Scheduling for Interference Mitigation in Wireless Body Area Networks
Abstract
1 Introduction
2 The Inter-WBAN Interference and Resource Scheduling Scheme
2.1 Inter-WBAN Interference
2.2 Inter-WBAN Interference Resource Scheduling Scheme
3 Self-generating Topology Coloring Scheduling
4 Experimental Analysis
5 Conclusion
References
Smart Parking and Recommendation System Under Fog Calculation
Abstract
1 Introduction
2 Fog Computing
3 Fog Computing Architecture of Smart Parking System
3.1 Hierarchical Framework of Cloud Computing for Smart Parking Systems
3.2 Cloud Computing Architecture for Smart Parking Systems
3.3 Key Technologies of Fog Calculation in Smart Parking System
4 Conclusions
References
Speech Synthesis Method Based on Tacotron + WaveNet
Abstract
1 Introduction
2 Speech Synthesis Model Based on Tacotron
2.1 CBHG Module
2.2 Encoding-Decoding Model
2.3 Attention Mechanism
2.4 Griffin-Lim Algorithm
3 Speech Synthesis Model Based on WaveNet
3.1 Feature Selection
3.2 WaveNet Network Architecture
4 Experiment and Results
4.1 Experimental Data
4.2 Experimental Process
4.3 Experimental Results
4.4 Contrast Experiment
References
A Novel Spatial Domain Based Steganography Scheme Against Digital Image Compression
Abstract
1 Introduction
2 Proposed Method
2.1 Embedding Procedure
2.2 Extracting Procedure
2.3 Cover Image Recovery Procedure
3 Experimental Results
4 Conclusion
Acknowledgements
References
Losen: An Accurate Indoor Localization System by Integrating CSI of Wireless Signal and MEMS Sensors
1 Introduction
2 Related Work
3 System Description
3.1 Locating the Target Based on AoA
3.2 Step Detection and Velocity Estimation
3.3 Heading Estimation
3.4 AOA/PDR Integration
4 Experimental Results
5 Conclusion
References
A Direct Target Recognition Algorithm for Low-Resolution Radar with Unbalanced Samples
Abstract
1 Introduction
2 Direct Recognition Algorithm of Low-Resolution Radar Target Based on Focal Loss Function
2.1 CNN
2.2 Focal Loss Function
2.3 Direct Recognition Algorithm of Low-Resolution Radar Target Based on Focal Loss Function
2.3.1 The Structure of CNN in This Paper
2.3.2 Algorithm Steps
3 Experimental Results and Analysis
3.1 Experimental Data Set
3.2 Focal Loss Function Parameter
3.3 The Influence of Different Loss Functions on the Recognition Effect
3.4 Recognition Effect of Low-Resolution Radar Target Direct Recognition Algorithm Based on the Focal Loss Function
4 Conclusion
References
DFT-Spread Based PAPR Reduction of OFDM for Short Reach Communication Systems
Abstract
1 Introduction
2 Theoretical Analysis
3 Simulation Setup
4 Results and Discussions
5 Conclusion
Acknowledgements
References
Underdetermined Mixed Matrix Estimation of Single Source Point Detection Based on Noise Threshold Eigenvalue Decomposition
Abstract
1 Introduction
2 Underdetermined Mixed Signal Model
3 Algorithm Principle
3.1 Traditional Algorithm Principle
3.2 Improved Algorithm in This Paper
4 Simulation
4.1 Evaluation Criteria
4.2 Experimental Verification
5 Conclusion
References
Optimization of APTEEN Routing Protocol for Wireless Sensor Networks Based on Genetic Algorithm
Abstract
1 Introduction
2 Related Work
2.1 Energy Consumption Model
2.2 Genetic Algorithm
2.3 Density Adaptive Algorithm
3 GA-APTEEN Optimization Agreement
3.1 Cluster Heads Optimization
3.2 Genetic Optimization Algorithm
3.3 Select the Cluster Heads for the Second Time
3.4 Node Sleep and Clustering Mechanism
3.4.1 Node Sleep Mechanism
3.4.2 Node Clustering Optimization Mechanism
4 Simulation Analysis
5 Conclusion
Acknowledgements
References
Optimization of APTEEN Routing Protocol in Wireless Sensor Networks Based on Particle Swarm Optimization
Abstract
1 Introduction
2 Particle Swarm Optimization and Wireless Communication Model
2.1 Particle Swarm Optimization
2.2 Wireless Communication Model
3 Particle Swarm Optimization Cluster Head Selection Algorithm
3.1 Network Model
3.2 Energy Position Equalization-Adaptive Threshold-Sensitive Energy Efficient Sensor Network Protocol (EPE-APTEEN)
3.2.1 Pre-clustered
3.2.2 Optimize Cluster Head
4 Simulation and Analysis
5 Conclusion
Acknowledgements
References
Research Status of Wireless Power Transmission Technology
Abstract
1 Introduction
2 Magnetically Coupled Wireless Power Transfer Technology
2.1 Fundamentals
2.2 Key Technology
2.3 Application Status
3 Laser Wireless Energy Transfer Technology
3.1 Fundamentals
3.2 Key Technology
3.3 Application Status
4 Microwave Wireless Energy Transmission Technology
4.1 Fundamentals
4.2 Key Technology
4.3 Application Status
5 Summary
References
Flexible Sparse Representation Based Inverse Synthetic Aperture Radar Imaging
1 Introduction
2 Sparse Representation Based ISAR Imaging
2.1 Basic Geometry of ISAR and the Received Radar Signal
2.2 Sparse Probing Frequencies
2.3 Discussions
3 Sparse Bayesian Learning
3.1 Probabilistic Models
3.2 Unknown Variable Inference
4 Simulation Results
5 Conclusion
References
Localization Schemes for 2-D Molecular Communication via Diffusion
1 Introduction
2 System Model
3 Localization Schemes
4 Numerical Results
5 Conclusions
References
Research on Support Vector Machine in Estimating Source Number
Abstract
1 Introduction
2 Signal Number Estimation Based on Support Vector Machine
2.1 Feature Extraction of Source Number Estimation Based on SVM
3 Implementation of Source Number Estimation Algorithm
3.1 Establishment and Optimization of Classifier Parameters
3.2 Simulation Experiment
4 Conclusions
Acknowledgements
References
Wireless Electricity Transmission Design of Unmanned Aerial Vehicle Charging Systems
Abstract
1 Introduction
2 Results and Discussion
2.1 Magnetic Coupling Resonance Method
2.2 Electromagnetic Induction Coupling Method
3 Conclusion
Acknowledgements
References
An ITD-Based Method for Individual Recognition of Secondary Radar Radiation Source
Abstract
1 Introduction
2 Secondary Radar Source Signal Model
3 The Core Idea of the ITD Approach
4 ITD Method for Individual Recognition of Secondary Radar Radiation Source
4.1 The Idea of the Method
4.2 The Basic Principle of Fast Sample Entropy Algorithm
5 Performance Simulation Analysis
5.1 Experimental Data of Secondary Radar Radiation Source
5.2 Classification Recognition Performance Analysis
6 Conclusion
References
Gaussian Mixture Model Based Multi-region Blood Vessel Segmentation Method
Abstract
1 Introduction
2 Blood Vessel Segmentation Method
2.1 NSCT Transform
2.2 Gaussian Mixture Model Based Multi-region Blood Vessel Segmentation Method
2.2.1 Gaussian Mixture Model
2.2.2 Multi-region Segmentation
2.3 Adaptive Filling Filtering
3 Experimental Evaluation
3.1 Results Description
3.2 Parameter Description
3.2.1 Selection of High Frequency Parameters and Low Frequency Parameters
3.2.2 Selection of Optimal Threshold
4 Conclusion
References
Research on the Enhancement of VANET Coverage Based on UAV
1 Introduction
2 System Model
2.1 RSU Deployment in Non-congested State
2.2 RSU and UAV Joint Deployment in Congested State
3 Proposed Approach
3.1 Greedy Algorithm Applied to RSU Deployment in Non-congested State
3.2 Joint Deployment in Congested State Based on Markov
4 Simulation Results and Analysis
4.1 RSU Deployment in Non-congested State
4.2 RSU and UAV Joint Deployment in Congested State
5 Conclusion
References
Research on Image Encryption Algorithm Based on Wavelet Transform and Qi Hyperchaos
Abstract
1 Introduction
2 Wavelet Transform
3 Chaotic System
4 Encryption Process
5 Simulation Results and Analysis
5.1 Histogram Analysis
5.2 Information Entropy Analysis
5.3 Correlation Analysis
5.4 Key Sensitivity Analysis
5.5 Key Space Analysis
5.6 Noise Attack Analysis
6 Conclusion
Acknowledgements
References
A Design of Satellite Telemetry Acquisition System
Abstract
1 Introduction
2 Design and Implementation
2.1 Composition of Satellite Telemetry Acquisition System
2.2 TM Space Data Link Protocol Telemetry Format
2.2.1 Space Pack
2.2.2 Virtual Channel Data Unit (VCDU)
2.2.3 Channel Access Data Unit (CADU)
3 Design Example
3.1 Space Package Organization
3.2 Spatial Packet Scheduling
3.3 Virtual Channel Data Unit Organization
3.4 Virtual Channel Scheduling
4 Conclusion
References
Fingerprint Feature Recognition of Frequency Hopping Radio with FCBF-NMI Feature Selection
Abstract
1 Introduction
2 Fingerprint Feature Recognition Algorithm
2.1 Feature Selection Algorithm Based on Mutual Information
2.2 SVM Parameter Optimization Based on Quadratic Grid Search Algorithm
3 Simulation Experiment and Analysis
3.1 Experimental Data
3.2 Feature Selection Experiment
3.2.1 Experiment 1: Analysis of Feature Selection Algorithms Under Conventional Features
3.2.2 Experiment 2: Analysis of Feature Selection Algorithms Under Higher-Order Spectral Feature SIB
3.3 SVM Parameter Optimization Experiment
4 Conclusion
References
Integrated Design of High Speed Uplink and Emergency Telemetry and Control for LEO Satellite
Abstract
1 Preface
2 System Design
2.1 Systematic Composition
2.2 Data Stream Design
2.3 Link Design
2.3.1 Synchronization and Channel Coding
2.3.2 Data Link Protocol
2.3.3 Emergency Measurement and Control Link Establishment
3 Design Examples
3.1 System Design
3.2 System Design
4 Conclusion
References
Imaging Correction Based on AIS for Moving Vessels in Spaceborne SAR Images
Abstract
1 Introduction
2 Performance of Moving Vessels in SAR Images
3 Parameter Settings of the Simulation and Results Analysis
4 Moving Targets Scene Simulation and Association Analysis Combined with AIS Information
5 Conclusion
References
Research on Flying Catkins Detection and Removal in Target Video
Abstract
1 Introduction
2 Analysis of Flying Catkins Characteristics
2.1 Physical Characteristics of Flying Catkins
2.2 Brightness Characteristics of Flying Catkins
2.3 Time Domain Characteristics of Flying Catkins
3 Flying Catkins Detection and Removal
3.1 Frame Difference Method Raindrop Removal Algorithm
3.2 A Flying Catkins Removal Algorithm Based on Time Domain and Brightness Characteristics
3.3 Analysis of Results
4 Summary and Prospect
References
Robust Context-Aware Tracking with Temporal Regularization
1 Introduction
2 Related Works
3 Context-Aware Correlation Filter Framework
4 Proposed Method
4.1 Rich Context Aware Tracker
4.2 Multi-channel Features
5 Experiment
5.1 Quantitative Analysis
5.2 Qualitative Evaluation
6 Conclusion
References
Research on Motor Speed Estimation Method Based on Electric Vehicle
Abstract
1 Introduction
2 Completely Dependent on the Physical Parameters of the Motor and the Electromagnetic Equation
3 Partially Dependent on the Physical Parameters of the Motor and the Electromagnetic Equation
3.1 Model Reference Adaptive System
3.2 Luenberger Observer
3.3 Extended Kalman Filter (EKF)
3.4 Sliding Mode Observer (SMO)
4 Independent of the Physical Parameters of the Motor and the Electromagnetic Equation
4.1 External High Frequency Signal Injection
4.2 High Frequency Signal Excitation Method Based on PWM Modulation
4.3 Artificial Intelligent Algorithm
5 Conclusion
Acknowledgements
References
A Novel Virtual Cell Power Allocation and Interference Merging Algorithm in UDN
1 Introduction
2 System Model
3 Proposed Power Allocation and Interference Merging Algorithm
4 Simulation Results and Analysis
5 Conclusion
References
Device-Free Sensing for Gesture Recognition by Wi-Fi Communication Signal Based on Auto-encoder/decoder Neural Network
1 Introduction
2 Experimental Setup and Data Collection
3 Methodology
3.1 Data Preprocessing
3.2 Higher-Order Cumulant Feature for Encoding
3.3 Auto-encoder/decoder Deep Neural Network
4 Experimental Results and Discussion
5 Conclusion
References
Detection of Sleep Apnea Based on Cardiopulmonary Coupling
1 Introduction
2 Feature Extraction
2.1 Heart Rate Variability Feature Extraction
2.2 Feature Extraction of Cardiopulmonary Coupling
3 OSA Classification Model
4 Result Analysis
5 Conclusion
References
Study on a Space-Air-Ground Integrated Data Link Networks Architecture
Abstract
1 Introduction
2 System Architecture
2.1 System Composition
2.2 System Functionality
3 Information Flow Process
4 Protocol Structure
5 Conclusion
References
Similar Cluster Based Continuous Bag-of-Words for Word Vector Training
Abstract
1 Introduction
2 Related Work
2.1 Continuous Bag-of-Words
2.2 Softmax Regression
3 Proposed Method
4 Results
4.1 Dataset
4.2 Result of Word Vectors
4.3 Result of Text Similarity Comparison
5 Conclusion
Acknowledgements
References
Research on Integrated Waveform of FDA Radar and Communication Based on Linear Frequency Offsets
Abstract
1 Introduction
2 Integration of FDA Radar and Communication
2.1 Application Background
2.2 A Modulated Signal Loaded with Communications
2.3 MUSIC Algorithm for Multi-target Positioning
2.4 Analysis of Communication Performance
3 Conclusion
Acknowledgements
References
Research on Parameter Configuration of Deep Neural Network Applied on Speech Enhancement
Abstract
1 Introduction
2 Speech Enhancement Based on DNN
3 Common Influencing Parameter
4 Experiments and Analysis
4.1 Experiments and Results
4.2 Analysis
5 Conclusion
Acknowledgements
References
Mid-Infrared Characteristic Analysis of Stability Index of Vehicle Gasoline
Abstract
1 Introduction
2 Experiments
2.1 Collection of Experimental Samples
2.2 Testing Program
3 Results and Discussion
3.1 Correlation Analysis of Gasoline Stability Index and Infrared Spectrum During Storage
3.2 Establishment of Gasoline Quality Decay Model
4 Conclusion
References
Application of Mid-Infrared Characteristic Analysis Technology in Gasoline Quality Control
Abstract
1 Introduction
2 Experiments
2.1 Collection of Experimental Samples
2.2 Testing Program
3 Results and Discussion
3.1 Middle Infrared Spectrum Analysis of Hydrocarbon Compounds in Automotive Gasoline
3.2 Analysis of Infrared Spectrum Characteristics of Blending Components of Automotive Gasoline
3.3 Infrared Spectroscopy Analysis of Functional Group Decay in Gasoline Storage
4 Conclusion
References
A Generalized Sampling Based Method for Digital Predistortion of RF Power Amplifiers
Abstract
1 Introduction
2 Design for Baseband Predistortion
2.1 Basic Principle and Structure of Baseband Predistortion
2.2 Undersampling
2.3 System Construction
3 Simulation and Measurement Results
4 Conclusion
References
Optimum Design of Intersatellite Link Based on STK
1 Introduction
2 Constellation Design
2.1 The Constellation Configuration
2.2 Visibility Analysis
3 Link Design
3.1 Domestic Satellite and Overseas Satellite
3.2 Analysis of Two Types of Links
3.3 Time Slot Design
3.4 Time-Invariant Link Planning
3.5 Time-Varying Link Planning
4 Simulation Analysis
4.1 Broadcast Authentication
4.2 Unicast Validation
5 Conclusion
References
Integrated Detection and Tracking in Asynchronous Moving Radar Network
Abstract
1 Introduction
2 System Model
2.1 Dynamic Model
2.2 Detection Model
2.3 Measurement Model
3 Integrated of Detection and Tracking
4 Simulation Result
5 Conclusion
Acknowledgements
References
Fault-Tolerant Decompression Method of Compressed Chinese Text Files
Abstract
1 Introduction
2 Chinese Character Encoding Adaptation
3 Chinese Language Model
4 Performance Analysis
4.1 Decompression Success Rate
4.2 Normalized Edit Distance
5 Conclusion
References
Classification of Human Motion Status Using UWB Radar Based on Decision Tree Algorithm
Abstract
1 Introduction
2 Measurement of Different Human Motion Status
2.1 The Composition of UWB Radar System
2.2 Experimental Scenario
3 Data Processing and Classification Algorithm
3.1 Data Processing
3.1.1 Background Subtraction
3.1.2 Feature Extraction
3.2 Decision Tree Classification Algorithm
4 Experimental Results
5 Conclusion
References
A Sub-aperture Division Method for FMCW CSAR Imaging
Abstract
1 Introduction
2 FMCW CSAR Imaging Geometry and Echo Model
3 Analysis of Sub-aperture Selection
4 Simulation Analysis Based on Sub-aperture Algorithm
5 Conclusion
Acknowledgements
References
An Experimental Study of Sea Target Detection of Passive Bistatic Radar Based on Non-cooperative Radar Illuminators
Abstract
1 Introduction
2 Signal Processing Method for Passive Bistatic Radar Using Non-cooperative Pulse Radar
3 Experimental Bistatic Radar Setup for Sea Target Detection
4 Experiment Results
5 Conclusions
References
Design of a Quasi-Real-Time Communication System for LEO Satellites Using Beidou Short-Message Service
Abstract
1 Introduction
2 Interface Characteristics of BDS
3 Communication System Design
3.1 System Description
3.2 Working Process
4 Simulation Results and Analysis
4.1 Test Scenarios
4.2 Orbit Intervals
4.3 Communication Delays
5 Conclusion
References
A Physically Decoupled Onboard Control Plane for Software Defined LEO Constellation Network
1 Introduction
2 The Architecture of the Physically Decoupled Onboard SDN
2.1 CDMA Based RF Communications System for Control Plane
2.2 System Model
2.3 Mechanism
3 Controller Placement Problem
3.1 Performance Metrics
3.2 Flow Set Generation
4 Simulation Results
4.1 Simulation Scenario
4.2 Average Flow Setup Time
4.3 Average Failure Recovery Time
5 Conclusion
References
A Dynamic Programming Based TBD Algorithm for Near Space Targets Under Range Ambiguity
Abstract
1 Introduction
2 System Model and Problem Description
2.1 System Dynamics Model
2.2 System Observation Model
3 Proposed Algorithm Introduction
3.1 Primary Threshold
3.2 Improved DP-TBD in Time-Range Domain
3.3 Ambiguity Resolution Procedure
4 Simulations and Discussion
5 Conclusions
References
Research and Design of Home Care System of Internet of Things Based on Wireless Network
Abstract
1 Introduction
2 Structure of Home Care System of IoT Based on CC3200
2.1 The Structure of the Overall Design Scheme
2.2 Design of Acquisition Terminal Node
3 The Design of the Specific Scheme of the System
3.1 Temperature Acquisition Scheme
3.2 Blood Pressure Collection Scheme
3.3 Blood Oxygen Collection Scheme
4 Software Design
5 Conclusion
Acknowledgements
References
Design of Wind Pendulum Control System Based on STM32F407
Abstract
1 Introduction
2 Analysis of Motion Model
2.1 Simple Pendulum Motion
2.2 Conical Pendulum Motion
2.3 Delay Analysis of Axial Flow Fan
3 Design and Implementation
3.1 Design of Hardware System
3.2 Design of Software System
3.2.1 PID Algorithm
3.2.2 PWM Output
3.2.3 MPU6050 Drive Function
3.2.4 Main Function
4 Function Implement and Test Result Analysis
4.1 Swing-Up
4.2 Stop-Swing
4.3 Drawing Line Segment of Specified Length
4.4 Drawing Line Segment of Specified Deflection Angle
4.5 Drawing a Circle of Specified Radius
4.6 Performance with External Interference
5 Conclusion
Acknowledgements
References
A High-Speed Parallel Accessing Scheduler of Space-Borne Nand Flash Storage System
Abstract
1 Introduction
2 Parallel Architecture of Multi-channel Flash Storage System
2.1 Traditional Parallel Architecture for Multi-channel Flash Storage System
2.2 An Optimized Accessing Scheduler for Multi-channel Nand Flash Storage System
3 Experiment Results
3.1 Flash Throughput Rate in Various Parallel Level
3.2 Flash Write Operation Speed Related with Input Data Rate
3.3 Compare with Previous Methods
4 Conclusion
References
Two Dimensional Joint ISAR Imaging Algorithm Based on Matrix Completion
Abstract
1 Introduction
2 Joint ISAR Imaging Model
2.1 Signal Model
2.2 Imaging Model
3 Algorithm of Joint ISAR Imaging
4 Experiments and Discussion
References
The Satellite GPS Antenna In-Orbit Phase Center Calibration Method
Abstract
1 Introduction
2 Background Information
3 Calibration Method
3.1 Antenna Phase Center Modeling
3.2 Processing Strategy
4 Results and Discussion
5 Conclusion
References
Migrating Target Detection Under Spiky Clutter Background
Abstract
1 Introduction
2 Signal and Clutter Model
2.1 Signal Model
2.2 Clutter Model
3 Non-iterative Migrating Targets Detector
4 Performance Evaluation
5 Conclusion
Acknowledgements
References
A Novel Range Super-Resolution Algorithm for UAV Swarm Target Based on LFMCW Radar
1 Introduction
2 System Model
3 Proposed Algorithm
4 Experiment
4.1 Swarm Target Simulation
4.2 Real Data Experiment
5 Conclusion
References
An Improved PDR/WiFi Integration Method for Indoor Pedestrian Localization
Abstract
1 Introduction
2 Approach
2.1 Pedestrian Dead Reckoning Equation and Weighted K-Nearest Neighbor Algorithm
2.2 Improved PDR/WiFi Integration Method
3 Experiments and Results
3.1 Experimental Setup
3.2 Localization Experiments
4 Conclusions
Acknowledgements
References
An Adaptive Radar Resource Scheduling Algorithm for ISAR Imaging Based on Step-Frequency Chirp Signal Optimization
Abstract
1 Introduction
2 Prior Knowledge
3 ISAR-Imaging-Considered Task Scheduling Algorithm with Two Dimensions
4 Simulations
5 Conclusion
Acknowledgements
References
A Task-Dependent Flight Plan Conflict Risk Assessment Method for General Aviation Operation Airspace
Abstract
1 Introduction
2 Task-Dependent Flight Plan and Conflict Risk Assessment
2.1 Task-Dependent Flight Plan
2.2 Conflict Risk Assessment
3 Simulations
3.1 Simulation Scenario
3.2 Results and Discussion
3.2.1 Conflict Risk Assessment
3.2.2 Conflict Risk of Different Uncertainty Levels
4 Conclusion
Acknowledgements
References
A Uniform Model for Conflict Prediction and Airspace Safety Assessment for Free Flight
Abstract
1 Introduction
2 The Uniform Model
2.1 The Electrostatic Model
2.2 The Velocity Potential
3 Safety Assessment
4 Conclusion
Acknowledgements
References
Optimization of Power Allocation for Full Duplex Relay-Assisted D2D Communication Underlaying Wireless Cellular Networks
1 Introduction
2 System Model
3 Outage Analysis
3.1 Boundary Condition A
3.2 Boundary Condition B
3.3 Boundary Condition C
4 Numerical and Simulation Results
5 Conclusion
References
Scene Text Recognition Based on Deep Learning
Abstract
1 Introduction
2 Background Knowledge
2.1 Scene Text Recognition Based on Deep Learning Method
3 Scene Text Recognition
3.1 Improved Sequence Recognition Algorithm
3.1.1 Image Pre-processing
3.1.2 Feature Extraction
3.1.3 Processing of Context Information
3.1.4 Transcription
4 Experiment and Analysis
4.1 Data Sets and Evaluation Criteria
4.2 Results and Analysis
5 Conclusion
Acknowledgements
References
Spectrum Sensing Algorithm Based on Twin Support Vector Machine
Abstract
1 Introduction
2 System Model
3 Spectrum Sensing Algorithm Based on TWSVM
3.1 Cognitive Process
3.2 Feature Extraction
3.3 TWSVM Training
3.4 Detection Decision
4 Simulation
5 Conclusions
References
Applicability Analysis of Plane Wave and Spherical Wave Model in Blue and Green Band
Abstract
1 Introduction
2 Gamma-Gamma Turbulence Model of Three Kinds of Beams
3 Simulation Analysis of Turbulence Model
4 Simulation Analysis of SNR Model
5 Conclusion
References
A Study of the Influence of Resonant Frequency in Wireless Power Transmission System
Abstract
1 Introduction
2 Related Works of WPT
3 Research Method
4 Simulation Results
5 Conclusion
Acknowledgements
References
Direction of Arrival Estimation Based on Support Vector Regression
Abstract
1 Introduction
2 Uniform Linear Array Model and Directions of Arrival Estimation Model
3 Experimental Results
3.1 SVR Test Results
3.2 Simulation Analysis of DOA Estimation Accuracy and Speed
4 Conclusions
Acknowledgements
References
Bistatic ISAR Radar Imaging Using Missing Data Based on Compressed Sensing
Abstract
1 Introduction
2 Methods
2.1 Radar Echo Model
2.2 Two-Dimensional CS Decoupling Imaging Algorithm
3 Results
4 Conclusion
Acknowledgements
References
Medical Images Segmentation Using a Novel Level Set Model with Laplace Kernel Function
Abstract
1 Introduction
2 Level Set Formulation
3 Experiments
4 Conclusion
Acknowledgements
References
Research on Multi-UAV Routing Simulation Based on Unity3d
Abstract
1 Introduction
2 Engineering Realization for Multi-UAV Routing Simulation
2.1 Simulation of Real Terrain
2.2 Simulation of UAV
2.3 Problem of Automatic Pathfinding About UAV
3 Communication Routing Network of Multi-UAV
4 Adjustment Plan When the UAV Loses Connection
5 Conclusions
Acknowledgements
References
Video Target Tracking Based on Adaptive Kalman Filter
Abstract
1 Introduction
2 Related Algorithms
2.1 Background Subtraction Algorithm
2.2 Standard Kalman Filter Algorithm
2.3 Adaptive Kalman Filter Algorithm
3 Proposed Algorithm Steps
4 Experimental Results and Analysis
5 Conclusion
Acknowledgements
References
Compressed Sensing Image Reconstruction Method Based on Chaotic System
Abstract
1 Introduction
2 Theory of Technology
3 Establish System Model and Analysis Metrics
3.1 Model the System
3.2 Analytical Method
4 Experimental Simulation and Analysis
5 Conclusion
Acknowledgements
References
An Underdetermined Blind Source Separation Algorithm Based on Variational Mode Decomposition
Abstract
1 Introduction
2 Variational Mode Decomposition
3 VMDSE-FastICA Algorithm
4 Simulation Experiment and Analysis
5 Conclusion
Acknowledgements
References
A Ranking Learning Training Method Based on Singular Value Decomposition
Abstract
1 Introduction
2 Application of SVD in Ranking Training
2.1 Related Algorithms
2.2 SVD Overview and Feature Extraction
3 Experimental
4 Conclusion
Acknowledgements
References
Research on Temperature Characteristics of IoT Chip Hardware Trojan Based on FPGA
Abstract
1 Background
2 Preliminary Preparation
2.1 Ring Oscillator Principle
2.2 Circuit Configuration
3 Circuit Design and Data Processing Method
3.1 Circuit Design
3.2 Data Processing Method
4 Test Results at Different Temperatures
5 Conclusion
References
Wireless Communication Intelligent Voice Height Measurement System
Abstract
1 Overall Design of Height Measuring Instrument
2 Principle of Ultrasonic Ranging
2.1 Frequency Characteristics
2.2 Ultrasonic Detection Error
2.3 Least Square Fitting
2.4 Least Square Correction
3 Height Measuring Instrument Module Design
3.1 Overall Hardware Circuit Schematic Diagram
3.2 Ultrasonic Ranging Module
3.3 WIFI Data Transmission Module
4 Results Display
4.1 System Function Design Drawing
4.2 Test Outcome
5 Conclusion
References
Design of Intelligent Classification Waste Bin with Detection Technology in Fog and Haze Weather
Abstract
1 Overall Design
1.1 The Overall Design of Haze Detection
1.2 Intelligent Waste Bin Overall Design
2 Principle of Air Quality Measurement
2.1 Single Particle Scattering Intensity Distribution Characteristics
2.2 Effects on Scattered Light in Different Situations
3 Infrared Sensor Ranging Principle
4 System Function Design
5 Haze Detector and Smart Bin Module Design
5.1 The Overall Hardware Circuit Schematic
5.2 WIFI Data Transmission Module
5.3 LJA30A3-15-Z/BX Metal Detection Module
6 Conclusion
References
A False-Target Jamming Method for the Phase Array Multibeam Radar Network
Abstract
1 Introduction
2 Analysis of False Target Interference
3 The Establishment of Interference Model
4 Simulation Experiment and Analysis
5 Conclusion
References
Analysis of TDOA Location Algorithm Based on Ultra-Wideband
Abstract
1 Introduction
2 TDOA Positioning Algorithm Description
2.1 Based on Chan Algorithm
2.2 Taylor Series Expansion Positioning Algorithm
3 Algorithm Analysis Comparison
4 Conclusion
Acknowledgements
References
Algorithm Design of Combined Gaussian Pulse
Abstract
1 Introduction
2 Combined Algorithm Design
2.1 Random Selection Algorithm
2.2 Random Selection Algorithm
3 Simulation Comparison Analysis
4 Conclusion
Acknowledgements
References
A Network Adapter for Computing Process Node in Decentralized Building Automation System
Abstract
1 Introduction
2 System Structure
3 System Design
3.1 Communication Protocol
3.2 Hardware Design
3.3 Software Design
4 Conclusion
Acknowledgements
References
Model Reference Adaptive Control Application in Optical Path Scanning Control System
Abstract
1 Introduction
2 Optical Path Scanning System Analysis
2.1 Composition of Optical Path Scanning System
2.2 State Space Model of the Controlled Object
3 Model Reference Adaptive Control Stability Analysis
4 Matlab Simulation and Results
5 Conclusions
Acknowledgements
References
UAV Path Planning Design Based on Deep Learning
Abstract
1 Introduction
2 Scheme Design
2.1 Source of Data
2.2 Neural Network Model
2.3 Training Parameters
3 Constraint Conditions of Flight
4 Expected Results
5 Conclusion
Acknowledgements
References
Research on Temperature and Infrared Characteristics of Space Target
Abstract
1 Introduction
2 Orbit External Thermal Flux of Space Target
3 Space Target Temperature and Infrared Mathematical Model
4 Analysis of Calculation Results
4.1 Calculation Model
4.2 Effect of β on Target Temperature and Infrared Radiation Characteristics
4.3 Influence of Earth Albedo on Temperature and Infrared Radiation Characteristics
5 Conclusion
References
A Multispectral Image Edge Detection Algorithm Based on Improved Canny Operator
Abstract
1 Introduction
2 Traditional Canny Edge Detection Algorithm
3 Image Acquisition Experiment
4 Improved Canny Edge Detection Algorithm
4.1 Laplacian and Sobel Operator Hybrid Enhancement Algorithm
4.2 5 × 5 Size Sobel Operator to Calculate the Gradient Amplitude Image
4.3 Non-maximum Suppression
4.4 Double Threshold Detection and Edge Connection
5 Analysis of Experimental Results
6 Conclusion
References
A Green and High Efficient Architecture for Ground Information Port with SDN
Abstract
1 Introduction
2 Background Information
2.1 Space-Ground Integrated Information Network
2.2 Ground Information Port
2.3 Remote Sensing Data Open Policy
2.4 SDN
3 An Overall SDN-Based Green and Efficient Architecture for Ground Information Port
4 Efficiency Analysis
4.1 Green Evaluation: Energy Consumption Reduction
4.2 High Efficiency: Efficiency Improvement
5 Ground Information Port Development Roadmap
6 Conclusions
Acknowledgements
References
Marked Watershed Algorithm Combined with Morphological Preprocessing Based Segmentation of Adherent Spores
Abstract
1 1 Introduction
2 2 Materials and Methods
2.1 Data Acquisition
2.2 The Watershed Algorithm Combined with Morphology Algorithm
2.2.1 The Brief Framework of the Proposed Method
2.2.2 The Concrete Description of the Proposed Method
3 3 Experimental Results and Analysis
3.1 Experimental Results
3.2 Result Analysis
4 4 Conclusion
Acknowledgements
References
Data Storage Method for Fast Retrieval in IoT
1 Introduction
2 Storage Method for Fast Retrieval
3 High-Frequency Queries Statistics
4 Experimental Results
5 Conclusion
References
Equivalence Checking Between System-Level Descriptions by Identifying Potential Cut-Points
1 Introduction
2 Preliminary
2.1 Symbolic Simulation
2.2 Program Slice
2.3 Program Dependence Diagram
3 Equivalence Checking Algorithm
3.1 Generation of Potential Cut-Points
3.2 Selection of Potential Cut-Points and Program Slicing
3.3 Symbolic Simulation
4 Experiment Results
5 Conclusion
References
An Improved Adversarial Neural Network Encryption Algorithm Against the Chosen-Cipher Text Attack (CCA)
Abstract
1 Introduction
2 Improved Adversarial Neural Network Encryption Algorithm Based on CCA (CCA-ANC)
2.1 Algorithm Principle
2.2 Model Structure
2.3 Adversarial Neural Network Architecture
2.4 Improvement of Network Structure and Loss Function Design
3 Model Experiment Simulation and Safety Analysis
3.1 Model Experiment Simulation
3.2 Model Safety Analysis
4 Conclusion
Acknowledgements
References
Hardware Implementation Based on Contact IC Card Scalar Multiplication
Abstract
1 Introduction
2 Scalar Multiplication Module
2.1 Scalar Multiplication Theory
2.2 Jacobi Projective Coordinate System
2.3 Montgomery Modular Multiplication
3 Design of Scalar Multiplication
3.1 Design of Modular Addition Module
3.2 Design of Scalar Multiplication Module
3.3 Simulation Results
4 Introduction to ISO7816 Communication Protocol
4.1 System Module Design
4.2 System Simulation of Communication Modules
4.3 Hardware and Software Co-simulation Screenshot
5 FPGA Validation
6 Conclusions
References
Tiered Spectrum Allocation for General Heterogeneous Cellular Networks
1 Introduction
2 System Model
3 Area Spectral Efficiency Optimization
3.1 Problem Formulation for Spectrum Partitioning
3.2 Spectrum Sharing
4 Conclusion
References
Human Action Recognition Algorithm Based on 3D DenseNet-BC
Abstract
1 Introduction
2 3D Densenet-BC Construction
2.1 3D-CNN
2.2 DenseNet-BC
2.3 3D DenseNet-BC
3 Experimental Results and Analysis
3.1 Data Sets
3.2 Experimental Environment Settings
3.3 Experimental Results and Analysis
4 Conclusion
References
Color Image Encryption Based on Principal Component Analysis
Abstract
1 Introduction
2 2D-Logistic Chaos System and PCA
2.1 2D-Logistic Chaos System
2.2 PCA (Principal Component Analysis)
3 The Scheme of Image Encryption and Decryption
3.1 Image Encryption Algorithm Structure
3.2 Encryption Result
4 Safety Analysis and Experimental Results
4.1 Key Space
4.2 Histogram Analysis
4.3 Sensitivity Analysis
5 Conclusion
References
Research on Transmitter of the Somatosensory Hand Gesture Recognition System
Abstract
1 1 Introduction
2 2 Overall Design of Hand Gesture Recognition System
3 3 Hand Gesture Recognition Attitude Algorithm Principle
3.1 Attitude Algorithm of the Six-Axis Sensor MPU6050
3.2 Transformation from Quaternion to Euler Angle
3.3 Complementary Filter Correction Algorithm
4 4 Data Acquisition Process
5 5 Functional Verification Experiments
6 6 Conclusion
References
Research on Image Retrieval Based on Wavelet Denoising in Visual Indoor Positioning Algorithm
Abstract
1 1 Introduction
1.1 Visual Positioning Technology
1.2 Image Denoising
2 2 Traditional Denoising Algorithm
2.1 Spatial Domain Filtering
2.2 Frequency Domain Filtering
3 3 Wavelet Denoising Algorithm
3.1 Modulus Maxima Algorithm
3.2 Correlated Denoising Algorithm
3.3 Wavelet Threshold Denoising Algorithm
3.4 Selection of Wavelet Basis in Wavelet Threshold Denoising Algorithm
3.5 Contour Wave Denoising
4 4 Algorithm Performance Analysis
5 5 Conclusion
References
Analysis of the Matching Pursuit Reconstruction Algorithm Based on Compression Sensing
Abstract
1 1 Signal Reconstruction
2 2 Matching Pursuit Algorithm and Its Improvement
3 3 Reconstruction Algorithm Performance Analysis
4 4 Conclusion
References
Super-Resolution Based and Topological Structure for Narrow Road Extraction from Remote Sensing Image
Abstract
1 1 Introduction
2 2 Background
3 3 Procedure of Narrow Road Extraction
3.1 Road Extraction
3.2 Remove Noise Points by Topological Structure
4 4 Experiment Results and Analysis
5 5 Conclusion
Acknowledgements
References
Evaluation on Learning Strategies for Multimodal Ground-Based Cloud Recognition
1 Introduction
2 Method
3 Experiments
3.1 Multimodal Ground-Based Cloud Dataset
3.2 Experiment Setup
3.3 Results and Analysis
4 Conclusion
References
SAR Load Comprehensive Testing Technology Based on Echo Simulator
Abstract
1 1 Introduction
2 2 SAR Load Echo Simulator Works
3 3 SAR Load Test Mode, Test Project and Test Method Design
3.1 SAR Load Test Mode
3.1.1 Planar Near Field Test Mode
3.1.2 Full Power Test Mode
3.1.3 SAR Load Mode Test
3.2 SAR Load Test Project
3.3 SAR Load Test Method
3.3.1 Power Interface Check
3.3.2 Remote Command Check
3.3.3 Telemetry Parameter Check
3.3.4 SAR Load Sub-system Power Test
3.3.5 SAR Load Sub-system Frequency Test
3.3.6 SAR Load Subsystem PRF Test
3.3.7 SAR Performance Index Test
3.3.8 Image Quality Inspection
4 4 SAR Load Sub-system Test Equipment
5 5 SAR Load Satellite Test Application
6 6 Conclusion
References
A New Traffic Priority Aware and Energy Efficient Protocol for WBANs
Abstract
1 1 Introduction
2 2 Background and Motivation
3 3 An Improved Proposed Protocol
4 4 Simulation
5 5 Conclusion
References
Design of Modulation and Demodulation System Based on Full Digital Phase-Locked Loop
Abstract
1 Introduction
2 The Design of FSK Modulation
3 The Design of Demodulation of Full Digital Phase-Locked Loop
3.1 Composition of the Full Digital Phase-Locked Loop
3.2 The Design of the Full Digital Phase-Locked Loop
3.3 The Design of FSK Demodulation
4 System Testing
5 Conclusion
References
Ethanol Gas Sensor Based on SnO2 Hierarchical Nanostructure
Abstract
1 Introduction
2 Experimental Part
3 Result and Discussion
4 Conclusion
References
Generative Model for Person Re-Identification: A Review
1 Introduction
2 Approach
2.1 Overview of GAN
2.2 Generating Unlabeled Samples
2.3 Style Transferring
2.4 Learning Features
3 Experiments
3.1 Database
3.2 Evaluation
4 Conclusion
References
Location Fingerprint Indoor Positioning Based on XGBoost
Abstract
1 Introduction
2 Location Fingerprint Indoor Positioning Based on XGBoost
2.1 The Principle of XGBoost Algorithms
2.2 Modeling of Localization Algorithm
2.3 Implementation of Fingerprint Positioning Based on XGBoost
3 Performance Analysis of Location Algorithms
4 Conclusion
Acknowledgements
References
An Information Hiding Algorithm for Iris Features
Abstract
1 Introduction
2 Secure Iris Feature Using Steganography
2.1 Hiding Phase
2.2 Extracting Phase
3 Experiment Results and Analysis
4 Conclusion
Acknowledgements
References
Thin Film Transistor of CZ-PT Applied to Sensor
Abstract
1 Introduction
2 Experiment
3 Results and Discussion
4 Conclusion
References
An Image Dehazing Algorithm Based on Single-Scale Retinex and Homomorphic Filtering
Abstract
1 Introduction
2 Retinex Theory Overview
2.1 Single-Scale Retinex Algorithm
2.2 Multi-scale Retinex Algorithm
2.3 Multi-scale MSRCR Algorithm with Color Recovery
3 Homomorphic Filtering
3.1 Homomorphic Filtering Principle
3.2 Improved Gaussian Homomorphic Filtering
4 Proposed Algorithm
4.1 Algorithm Principle
4.2 Objective Performance Indicators
4.3 Simulation Results
4.3.1 Processing Colorful Images
4.3.2 Processing Images with Uneven Illumination
4.3.3 Processing Images with Uniform Illumination
4.4 Objective Analysis
5 Conclusion
References
Survey of Gear Fault Feature Extraction Methods Based on Signal Processing
Abstract
1 Introduction
2 Mechanism of Gear Fault Diagnosis
3 Gear Fault Feature Extraction Method Based on Signal Processing
3.1 Short Time Fourier Transformation, STFT
3.2 Autoregressive Moving Average, ARMA
3.3 Cohen Type Distribution
3.4 Wavelet Transform, WT
3.5 Hilbert-Huang Transform
4 Comparison of Various Signal Processing Based Gear Fault Feature Extraction Methods
5 Conclusion
References
Hyperspectral Image Classification Based on Bidirectional Gated Recurrent Units
Abstract
1 Introduction
2 Background of Theory
2.1 Bidirectional Recurrent Neural Network
2.2 Gated Recurrent Units
3 Method Based on Bidirectional Gated Recurrent Units
4 Experimental Result and Discussion
4.1 Data Description
4.2 Comparison Based on Vector Classification Method
5 Conclusion
Acknowledgements
References
A Survey of Pedestrian Detection Based on Deep Learning
Abstract
1 Introduction
2 Related Research
3 Dataset
4 Detection Framework
4.1 Evaluation
5 Conclusion
Acknowledgements
References
Detection of Anomaly Signal with Low Power Spectrum Density Based on Power Information Entropy
1 Introduction
2 Detection of Anomaly Signal with Low Power Spectrum Density
2.1 Analysis on Information Content of Overlapped Signals
2.2 Anomaly Detection Model Based on OCSVM
3 Results and Analysis
3.1 Analysis of the Effect of Histogram Resolution
3.2 Analysis of the Effect of Classifier Gamma Parameter
3.3 Analysis of the Effect of Power Ratio of the DSSS Signal to the Noise
4 Conclusion
References
A Hybrid Multiple Access Scheme in Wireless Powered Communication Systems
1 Introduction
2 System Model
3 Problem Formulation
4 Simulation Results and Discussions
5 Conclusion
References
Gas Sensing Properties of Molecular Sieve Modified 3DIO ZnO to Ethanol
Abstract
1 Introduction
2 Experimental Section
2.1 Fabrication of ZnO Films
2.2 Characterization
2.3 Fabrication and Measurement of Gas Sensing Properties
3 Results and Discussion
3.1 Morphological and Structural Characteristics
3.2 Working Temperature and Selectivity of Gas Sensor
4 Conclusion
Acknowledgements
References
FiberEUse: A Funded Project Towards the Reuse of the End-of-Life Fiber Reinforced Composites with Nondestructive Inspection
Abstract
1 Introduction
2 Hyperspectral Imaging and Data Acquisition
3 Inspection of Metal Corrosion
4 Inspection of Erosion on Wind Turbine Blade
5 Conclusion
Acknowledgements
References
Autonomous Mission Planning and Scheduling Strategy for Data Transmission of Deep-Space Missions
1 Introduction
2 Planning Request of Data Transmission Task
3 Problem Modeling
3.1 Link Establishment (Model 1)
3.2 Antenna Pointing Control (Model 2)
3.3 Variable Rate Transmission (Model 3)
3.4 Storage Scheduling (Model 4)
4 Logical Relations and Connections of Planning Models
5 Conclusion
References
Preparation of TiO2 Nanotube Array Photoanode and Its Application in Three-Dimensional DSSC
Abstract
1 Experimental Part
2 Results and Discussion
2.1 Effect of Preparation Parameters on Morphology of TiO2 Nanotubes
2.1.1 Ammonium Fluoride Content in Electrolyte
2.1.2 Water Content in the Electrolyte
2.1.3 Oxidation Voltage
2.1.4 Oxidation Duration
2.2 Annealing
2.3 Solar Cell Assembly and Performance Test
3 Conclusion
References
Block-Based Data Security Storage Scheme
Abstract
1 Introduction
2 Formation of Lightweight Blocks
2.1 Block Data Structure
2.2 Data Structure of Singly Linked List
2.3 Light Blockchain Formation
3 Hash Salt Encryption Based on Large Prime Numbers
3.1 Large Prime Generation
3.2 Distribution of Prime Numbers
4 The Realization of the Fourth Block Blockchain Security Storage Scheme
4.1 Process Implementation
4.2 Algorithm Implementation
5 Program Testing and Analysis
5.1 Rainbow Table Attack Principle
5.2 Test and Analysis
6 Conclusion
Acknowledgements
References
Chaos Synchronization and Voice Encryption of Discretized Hyperchaotic Chen Based on Euler Algorithm
Abstract
1 Introduction
2 Discretized Hyperchaotic Chen System and Synchronization
2.1 Discretized Hyperchaotic Chen System Based on Euler Method
2.2 Nonlinear Feedback Synchronization
3 Voice Encryption
4 Conclusion
Acknowledgements
References
Multiple UAV Assisted Cellular Network: Localization and Access Strategy
1 Introduction
2 System Model
3 Multi-UAV Localization Technique
3.1 Dynamic State-Space Model
3.2 Predictor
3.3 Corrector
4 Access Strategy
5 Numerical Simulation Result
6 Conclusion
References
WiFi Location Fingerprint Indoor Positioning Method Based on WKNN
Abstract
1 Introduction
2 Location Fingerprint Location Method
2.1 WiFi Location Fingerprint Positioning Implementation Principle
2.2 Signal Strength Measurement
2.3 WKNN Matching Algorithm
3 Simulation Result Analysis
4 Conclusion
Acknowledgements
References
The Digital Design and Verification of Overall Power System for Spacecraft
Abstract
1 Introduction
2 Electric Overall Digital Design
2.1 Design Ideas
2.2 Model-Based Electrical Overall Digitalization Scheme
3 Verification Example
4 Innovation Points
5 Conclusion
References
The Analysis and Practice of Backup Spacecraft Tele Command Based on Chang’E-4
Abstract
1 Introduction
2 Mission Simulation
3 Design Analysis
4 Experimentation in Lab
5 Practice Onboard
5.1 Use the Shelter
5.2 Reduce Power
5.3 Mode Switch Onboard
5.4 Summary
6 Conclusion
References
A Modified Hough Transform TBD Method for Radar Weak Targets Using Plot’s Quality
Abstract
1 Introduction
2 General Description of the Proposed Method
3 The Definition and Calculation of Radar Plot Quality
3.1 The Definition of Radar Plot Quality
3.2 The Calculation Algorithm of Plot Quality
3.2.1 The Calculation of q_{EP}
3.2.2 The Calculation of q_{SNR}
3.2.3 The Calculation of q_{RA}
3.2.4 The Calculation of q_{D}
4 Simulation and Results Analysis
4.1 The Generation of K-Distributed Sea Clutter
4.2 The Validation of PQ Calculation Algorithms
4.3 The Performance of the PQ-HT TBD Method
5 Conclusion
References
Analysis of the Effects of Climate Teleconnections on Precipitation in the Tianshan Mountains Using Time-Frequency Methods
Abstract
1 1 Introduction
2 2 Method
2.1 EEMD Decomposition
2.2 Wavelet Coherence Analysis
3 3 Study Area and Data
3.1 EEMD Decomposition
3.2 Wavelet Coherence Between Precipitation and Climate Indices on the Northern Slope
3.3 Wavelet Coherence Between Precipitation and Climate Indices on the Southern Slope
4 4 Conclusions
Acknowledgement
References
An Improved Cyclic Spectral Algorithm Based on Compressed Sensing
Abstract
1 Introduction
2 Cyclic Spectrum Algorithm
3 An Improved Cyclic Spectrum Algorithm Based on CS Theory
3.1 The Theory of Compressive Sensing
3.2 An Improved Cyclic Spectrum Algorithm Based on CS Theory
4 Simulations
5 Conclusion
References
Video Deblocking for KMV-Cast Transmission Based on CNN Filtering
Abstract
1 1 Introduction
2 2 Related Work
2.1 CNN in Image Reconstruction
2.2 KMV-Cast
3 3 Proposed Method
3.1 Instance Normalization
3.2 L2 Regularization
4 4 Experiments
5 5 Conclusion
References
Improved YOLO Algorithm for Object Detection in Traffic Video
Abstract
1 1 Introduction
2 2 Data Acquisition and Preprocessing
3 3 Algorithm Implementation and Improvement
3.1 YOLOv3 Algorithm
3.2 Improvement of YOLOv3 Algorithm
3.2.1 Anchor Reselection
3.2.2 Focal Loss
4 4 Analysis of Experimental Results
5 5 Conclusion
References
Task Allocation for Multi-target ISAR Imaging in Bi-Static Radar Network
Abstract
1 1 Introduction
2 2 Bi-Static ISAR Signal Model
3 3 Task Allocation Optimization Model
4 4 Experiments
5 5 Conclusion
Acknowledgements
References
A New Tracking Algorithm for Maneuvering Targets
Abstract
1 Introduction
2 Kalman Filter Algorithm
3 Improved Algorithm Based on Law of Large Numbers
4 Simulation and Algorithm Analysis
4.1 Simulation and Analysis of Covariance Matrix Estimation Algorithms
4.2 Simulation and Analysis of Maneuvering Target Tracking
4.3 Analysis of Algorithms Under Different Noise Levels
5 Conclusion
References
Research on an Improved SVM Training Algorithm
Abstract
1 1 Introduction
2 2 Joint SVM
2.1 Joint Learning
2.2 Output Core
2.3 Optimal Solution
3 3 Output Kernel Learning
3.1 Linear Output Kernel
3.2 Odds Ratio Output Kernel
4 4 Simulation
5 5 Conclusion
Acknowledgements
References
Modeling for Coastal Communications Based on Cellular Networks
1 Introduction
2 System Model
3 Distribution of Cellular Link Distance
4 Coverage and Handover of Coastal Networks
5 Simulation Results and Discussions
6 Conclusion
References
Research of Space Power System MPPT Topology and Algorithm
Abstract
1 1 Introduction
2 2 Space MPPT Power System Design
2.1 S3MPR Circuit Topology
2.2 Conductivity Incremental Method Optimization Mechanism
3 3 Simulation and Experiment
3.1 Simulation Analysis
3.2 Test Verification
4 4 Conclusion
References
Far-Field Sources Localization Based on Fourth-Order Cumulants Matrix Reconstruction
Abstract
1 Introduction
2 Data Model
3 The Proposed Algorithms
3.1 The TFOC-OPRM Algorithm
3.2 Complexity Analysis
4 Simulation Results
5 Conclusions
Acknowledgements
References
ONENET-Based Greenhouse Remote Monitoring and Control System for Greenhouse Environment
Abstract
1 Introduction
2 System Overall Design
3 System Hardware Design
4 System Software Design
5 Conclusion
Acknowledgements
References
Design of Multi-Node Wireless Networking System on Lunar
Abstract
1 1 Introduction
2 2 Analysis of Wireless Communication Protocol
3 3 Design of Wireless Networking
4 4 Design of Protocol Architecture
5 5 Conclusion
References
Algorithm Improvement of Pedestrians’ Red-Light Running Snapshot System Based on Image Recognition
Abstract
1 Introduction
2 Algorithmic Improvement Analysis
3 Pedestrian Tracking and Snapping Process Design
4 Pedestrian Tracking Algorithm
5 Face Image Quality Discrimination Algorithms
6 System Testing
7 Conclusion
References
A Datacube Reconstruction Method for Snapshot Image Mapping Spectrometer
Abstract
1 Introduction
2 General Principle of IMS
3 Geometric Model of IMS
4 Imaging and Reconstruction Simulations
5 Conclusion
References
LFMCW Radar DP-TBD for Power Line Target Detection
Abstract
1 1 Introduction
2 2 LFMCW Radar Theory
3 3 LFMCW Radar DP-TBD Algorithm Simulation
3.1 Target Motion Model
3.2 Target Measurement Model
3.3 DP-TBD Algorithm Simulation
4 4 LFMCW Radar DP-TBD Algorithm Verification
5 5 Conclusion
References
Review of ML Method, LVD and PCFCRD and Future Research for Noisy Multicomponent LFM Signals Analysis
Abstract
1 Introduction
2 Review of the ML Method, LVD and PCFCRD
3 Comparisons Based on Theoretical Analyses
3.1 Cross Term
3.2 Computational Cost
3.3 Resolution and PSL
3.4 Anti-noise Performance
4 Simulations and Some Discussions
5 Conclusion
References
Research on Vision-Based RSSI Path Loss Compensation Algorithm
Abstract
1 1 Introduction
2 2 Indoor Crowded Scene Algorithm Based on RSSI Model
2.1 The Impact of the Human Body
2.2 Individual Quantity Detection
2.3 Consider a New Indoor Transmission Model of the Human Body
3 3 Testing and Evaluation
4 4 Conclusion
Acknowledgements
References
Efficient Energy Power Allocation for Forecasted Channel Based on Transfer Entropy
Abstract
1 Introduction
2 Granger Causality Test
3 Channel Forecasting Based on Transfer Entropy
4 IWF Algorithm
5 Simulations and Analysis
6 Conclusion
Acknowledgements
References
A Modular Indoor Air Quality Monitoring System Based on Internet of Thing
Abstract
1 Introduction
2 IoT Structure and Sensor Selection
3 Platform Software Development
4 Experimental Results and Analysis
5 Conclusion
References
Performance Analysis for Beamspace MIMO-NOMA System
1 Introduction
2 System Model
2.1 Channel Model
2.2 Beamspace MIMO-NOMA System Model
3 Relevant Algorithms and Methods
3.1 Beam Selection Algorithm
3.2 Clustering Method
3.3 Power Allocation of Intra-cluster and Inter-cluster
3.4 Precoding Matrix
4 Simulation Results
4.1 System with Various Beam Selection Algorithms and Amplitude-Clustering
4.2 System with Various Beam Selection Algorithms and Correlation-Clustering
5 Conclusions
References
A Novel Low-Complexity Joint Range-Azimuth Estimator for Short-Range FMCW Radar System
Abstract
1 1 Introduction
2 2 FMCW Radar Signal Model
3 3 Proposed Algorithm
4 4 Experimental Results and Analysis
5 5 Conclusion
References
Comparative Simulation for Nonlinear Effect of Hybrid Optical Fiber-Links in High-Speed WDM Systems
1 Introduction
2 System Model
3 Problem Description and Analysis
4 Simulation and Analysis
5 Conclusion
References
POI Recommendation Based on Heterogeneous Network
1 Introduction
2 Representation Learning Model Based on Heterogeneous Network
3 Recommendation Framework Based on Deep Neural Network
4 Experiments
5 Conclusion
References
A Survey on Named Entity Recognition
Abstract
1 Introduction
2 Rule-Based and Dictionary-Based Methods
3 Statistical Learning Based Method
3.1 HMM
3.2 CRF
4 Hybrid Method
5 Deep Learning Based Approach
6 Latest Method
6.1 Attention Mechanism
6.2 Transfer Learning
6.3 Semi-supervised Learning
7 Summary and Outlook
Acknowledgements
References
A Hybrid TWDM-RoF Transmission System Based on a Sub-Central Station
Abstract
1 1 Introduction
2 2 Architecture of the TWDM-RoF System
3 3 Simulation and Results
4 4 Conclusions
Acknowledgements
References
Optimal Subcarrier Allocation for Maximizing Energy Efficiency in AF Relay Systems
Abstract
1 Introduction
2 System Model and Problem Formulation
2.1 System Model
2.2 Problem Formulation
3 Optimal Solution
4 Simulation Results
Acknowledgements
References
A Study on D2D Communication Based on NOMA Technology
Abstract
1 Introduction
2 System Model
3 Problem Description
4 Resource and Power Allocation Algorithm
5 Simulation and Analysis
6 Conclusion
Acknowledgements
References
Research on Deception Jamming Methods of Radar Netting
Abstract
1 1 Introduction
2 2 Radar Netting Modeling
3 3 Track Jamming of Netted Radar
3.1 The Parameter Settings on Track’s Deception Jamming
3.1.1 Interference Power
3.1.2 Time Delay on Distance
3.1.3 Time Delay on Bearing
3.1.4 The Doppler Frequency Shift
3.1.5 The Jamming Signal Form
3.2 The Track Jamming of Netted Radar
3.2.1 Algorithm Analysis
3.2.2 The Simulation Experiment
3.2.3 The Error Analysis
4 4 Conclusion
Acknowledgements
References
Cluster Feed Beam Synthesis Network Calibration
Abstract
1 1 Introduction
2 2 Beam Synthesis Network Calibration
2.1 Point Frequency Amplitude-Phase Detection
2.2 Code Division Amplitude-Phase Detection
3 3 Amplitude-Phase Detection of Ground-Based Beamforming Network
4 4 Simulation Analysis
5 5 Conclusions
References
Design and Optimization of Cluster Feed Reflector Antenna
Abstract
1 1 Introduction
2 2 Reflector Beam Optimization of Cluster Feed
2.1 Envelope Method for Optimizing Object Modeling
2.2 Optimum Design of Target Beam
3 3 Beam Optimization by Improved Genetic Algorithm
3.1 Coding
3.2 Choice
3.3 Crossing
3.4 Variation
3.5 Treatment of Constraints
4 4 Simulation Analysis
4.1 Test Results of Reflector Principle Prototype
4.2 Optimum Design and Simulation
5 5 Conclusions
References
Cognitive Simultaneous Wireless Information and Power Transfer Based on Decode-and-Forward Relay
Abstract
1 Introduction
2 System Model
3 Problem Constraint Model
4 Simulation Result
5 Conclusion
Acknowledgements
References
A Deep-Learning-Based Distributed Compressive Sensing in UWB Soil Signals
1 Introduction
2 LSTM-DCS Design
2.1 Traditional Recovery Methods for the DCS
2.2 Proposed LSTM-DCS
3 Experimental Results and Discussion
3.1 UWB Soil Echo Signals
3.2 Analysis and Comparison of Different Methods
3.3 Stability Analysis of the Proposed Methods
4 Conclusion and Future Work
References
An Improved McEliece Cryptosystem Based on QC-LDPC Codes
Abstract
1 1 Introduction
2 2 An Improved McEliece Cryptosystem
2.1 Key Generation
2.2 Algorithm
3 3 Simulation Results and Analysis
4 4 Security Analysis
5 5 Conclusion
Acknowledgements
References
Research on Multi-carrier System Based on Index Modulation
Abstract
1 1 Introduction
2 2 Design and Analysis of OFDM-IM Model
3 3 Comparative Analysis of Simulation
4 4 Concluding Remarks
References
A GEO Satellite Positioning Testbed Based on Labview and STK
Abstract
1 1 Introduction
2 2 CON Module and Serial Port Transmission
2.1 STK CON Module
2.2 Serial Transmission
3 3 Design and Implementation of Simulation Verification System
4 4 Conclusion
Acknowledgements
References
SVR Based Nonlinear PA Equalization in MIMO System with Rayleigh Channel
1 Introduction
2 MIMO System Model with Nonlinear Channel
3 SVR Based Nonlinear PA Distortion Equalizer
4 Simulation Results
5 Conclusions
References
LoS-MIMO Channel Capacity of Distributed GEO Satellites Communication System Over the Sea
Abstract
1 Introduction
2 System Model
3 Maximum MIMO Capacity
4 The Arrangement of the Antenna Position
5 Conclusions
Acknowledgements
References
Design and Implementation of Flight Data Processing Software for Global Flight Tracking System Based on Stored Procedure
Abstract
1 Introduction
2 System Construction and Operational Procedure
3 Software Architecture and Interface Design
4 Software Implementation Based on Stored Procedure
5 Performance Test Based on Stored Procedure
6 Concluding Remarks
Acknowledgements
References
Face Recognition Method Based on Convolutional Neural Network
Abstract
1 Introduction
2 Convolutional Neural Network
2.1 Network Model Based on ResNet Design
2.2 Experimental Results and Analysis
3 Summary
References
An On-Line ASAP Scheduling Method for Time-Triggered Messages
Abstract
1 1 Introduction
2 2 Scheduling Models
3 3 Path Selection for Easier Scheduling
4 4 Time Slots Allocation
5 5 Request and Acknowledge Protocols to Free Conflict
6 6 Conclusions
References
Soil pH Value Prediction Using UWB Radar Echoes Based on XGBoost
1 Introduction
2 Data Preprocessing
3 Soil pH Value Prediction Using XGBoost Algorithm
3.1 XGBoost in Soil pH Value Prediction
3.2 Algorithm Simulation and Results Analysis
4 Conclusion
References
A Novel Joint Resource Allocation Algorithm in 5G Heterogeneous Integrated Networks
Abstract
1 Introduction
2 System Model
3 The Proposed Algorithm
4 The Simulations Results
5 Conclusion
Acknowledgements
References
A Vehicle Positioning Algorithm Based on Single Base Station in the Vehicle Ad Hoc Networks
Abstract
1 1 Introduction
2 2 System Model
3 3 The Proposed Vehicle Positioning Algorithm Based on Single Base Station
3.1 Measuring Distance Based on Power Loss Model
3.2 Estimating the Angle of Vehicle
3.3 Vehicle Positioning Algorithm Based on Power Loss Prediction Model and ESPRIT
3.3.1 Location of Uniform Speed Vehicle
3.3.2 Location of Variable Speed Vehicle
4 4 The Simulations Results
4.1 Simulation Parameters
4.2 Simulation Results
4.2.1 Vehicle Distance Simulation Results
4.2.2 Vehicle Angle Simulation Results
4.2.3 Vehicle Angular Distance Integrated Location
5 5 Conclusion
Acknowledgements
References
Heterogeneous Wireless Network Resource Allocation Based on Stackelberg Game
Abstract
1 1 Introduction
2 2 Stackelberg Game with Multi-master and Multi-slave
3 3 Resource Allocation Based on Stackelberg Game
4 4 Nash Equilibrium Point of User Non-cooperative Game
5 5 Simulation Results
6 6 Conclusions
Acknowledgements
References
On the Performance of Multiuser Dual-Hop Satellite Relaying
Abstract
1 Introduction
2 System Model
3 Outage Performance Analysis
3.1 Exact Outage Probability
3.2 Asymptotic Outage Probability
4 Numerical Results
5 Conclusion
References
Architectures and Key Technical Challenges for Space-Terrestrial Heterogeneous Networks
Abstract
1 Introduction
2 Architecture of Space-Terrestrial Heterogeneous Networks
3 Protocols Architectures
4 Integrated Routing of Space-Terrestrial Heterogeneous Networks
5 Conclusions
Acknowledgements
References
Design and Implementation of the Coarse and Fine Data Fusion Based on Round Inductosyn
Abstract
1 Introduction
2 Principle of Inductosyn
3 Analysis of Data Fusion Algorithms
3.1 Simple Table Look-up Algorithm
3.2 Improved Table Look-Up Algorithm
4 Implementation of Data Fusion and Thermal Test
5 Conclusions
References
Lossless Flow Control for Space Networks
1 Introduction
2 Related Works
2.1 Space Network
2.2 Flow Control
3 Motivation
4 Lossless Flow Control
5 Discussion
5.1 Implementation Details
5.2 Power Saving
6 Conclusion
References
Heterogeneous Network Selection Algorithm Based on Deep Q Learning
1 Introduction
2 System Model
3 Markov Process of Network Selection
3.1 Definition of NSMDP
3.2 Reward Function
4 Network Selection Algorithms Based on DQN
4.1 Deep Q Network for NSMDP
5 Simulation Results
6 Conclusion
References
Vertical Handover Algorithm Based on KL-TOPSIS in Heterogeneous Private Networks
1 Introduction
2 Heterogeneous Network System Model
3 Algorithm Description
3.1 Subjective Weight Calculation Based on AHP
3.2 Objective Weight Calculation Based on Entropy Method
3.3 Candidate Network Sorting Based on KL-TOPSIS
4 Simulation Results
5 Conclusion
References
A Deep Deformable Convolutional Method for Age-Invariant Face Recognition
1 Introduction
2 Related Work
3 Proposed Method
4 Experiments
4.1 Experiment on CACD Datasets
4.2 Experiment on FGNET Datasets
4.3 Experiment on LFW Datasets
5 Conclusion
References
Weight Determination Method Based on TFN and RST in Vertical Handover of Heterogeneous Networks
1 Introduction
2 Related Work
3 Subjective Weight Based on TFN
4 Objective Weight Based on RST
5 Comprehensive Weight
6 Simulation and Numerical Analysis
7 Conclusion
References
Deep Learning-Based Device-Free Localization Using ZigBee
Abstract
1 Introduction
2 Proposed Localization Model
3 Experimental Setup, Results and Analyses
4 Conclusions
Acknowledgements
References
A Modified Genetic Fuzzy Tree Based Moving Strategy for Nodes with Different Sensing Range in Heterogeneous WSN
1 Introduction
2 The GFT Moving Strategy and Moving Model
2.1 Moving Model of the Heterogeneous WSN
3 Simulations and Analysis
4 Conclusion
References
Wireless Indoor Positioning Algorithm Based on RSS and CSI Feature Fusion
1 Introduction
2 Related Work
2.1 Positioning Algorithm Based on RSS
2.2 Positioning Algorithm Based on CSI
3 Positioning Algorithm Based on RSS and CSI Feature Fusion
3.1 Off-Line Phase
3.2 Online Phase
4 Experiment Analysis
5 Conclusion
References
Design and Verification of On-Board Computer Based on S698PM and Time-Triggered Ethernet
Abstract
1 Introduction
2 Introduction of the S698PM Processor and Time-Triggered Ethernet
3 Overall Design of the On-Board Computer
3.1 On-Board Computer Internal Bus Selection
3.2 Composition of the On-Board Computer
4 Design Verification
5 Conclusion
References
An Optimal Deployment Strategy for Radars and Infrared Sensors in Target Tracking
1 Introduction
2 Sensor Deployment Model in Target Tracking
2.1 Basic Description
2.2 Data Fusion
2.3 Scoring System
3 Dimensional-Reduced PSO Algorithm
3.1 Drawbacks of Classical PSO
3.2 Nonlinear Inertia Weight Model
3.3 Dimensional Reduction for Particles
4 Simulation Results
5 Conclusions and Future Work
References
Integrity Design of Spaceborne TTEthernet with Cut-Through Switching Network
Abstract
1 Introduction
2 Safety Design of Cut-Through Switching
2.1 A Brief Introduction to TTEthernet
2.2 CRC Fast Verification Method in Cut-Through Forwarding Mode
3 Network Planning Verification by SMT
4 Conclusion and Future Work
References
Image Mosaic Algorithm Based on SURF
Abstract
1 Image Mosaic Technology
2 SURF Feature Detection Operator
2.1 Computational Integral Images
2.2 Scale Space Construction and Feature Detection
2.3 Determination of the Main Direction and Description Sub-calculation
3 Image Mosaic Algorithms Based on SURF
4 Conclusion
Acknowledgements
References
Research on Dynamic Performance of DVR Based on Dual Loop Vector Decoupling Control Strategy
Abstract
1 DVR Control Strategy and Mathematical Model
2 Control Design of DVR
3 Simulation Verification
4 Conclusion
Acknowledgements
References
Facial Micro-expression Recognition with Adaptive Video Motion Magnification
1 Introduction
2 Related Works
2.1 Traditional Features and Classifiers
2.2 Deep Neural Networks
3 Method
3.1 Adaptive Video Motion Magnification
3.2 CNN Architecture
4 Experiment
4.1 Databases and Preprocessing
4.2 Adaptive Video Motion Magnification
4.3 Performance on CASMEII
5 Conclusion
References
Computation Task Offloading for Minimizing Energy Consumption with Mobile Edge Computing
1 Introduction
2 System Model and Problem Formulation
3 Efficient Computation Task Offloading Algorithm
4 Numerical Results
5 Conclusion
References
Soil pH and Humidity Classification Based on GRU-RNN Via UWB Radar Echoes
1 Introduction
2 Field Experiment
3 Soil Classification
3.1 Data Preprocessing
3.2 GRU Algorithm for Soil Classification
4 Simulation and Analysis of Soil Classification
5 Conclusion
References
Bit Error Rate Analysis of Space-to-Ground Optical Link Under the Influence of Atmospheric Turbulence
Abstract
1 1 Introduction
2 2 Analysis of Influence Factors
2.1 Influence of the Absorption and Scattering
2.2 Influence of the Background Light
2.3 Influence of the Atmospheric Turbulence
3 3 Analysis of Incoherent Space-to-Ground Optical Link
3.1 Probability Density of the Received Light Intensity
3.2 BER Analysis Under the Influence of Atmospheric Turbulence
3.3 Comprehensive Model
4 4 Conclusion
Acknowledgements
References
Performance Analysis of Amplify-and-Forward Satellite Relaying System with Rain Attenuation
Abstract
1 Introduction
2 System and Channel Models
2.1 System Model
2.2 Channel Models
3 Outage Performance Analysis
4 Numerical Results
5 Conclusions
References
Threat-Based Sensor Management For Multi-target Tracking
1 Introduction
2 Threat-Based Sensor Management Method
2.1 Target Threat Model
2.2 Sensor Management Model
3 Numerical Studies
3.1 System Setup
3.2 Simulation Results
4 Conclusion
References
Research on Measurement Matrix Based on Compressed Sensing Theory
Abstract
1 1 Coherence Condition
2 2 Common Measurement Matrices
3 3 Analysis of Common Measurement Matrix Performance
4 4 Improved Bernoulli Matrix
5 5 Conclusion
References
PID Control of Electron Beam Evaporation System Based on Improved Genetic Algorithm
Abstract
1 1 Introduction
2 2 System Mathematical Model
3 3 Tuning of Common PID Parameter Optimization Methods
4 4 Genetic Algorithms to Optimize the Setting of PID Parameters
5 5 Conclusion
References
Doppler Weather Radar Network Joint Observation and Reflectivity Data Mosaic
Abstract
1 Introduction
2 Smoothing Strategy
3 Verification
4 Conclusion
References
Numerical Calculation of Combustion Characteristics in Diesel Engine
Abstract
1 1 Introduction
2 2 Diesel Engine In-Cylinder Spray and Combustion Model
2.1 Spray Mixing Process Gas Flow Turbulence Model
2.2 Breakup Model
2.3 Spray-Wall Model
2.4 Evaporation Model
2.5 Turbulent Combustion Model
3 3 Diesel Engine Cylinder Working Process Modeling
3.1 Computational Area Meshing
3.2 Calculate Boundary Conditions
4 4 Analysis of Calculation Results
4.1 Diesel Engine Performance
4.2 Combustion Characteristic
5 5 Summary
References
A NOMA Power Allocation Strategy Based on Genetic Algorithm
Abstract
1 1 Introduction
2 2 System Model
3 3 Power Allocation Algorithm
4 4 Simulation and Analysis
5 5 Conclusions
Acknowledgements
References
AUG-BERT: An Efficient Data Augmentation Algorithm for Text Classification
1 Introduction
2 Related Work
3 Data Augmentation
3.1 Masked Language Model
3.2 Aug-BERT
4 Experiments
4.1 Metrics and Implementation Details
4.2 Datasets
4.3 Baselines
4.4 Results
5 Conclusion
References
Coverage Performance Analysis for Visible Light Communication Network
1 Introduction
2 System Model
3 QoE Probability Coverage Model
4 Simulation Results
5 Conclusion
References
An Intelligent Garbage Bin Based on NB-IoT
Abstract
1 1 Introduction
2 2 System Overall Design
3 3 Hardware Design
3.1 Infrared Sensor
3.2 Ultrasonic Sensor
3.3 Intelligent Mobile and Obstacle Avoidance
3.4 Motor Drive
3.5 Temperature and Humidity Sensor
4 4 Software Design
4.1 Software Design of Main Controller
4.2 NB-IoT Module Workflow
4.3 ONENET Platform
5 5 Conclusion
Acknowledgements
References
Research on X-Ray Digital Image Defect Detection of Wire Crimp
Abstract
1 1 Background
2 2 Measurement of Typical Characteristic Parameters
2.1 Resistance Clamp Detection
2.2 Detection of Steel Core in the Connecting Pipe
2.3 Measurement of Other Characteristic Parameters
3 3 Chroma and Contrast Adjustment
4 4 Conclusion
Acknowledgements
References
Architecture and Key Technology Challenges of Future Space-Based Networks
Abstract
1 1 Introduction
2 2 Architecture of Space-Based Networks
3 3 Physical Structure
3.1 GEO Function Nodes
3.2 MEO/LEO Function Nodes
4 4 Logical Structure
4.1 Virtual Nodes
4.2 Virtual Networks
5 5 Architecture of Space-Based Network Technology
5.1 Resource Layer
5.2 Service Layer
5.3 Application Layer
5.4 Operation and Maintenance Control Domain
5.5 Security Protection Domain
6 6 Conclusions
Acknowledgements
References
Filter Bank Design for Subband Adaptive Microphone Arrays
Abstract
1 1 Introduction
2 2 Theory
2.1 Complex (DFT) Modulated Filter Banks
3 3 Implementation
3.1 Analysis of the DFT Filter Banks
3.2 Synthesis of the DFT Filter Banks
3.3 Some Programs About Different Functions of the Complex (DFT) Modulated Filter Banks
3.4 Test of the Program
4 4 Application
4.1 Filter Bank Design for Subband Adaptive Microphone Arrays
4.2 Synthesis Filter Bank Design
5 5 Conclusion
References
Communication System Based on DFT Spread Spectrum Technology to Reduce the Peak Average Power Ratio of CO-OFDM System
Abstract
1 1 Introduction
2 2 System Theoretical Analysis
2.1 Principle of CO-OFDM System
2.2 Analysis of PAPR
2.2.1 DFT-Spread-OFDM
2.2.2 Computation Complexity
3 3 System Simulation
4 4 Simulation Result
4.1 PAPR Under Different Algorithms
4.2 System Analysis Without Channel
5 5 Conclusion
Acknowledgements
References
Low-Complexity Channel Estimation Method Based on ISSOR-PCG for Massive MIMO Systems
1 Introduction
2 System Model
3 The Proposed Low-Complexity ISSOR-PCG Method
3.1 Conventional MMSE Channel Estimator
3.2 Proposed ISSOR-PCG Channel Estimation
3.3 Relaxation Parameter and Complexity Analysis
4 Simulation Results
5 Conclusion
References
Ship Classification Methods for Sentinel-1 SAR Images
Abstract
1 1 Introduction
2 2 Multi-feature Extraction of Ships for SAR
3 3 Deep CNN Networks for SAR Ship Classification
4 4 Experimental Results and Analysis
4.1 OpenSARShip Dataset
4.2 Feature Extraction and SVM Based Classification
4.3 Ship Classifier Based on Modified LeNet
5 5 Conclusions
References
Wheat Growth Assessment for Satellite Remote Sensing Enabled Precision Agriculture
Abstract
1 1 Introduction
2 2 Related Work
2.1 BP Neural Network
2.2 MR Algorithm
3 3 Proposed Framework
3.1 Data Preparation
3.2 Designed Method
4 4 Results
5 5 Conclusion
Acknowledgements
References
An Improved ToA Ranging Scheme for Localization in Underwater Acoustic Sensor Networks
Abstract
1 Introduction
2 Design Challenges
3 Description of the Improved ToA Ranging Scheme
4 Experimental Results
5 Conclusion
Acknowledgements
References
Performance Analysis of Three-Layered Satellite Network Based on Stochastic Network Calculus
1 Introduction
2 System Model
3 System Model Analysis
3.1 Finite-State Markov Channel Model (FSMC)
3.2 Two-State G-E Model
3.3 Three-Layered Satellite Network Service Curve
4 Numerical Analysis
5 Conclusion
References
Robust Sensor Geometry Design in Sky-Wave Time-Difference-of-Arrival Localization Systems
1 Introduction
2 Basic Fundamentals
2.1 Signal Model
2.2 Cramer-Rao Bound Without Ionosphere-Layer Height Errors
2.3 Problem of Designing Sensor Geometries
3 Design of Grouped Sensor Geometry
3.1 Proposition of Grouped Sensor Geometry Scheme
3.2 Model and Analysis
4 Cramer-Rao Band of Grouped Sensor Geometry Scheme
5 Simulation Results
6 Conclusion
References
A NOMA Power Allocation Method Based on Greedy Algorithm
Abstract
1 Introduction
2 System Model
3 Power Allocation Algorithm
3.1 Inter-carrier Power Allocation
3.2 Intra-carrier Power Allocation
4 Simulation and Analysis
5 Conclusion
Acknowledgements
References
Multi-sensor Data Fusion Using Adaptive Kalman Filter
1 Introduction
2 Attitude Algorithm Based on Quaternion
2.1 Quaternion Representation of Attitude Angle
2.2 Updating Equation of Attitude Quaternion
3 Data Fusion Based on Multi-sensor
3.1 Multi-sensor Measurement
3.2 Adaptive Kalman Filter
4 Experimental Verification and Analysis
5 Conclusions
References
Feasibility Study of Optical Synthetic Aperture System Based on Small Satellite Formation
Abstract
1 Introduction
2 Present Research
3 Small Satellite Formation Schemes
3.1 Single Satellite Unfolding Structure Scheme
3.2 Multi-satellite Formation Network Scheme
3.3 Multi-satellite Intersection Docking Scheme
3.4 Synthetic Aperture Light Field Imaging Scheme
4 Technology Challenges
4.1 Load Deployment
4.2 Satellite Formation
4.3 Sparse Aperture Optical Machine Technology
4.4 Image Restoration and Reconstruction Processing
5 Conclusions
Acknowledgements
References
A New Coded-Modulated Pulse Train for Continuous Active Sonar
Abstract
1 Introduction
2 GSFM-Costas Pulse Train Model
2.1 The GSFM Waveforms
2.2 The Costas Sequence
2.3 The GSFM-Costas Pulse Train
3 Simulation Result
4 Conclusion
Acknowledgements
References
Region Based Hierarchical Modelling for Effective Shadow Removal in Natural Images
Abstract
1 1 Introduction
2 2 Shadow Image Model
3 3 The Proposed Method
3.1 Shadow Detection
3.2 Umbra Removal
3.3 Penumbra Removal
4 4 Experiment
5 5 Conclusion
Acknowledgements
References
Collaborative Attention Network for Natural Language Inference
Abstract
1 1 Introduction
2 2 Background
2.1 Structured Self-Attention
2.2 Decomposable Attention
3 3 Approach
4 4 Experiments
4.1 Data
4.2 Details
4.3 Results
5 5 Conclusion
Acknowledgements
References
Three-Dimensional Imaging Method of Vortex Electromagnetic Wave Using MIMO Array
Abstract
1 Introduction
2 Optimization Array Model
3 Imaging Reconstruction
4 Simulation
5 Conclusion
References
Energy Storage Techniques Applied in Smart Grid
Abstract
1 1 Introduction
2 2 Power System and Smart Grid
2.1 Power System Problem
2.2 Smart Grid
3 3 Energy Storage Technology
3.1 The Significance of Energy Storage Technology
3.2 Energy Storage Technology
3.3 Chemical Battery Energy Storage
4 4 Summary
References
A Robust Hough Transform-Based Track Initiation Method for Multiple Target Tracking in Dense Clutter
Abstract
1 Introduction
2 The Theory of Hough Transform (HT) for Track Initiation
3 Real Time Hough Transform Based Track Initiation Method
4 The Proposed Algorithm
4.1 Grid-Based Velocity Test
4.2 Grid-Based Clustering Method
4.3 Hough Transform on Each Group Data
5 Experiment Result
6 Conclusion
References
Structure Design and Analysis of Space Omni-Directional Plasma Detector
Abstract
1 1 Overview
2 2 Constraints and Requirements of Design
2.1 Constraints of Physical Design
2.2 Constraints of Electrical Design
2.3 Constraints of Mechanical Design
2.4 Design Criterion of Statics and Dynamics
3 3 Design
4 4 Simulation and Analysis by FEM
4.1 Setting Up the Model
4.2 Simulation Conditions
4.3 Modal Analysis and Frequency Response Analysis
5 5 Vibration Test
6 6 Conclusions
References
Design and Implementation of GEO Battery Autonomous Management System for Lithium Battery with Balanced Control Function
Abstract
1 1 Introduction
2 2 Requirements Analysis
3 3 Design of Autonomous System
3.1 Architecture Design
3.2 Autonomous Management Module
4 4 Implement the Application
4.1 Autonomous Charge and Discharge Management
4.2 Autonomous Balanced Management
4.3 Autonomous Overcharge Protection Management
4.4 Autonomous Overvoltage Protection Management
5 5 Conclusion
References
Target Direction Finding in HFSWR Sea Clutter Based on FRFT
Abstract
1 Introduction
2 Target and Sea Clutter Signal Model
3 Direction Finding Based on FRFT
4 Simulation
5 Conclusion
Acknowledgements
References
Adaptive Non-uniform Clustering Routing Protocol Design in Wireless Sensor Networks
Abstract
1 1 Introduction
2 2 System Model
2.1 Network Model and Assumptions
2.2 Energy Consumption Model of the Nodes
3 3 The Proposed AUCR Protocol Design
3.1 Node Clustering
3.2 Inter-cluster Multi-hop Routing
3.3 Cluster Radius Adjustment
4 4 Simulation Results
4.1 Cluster Radius Adjustment
4.2 Life Cycle and Load Balancing
5 5 Conclusion
Acknowledgements
References
Comparative Analysis of Reflectivity from an Updated SC Dual Polarization Radar and a SA System in CINRAD Network
Abstract
1 1 Introduction
2 2 Equipment Introduction
3 3 Radar Data Processing and Comparative Analysis
4 4 Quantitative Precipitation Estimation and Analysis of Dual-Line Polarization Radar
5 5 Conclusion
References
Subcarrier Allocation-Based SWIPT for OFDM Cooperative Communication
1 Introduction
2 System Model and Problem Formulation
2.1 System Model
2.2 Problem Formulation
3 Optimal Solution
3.1 Optimizing p1* with Given G
3.2 Optimizing p2* with Given G
3.3 Obtaining the Optimal G
4 Simulations Result
5 Conclusions
References
Power Control for Underlay Full-Duplex D2D Communications Based on D. C. Programming
Abstract
1 Introduction
2 System Model
3 Problem Formulation and Power Control Algorithm
4 Simulation Results
5 Conclusion
Acknowledgements
References
Power Control for Underlay Full-Duplex D2D Communications Based on Max-Min Weighted Criterion
Abstract
1 Introduction
2 System Model
3 Power Control
3.1 Problem Description
3.2 Algorithm Description
4 Numerical Results
5 Conclusion
Acknowledgements
References
Analysis on the Change of Dynamic Output Degree Distributions in the BP Decoding Process of LT Codes
1 Introduction
2 Preliminary and Definitions
3 Analysis of BP Decoding Process for LT Codes
4 Simulation Results
5 Conclusion
References
A Stable and Reliable Self-tuning Pointer Type Meter Reading Recognition Based on Gamma Correction
Abstract
1 Introduction
2 The Reading Recognition of Pointer Meter
2.1 Image Enhancement
2.2 Skeleton Extraction
2.3 Target Detection Based on Hough Transform
2.4 Angle Interpretation
3 Experimental Results and Analysis
3.1 Effectiveness of the Enhanced Algorithm
3.2 Validity of Pointer Extraction
3.3 Accuracy of Interpretation Method
3.4 Real-Time Performance of the Algorithm
4 Conclusion and Future Work
Acknowledgements
References
Spectral Efficiency for Multi-pair Massive MIMO Two-Way Relay Networks with Hybrid Processing
1 Introduction
2 System Model
3 Large M Analysis
4 Simulation
5 Conclusion
References
An Improved Frost Filtering Algorithm Based on the Four Rectangular Windows
Abstract
1 1 Introduction
2 2 Frost Filtering Algorithm
3 3 Improved Frost Filtering Algorithm
3.1 Construction of Four Rectangular Windows
3.2 An Improved Frost Filtering Algorithm Based on Four Rectangular Windows
4 4 Experiments and Analysis
5 5 Conclusion
References
Waterline Extraction Based on Superpixels and Region Merging for SAR Images
Abstract
1 1 Introduction
2 2 Proposed Algorithm
3 3 Experiment Result and Evaluation
4 4 Conclusion
References
Coastline Detection with Active Contour Model Based on Inverse Gaussian Distribution in SAR Images
Abstract
1 1 Introduction
2 2 Active Contour Model Based on Gamma Distribution
3 3 Active Contour Model Based on Inverse Gaussian Distribution
4 4 Experiments
5 5 Conclusion
References
Facial Expression Recognition Based on Subregion Weighted Fusion and LDA
1 Introduction
2 Proposed Algorithm
3 Experiment Results and Evaluation
3.1 Comparative Analysis of Different Dimensions
3.2 Classification Recognition Result
4 Conclusion
References
Extended Target Tracking Using Non-linear Observations
1 Introduction
2 System Model
3 Theoretical Basis of Single Target Tracking
3.1 Tracking Process of Single Extended Target
3.2 Generation of Nonlinear Observation Data
4 Theoretical Basis of Multiple Target
5 Numerical Results
6 Conclusion
References
A Coordinated Multi-point Handover Scheme for 5G C/U-Plane Split Network in High-Speed Railway
1 Introduction
2 Network Model
3 Handover Scheme
4 Performance Analysis
5 Simulation Results
6 Conclusion
References
A Hierarchical FDIR Architecture Supporting Online Fault Diagnosis
Abstract
1 Introduction
2 Overview of Hierarchical FDIR Architecture
3 Highly Decoupled Runtime Model
4 Unified FDIR Model
5 Conclusion
Acknowledgements
References
College Students Learning Behavior Analysis Based on SVM and Fisher-Score Feature Selection
Abstract
1 Introduction
2 Support Vector Machine Algorithm
3 Feature Selection Based on Fisher-Score
4 Experimental Results
5 Conclusion
References
Network Traffic Text Classification Based on Multi-instance Learning and Principal Component Analysis
Abstract
1 1 Introduction
2 2 Multi-instance Learning Algorithm
3 3 Feature Selection Based on Principal Component Analysis
3.1 Principal Component Analysis
3.2 Algorithm Description
4 4 Experimental Results
4.1 Experimental Condition
4.2 Experimental Data Preparation
4.3 Classification Result
5 5 Conclusion
References
Calculation and Simulation of Inductive Overvoltage of Transmission Line Based on Taylor’s Formula Expansion Double Exponential Function
Abstract
1 Introduction
2 Lightning Current Waveform Model
3 Vertical Electric Field Analysis
4 Field Line Sensing Model
5 Analysis of Simulation Results of Inductive Overvoltage on Transmission Lines
6 Conclusion
References
Deep Learning Based Exploring Channel Reciprocity Method in FDD Systems
Abstract
1 Introduction
2 System Model and the Existence of Uplink-CSI to Downlink-CSI Mapping
2.1 System Model
2.2 The Existence of Uplink-CSI to Downlink-CSI Mapping
3 Architecture and Principles of the Proposed CLSTM-Net
4 Experiments and Discussions
5 Conclusions
References
Steering Machine Learning Mechanism Based on Big Data Integrated Cooperative Collision Avoidance for MASS
1 Introduction
2 System Model
2.1 Vessel Network
2.2 Synergetic Avoidance Mechanism
3 Problem Formulation
4 Problem Solutions
4.1 Population Initialization
4.2 Calculation of Collision Risk
5 Simulation Results
5.1 The Efficiency of Improved Genetic Algorithms
5.2 Simulation Collision
6 Conclusion
References
A Weighted Fusion Method for UAV Hyperspectral Image Splicing
Abstract
1 1 Introduction
2 2 Image Fusion
2.1 Average Fusion
2.2 Weighted Average Fusion
3 3 Experimental Results and Analysis
4 4 Conclusion
Acknowledgements
References
Hyperspectral Target Detection Based on Spectral Weighting
Abstract
1 1 Introduction
2 2 Spectral Weighting Method
2.1 Statistic Characteristics Analysis of Hyperspectral Data
2.2 Estimation of Weighting Coefficients Based on Spectral Separability Criterion
2.3 Spectral Weighting Target Detection Algorithms
3 3 Experimental Analysis
4 4 Conclusion
Acknowledgements
References
A Framework for Analysis of Non-functional Properties of AADL Model Based on PNML
Abstract
1 Introduction
2 Framework Overview
3 Unified Model Transformation
4 Conclusion
Acknowledgements
References
A Golden Section Method for Univariate One-Dimensional Maximum Likelihood Parameter Estimation
1 Introduction
2 Maximum Likelihood Parameter Estimation
3 Gradient Algorithm and One-Dimensional Search Algorithm
3.1 MLE Based on Gradient Descent Method
3.2 MLE Based on Linear Search
4 Simulation Results
4.1 Linear Dynamic System Model
4.2 Nonlinear Dynamic System Model
5 Conclusions
References
Network Service Analysis Based on Feature Selection Using Improved Linear Mixed Model
1 Introduction
2 Background and Related Work
2.1 Causal Inference
2.2 Linear Mixed Model
2.3 Parameter Estimation
3 Feature Selection Method Based on Improved LMM Algorithm
4 Dataset
5 Experiment
5.1 Feature Selection and QoE Prediction
5.2 Different QoEs Prediction
6 Discussion
6.1 Feature Selection Effect and Prediction Performance
7 Conclusion
References
SFSSD: Shallow Feature Fusion Single Shot Multibox Detector
1 Introduction
2 Related Work
3 SFSSD
3.1 The Novel SFSSD Architecture
3.2 Feature Fusion Module
4 Experiments
4.1 Results on PASCAL VOC
5 Conclusion and Future Work
References
Beamforming Based on Energy State Feedback for Simultaneous Wireless Information and Power Transmission
Abstract
1 Introduction
2 The Model of Multi-user Simultaneous Wireless Information and Power Transmission System Based on Power Splitting
3 Beamforming Optimization Problem
4 Beamforming Optimization Method for Simultaneous Wireless Information and Power Transmission Based on Node Power State Feedback
5 Simulation and Numerical Analysis
6 Conclusion
References
Research on Cross-Chain Technology Architecture System Based on Blockchain
1 Introduction
2 Difficulties in Cross-Chain
3 Architecture and Analysis of Cross-Chain
4 Conclusion
References
Research on Data Protection Architecture Based on Block Chain
1 Introduction
2 Data Protection Architecture Based on Block Chain
3 Analysis of Data Protection Architecture Based on Block Chain
4 Conclusion
References
Research on Active Dynamic Trusted Migration Scheme for VM-vTPCM
Abstract
1 Introduction
2 Related Work
2.1 Active Immune Trusted Computing
2.2 VM-vTPM Migration Based on TPM
3 Security Issues and Requirements
4 Active Trusted Migration Scheme Based on AITC
4.1 Active Dynamic Trusted Migration Architecture
4.2 Active Dynamic Trusted Migration Protocol
4.2.1 Pre-Migration Preparation Phase
4.2.2 VM-vTPCM Data Migration Phase
5 Experiment
6 Conclusion
Acknowledgements
References
Multiple Hybrid Strategies Filtrate Localization Based on FM for Wireless Sensor Networks
Abstract
1 1 Introduction
2 2 FMFL Algorithm
2.1 Position Calibration Based on FM
2.2 FMFL Algorithm
3 3 Analysis of the Simulation
3.1 Experimental Environment Settings
3.2 Performance Analysis
4 4 Conclusion
References
Localization Algorithm Based on FM for Mobile Wireless Sensor Networks
Abstract
1 FM-MCL Algorithm
2 Performance Analysis
2.1 Anchors Heard
2.2 Motion Speed
3 Conclusion
References
Coherent State Based Quantum Optical Communication with Mature Classical Infrastructure
Abstract
1 Introduction
2 Security of Coherent State Based QOC
3 Secret Key Rate with Discrete Modulation
4 Conclusions
Acknowledgement
References
Design of Codebook for High Overload SCMA
Abstract
1 1 Introduction
2 2 SCMA Model Analysis
3 3 SCMA Codebook Design and Simulation Results
3.1 Phase Rotation Optimization Program
3.2 Modeled on LDPC Coding Design
4 4 Conclusion
Acknowledgements
References
A Spectrum Allocation Scheme Based on Power Control in Cognitive Satellite Communication
Abstract
1 Introduction
2 Scenario Introduction and Signal and Interference Model Description
2.1 Scenario Introduction
2.2 Signal and Interference Model
3 Joint Power and Carrier Allocation Mechanisms (JPCA)
3.1 Power Control
3.2 Joint Carrier and Power Allocation
3.2.1 Independent Carrier Allocation in Two Bands
3.2.2 Carrier Allocation Combining Two Bands
4 Performance Evaluation
5 Conclusion
Acknowledgements
References
Fruit Classification Through Deep Learning: A Convolutional Neural Network Approach
Abstract
1 Introduction
2 Dataset and Materials
3 Proposed Architecture
3.1 CNN Component
3.1.1 Convolutional Layer
3.1.2 ReLU Layer
3.1.3 Pooling Layer
3.1.4 Dropout
3.1.5 Softmax
3.2 CNN Learning Algorithm
4 Results and Discussion
5 Conclusion
Acknowledgements
References
Correction to: Research on Image Encryption Algorithm Based on Wavelet Transform and Qi Hyperchaos
Correction to: Chapter “Research on Image Encryption Algorithm Based on Wavelet Transform and Qi Hyperchaos” in: Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 796–810, 2020 https://doi.org/10.1007/978-981-13-9409-6_94
Author Index

Lecture Notes in Electrical Engineering 571

Qilian Liang Wei Wang Xin Liu Zhenyu Na Min Jia Baoju Zhang Editors

Communications, Signal Processing, and Systems Proceedings of the 8th International Conference on Communications, Signal Processing, and Systems

Lecture Notes in Electrical Engineering Volume 571

Series Editors
Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of Napoli Federico II, Naples, Italy
Marco Arteaga, Departament de Control y Robótica, Universidad Nacional Autónoma de México, Coyoacán, Mexico
Bijaya Ketan Panigrahi, Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, Delhi, India
Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, Munich, Germany
Jiming Chen, Zhejiang University, Hangzhou, Zhejiang, China
Shanben Chen, Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
Rüdiger Dillmann, Humanoids and Intelligent Systems Laboratory, Karlsruhe Institute for Technology, Karlsruhe, Germany
Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China
Gianluigi Ferrari, Università di Parma, Parma, Italy
Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid, Madrid, Spain
Sandra Hirche, Department of Electrical Engineering and Information Science, Technische Universität München, Munich, Germany
Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA, USA
Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Alaa Khamis, German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt
Torsten Kroeger, Stanford University, Stanford, CA, USA
Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
Ferran Martín, Departament d’Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain
Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore
Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany
Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, OH, USA
Sebastian Möller, Quality and Usability Laboratory, TU Berlin, Berlin, Germany
Subhas Mukhopadhyay, School of Engineering & Advanced Technology, Massey University, Palmerston North, Manawatu-Wanganui, New Zealand
Cun-Zheng Ning, Electrical Engineering, Arizona State University, Tempe, AZ, USA
Toyoaki Nishida, Graduate School of Informatics, Kyoto University, Kyoto, Japan
Federica Pascucci, Dipartimento di Ingegneria, Università degli Studi “Roma Tre”, Rome, Italy
Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Gan Woon Seng, School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore, Singapore
Joachim Speidel, Institute of Telecommunications, Universität Stuttgart, Stuttgart, Germany
Germano Veiga, Campus da FEUP, INESC Porto, Porto, Portugal
Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China
Junjie James Zhang, Charlotte, NC, USA

The book series Lecture Notes in Electrical Engineering (LNEE) publishes the latest developments in Electrical Engineering - quickly, informally and in high quality. While original research reported in proceedings and monographs has traditionally formed the core of LNEE, we also encourage authors to submit books devoted to supporting student education and professional training in the various fields and applications areas of electrical engineering. The series covers classical and emerging topics concerning:

• Communication Engineering, Information Theory and Networks
• Electronics Engineering and Microelectronics
• Signal, Image and Speech Processing
• Wireless and Mobile Communication
• Circuits and Systems
• Energy Systems, Power Electronics and Electrical Machines
• Electro-optical Engineering
• Instrumentation Engineering
• Avionics Engineering
• Control Systems
• Internet-of-Things and Cybersecurity
• Biomedical Devices, MEMS and NEMS

For general information about this book series, comments or suggestions, please contact [email protected]. To submit a proposal or request further information, please contact the Publishing Editor in your country: China Jasmine Dou, Associate Editor ([email protected]) India, Japan, Rest of Asia Swati Meherishi, Executive Editor ([email protected]) Southeast Asia, Australia, New Zealand Ramesh Nath Premnath, Editor ([email protected]) USA, Canada: Michael Luby, Senior Editor ([email protected]) All other Countries: Leontina Di Cecco, Senior Editor ([email protected]) ** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, SCOPUS, MetaPress, Web of Science and Springerlink **

More information about this series at http://www.springer.com/series/7818

Qilian Liang • Wei Wang • Xin Liu • Zhenyu Na • Min Jia • Baoju Zhang

Editors

Communications, Signal Processing, and Systems Proceedings of the 8th International Conference on Communications, Signal Processing, and Systems


Editors

Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
Wei Wang, College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin, China
Xin Liu, School of Information and Communication Engineering, Dalian University of Technology, Dalian, Liaoning, China
Zhenyu Na, School of Information Science and Technology, Dalian Maritime University, Dalian, Liaoning, China
Min Jia, School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin, China
Baoju Zhang, College of Physical and Electronic Information, Tianjin Normal University, Tianjin, China

ISSN 1876-1100 ISSN 1876-1119 (electronic) Lecture Notes in Electrical Engineering ISBN 978-981-13-9408-9 ISBN 978-981-13-9409-6 (eBook) https://doi.org/10.1007/978-981-13-9409-6 © Springer Nature Singapore Pte Ltd. 2020, corrected publication 2020 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Contents

Flame Detection Method Based on Feature Recognition . . . . . . . . . . . Ti Han, Changyun Ge, Shanshan Li, and Xinqiang Zhang


Small Cell Deployment Based on Energy Efficiency in Heterogeneous Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yinghui Zhang, Shuang Ning, Haiming Wang, Jing Gao, and Yang Liu


Research on Knowledge Mining Algorithm of Spacecraft Fault Diagnosis System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lianbing Huang, Wenshuo Cai, Guoliang Tian, Liling Li, and Guisong Yin


Performance Analysis of SSK in AF Relay over Transmit Correlated Fading Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Qiyishu Li, Yaping Hu, and Xiangbin Yu


The JSCC Algorithm Based on Unequal Error Protection for H.264 . . . Jiarui Han, Jiamei Chen, Yao Wang, Ying Liu, Yang Zhang, and Liang Qiao

Mean-Field Power Allocation for UDN . . . Yanwen Wang, Jiamei Chen, Yao Wang, Qianyu Liu, and Yuying Zhao


Design of Gas Turbine State Data Acquisition Instrument Based on EEMD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhonglin Wei, Pengyuan Liu, Feng Wang, and Tianhui Wang


Cramér–Rao Bound Analysis for Joint Estimation of Target Position and Velocity in Hybrid Active and Passive Radar Networks . . . . . . . . Chenguang Shi, Wei Qiu, Fei Wang, and Jianjiang Zhou


A Hinged Fiber Grating Sensor for Hull Roll and Pitch Motion Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wei Wang, Libo Qiao, Yuliang Li, Jingping Yang, and Chuanqi Liu


Natural Scene Mongolian Text Detection Based on Convolutional Neural Network and MSER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yunxue Shao and Hongyu Suo


Coverage Probability Analysis of D2D Communication Based on Stochastic Geometry Model . . . . . . . . . . . . . . . . . . . . . . . . . Xuan-An Song, Hui Li, Zhen Guo, and Xian-Peng Wang


Study of Fault Pattern Recognition for Spacecraft Based on DTW Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Guoliang Tian, Lianbing Huang, and Guisong Yin


A Joint TDOA/AOA Three-Dimensional Localization Algorithm for Spacecraft Internal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yin Long, Ke Zhu, and Cai Huang


A Study on Lunar Surface Environment Long-Term Unmanned Monitoring System by Using Wireless Sensor Network . . . . . . . . . . . . Yin Long and Zhao Cheng


A Study on Automatic Power Control Method Applied in Astronaut Extravehicular Activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yin Long, Pei Guo, and Yusheng Yi


Design of EVA Communications Method for Anti-multipath and Full-Range Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yin Long, Kewu Huang, and Xin Qi


High Accurate and Efficient Image Retrieval Method Using Semantics for Visual Indoor Positioning . . . . . . . . . . . . . . . . . . . . . . . Jin Dai, Lin Ma, Danyang Qin, and XueZhi Tan


Massive MIMO Channel Estimation via Generalized Approximate Message Passing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Muye Li, Xudong Han, Weile Zhang, and Shun Zhang


Study of Key Technological Performance Parameters of Carbon-Fiber Infrared Heating Cage . . . . . . . . . . . . . . . . . . . . . . . . Fei Xu, Yan Xia, Guoqing Liu, Yuzhong Li, Jinming Chen, and Chun Liu


Research on Switching Power Supply Based on Soft Switching Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhihong Zhang and Hong He


Grid Adaptive DOA Estimation Method in Monostatic MIMO Radar Using Sparse Bayesian Learning . . . . . . . . . . . . . . . . . . . . . . . . Yue Wang, Kangyong You, Dan Wang, and Wenbin Guo


Global Deep Feature Representation for Person Re-Identification . . . . Meixia Fu, Songlin Sun, Na Chen, Xiaoyun Tong, Xifang Wu, Zhongjie Huang, and Kaili Ni


Hybrid Precoding Based on Phase Extraction for Partially-Connected mmWave MIMO Systems . . . . . . . . . . . . . . . . Mingyang Cui, Weixia Zou, and Ran Zhang


Research on the Fusion of Warning Radar and Secondary Radar Intelligence Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jinliang Dong, Yumeng Zhang, Baozhou Du, and Xiaoyan Zhang


Antenna Array Design for Directional Modulation . . . Bo Zhang, Wei Liu, and Cheng Wang

Capturing the Sparsity for Massive MIMO Channel with Approximate Message Passing . . . Xudong Han, Shun Zhang, Anteneh Mohammed, Weile Zhang, Nan Zhao, and Yuantao Gu

An On-Line EMC Test System for Liquid Flow Meters . . . Haijiao An, Xin Shi, and Xigang Wang

Research on Kinematic Simulation for Space Mirrors Positioning 6DOF Robot . . . Zhang Yalin, Liang Fengchao, He Haiyan, Wang Chun, Tan Shuang, and Lin Zhe


A Dictionary Learning-Based Off-Grid DOA Estimation Method Using Khatri-Rao Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Weijie Tan, Chenglin Zheng, Judong Li, Weiqiang Tan, and Chunguo Li


Radar Adaptive Sidelobe Cancellation Technique Based on Spatial Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yumeng Zhang, Jinliang Dong, and Huifang Dong


On the Spectral Efficiency of Multiuser Massive MIMO with Zero-Forcing Precoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chenglin Zheng, Weijie Tan, and Yazhen Chen


A Signal Sorting Algorithm Based on LOF De-Noised Clustering . . . . Zhenyuan Ji, Yan Bu, and Yun Zhang


Design of a Small-Angle Reflector for Shadowless Illumination . . . . . . Guangzhen Wang


Anti-interference Communication Algorithm Based on Wideband Spectrum Sensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Minti Liu, Chunling Liu, Ran Zhang, and Yuanming Ding


A Multi-task Dynamic Compressed Sensing Algorithm for Streaming Signals Eliminating Blocking Effects . . . Daoguang Dong, Guosheng Rui, Wenbiao Tian, Ge Liu, Haibo Zhang, and Zhijun Yu

Thunderstorm Recognition Algorithm Research Based on Simulated Airborne Weather Radar Reflectivity Volume Scan Data . . . Rui Liao, Xu Wang, and Jianxin He

FPGA-Based Fall Detection System . . . Peng Wang, Fanning Kong, and Hui Wang

Artificial Intelligence and Game Theory Based Security Strategies and Application Cases for Internet of Vehicles . . . Zhiyong Wang, Miao Zhang, He Xu, Guoai Xu, Chengze Li, and Zhimin Wu


The Effect of Integration Stage on Multimodal Deep Learning in Genomic Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fariba Khoshghalbvash and Jean X. Gao


An Advanced Aerospace High Precision Spread Spectrum Ranging System Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ning Liu, Pingyuan Lu, and Xiaohang Ren


Weight-Assignment Last-Position Elimination-Based Learning Automata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Haiwei An, Chong Di, and Shenghong Li


Nonlinear Multi-system Interactive Positioning Algorithms . . . . . . . . . Xin-xin Ma, Ping-ke Deng, and Xiao-guang Zhang


Bandwidth Enhancement of Waveguide Slot Antenna Array for Satellite Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pengfei Zhao, Shujie Ma, Peiyao Yang, Fan Lu, and Shasha Zhang


Design of an Enhanced Turbulence Detection Process Considering Aircraft Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yuandan Fan, Xiaoguang Lu, Hai Li, and Renbiao Wu


Rain-Drop Size Distribution Case Study in Chengdu Based on 2DVD Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yan Liu, Debin Su, and Hongyu Lei


Analysis of the Influence on DPD with Memory Effect in Frequency Hopping Communication System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhang Lu, Shi Hairan, Gao Shujin, and Duan Jiangnian


FPGA-Based Implementation of Reconfigurable Floating-Point FIR Digital Filter . . . Ning Zhang, Xin Wei, Bingyi Li, and He Chen

High Precision Spatiotemporal Datum Design Based on Ground Observation Position . . . Yufei Huang, Ji Gao, Dan Wang, Yong Liu, Zhengji Song, Jia Xu, and Lantao Liu


Study on Two Types of Sensor Antennas for an Intelligent Health Monitoring System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yang Li, Licheng Yang, Xiaonan Zhao, Bo Zhang, and Cheng Wang


A Fiber Bragg Grating Acceleration Sensor for Measuring Bow Slamming Load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jingping Yang, Wei Wang, Yuliang Li, Libo Qiao, and ChuanQi Liu


Improving Indoor Random Position Device-Free People Recognition Resolution Using the Composite Method of WiFi and Chirp . . . . . . . . Xiaokun Zheng, Ting Jiang, and Wenling Xue


Optimal Design of an S-Band Low Noise Amplifier . . . . . . . . . . . . . . . Hai Wang, Zhihong Wang, Guiling Sun, Ming He, Ying Zhang, Ke Liang, and Rong Guo


A Triangular Centroid Location Method Based on Kalman Filter . . . . Yunfei Suo, Tao Liu, Can Lai, and Zechen Li


Research on Spatial Network Routing Model Based on Price Game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ligang Cong, Huamin Yang, and Xiaoqiang Di


The TDOA and FDOA Algorithm of Communication Signal Based on Fine Classification and Combination . . . . . . . . . . . . . . . . . . . Chi Zhang


An Adaptive DFT-Based Channel Estimation Method for MIMO-OFDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xiao Deng and Xiao Ming Wu


A Novel Gradient L0-Norm Regularization Image Restoration Method Based on Non-local Total Variation . . . . . . . . . . . . . . . . . . . . Mingzhu Shi


Study on Interference from 5G System to Earth Exploration Satellite Service System in High Frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yi Wang, Baoju Zhang, and Wei Wang


Sparse Planar Antenna Array Design for Directional Modulation . . . . Bo Zhang, Wei Liu, Yang Li, Xiaonan Zhao, and Cheng Wang


Research on the Linear Interpolation of Equal-Interval Fractional Delay Filter . . . Shen Zhao, Yunwei Zhang, XiWei Guo, and Deliang Liu

Single-Channel Grayscale Processing Algorithm for Transmission Tissue Images Based on Heterogeneity Detection . . . Baoju Zhang, Chengcheng Zhang, Gang Li, Ling Lin, Cuiping Zhang, and Fengjuan Wang

Handwriting Numerals Recognition Using Convolutional Neural Network Implemented on NVIDIA’s Jetson Nano . . . Huan Chen, Songyan Liu, Haining Zhang, and Wang Cheng

Implementation of Image Recognition on Embedded Systems . . . Haining Zhang, Songyan Liu, Huan Chen, and Wang Cheng


A Precise 3-D Wireless Localization Technique Using Smart Antenna . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shuang Feng, Desheng Chi, Jingyu Dai, and Xiaorong Zhu


A Two-Phase Fault Diagnosis Algorithm Based on Convolutional Neural Network for Heterogeneous Wireless . . . . . . . . . . . . . . . . . . . . Yong Wang, Lei Zhang, and Xiarong Zhu


A Wireless Power Transfer System with Switching Circuit of Power Grid and Solar Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ze Song, Xin Zhang, Xiu Zhang, Ruiqing Xing, and Lei Wang


A Fiber Bragg Grating Stress Sensor for Hull Local Strength Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chuanqi Liu, Wei Wang, Yuliang Li, Libo Qiao, and Jingping Yang


Direct Wave Parameters Estimation of Passive Bistatic Radar Based on Uncooperative Phased Array Radar . . . . . . . . . . . . . . . . . . . Jiameng Pan, Panhe Hu, Qian Zhu, and Qinglong Bao


Noncooperative Radar Illuminator Based Bistatic Receiving System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Caisheng Zhang, Hai Zhang, and Xiaolong Chen


Research on Simulation Technology for Remote Sensing Image Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hezhi Sun, Yugao Li, Xiao Mei, Yuting Gao, and Dong Yang


Distributed Measurement of Micro-vibration and Analysis of the Influence on Imaging Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . Yugao Li, Hezhi Sun, Chen Ni, Xiang Li, and Dong Yang


Analysis and Verification of the Effect of Space Debris on the Output Power Decline of Solar Array . . . Enzhu Bao, Li Ma, Peng Tian, Linchun Fu, and Shijie Chen

A New Nonlinear Method for Calculating the Error of Passive Location . . . Shuncheng Tan, Guohong Wang, Chengbin Guan, Hongbo Yu, Siwen Li, and Qian Cao

A Static Method for Stack Overflow Detection Based on SPARC V8 Architecture . . . Tao Zhang, Rui Zhang, Ruijun Li, Yanfang Fan, and Hongjing Cheng

Enhanced Double Threshold Based Energy Detection . . . Omar Aitmesbah and Zhuoming Li


Self-generating Topology Coloring Scheduling for Interference Mitigation in Wireless Body Area Networks . . . . . . . . . . . . . . . . . . . . Jiasong Mu, Yunna Wei, and Xiaorun Yang


Smart Parking and Recommendation System Under Fog Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jiasong Mu, Yunna Wei, and Xiaorun Yang


Speech Synthesis Method Based on Tacotron + WaveNet . . . . . . . . . . Yingnan Liu, Qitao Ma, and Yingli Wang


A Novel Spatial Domain Based Steganography Scheme Against Digital Image Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zheng Hui and Quan Zhou


Losen: An Accurate Indoor Localization System by Integrating CSI of Wireless Signal and MEMS Sensors . . . . . . . . . . . . . . . . . . . . . . . . Zengshan Tian, Linxiao Xie, Ze Li, and Mu Zhou


A Direct Target Recognition Algorithm for Low-Resolution Radar with Unbalanced Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kefan Zhu, Jiegui Wang, and Miao Wang


DFT-Spread Based PAPR Reduction of OFDM for Short Reach Communication Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yupeng Li, Yaqi Wang, and Longwei Wang


Underdetermined Mixed Matrix Estimation of Single Source Point Detection Based on Noise Threshold Eigenvalue Decomposition . . . . . Miao Wang, Xiao-xia Cai, and Ke-fan Zhu


Optimization of APTEEN Routing Protocol for Wireless Sensor Networks Based on Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . Minghao Wang, Shubin Wang, and Bowen Zhang


Optimization of APTEEN Routing Protocol in Wireless Sensor Networks Based on Particle Swarm Optimization . . . Bowen Zhang, Shubin Wang, and Minghao Wang

Research Status of Wireless Power Transmission Technology . . . Xudong Wang, Changbo Lu, Feng Wang, Wanli Xu, and Shizhan Li


Flexible Sparse Representation Based Inverse Synthetic Aperture Radar Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lu Wang, Guoan Bi, and Xianpeng Wang


Localization Schemes for 2-D Molecular Communication via Diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shenghan Liu, Shijian Bao, and Chenglin Zhao


Research on Support Vector Machine in Estimating Source Number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xiaoli Zhang, Jiaqi Zhen, and Baoyu Guo


Wireless Electricity Transmission Design of Unmanned Aerial Vehicle Charging Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yashuo He, Jingjing Wu, Sumeng Shi, Ze Song, Qijing Qiao, and Cheng Wang


An ITD-Based Method for Individual Recognition of Secondary Radar Radiation Source . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tianqi Li, Yu Zhang, and Xiaojing Yang


Gaussian Mixture Model Based Multi-region Blood Vessel Segmentation Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yaqing Fu, Maolin Wang, and Ting Liu


Research on the Enhancement of VANET Coverage Based on UAV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tianci Liu, Lixin Zhao, Bin Li, and Chenglin Zhao


Research on Image Encryption Algorithm Based on Wavelet Transform and Qi Hyperchaos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhiyuan Li, Aiping Jiang, and Yuying Mu


A Design of Satellite Telemetry Acquisition System . . . . . . . . . . . . . . . Meishan Chen, Qiang Mu, Jinyuan Ma, and Xin Li


Fingerprint Feature Recognition of Frequency Hopping Radio with FCBF-NMI Feature Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hongguang Li, Ying Guo, Zisen Qi, Ping Sui, and Linghua Su


Integrated Design of High Speed Uplink and Emergency Telemetry and Control for LEO Satellite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Qiang Mu, Hongwei Shi, Jinyuan Ma, and Meishan Chen


Imaging Correction Based on AIS for Moving Vessels in Spaceborne SAR Images . . . Yuting Gao, Guangjun He, Tao Zhang, Dongqiang Zhou, Dong Yang, and Jindong Li

Research on Flying Catkins Detection and Removal in Target Video . . . Hualin Liu, Haipeng Wang, Limin Zhang, and Xueteng Li

Robust Context-Aware Tracking with Temporal Regularization . . . Tianhao Li, Tingfa Xu, Yu Bai, Axin Fan, and Ruoling Yang


Research on Motor Speed Estimation Method Based on Electric Vehicle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jian He and Bo Li


A Novel Virtual Cell Power Allocation and Interference Merging Algorithm in UDN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Liting Song, Weidong Gao, Gang Chuai, and ZiWei Si


Device-Free Sensing for Gesture Recognition by Wi-Fi Communication Signal Based on Auto-encoder/decoder Neural Network . . . Yi Zhong, Yan Huang, and Ting Jiang

Detection of Sleep Apnea Based on Cardiopulmonary Coupling . . . Haojing Zhang, Weidong Gao, and Peizhi Liu

Study on a Space-Air-Ground Integrated Data Link Networks Architecture . . . Jia Guo, Shasha Zhang, Fan Lu, Jingshuang Cheng, Yuanqing Zhao, and Nuo Xu

Similar Cluster Based Continuous Bag-of-Words for Word Vector Training . . . Weikai Sun, Yinghua Ma, Shenghong Li, and Shiyi Zhang

Research on Integrated Waveform of FDA Radar and Communication Based on Linear Frequency Offsets . . . Lin Zhang, Kefei Liao, Shan Ouyang, Yuan Ma, Jingjing Li, Ningbo Xie, and Gaojian Huang

Research on Parameter Configuration of Deep Neural Network Applied on Speech Enhancement . . . Xiaoyu Zhan, Yongjing Ni, and Ting Jiang

Mid-Infrared Characteristic Analysis of Stability Index of Vehicle Gasoline . . . Lianling Ren, Hongjian Li, Lei Guo, Deyan Wang, Jianping Song, and Xin Hu


Application of Mid-Infrared Characteristic Analysis Technology in Gasoline Quality Control . . . Lianling Ren, Jianping Song, Hongjian Li, Caichao Deng, Lei Guo, and Xin Hu

A Generalized Sampling Based Method for Digital Predistortion of RF Power Amplifiers . . . Ke Li and Hairan Shi

Optimum Design of Intersatellite Link Based on STK . . . Guanghua Zhang, Jian Li, Jingqiu Ren, and Weidang Lu


Integrated Detection and Tracking in Asynchronous Moving Radar Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jinhui Dai, Junkun Yan, Penghui Wang, and Hongwei Liu


Fault-Tolerant Decompression Method of Compressed Chinese Text Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yuyang Wang, Xiaoqun Zhao, and Digang Wang


Classification of Human Motion Status Using UWB Radar Based on Decision Tree Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . Guoqing Wang and Zhengliang Zhu


A Sub-aperture Division Method for FMCW CSAR Imaging . . . . . . . Depeng Song, Binbing Li, Yi Qu, Yijun Chen, and Heng Wang


An Experimental Study of Sea Target Detection of Passive Bistatic Radar Based on Non-cooperative Radar Illuminators . . . Jie Song, Guo-qing Wang, and Xiao-long Chen

Design of a Quasi-Real-Time Communication System for LEO Satellites Using Beidou Short-Message Service . . . Fan Lu, Jingshuang Cheng, Shasha Zhang, Ning Liu, and Hongjie Zhang

A Physically Decoupled Onboard Control Plane for Software Defined LEO Constellation Network . . . Peicong Wu, Kanglian Zhao, Wenfeng Li, Zhifeng Liu, and Zhenming Sun

A Dynamic Programming Based TBD Algorithm for Near Space Targets Under Range Ambiguity . . . Hongbo Yu, Shuncheng Tan, Qian Cao, Xiangyu Zhang, Lin Li, and Qiang Guo

Research and Design of Home Care System of Internet of Things Based on Wireless Network . . . Xiaoguang Su, Xiangyu Zhao, Lili Yu, Jingyuan Jia, and Zhian Deng


Design of Wind Pendulum Control System Based on STM32F407 . . . Hai Wang, Zhihong Wang, Guiling Sun, Ming He, Ying Zhang, Ke Liang, and Rong Guo

A High-Speed Parallel Accessing Scheduler of Space-Borne Nand Flash Storage System . . . Xin Li, Ji-Yang Yu, Ke Li, Mei-Shan Chen, and Jin-Yuan Ma

Two Dimensional Joint ISAR Imaging Algorithm Based on Matrix Completion . . . Jian-fei Ren, Le Kang, Xiao-fei Lu, Yijun Chen, and Ying Luo

The Satellite GPS Antenna In-Orbit Phase Center Calibration Method . . . Ning Liu and Yufei Huang

Migrating Target Detection Under Spiky Clutter Background . . . Zhiyong Niu, Tao Su, Jibin Zheng, and Wentong Li

A Novel Range Super-Resolution Algorithm for UAV Swarm Target Based on LFMCW Radar . . . Tianyuan Yang, Tao Su, and Jibin Zheng

An Improved PDR/WiFi Integration Method for Indoor Pedestrian Localization . . . Boyuan Wang, Xuelin Liu, Baoguo Yu, Ruicai Jia, Lu Huang, and Haonan Jia

An Adaptive Radar Resource Scheduling Algorithm for ISAR Imaging Based on Step-Frequency Chirp Signal Optimization . . . Yijun Chen, Ying Luo, Yi Qu, and Hao Lou

A Task-Dependent Flight Plan Conflict Risk Assessment Method for General Aviation Operation Airspace . . . Zhe Zhang, Li An, Xiaoliang Wang, Peng Wang, Ping Han, and Renbiao Wu

A Uniform Model for Conflict Prediction and Airspace Safety Assessment for Free Flight . . . Zhe Zhang, Li An, Peng Wang, Xiaoliang Wang, and Renbiao Wu

Optimization of Power Allocation for Full Duplex Relay-Assisted D2D Communication Underlaying Wireless Cellular Networks . . . Ranran Zhou and Liang Han

Scene Text Recognition Based on Deep Learning . . . Yunxue Shao and Yuxin Chen


Spectrum Sensing Algorithm Based on Twin Support Vector Machine . . . Xiaorong Wang, Zili Wang, Dongyang Guo, and Huiling Zhou

Applicability Analysis of Plane Wave and Spherical Wave Model in Blue and Green Band . . . Songlang Li, Zhongyang Mao, Chuanhui Liu, and Min Liu

A Study of the Influence of Resonant Frequency in Wireless Power Transmission System . . . Xiaohui Lu, Xiu Zhang, Ruiqing Xing, Xin Zhang, Yupeng Li, and Liang Han

Direction of Arrival Estimation Based on Support Vector Regression . . . Baoyu Guo, Jiaqi Zhen, and Xiaoli Zhang

Bistatic ISAR Radar Imaging Using Missing Data Based on Compressed Sensing . . . Luhong Fan, Zongjie Cao, Jin Li, Rui Min, and Zongyong Cui

Medical Images Segmentation Using a Novel Level Set Model with Laplace Kernel Function . . . Jianhua Song, Zhe Zhang, and Jiaqi Zhen

Research on Multi-UAV Routing Simulation Based on Unity3d . . . Cong Chen, Yanting Liu, Fusheng Dai, Yong Li, Weidang Lu, and Bo Li

Video Target Tracking Based on Adaptive Kalman Filter . . . Futong He, Jiaqi Zhen, and Zhifang Wang

Compressed Sensing Image Reconstruction Method Based on Chaotic System . . . Yaqin Xie, Erfu Wang, Jiayin Yu, Shiyu Guo, and Xiaomin Zhang

An Underdetermined Blind Source Separation Algorithm Based on Variational Mode Decomposition . . . Shiyu Guo, Erfu Wang, Jiayin Yu, Yaqin Xie, and Xiaomin Zhang

A Ranking Learning Training Method Based on Singular Value Decomposition . . . Yulong Lai and Jiaqi Zhen

Research on Temperature Characteristics of IoT Chip Hardware Trojan Based on FPGA . . . Junru An, Zhiwei Cui, Zhenhui Zhang, Liji Wu, and Xiangmin Zhang


Wireless Communication Intelligent Voice Height Measurement System . . . Danfeng Zhao and Peidong Zhuang

Design of Intelligent Classification Waste Bin with Detection Technology in Fog and Haze Weather . . . Ailing Zhang, Peidong Zhuang, Yuehua Shi, and Danfeng Zhao

A False-Target Jamming Method for the Phase Array Multibeam Radar Network . . . Liu Tao, Zong Siguang, Tian Shusen, and Peng Pei

Analysis of TDOA Location Algorithm Based on Ultra-Wideband . . . Wenquan Li and Bing Zhao

Algorithm Design of Combined Gaussian Pulse . . . Xunchen Jia and Bing Zhao

A Network Adapter for Computing Process Node in Decentralized Building Automation System . . . Liang Zhao, Zexin Zhang, Tianyi Zhao, and Jili Zhang

Model Reference Adaptive Control Application in Optical Path Scanning Control System . . . Lanjie Guo, Hao Wang, Wenpo Ma, and Chun Wang

UAV Path Planning Design Based on Deep Learning . . . Song Chang, Ziyan Jia, Yang Yu, Weige Tao, and Xiaojie Liu

Research on Temperature and Infrared Characteristics of Space Target . . . Xiang Li and Jindong Li

A Multispectral Image Edge Detection Algorithm Based on Improved Canny Operator . . . Baoju Zhang, FengJuan Wang, Gang Li, CuiPing Zhang, and ChengCheng Zhang

A Green and High Efficient Architecture for Ground Information Port with SDN . . . Peng Qin, Jianming Li, Xiaohong Xue, Hongmei Zhang, Chang Jiang, and Yunlong Wang

Marked Watershed Algorithm Combined with Morphological Preprocessing Based Segmentation of Adherent Spores . . . Jiaying Wang, Yaochi Zhao, Yu Wang, Wei Chen, Hui Li, Yugui Han, and Zhuhua Hu


Data Storage Method for Fast Retrieval in IoT . . . Juan Chen, Lihua Yin, Tianle Zhang, Yan Liu, and Zhian Deng

Equivalence Checking Between System-Level Descriptions by Identifying Potential Cut-Points . . . Jian Hu, Guanwu Wang, Guilin Chen, Yun Kang, Long Wang, and Jian Ouyang

An Improved Adversarial Neural Network Encryption Algorithm Against the Chosen-Cipher Text Attack (CCA) . . . Yingli Wang, Haiting Liu, Hongbin Ma, and Wei Zhuang

Hardware Implementation Based on Contact IC Card Scalar Multiplication . . . Feng Liang, Yanzhao Yin, Zhenhui Zhang, Liji Wu, and Xiangmin Zhang

Tiered Spectrum Allocation for General Heterogeneous Cellular Networks . . . Haichao Wei, Anliang liu, and Na Deng

Human Action Recognition Algorithm Based on 3D DenseNet-BC . . . Yujiao Cui, Yong Zhu, Jun Li, Luguang Wang, and Chuanbo Wang

Color Image Encryption Based on Principal Component Analysis . . . Xin Huang, Xinyue Tang, and Qun Ding

Research on Transmitter of the Somatosensory Hand Gesture Recognition System . . . Fei Gao, Jiyou Fei, Hua Li, Xiaodong Liu, and Ti Han

Research on Image Retrieval Based on Wavelet Denoising in Visual Indoor Positioning Algorithm . . . Zhonghong Wang, Guoqiang Wang, and Guoying Zhang

Analysis of the Matching Pursuit Reconstruction Algorithm Based on Compression Sensing . . . Zhihong Wang, Hai Wang, Guiling Sun, and Yangyang Li

Super-Resolution Based and Topological Structure for Narrow Road Extraction from Remote Sensing Image . . . Guoying Zhang, Guoqiang Wang, and Zhonghong Wang

Evaluation on Learning Strategies for Multimodal Ground-Based Cloud Recognition . . . Shuang Liu, Mei Li, Zhong Zhang, and Xiaozhong Cao

SAR Load Comprehensive Testing Technology Based on Echo Simulator . . . Zhiya Hao, Zhongjiang Yu, Kui Peng, Linna Ni, and Yinhui Xu


A New Traffic Priority Aware and Energy Efficient Protocol for WBANs . . . Wei Wang, Dunqiang Lu, Xin Zhou, Baoju Zhang, Jiasong Mu, and Yuanyuan Li

Design of Modulation and Demodulation System Based on Full Digital Phase-Locked Loop . . . Hongli Zhu and Jin Chen

Ethanol Gas Sensor Based on SnO2 Hierarchical Nanostructure . . . Ming Zhu, Yongxiang Pi, and Huijun Zhang

Generative Model for Person Re-Identification: A Review . . . Zhong Zhang, Tongzhen Si, and Shuang Liu

Location Fingerprint Indoor Positioning Based on XGBoost . . . Hongbin Ma, Yanlong Ma, Yingli Wang, Xiaojie Xu, and Wei Zhuang

An Information Hiding Algorithm for Iris Features . . . Jiahui Feng, Hongbin Ma, Qitao Ma, Yingli Wang, Haiting Liu, and Hong Chen

Thin Film Transistor of CZ-PT Applied to Sensor . . . Yongxiang Pi, Ming Zhu, and Huijun Zhang

An Image Dehazing Algorithm Based on Single-Scale Retinex and Homomorphic Filtering . . . Hong Wu and Zhiwei Tan

Survey of Gear Fault Feature Extraction Methods Based on Signal Processing . . . Hong Wu and Can Wang

Hyperspectral Image Classification Based on Bidirectional Gated Recurrent Units . . . Yong Liu, Hongchang He, Xiaofei Wang, Yu Wang, and Runxing Chen

A Survey of Pedestrian Detection Based on Deep Learning . . . Runxing Chen, Xiaofei Wang, Yong Liu, Sen Wang, and Shuo Huang

Detection of Anomaly Signal with Low Power Spectrum Density Based on Power Information Entropy . . . Shaolin Ma, Zhuo Sun, Anhao Ye, Suyu Huang, and Xu Zhang

A Hybrid Multiple Access Scheme in Wireless Powered Communication Systems . . . Yue Liu, Zhenyu Na, Anliang Liu, and Zhian Deng

Gas Sensing Properties of Molecular Sieve Modified 3DIO ZnO to Ethanol . . . Fangxu Shen, Xinping He, Xiu Zhang, Hefei Gao, and Ruiqing Xing


FiberEUse: A Funded Project Towards the Reuse of the End-of-Life Fiber Reinforced Composites with Nondestructive Inspection . . . Yijun Yan, Andrew Young, Jinchang Ren, James Windmill, Winifred L. Ijomah, and Tariq Durrani

Autonomous Mission Planning and Scheduling Strategy for Data Transmission of Deep-Space Missions . . . Jionghui Li, Liying Zhu, Shi Liu, Xiongwen He, and Xiaofeng Zhang

Preparation of TiO2 Nanotube Array Photoanode and Its Application in Three-Dimensional DSSC . . . Zhiwei Cui, J. R. An, and Y. W. Dou

Block-Based Data Security Storage Scheme . . . Yina Wang, Hongbin Ma, Qitao Ma, Hong Chen, Dongdong Zhang, and Yingli Wang

Chaos Synchronization and Voice Encryption of Discretized Hyperchaotic Chen Based on Euler Algorithm . . . Xinyue Tang, Jiaqi Zhen, Qun Ding, Bing Zhao, and Jie Yang

Multiple UAV Assisted Cellular Network: Localization and Access Strategy . . . Yiwen Tao, Qingyue Zhang, Bin Li, and Chenglin Zhao

WiFi Location Fingerprint Indoor Positioning Method Based on WKNN . . . Xinxin Wang, Danyang Qin, and Lin Ma

The Digital Design and Verification of Overall Power System for Spacecraft . . . Ning Xia, Qing Du, Zhigang Liu, Xiaofeng Zhang, and Yan Chen

The Analysis and Practice of Backup Spacecraft Tele Command Based on Chang’E-4 . . . Xiaoguang Li, Xiaohu Shen, Mei Yang, and Shi Liu

A Modified Hough Transform TBD Method for Radar Weak Targets Using Plot’s Quality . . . Bao Zhonghua, Tian Shusen, and Lu Jianbin

Analysis of the Effects of Climate Teleconnections on Precipitation in the Tianshan Mountains Using Time-Frequency Methods . . . Baoju Zhang, Lixing An, Yonghong Hao, and Tian-Chyi Jim Yeh

An Improved Cyclic Spectral Algorithm Based on Compressed Sensing . . . Jurong Hu, Ying Tian, Yu Zhang, and Xujie Li


Video Deblocking for KMV-Cast Transmission Based on CNN Filtering . . . Yingchun Yuan and Qifei Lu

Improved YOLO Algorithm for Object Detection in Traffic Video . . . Qifei Lu and Yingchun Yuan

Task Allocation for Multi-target ISAR Imaging in Bi-Static Radar Network . . . Dan Wang, Jia Liang, Qun Zhang, and Feng Zhu

A New Tracking Algorithm for Maneuvering Targets . . . Jurong Hu, Yixiang Zhu, Hanyu Zhou, Ying Tian, and Xujie Li

Research on an Improved SVM Training Algorithm . . . Pan Feng, Danyang Qin, Ping Ji, Min Zhao, Ruolin Guo, Guangchao Xu, and Lin Ma

Modeling for Coastal Communications Based on Cellular Networks . . . Yanli Xu

Research of Space Power System MPPT Topology and Algorithm . . . Qing Du, Ning Xia, Bo Cui, Zhigang Liu, Yi Yang, Hao Mu, and Yi Zeng

Far-Field Sources Localization Based on Fourth-Order Cumulants Matrix Reconstruction . . . Heping Shi, Zhiwei Guan, Lizhu Zhang, and Ning Ma

ONENET-Based Greenhouse Remote Monitoring and Control System for Greenhouse Environment . . . Wei-tao Qian, Jiaqi Zhen, and Tao-tao Shen

Design of Multi-Node Wireless Networking System on Lunar . . . Panpan Zhan, Yating Cao, Lu Zhang, Xiaofeng Zhang, Xiangyu Lin, and Zhiling Ye

Algorithm Improvement of Pedestrians’ Red-Light Running Snapshot System Based on Image Recognition . . . Zhiqiang Wang, Xiaodong Sun, Xiaoxu Zhang, Ti Han, and Fei Gao

A Datacube Reconstruction Method for Snapshot Image Mapping Spectrometer . . . Xiaoming Ding and Cheng Wang

LFMCW Radar DP-TBD for Power Line Target Detection . . . Xionglan Chen, Guanghe Chen, and Zhanfeng Zhao


Review of ML Method, LVD and PCFCRD and Future Research for Noisy Multicomponent LFM Signals Analysis . . . Jibin Zheng, Kangle Zhu, Hongwei Liu, and Yang Yang

Research on Vision-Based RSSI Path Loss Compensation Algorithm . . . Guangchao Xu, Danyang Qin, Ping Ji, Min Zhao, Ruolin Guo, and Pan Feng

Efficient Energy Power Allocation for Forecasted Channel Based on Transfer Entropy . . . Zhangliang Chen and Qilian Liang

A Modular Indoor Air Quality Monitoring System Based on Internet of Thing . . . Liang Zhao, Guangwen Wang, Liangdong Ma, and Jili Zhang

Performance Analysis for Beamspace MIMO-NOMA System . . . Qiuyue Zhu, Wenbin Zhang, Lingzhi Liu, Bowen Zhong, and Shaochuan Wu

A Novel Low-Complexity Joint Range-Azimuth Estimator for Short-Range FMCW Radar System . . . Yong Wang, Yanchun Li, Xiaolong Yang, Mu Zhou, and Zengshan Tian

Comparative Simulation for Nonlinear Effect of Hybrid Optical Fiber-Links in High-Speed WDM Systems . . . Zhan-Heng Dai, Wei-Feng Chen, Li-Min Li, Ruo-Fei Ma, Bo Li, and Gongliang Liu

POI Recommendation Based on Heterogeneous Network . . . Yan Wen, Jiansong Zhang, Geng Chen, Xin Chen, and Ming Chen

A Survey on Named Entity Recognition . . . Yan Wen, Cong Fan, Geng Chen, Xin Chen, and Ming Chen

A Hybrid TWDM-RoF Transmission System Based on a Sub-Central Station . . . Anliang Liu, Haichao Wei, Zhenyu Na, and Hongxi Yin

Optimal Subcarrier Allocation for Maximizing Energy Efficiency in AF Relay Systems . . . Weidang Lu, Shanzhen Fang, Yiyang Qiang, Bo Li, and Yi Gong

A Study on D2D Communication Based on NOMA Technology . . . Xiumei Wang, Kai Mao, Huiru Wang, and Yin Lu

Research on Deception Jamming Methods of Radar Netting . . . Xiaoqian Lu, Hu Shen, Wenwen Gao, and Xiaoyu Zhong


Cluster Feed Beam Synthesis Network Calibration . . . Zhonghua Wang, Yaqi Wang, Chaoqiong Fan, Bin Li, and Chenglin Zhao

Design and Optimization of Cluster Feed Reflector Antenna . . . Zhonghua Wang, Yaqi Wang, Chaoqiong Fan, Bin Li, and Chenglin Zhao

Cognitive Simultaneous Wireless Information and Power Transfer Based on Decode-and-Forward Relay . . . Xiaoyan Li, Yiyang Qiang, Weidang Lu, Hong Peng, and Bo Li

A Deep-Learning-Based Distributed Compressive Sensing in UWB Soil Signals . . . Chenkai Zhao, Jing Liang, and Qin Tang

An Improved McEliece Cryptosystem Based on QC-LDPC Codes . . . Fan Bu, Zhiping Shi, Lanjun Li, Shujun Zhang, and Dandi Yang

Research on Multi-carrier System Based on Index Modulation . . . Dong Wang, Jie Yang, and Bing Zhao

A GEO Satellite Positioning Testbed Based on Labview and STK . . . Yunfeng Liu, Qi Zhang, Shuai Han, and Deyue Zou

SVR Based Nonlinear PA Equalization in MIMO System with Rayleigh Channel . . . Bowen Zhong, Wenbin Zhang, Shaochuan Wu, and Qiuyue Zhu

LoS-MIMO Channel Capacity of Distributed GEO Satellites Communication System Over the Sea . . . Chi Zhang, Hui Li, Xuan An Song, Jie Cheng, and Li Jie Wang

Design and Implementation of Flight Data Processing Software for Global Flight Tracking System Based on Stored Procedure . . . Peng Wang, Wanwei Wang, Zhe Zhang, Min Chen, and Jun Yang

Face Recognition Method Based on Convolutional Neural Network . . . Yunhao Liu and Jie Yang

An On-Line ASAP Scheduling Method for Time-Triggered Messages . . . Guevara Ania, Qiao Li, and Ruowen Yan

Soil pH Value Prediction Using UWB Radar Echoes Based on XGBoost . . . Tiantian Wang, Chenghao Yang, and Jing Liang


A Novel Joint Resource Allocation Algorithm in 5G Heterogeneous Integrated Networks . . . Qingtian Zeng, Qiong Wu, and Geng Chen

A Vehicle Positioning Algorithm Based on Single Base Station in the Vehicle Ad Hoc Networks . . . Geng Chen, Xueying Liu, Qingtian Zeng, and Yan Wen

Heterogeneous Wireless Network Resource Allocation Based on Stackelberg Game . . . Shouming Wei, Shuai Wei, Bin Wang, and Sheng Yu

On the Performance of Multiuser Dual-Hop Satellite Relaying . . . Huaicong Kong, Min Lin, Xiaoyu Liu, Jian Ouyang, and Xin Liu

Architectures and Key Technical Challenges for Space-Terrestrial Heterogeneous Networks . . . Yang Zhang, Chao Mu, Zhou Lu, Fangmin Xu, and Ye Xiao

Design and Implementation of the Coarse and Fine Data Fusion Based on Round Inductosyn . . . Li Jing, Cui Chenpeng, and Zhao Xin

Lossless Flow Control for Space Networks . . . Zhigang Yu, Xu Feng, Yang Zhang, and Zhou Lu

Heterogeneous Network Selection Algorithm Based on Deep Q Learning . . . Sheng Yu, Chen-Guang He, Wei-Xiao Meng, Shuai Wei, and Shou-Ming Wei

Vertical Handover Algorithm Based on KL-TOPSIS in Heterogeneous Private Networks . . . Chen-Guang He, Qiang Yang, Shou-Ming Wei, and Jing-Qi Yang

A Deep Deformable Convolutional Method for Age-Invariant Face Recognition . . . Hui Zhan, Shenghong Li, and Haonan Guo

Weight Determination Method Based on TFN and RST in Vertical Handover of Heterogeneous Networks . . . Chen-Guang He, Jing-Qi Yang, Shou-Ming Wei, and Qiang Yang

Deep Learning-Based Device-Free Localization Using ZigBee . . . Yongliang Sun, Xiaocheng Wang, and Xuzhao Zhang

A Modified Genetic Fuzzy Tree Based Moving Strategy for Nodes with Different Sensing Range in Heterogeneous WSN . . . Xiaofeng Yu, Bingjie Zhang, Hanqin Qin, Tian Le, Hao Yang, and Jing Liang


Wireless Indoor Positioning Algorithm Based on RSS and CSI Feature Fusion . . . Shi-Xue Zhang, Xin-Yue Fan, and Xiao-Yong Luo

Design and Verification of On-Board Computer Based on S698PM and Time-Triggered Ethernet . . . Cuitao Zhang, Xiongwen He, Panpan Zhan, Zheng Qi, Ming Gu, and Dong Yan

An Optimal Deployment Strategy for Radars and Infrared Sensors in Target Tracking . . . Lanjun Li and Jing Liang

Integrity Design of Spaceborne TTEthernet with Cut-Through Switching Network . . . Ji Li, Huagang Xiong, Dong Yan, and Qiao Li

Image Mosaic Algorithm Based on SURF . . . Qingfeng Sun, Hao Yang, Liang Wang, and Qingqing Zhang

Research on Dynamic Performance of DVR Based on Dual Loop Vector Decoupling Control Strategy . . . Hao Yang and Liang Wang

Facial Micro-expression Recognition with Adaptive Video Motion Magnification . . . Zhilin Lei and Shenghong Li

Computation Task Offloading for Minimizing Energy Consumption with Mobile Edge Computing . . . Guangying Wang, Qiyishu Li, and Xiangbin Yu

Soil pH and Humidity Classification Based on GRU-RNN Via UWB Radar Echoes . . . Chenghao Yang, Tiantian Wang, and Jing Liang

Bit Error Rate Analysis of Space-to-Ground Optical Link Under the Influence of Atmospheric Turbulence . . . Xiao-Fan Xu, Ni-Wei Wang, and Zhou Lu

Performance Analysis of Amplify-and-Forward Satellite Relaying System with Rain Attenuation . . . Qingquan Huang, Guoqiang Cheng, Lin Yang, Ruiyang Xing, and Jian Ouyang

Threat-Based Sensor Management For Multi-target Tracking . . . Yuqi Lan and Jing Liang


Research on Measurement Matrix Based on Compressed Sensing Theory . . . Zhihong Wang, Hai Wang, Guiling Sun, and Yi Xu

PID Control of Electron Beam Evaporation System Based on Improved Genetic Algorithm . . . Wenwu Zhu

Doppler Weather Radar Network Joint Observation and Reflectivity Data Mosaic . . . Qutie Jiela, Haijiang Wang, Jiaoyang He, and Debin Su

Numerical Calculation of Combustion Characteristics in Diesel Engine . . . Xudong Wang, Chunhua Xiong, Feng Wang, Gaojun An, and Dongkai Ma

A NOMA Power Allocation Strategy Based on Genetic Algorithm . . . Lu Yin, Wang Chenggong, Mao Kai, Bao Kuanxin, and Bian Haowei


AUG-BERT: An Efficient Data Augmentation Algorithm for Text Classification . . . Linqing Shi, Danyang Liu, Gongshen Liu, and Kui Meng

Coverage Performance Analysis for Visible Light Communication Network . . . Juan Li and Xu Bao

An Intelligent Garbage Bin Based on NB-IoT . . . Yazhou Guo, Ming Li, Kai Mao, Zhuoan Ma, and Yin Lu

Research on X-Ray Digital Image Defect Detection of Wire Crimp . . . Yanwei Wang and Jiaping Chen

Architecture and Key Technology Challenges of Future Space-Based Networks . . . Ni-Wei Wang, Xiao-Fan Xu, Ying-Yuan Gao, Yue Cui, Fei Xiao, and Zhou Lu

Filter Bank Design for Subband Adaptive Microphone Arrays . . . Hongli Jia

Communication System Based on DFT Spread Spectrum Technology to Reduce the Peak Average Power Ratio of CO-OFDM System . . . Yaqi Wang and Yupeng Li


Low-Complexity Channel Estimation Method Based on ISSOR-PCG for Massive MIMO Systems . . . Cheng Zhou, Zhengquan Li, Qiong Wu, Yang Liu, Baolong Li, Guilu Wu, and Xiaoqing Zhao

Ship Classification Methods for Sentinel-1 SAR Images . . . Jia Duan, Yifeng Wu, and Jingsheng Luo

Wheat Growth Assessment for Satellite Remote Sensing Enabled Precision Agriculture . . . Yuxi Fang, He Sun, Yijun Yan, Jinchang Ren, Daming Dong, Zhongxin Chen, Hong Yue, and Tariq Durrani

An Improved ToA Ranging Scheme for Localization in Underwater Acoustic Sensor Networks . . . Jinwang Yi, Zhipeng Lin, Fei Yuan, Xianling Wang, and Jiangnan Yuan

Performance Analysis of Three-Layered Satellite Network Based on Stochastic Network Calculus . . . Ying Zhou, Xiaoqiang Di, Ligang Cong, Weiwu Ren, Weiyou Liu, Yuming Jiang, and Huilin Jiang

Robust Sensor Geometry Design in Sky-Wave Time-Difference-of-Arrival Localization Systems . . . He Ma, Xing-peng Mao, and Tie-nan Zhang

A NOMA Power Allocation Method Based on Greedy Algorithm . . . Yin Lu, Shuai Chen, Kai Mao, and Haowei Bian

Multi-sensor Data Fusion Using Adaptive Kalman Filter . . . Yinjing Guo, Manlin Zhang, Fong Kang, Wenjian Yang, and Yujie Zhou

Feasibility Study of Optical Synthetic Aperture System Based on Small Satellite Formation . . . Ni-Wei Wang, Xiao-Fan Xu, and Zhou Lu

A New Coded-Modulated Pulse Train for Continuous Active Sonar . . . Chengyu Guan, Zemin Zhou, Di Wu, and Xinwu Zeng

Region Based Hierarchical Modelling for Effective Shadow Removal in Natural Images . . . Ping Ma, Jinchang Ren, Genyun Sun, Paul Murray, and Tariq Durrani

Collaborative Attention Network for Natural Language Inference . . . Shiyi Zhang, Yinghua Ma, Shenghong Li, and Weikai Sun

Three-Dimensional Imaging Method of Vortex Electromagnetic Wave Using MIMO Array . . . Jia Liang, Yan Li, Ping-fang Zhang, Xiang-wei Jiang, and Bin Cai


Energy Storage Techniques Applied in Smart Grid . . . . . . . . . . . . . . . 2357 Youjie Zhou, Xudong Wang, Xiangjing Mu, Zhizhou Long, Changbo Lu, and Lijie Zhou A Robust Hough Transform-Based Track Initiation Method for Multiple Target Tracking in Dense Clutter . . . . . . . . . . . . . . . . . . 2364 Qian Zhu, Panhe Hu, Jiameng Pan, and Qinglong Bao Structure Design and Analysis of Space Omni-Directional Plasma Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2373 Junfeng Wang, Tao Li, Hua Zhao, Qiongying Ren, Yi Zong, and Zhenyu Tang Design and Implementation of GEO Battery Autonomous Management System for Lithium Battery with Balanced Control Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2380 Lijun Yang, Bohan Chen, Yan Du, Liang Qiao, and Jiaxiang Niu Target Direction Finding in HFSWR Sea Clutter Based on FRFT . . . . Shuai Shao, Changjun Yu, Aijun Liu, Yulin Hu, and Bo Li

2390

Adaptive Non-uniform Clustering Routing Protocol Design in Wireless Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2398 Qingtian Zeng, Tianyi Zhang, Geng Chen, and Ge Song Comparative Analysis of Reflectivity from an Updated SC Dual Polarization Radar and a SA System in CINRAD Network . . . . . . . . . 2410 Yue Liu, Debin Su, Xue Tan, and Haijiang Wang Subcarrier Allocation-Based SWIPT for OFDM Cooperative Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2418 Xueying Liu, Xin Liu, Bo Li, and Weidang Lu Power Control for Underlay Full-Duplex D2D Communications Based on D. C. Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2427 Zanyang Liang and Liang Han Power Control for Underlay Full-Duplex D2D Communications Based on Max-Min Weighted Criterion . . . . . . . . . . . . . . . . . . . . . . . . 2435 Yingwei Zhang and Liang Han Analysis on the Change of Dynamic Output Degree Distributions in the BP Decoding Process of LT Codes . . . . . . . . . . . . . . . . . . . . . . . 2443 Shuang Wu A Stable and Reliable Self-tuning Pointer Type Meter Reading Recognition Based on Gamma Correction . . . . . . . . . . . . . . . . . . . . . . 2448 Yucui Liu, Kunfeng Shi, Zhiqiang Zhang, Zhihong Hu, and Anliang Liu


Spectral Efficiency for Multi-pair Massive MIMO Two-Way Relay Networks with Hybrid Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2459 Hongyan Wang, Zhengquan Li, Xiaomei Xue, Baolong Li, Yang Liu, Guilu Wu, and Qiong Wu An Improved Frost Filtering Algorithm Based on the Four Rectangular Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2467 Xinpeng Zhang, Xiaofei Shi, Min Zhang, and Li Li Waterline Extraction Based on Superpixels and Region Merging for SAR Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2474 Xige Liu, Xiaofei Shi, Zhigang Wang, and Li Li Coastline Detection with Active Contour Model Based on Inverse Gaussian Distribution in SAR Images . . . . . . . . . . . . . . . . . . . . . . . . . 2479 Kuiyuan Ni, Xiaofei Shi, Yuelong Zhang, and Li Li Facial Expression Recognition Based on Subregion Weighted Fusion and LDA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2485 Hui Lin, Yan Wang, Zhenzhen Wang, Mengli Sun, Shiqiang Zhang, Xiaofei Shi, and Xiaokai Liu Extended Target Tracking Using Non-linear Observations . . . . . . . . . 2490 Qifeng Sun, Wangfei Quan, Lei Hou, and Tingting Zhang A Coordinated Multi-point Handover Scheme for 5G C/U-Plane Split Network in High-Speed Railway . . . . . . . . . . . . . . . . . . . . . . . . . 2498 Xuanbing Zeng, Gang Chuai, and Weidong Gao A Hierarchical FDIR Architecture Supporting Online Fault Diagnosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2506 Cangzhou Yuan, Ran Peng, Panpan Zhan, and Fayou Yuan College Students Learning Behavior Analysis Based on SVM and Fisher-Score Feature Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . 2514 Qiumin Luo, Hongzhi Wang, Gang Li, and Zunyi Shang Network Traffic Text Classification Based on Multi-instance Learning and Principal Component Analysis . . . . . . . . . . . . . . . . . . . . 2519 Hongzhi Wang, Qiumin Luo, Zunyi Shang, Gang Li, and Xiaofei Shi Calculation and Simulation of Inductive Overvoltage of Transmission Line Based on Taylor’s Formula Expansion Double Exponential Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2525 Yucheng Qiu, Donghui Li, and Xiaofei Shi Deep Learning Based Exploring Channel Reciprocity Method in FDD Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2533 Jie Wang, Guan Gui, Rong Wang, Yue Yin, Hao Huang, and Yu Wang


Steering Machine Learning Mechanism Based on Big Data Integrated Cooperative Collision Avoidance for MASS . . . . . . . . . . . . 2542 Chengzhuo Han, Tingting Yang, Siwen Wei, Hailong Feng, Jiupeng Wang, and Genglin Zhang A Weighted Fusion Method for UAV Hyperspectral Image Splicing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2550 Yulei Wang, Yao Shi, Qingyu Zhu, Di Wu, Chunyan Yu, Meiping Song, and Anliang Liu Hyperspectral Target Detection Based on Spectral Weighting . . . . . . . 2555 Di Wu, Yulei Wang, Yao Shi, Qingyu Zhu, and Anliang Liu A Framework for Analysis of Non-functional Properties of AADL Model Based on PNML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2562 Cangzhou Yuan, Hangyu He, Panpan Zhan, and Tao Chen A Golden Section Method for Univariate One-Dimensional Maximum Likelihood Parameter Estimation . . . . . . . . . . . . . . . . . . . . 2571 Ruitao Liu and Qiang Wang Network Service Analysis Based on Feature Selection Using Improved Linear Mixed Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2581 Chen Lu, Dong Liang, Dongxu Wang, and Yilin Zhao SFSSD: Shallow Feature Fusion Single Shot Multibox Detector . . . . . 2590 Dafeng Wang, Bo Zhang, Yang Cao, and Mingyu Lu Beamforming Based on Energy State Feedback for Simultaneous Wireless Information and Power Transmission . . . . . . . . . . . . . . . . . . 2599 Chunfeng Wang and Naijin Liu Research on Cross-Chain Technology Architecture System Based on Blockchain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2609 Jianbiao Zhang, Yanhui Liu, and Zhaoqian Zhang Research on Data Protection Architecture Based on Block Chain . . . . 2618 Jianbiao Zhang, Yanhui Liu, and Zhaoqian Zhang Research on Active Dynamic Trusted Migration Scheme for VM-vTPCM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2625 Xiao Wang, Jianbiao Zhang, Ai Zhang, Xingwei Feng, and Zhiqiang Zeng Multiple Hybrid Strategies Filtrate Localization Based on FM for Wireless Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2634 Wei-Cheng Xue, Yu Hua, and Jun Ju Localization Algorithm Based on FM for Mobile Wireless Sensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2642 Wei-cheng Xue, Yu Hua, and Jun Ju


Coherent State Based Quantum Optical Communication with Mature Classical Infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . 2647 Ming Li and Li Li Design of Codebook for High Overload SCMA . . . . . . . . . . . . . . . . . . 2654 Min Jia, Shiyao Meng, Qing Guo, and Xuemai Gu A Spectrum Allocation Scheme Based on Power Control in Cognitive Satellite Communication . . . . . . . . . . . . . . . . . . . . . . . . . 2663 Xiaoye Jing, Xiaofeng Liu, Min Jia, Qing Guo, and Xuemai Gu Fruit Classification Through Deep Learning: A Convolutional Neural Network Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2671 Tahir Arshad, Min Jia, Qing Guo, Xuemai Gu, and Xiaofeng Liu Correction to: Research on Image Encryption Algorithm Based on Wavelet Transform and Qi Hyperchaos . . . . . . . . . . . . . . . . . . . . . Zhiyuan Li, Aiping Jiang, and Yuying Mu

C1

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2679

Flame Detection Method Based on Feature Recognition Ti Han1, Changyun Ge1, Shanshan Li2(&), and Xinqiang Zhang1 1

Intelligent Science and Technology Department, Dalian Neusoft University of Information, 116023 Dalian, China {hanti,gechangyun,zhangxinqiang}@neusoft.edu.cn 2 College of Electrical and Information Engineering, Huaihua University, 418000 Huaihua, China [email protected]

Abstract. This paper introduces the technique used in current flame detection systems based on a CCD camera: the color images are transferred to a computer, and an image processing algorithm determines whether fire is present in the image sequence; this monitoring method is the most important part of the whole system. The initiation of a flame is a slow process in which the image characteristics are very clear, and the shape, area and intensity of the flame vary from moment to moment. This paper analyzes the image information of flame and summarizes its regularity in terms of color features and dynamic characteristics, which is the main basis for the design of the identification algorithm. A color model is established based on the analysis of flame color characteristics, and the dynamic characteristics of the flame are identified according to the irregularity, similarity and stability of the flame, so as to provide an accurate basis for flame detection.

Keywords: Flame recognition · Colour character · Dynamic character

1 Introduction

Along with the development of computer science and image processing technology, it has been found that burning fuel emits light from the ultraviolet to the infrared. In the visible band, a flame image has unique features, such as its chromatography and texture, which make it clearly different from the background in the image [1-3]. Using these features, image processing methods can be used to identify fire. Because visual information is carried by light, image-based detection can respond more quickly than traditional detection methods. Image information is rich and intuitive, which lays a foundation for the identification and judgment of early fire; no other fire detection technology provides such rich and intuitive information, and it can be used in large-space, large-area environments. The combustion process is a typically unstable process, affected by fuel, environment and climate. In a natural environment, combustion is more complicated than in a typical power plant, and there are many characterization parameters. At the same time, there are various kinds of interference factors in the field.


Interference such as sunlight and artificial lighting will affect the identification results. It is difficult to obtain high accuracy and wide applicability if early fire detection relies on a single measured parameter [4, 5]. This paper mainly discusses the characteristics of the bright-spot area in the flame video image against the large background region. By segmenting and extracting the suspected area, a number of flame characteristics are identified, which can further reduce the false alarm rate of an image-based flame detection system.

2 The Process of Flame Identification

The image signal collected by the camera is processed as a digital image signal. To make the system flexible to apply and update, all processing of the image data is realized in software. The overall software flow of the fire identification system is shown in Fig. 1. The design idea is to use a common CCD camera to capture the scene, then feed the image into real-time background subtraction to determine whether there is an abnormal situation. If there is, the flame identification procedure is carried out: the abnormal image is segmented, and then color feature recognition and dynamic feature recognition begin. If both characteristics are consistent with the existence of flame, a fire alarm warning is issued.

3 Color Feature Recognition of Flame

After the color image collected by the camera is segmented, the analysis focuses on the target to determine whether it is a fire flame. The method is to map the target into different color spaces and check whether it satisfies certain rules, from which an effective judgment method can be derived [6-8]. This paper gives an empirical analysis of color space based on flame image segmentation. Because the input image is an RGB image, it is converted from RGB space to YCbCr space according to Formula 1, from which the Cr and Cb distributions are obtained.

$$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = \begin{bmatrix} 0 \\ 128 \\ 128 \end{bmatrix} + \begin{bmatrix} 0.2990 & 0.5870 & 0.1140 \\ -0.1687 & -0.3313 & 0.5000 \\ 0.5000 & -0.4187 & -0.0813 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (1)$$

The distribution of a flame image in the (Cr, Cb) color space is close to a normal distribution, whereas other interfering objects with flame-like color do not show this behavior. The Cr and Cb components of a flame image each have their own normal distribution characteristics relative to Y, and the flame image points in the two-dimensional (Cr, Cb) space approximately obey a normal distribution. To make this easier to visualize, Fig. 2 compares Cr alone against Y.


Fig. 1. Flow chart of flame detection (CCD real-time image acquisition, abnormal condition detection, flame image segmentation, color feature recognition, dynamic feature recognition, judge for the flame, issue fire early warning)

Fig. 2. Cr-Y space distribution: (a) flame image; (b) other image

Quantitative analysis is carried out with a normal distribution function, whose expression is given below:

$$F(x, y) = \frac{1}{2\pi\sigma_x\sigma_y}\exp\left\{-\frac{1}{2}\left[\frac{(x-\mu_x)^2}{\sigma_x^2} + 2\frac{(x-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y} + \frac{(y-\mu_y)^2}{\sigma_y^2}\right]\right\} \qquad (2)$$

Here $\mu_x$ and $\mu_y$ are the means of x and y, and $\sigma_x$ and $\sigma_y$ are the corresponding sample standard deviations. From these, the mean (mu_x) and standard deviation (sigma_x) of the Cr and Cb components of a flame image can be calculated. The F function distribution given by Formula 3 is shown in Fig. 3.

Fig. 3. F function distribution: (a) flame image; (b) other image

$$f(i) = \frac{1}{2\pi\,\sigma_x\sigma_y}\exp\left\{-\frac{1}{2}\left[\frac{(x(i)-\mu_x)^2}{\sigma_x^2} + 2\frac{(x(i)-\mu_x)(y(i)-\mu_y)}{\sigma_x\sigma_y} + \frac{(y(i)-\mu_y)^2}{\sigma_y^2}\right]\right\} \qquad (3)$$

When the Cr and Cb values of an image fall within the mean and standard deviation intervals of the above distribution (in other words, when F(x, y) of Cr and Cb follows a similar distribution), the image can be considered a flame. Turning the Cr and Cb matrices into one-dimensional vectors, the general statistical characteristic values of Cb and Cr can be obtained, as shown in Table 1 (taking a 0.05 confidence interval).


Table 1. General statistical characteristic value of Cb and Cr

Parameter    | Mean point estimation | Standard deviation point estimation | Mean interval estimation | Standard deviation interval estimation
Cr parameter | 154.293               | 22.264                              | [154.214, 154.372]       | [22.209, 22.320]
Cb parameter | 154.879               | 27.104                              | [154.783, 154.975]       | [22.209, 22.320]

In the same way, the color-statistics feature vector of every frame can be obtained. When a subsequent frame is segmented, if the target accords with the statistical characteristics above, a fire is identified. By training on fire footage of each scene, a characteristic parameter of the flame is obtained, from which a relatively reliable threshold can be derived for judging new data. In this experiment, the thresholds of the statistical vector are selected as: Cr mean interval (150, 160) with standard deviation interval (20, 23), and Cb mean interval (150, 160) with standard deviation interval (25, 28). On this basis, for a given threshold T, if the Cr and Cb values of a point satisfy f(i) > T, the point is considered to have flame color. Here T is set to 0.005; the image points of the flame-color region extracted by the segmentation above are counted, and if more than 60 points satisfy f(i) > T, the flame points in the Cr-Cb space are taken to fit the normal distribution.
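As a minimal illustration of this color test, the sketch below converts RGB pixels to Cb/Cr with the standard coefficients of Formula 1 and evaluates the score of Formula 3 against the threshold T. The pixel values and the rounded mean/standard-deviation settings are assumptions for the example, not measurements from the paper.

```python
import numpy as np

def flame_color_scores(rgb_pixels, mu=(154.3, 154.9), sigma=(22.3, 27.1)):
    """Score pixels with the bivariate normal form of Formula 3,
    using Cr/Cb statistics in the spirit of Table 1 (rounded here)."""
    rgb = np.asarray(rgb_pixels, dtype=float)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # RGB -> Cb, Cr conversion of Formula 1
    cb = 128 - 0.1687 * R - 0.3313 * G + 0.5000 * B
    cr = 128 + 0.5000 * R - 0.4187 * G - 0.0813 * B
    mu_cr, mu_cb = mu
    s_cr, s_cb = sigma
    quad = ((cr - mu_cr) ** 2 / s_cr ** 2
            + 2 * (cr - mu_cr) * (cb - mu_cb) / (s_cr * s_cb)
            + (cb - mu_cb) ** 2 / s_cb ** 2)
    return np.exp(-0.5 * quad) / (2 * np.pi * s_cr * s_cb)

# hypothetical pixels from a segmented candidate region
region = np.array([[220, 120, 40], [235, 150, 60], [210, 110, 30]])
scores = flame_color_scores(region)
print(scores, int(np.count_nonzero(scores > 0.005)))
```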

4 Dynamic Feature Recognition of Flame

The shape of an object is an important feature for resolution and recognition by the human visual system. After repeated searching, analysis and comparison, the dynamic-feature part of the system identifies the target based on changes in the grey level and shape of the scene image. According to the theory of computer graphics, these features are quantified and normalized as criteria for identification [9, 10]. After analysis, the system adopts the following dynamic feature identification criteria for flame images.

4.1 Irregularity

The shape of a flame is irregular, while most interference sources (such as electric torches and incandescent lamps) have highly regular shapes, so circularity can be used as one of the flame criteria. Circularity represents the complexity of the shape of an object and is given by Formula 4:

$$\mathrm{Circularity} = \frac{L^2}{4\pi S} \qquad (4)$$

where L is the perimeter and S is the area.

Circularity takes its minimum value of 1 for a circular object; the more complex the shape, the larger the value. First, the edge chain code is extracted from the color-detection result and the perimeter L is calculated. Second, the area of the flame region in the image, that is, the total number of pixels set to black in the segmented image, gives S. Finally, the circularity is calculated, and the average over N consecutive frames is taken. Experiments have demonstrated that the circularity of flame is greater than 7.5, as illustrated by the sketch below.
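A compact way to compute this criterion from a segmented binary mask is sketched below; it leans on OpenCV's contour routines as a stand-in for the paper's edge-chain-code extraction, so the helpers are assumptions rather than the authors' implementation.

```python
import numpy as np
import cv2

def circularity(binary_mask):
    """Circularity = L^2 / (4*pi*S) from Formula 4: 1 for a circle, larger for complex shapes."""
    contours, _ = cv2.findContours(binary_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return 0.0
    contour = max(contours, key=cv2.contourArea)   # largest candidate flame region
    L = cv2.arcLength(contour, closed=True)        # perimeter
    S = cv2.contourArea(contour)                   # area
    return L ** 2 / (4 * np.pi * S) if S > 0 else 0.0
```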

4.2 Similarity

In flame recognition, we can exploit the way the shape similarity of the flame changes in the early stage of a fire. This change is in fact irregular compared with other common interference phenomena, yet it shows a certain similarity in shape change, spatial change and spatial distribution; in particular, the flame shapes in successive frames taken at short intervals are similar to each other. It can therefore be described by the structural similarity of consecutive images: although the fire flame keeps developing and changing, the frame-difference similarity can be used to describe this characteristic. Let b_i(x, y) be the binary image sequence of the target; marking the pixels with value 1 in b_i(x, y) gives the possible flame area X_i in each frame of the sequence. After suspicious flame regions are found, the similarity of consecutive frames is used to separate flame from interference patterns. The similarity of consecutive frames is defined by Formula 5:

$$\xi_i = \frac{\sum_{(x,y)\in X} b_i(x, y) \cap b_{i+1}(x, y)}{\sum_{(x,y)\in X} b_i(x, y) \cup b_{i+1}(x, y)}, \qquad i = 1, 2, \ldots, N \qquad (5)$$

After a number of similarity values have been obtained, the mean similarity over several consecutive frames is used as the criterion. In general, when the mean is smaller than a lower threshold, the region is a fast-moving bright object; when it is greater than an upper threshold, the region is a fixed light-emitting area; when it lies between the two thresholds, the region can be considered a flame region. Experiments have demonstrated that the threshold range of flame similarity is between 0.3 and 0.7; a short sketch of this test follows.
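A minimal sketch of this criterion, assuming the candidate regions are available as binary masks of consecutive frames; the 0.3/0.7 limits are the thresholds reported above.

```python
import numpy as np

def frame_similarity(b_i, b_next):
    """Formula 5: intersection over union of two consecutive binary masks."""
    inter = np.logical_and(b_i, b_next).sum()
    union = np.logical_or(b_i, b_next).sum()
    return inter / union if union else 0.0

def similarity_says_flame(masks, low=0.3, high=0.7):
    """Mean similarity over the sequence compared against the flame thresholds."""
    sims = [frame_similarity(masks[i], masks[i + 1]) for i in range(len(masks) - 1)]
    mean_sim = float(np.mean(sims)) if sims else 0.0
    return low < mean_sim < high
```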

4.3 Stability

From the point of view of flame recognition, this paper uses the centroid of the flame image, and stability is expressed through the centroid [11, 12]. For a flame image, the centroid is first calculated as Formula 6:

$$(\bar{x}, \bar{y}) = \left(M_{10}/M_{00},\ M_{01}/M_{00}\right) \qquad (6)$$

where M denotes the image moments; for a binary image f(i, j), the (p, q)-order moment is defined by Formula 7:

$$M_{pq} = \sum_i \sum_j f(i, j)\, i^p j^q \qquad (7)$$

Here i and j are the horizontal and vertical coordinates of the image. The aim of computing image stability is to capture the constantly changing nature of the flame shape: in the digital features of the image this appears as a disordered movement of the center of mass. Correspondingly, if the centroid displacement keeps increasing over time, it indicates a high-brightness object moving towards the camera, which can be ruled out as interference. Experiments have demonstrated that the centroid movement of a flame stays within 20 pixels, as in the sketch below.
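The moment-based centroid of Formulas 6-7 and a simple drift check can be written as follows; the 20-pixel limit is the experimental value reported above, and the helper names are illustrative.

```python
import numpy as np

def centroid(binary_image):
    """(x_bar, y_bar) = (M10/M00, M01/M00) with M_pq = sum f(i, j) * i^p * j^q (Formulas 6-7)."""
    f = np.asarray(binary_image, dtype=float)
    i, j = np.indices(f.shape)
    m00 = f.sum()
    if m00 == 0:
        return None
    return (i * f).sum() / m00, (j * f).sum() / m00

def centroid_drift(masks):
    """Largest centroid displacement over a mask sequence; flames stay within ~20 pixels."""
    points = [c for c in (centroid(m) for m in masks) if c is not None]
    if len(points) < 2:
        return 0.0
    pts = np.array(points)
    return float(np.linalg.norm(pts - pts[0], axis=1).max())
```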

5 Conclusion

To sum up, for flame image sequences, flame segmentation, color feature recognition and dynamic feature recognition are applied in accordance with the characteristics of flame; if the identification result is that a flame exists, a fire warning is finally issued. Flame has many more characteristics, and a method suitable for fire identification with adequate processing speed still needs to be found. Therefore, further study is needed in both theory and technology, together with in-depth research and testing. As a fire identification system, all relevant factors should be considered so that the technical and performance indicators meet actual needs.

Acknowledgements. Project supported by Natural Science Foundation Project of Liaoning Province, No. 20180520011.

References

1. Zhou F, Li X, Zhang X (2015) PCNN based Otsu multi-threshold segmentation algorithm for noised images. J Comput Inf Syst 21(8):7791–7798
2. Chen J, He Y, Wang J (2010) Multi-feature fusion based fast video flame detection. Build Environ 45(5):1113–1122
3. Cheong P, Chang KF, Lai YH (2011) A ZigBee-based wireless sensor network node for ultraviolet detection of flame. IEEE Trans Ind Electron 58(11):5271–5277
4. Marques JS, Jorge PM (2000) Visual inspection of a combustion process in a thermoelectric plant. Sig Process 80(8):1577–1589
5. Xu X, Guan YD, Xu XY (2008) Application of image compression technology in forest fire prevention and control system. Sci Technol Rev 26(6):34–37
6. Chen TH, Kao CL, Chang S (2003) An intelligent real-time fire-detection method based on video. In: Proceedings of IEEE 37th annual 2003 international Carnahan conference on security technology. Taipei, pp 104–111
7. Castleman KR (2011) Digital image processing. Electronic Industry Press, pp 389–391
8. Lin KY, Wu JH, Xu LH (2005) A survey on color image segmentation techniques. Image Graph 10(1):1–10


9. Horng WB, Peng JW, Chen CY (2005) A new image-based real-time flame detection method using color analysis. In: Proceedings of 2005 IEEE international conference on networking, sensing and control. Tucson, Arizona, USA, pp 100–105 10. Chen TH, Yin YH, Huang SF, Ye YT (2006) The smoke detection for early fire-alarming system base on video. In: Proceedings of the 2006 international conference on intelligent information hiding and multimedia signal processing (IIH-MSP’06) processing, pp 427–430 11. Dedeoglu Y, Toreyin BU, Gudukbay U et al (2005) Real-time fire and flame detection in video. In: Proceedings of IEEE ICASSP’05, pp 669–672 12. Mueller M, Karasev P, Kolesov I, Tannenbaum A (2013) Optical flow estimation for flame detection in videos. IEEE Trans Image Process 22(7):2786–2797

Small Cell Deployment Based on Energy Efficiency in Heterogeneous Networks Yinghui Zhang1, Shuang Ning1, Haiming Wang1, Jing Gao2, and Yang Liu1(&) 1

College of Electronic Information Engineering, Inner Mongolia University, Hohhot 010021, China [email protected] 2 Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin 300387, China

Abstract. The deployment of next-generation mobile networks increasingly relies on the deployment of small cells. In this paper, we propose and evaluate an optimal energy-efficiency (EE) small cell deployment scheme to solve the problem of small cell deployment in massive MIMO systems, considering the impact of base station (BS) location, number of antennas and BS density on the EE of the system in different scenarios. Both a single-cell model and a multi-cell model are considered. In the single-cell model, the system allocates the location of the small cell by minimizing the system power consumption while satisfying the users' target transmission rate constraint. In the multi-cell model, the optimal BS density is obtained by deriving the EE expressions for different optimization parameters. The simulation results show that the scheme can achieve high EE and validate its effectiveness.

Keywords: Massive MIMO · Heterogeneous network · Small cell · Energy efficiency

1 Introduction

In recent years, massive MIMO has been widely considered an effective means to improve system capacity in fifth-generation (5G) mobile networks [1]. The large number of degrees of freedom obtained by a base station (BS) through the deployment of large-scale antenna arrays can be used to improve system performance [2]. However, the traditional macro cell network cannot meet the needs of future communications, because future systems impose harsh requirements on high data rate, low latency and high energy efficiency (EE). Therefore, massive MIMO [3] and small cells [4] are considered effective means to provide the capacity, high data rate, low latency, and high spectral efficiency (SE) and EE required by 5G wireless communication systems.

This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61761033, and Natural Science Foundation of Inner Mongolia Autonomous Region of China under Grant 2016MS0604.


Although a massive MIMO heterogeneous system is an effective means to improve wireless network capacity and realize high data rates, it still faces many challenges. On the one hand, the deployment of BSs in macro cell and small cell heterogeneous networks is relatively random; a series of deployment-induced problems could be noticeably reduced if the BSs were deployed according to certain rules. On the other hand, the increase in the number of antennas also brings high circuit power consumption to massive MIMO systems: although deploying a large number of antennas can reduce the transmission power consumption of the system [5], it correspondingly increases the power consumption of the radio-frequency link circuits. To address these problems, this paper studies the deployment of small cells in massive MIMO heterogeneous networks. In a single-cell system, the power consumption of the system is controlled and small cells are deployed uniformly to maximize EE. In a multi-cell system, a non-uniform deployment of small cells, which is closer to the actual situation, is adopted, and the deployment parameter settings under the condition of optimal EE are obtained.

The rest of the paper is organized as follows. Section 2 presents the system model. Section 3 proposes the optimal small cell deployment scheme and studies its properties. Section 4 provides simulation results and, finally, conclusions are drawn in Sect. 5.

2 System Model

This paper involves two deployment schemes. The first is single-cell uniform deployment, where small cells are uniformly deployed within the coverage area of the macro cell; this is relatively easy and gives good coverage. The second is a multi-cell non-uniform deployment scheme, in which small cells are distributed in the areas outside the macro cell radius according to a two-dimensional Poisson process. This section considers the downlink of a single cell with multiple users, including one macro cell, m small cells and K served users. The macro cell is equipped with N transmitting antennas and the maximum number of antennas is four [6]. The received signal of the kth user is expressed as

$$y_k = h_{k,0}^H x_0 + \sum_{i=1}^{m} h_{k,i}^H x_i + z_k, \qquad (1)$$

where $h_{k,0} \in \mathbb{C}^{N_{MBS}\times 1}$ and $h_{k,i} \in \mathbb{C}^{N_{SBS}\times 1}$ represent the channel fading coefficient vectors between the macro cell, the small cells and user k, respectively; $x_0$ and $x_i$ are the transmission signals of the macro cell and the small cells, respectively; and $z_k$ is additive Gaussian noise with zero mean and variance $\sigma^2$. The cell deployment model is shown in Fig. 1.
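A tiny numerical sketch of the received-signal model in Eq. (1) is given below; the antenna counts, Rayleigh-fading channels and unit-power signals are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_mbs, N_sbs, m = 4, 2, 3   # macro antennas, small-cell antennas, number of small cells
sigma2 = 1e-2               # noise variance

# hypothetical channel vectors and transmit signals
h0 = (rng.standard_normal(N_mbs) + 1j * rng.standard_normal(N_mbs)) / np.sqrt(2)
hi = [(rng.standard_normal(N_sbs) + 1j * rng.standard_normal(N_sbs)) / np.sqrt(2) for _ in range(m)]
x0 = rng.standard_normal(N_mbs) + 1j * rng.standard_normal(N_mbs)
xi = [rng.standard_normal(N_sbs) + 1j * rng.standard_normal(N_sbs) for _ in range(m)]

# Eq. (1): y_k = h_{k,0}^H x_0 + sum_i h_{k,i}^H x_i + z_k
z_k = np.sqrt(sigma2 / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
y_k = np.vdot(h0, x0) + sum(np.vdot(h, x) for h, x in zip(hi, xi)) + z_k
print(y_k)
```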

Fig. 1. Heterogeneous networks: (a) single cell heterogeneous networks; (b) multi-cell heterogeneous networks

As shown in Fig. 1a, the macro cell is deployed at the center of the whole cell, small cells are uniformly deployed at different positions within the macro cell coverage radius in a certain proportion, and users are randomly distributed. The multi-cell heterogeneous network deployment model, which uses a Voronoi topology, is shown in Fig. 1b. We consider multi-cell heterogeneous networks serving multiple users. Users are evenly distributed over the area with density $\lambda_U$. Macro cells are distributed as a two-dimensional Poisson process with density $\lambda_1$. Small cells are distributed in the area outside the macro cell radius [7] as a Poisson process with density $\lambda_2$ (the inner area is within the macro cell radius and the outer area is outside it). The large circle in Fig. 1b is the coverage area of a macro cell, the small blue circles are the small cells, and the small cells are deployed in the area outside the macro cell coverage [8]. The inner region is denoted $A_{inside} = \bigcup_{x\in\Phi_1} B(x, D)$, the union of positions whose distance to the nearest macro cell is not greater than D, while the outer region, denoted $A_{outside}$, is the remaining area.
$$\frac{\sum_{j=0}^{m} \mathrm{Tr}(H_{k,j} W_{k,j})}{\sum_{i=1, i\neq k}^{K} \sum_{j=0}^{m} \mathrm{Tr}(H_{i,j} W_{k,j}) + \sigma_k^2} \geq \gamma_k, \qquad \sum_{k=1}^{K} \mathrm{tr}(Q_{0,1} W_{k,j}) \leq q_{j,1} \quad \forall k, j \qquad (9)$$

Set $\mathrm{rank}(W_{k,j}) = 0$. Using semi-definite relaxation [12], we can obtain

$$\min \ \sum_{j=0}^{m}\sum_{k=1}^{K} \mathrm{tr}(W_{k,j}) + P_{static}$$
$$\mathrm{s.t.} \ \ \sum_{j=0}^{m} h_{k,j}^H\left(\left(1 + \tfrac{1}{\gamma_k}\right)W_{k,j} - \sum_{i=1}^{K} W_{i,j}\right)h_{k,j} \geq \sigma_k^2, \quad \sum_{k=1}^{K} \mathrm{tr}(Q_{0,1} W_{k,j}) \leq q_{j,1} \ \forall k, j, \quad W_{k,j} \succeq 0 \qquad (10)$$

Given the target user SINR and the maximum antenna transmission power constraints, the optimization parameters can be solved from Eqs. (8)–(10); a small solver sketch is given below.
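As an illustration of how a problem of the form of Eq. (10) can be solved numerically, the sketch below sets up a semi-definite relaxation with CVXPY for a toy real-valued instance. The channel values, per-BS power budget and dimensions are assumptions, and the per-BS trace cap simply stands in for the paper's Q/q constraint.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
K, nBS, N = 2, 2, 4                   # users, base stations (macro + small), antennas each
gamma = [1.0, 1.0]                    # target SINRs (assumed)
sigma2, q_max = 1e-2, 1.0             # noise power, per-BS power budget (assumed)
h = [[rng.standard_normal(N) for _ in range(nBS)] for _ in range(K)]   # toy real channels

W = [[cp.Variable((N, N), PSD=True) for _ in range(nBS)] for _ in range(K)]
constraints = []
for k in range(K):
    useful = sum(cp.quad_form(h[k][j], W[k][j]) for j in range(nBS))
    interference = sum(cp.quad_form(h[k][j], W[i][j])
                       for i in range(K) if i != k for j in range(nBS))
    constraints.append(useful >= gamma[k] * (interference + sigma2))       # SINR constraint
for j in range(nBS):
    constraints.append(sum(cp.trace(W[k][j]) for k in range(K)) <= q_max)  # power budget

problem = cp.Problem(cp.Minimize(sum(cp.trace(W[k][j]) for k in range(K) for j in range(nBS))),
                     constraints)
problem.solve()
print("minimum total transmit power:", problem.value)
```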

3.2 Multiple Cells

In this section, the downlink of multi-user massive MIMO system is considered to maximize EE while ensuring quality of service (QoS) [13]. Under the condition of maximizing EE, how to select the number of transmitting antennas and the number of active users of the BS is discussed.


The power consumption (PC) of the system should account not only for the radiated power but also for the losses of the analog hardware, backhaul signaling and other overhead (such as cooling and power losses). PC is defined as

$$PC = \underbrace{\sum_{j=1}^{S}\sum_{i=1}^{K} \rho_x \frac{\Gamma(\frac{\alpha}{2}+1)}{(\pi\lambda_2)^{\alpha/2}}}_{P_{dynamic}} + \underbrace{P_0 + P_{BS} M + P_{UE} K + M K P_{CE}}_{P_{static}}, \qquad (11)$$

where K is the number of users, M is the number of macro cell antennas, $\rho_x \Gamma(\frac{\alpha}{2}+1)/(\pi\lambda_2)^{\alpha/2}$ is the average transmit power of user K in cell j, $P_0$ is a constant for the power consumed by the hardware circuits (node cooling, control signaling, backhaul infrastructure, etc.), $(P_{BS} M + P_{UE} K)$ is the power consumption of the base transceiver stations, and $M K P_{CE}$ is the power consumed by signal processing at the user and the BS. Solving for the optimal parameter group $\theta = (\lambda_2, K, M)$ is formulated as

$$\max\ EE(\theta) = \frac{B\log_2\!\left(1 + \frac{\rho^2 K\tau(M-K)}{1+\rho K}\right)}{\sum_{j=1}^{S}\sum_{i=1}^{K}\rho_x\frac{\Gamma(\frac{\alpha}{2}+1)}{(\pi\lambda_2)^{\alpha/2}} + P_0 + P_{BS} M + P_{UE} K + M K P_{CE}} \qquad \mathrm{s.t.}\ \ SINR \geq \gamma \qquad (12)$$

s:t: SINR  c Then the optimal deployment scheme for EE optimization will be obtained by setting different parameters. In massive MIMO system, the circuit power consumption of the BS cannot be ignored [14]. Considering that the actual number of antennas requires corresponding RF transmission link support, deploying a large number of antennas can reduce the transmission power consumption of the system, but it will also increase the power consumption of the RF link circuit accordingly [15]. Therefore, this paper will coordinate the circuit power consumption caused by deploying a large number of antennas and select the number of BS antennas with the best EE. In order to find the optimal value of M, EE maximization problem is described as EE1 ¼

 B  log2 1 þ

q2 KsðMKÞ 1 þ qK



P0 þ PBS M þ PUE K þ MKPCE

;

ð13Þ

where s is the pilot sequence of user k, q is the ratio coefficient of received signal to SINR. The ratio of M to K is defined as c ¼ M K , which is the number of antennas per user. For given K, c, the maximum value of EE is

EEM ¼

 B  log2 1 þ

q2 Ksð11cÞ 1 þ qK



P0 þ ðKPBS þ K 2 PUE Þc

;

ð14Þ

According to Eq. (11) and Eq. (12), replace the PC in Eq. (12) with Eq. (11), we can obtain the Eq. (15).

Small Cell Deployment Based on Energy

15

In the heterogeneous networks, the deployment density of small cell affects the EE of the whole system [16]. Therefore, the optimized density of small cell is taken as a research parameter. In this case, the EE maximization expression is

EEk2 ¼ P P S K j¼1

i¼1 qx

 B  log2 1 þ Cða2 þ 1Þ a ðpk2 Þ2



q2 KsðMKÞ 1 þ qK

þ P0 þ PBS M þ PUE K þ MKPCE

:

ð15Þ

EEk2 is a monotonic increasing function of k2 , and k2 ! 1 is the case when EE is maximized. However, an infinitely high-density of BSs is not feasible in practice. Therefore, we can get the parameters ðM; k2 Þ settings under the condition of optimal EE, which will be verified through simulating in the following part.

4 Simulation Results 4.1

Single Cell

In Fig. 2, the selection of the optimal location of the BSs under different QoS is simulated. We can see there will be different power consumption when the small cells are placed in the radius of different distances of the macro cell, It can be seen that with

Fig. 2. Power consumption of different BS location under different QoS

the increase of QoS value, when the small cell is 200 and 450 m away from the macro cell, the power consumption is relatively large, and the power required for cell deployment near the position of 350 m is minimum. Therefore, it is optimal to deploy the small cell at a position near 350 m away from the Macro cell in this case. Figure 3 shows that the power consumption of the system is greatly reduced with the increase of the number of macro cell antennas. It shows that the power consumption

16

Y. Zhang et al.

can be reduced by deploying multi-antenna BSs, but the power consumption of the system also increases with the increase of the number of antennas, because the increase

Fig. 3. Power consumption under different macro cell antennas

of static part Pstatic in the circuit obviously exceeds the decrease of dynamic part Pdynamic . Therefore, the number of macro cell antennas cannot be increased indefinitely and the power consumption of static circuits cannot be ignored. Meanwhile, the power consumption is the least when the small cell distance from macro cell is around 350 m.

Fig. 4. Comparison of Gaussian distribution and uniform deployment

Small Cell Deployment Based on Energy

17

This paper also compares Gaussian distribution of small cell with uniform deployment. As can be seen from Fig. 4, under different QoS constraints, the power consumption required to deploy small cell in Gaussian distribution mode in a single cell is smaller. It provides meaningful reference for the deployment of small cell and can be further studied.

Fig. 5. User signal-to-interference-noise ratio diagram under different scenarios

Considering the multi-cell scenario, Fig. 5 is the SINR cumulative distribution function (CDF) under different deployment scenarios. Comparing the multi-cell system model with the traditional cellular network and the system model without dividing

Fig. 6. Optimal inner region radius map

18

Y. Zhang et al.

Macro Cell and Small Cell areas, it can be seen that the multi-cell system model in this paper is better than others. It can be seen from Fig. 6 that the system EE increases with the increase of the number of BS antennas, and the system EE is close to optimal when the number of

Fig. 7. Optimal small cell density

antennas reaches 120. At the same time, when the radius of the internal area is 500 m, the system has the optimal EE. We know that unlimited deployment of small cell is neither feasible nor practical. Therefore, the density of small cell is simulated. As shown in Fig. 7, the optimal EE are obtained when the density is k ¼ 7 BS=km2 and the macro cell antenna number is M ¼ 120.

5 Conclusion In this paper, we have proposed and investigated the deployment of small cell combined with massive MIMO dual-layer heterogeneous network. Research and simulations show that the combination of massive MIMO and small cell deployment can effectively improve the EE of the heterogeneous network. In a single-cell uniform deployment scenario, the power consumption of the system is minimal when the small cell is deployed at an optimal distance from the macro cell. At the same time, we consider the power consumption of static circuits, which affects the system EE and cannot be ignored. Considering the multi-cell non-uniform deployment, the system EE can be greatly improved by choosing the optimal density of small cell, which shows that small cell is a promising solution for maximum EE deployment.

Small Cell Deployment Based on Energy

19

References 1. Andrews JG, Buzzi S, Choi W et al (2014) What will 5G be. IEEE J Sel Areas Commun 32(6):1065–1082 2. Bjornson E, Jorswieck EA, Debbah M et al (2014) Multi-objective signal processing optimization: the way to balance conflicting metrics in 5G systems. IEEE Signal Process Mag 31(6):14–23 3. Larsson EG, Edfors O, Tufvesson F et al (2014) Massive MIMO for next generation wireless systems. IEEE Commun Mag 52(2):186–195 4. Hoydis J, Kobayashi M, Debbah M (2011) Green small-cell networks. IEEE Veh Technol Mag 6(1):37–43 5. Li C, Zhang J, Letaief KB (2014) Throughput and energy efficiency analysis of Small Cell networks with multi-antenna base stations. IEEE Trans Wireless Commun 13(5):2505–2517 6. Björnson E, Kountouris M, Debbah M (2013) Massive MIMO and small cells: improving energy efficiency by optimal soft-cell coordination. In: International conference on telecommunications (ICT). Casablanca, May 2013 7. Ng DWK, Lo ES, Schober R (2012) Energy-efficient resource allocation in OFDMA systems with large numbers of base station antennas. IEEE Trans Wireless Commun 11(9): 3292–3304 8. Wang H, Zhou X, Reed MC (2014) Coverage and throughput analysis with a non-uniform small cell deployment. IEEE Trans Wireless Commun 13(4):2047–2059 9. Cui S, Goldsmith AJ, Bahai A (2005) Energy-constrained modulation optimization. IEEE Trans Wireless Commun 4(5):2349–2360 10. Andrews JG, Baccelli F, Ganti RK (2011) A tractable approach to coverage and rate in cellular networks. IEEE Trans Commun 59(11):3122–3134 11. Bjornson E, Sanguinett L, Hoydis J et al (2015) Optimal design of energy-efficient multiuser MIMO systems: is massive MIMO the answer. IEEE Trans Wireless Commun 14(6): 3059–3075 12. Huang Y, Palomar DP (2010) Rank-constrained separable semi-definite programming with applications to optimal beamforming. IEEE Trans Signal Processing. 58(2):664–678 13. Bjornson E, Matthaiou M, Debbah M (2015) Massive MIMO with non-ideal arbitrary arrays: hardware scaling laws and circuit-aware design. IEEE Trans Wireless Commun 14(8):4353–4368 14. Huh H, Caire G, Papadopoulos HC et al (2012) Achieving ‘massive MIMO’ spectral efficiency with a not-so-large number of antennas. IEEE Trans Wireless Commun 11(9): 3226–3239 15. Kim H, Chae CB, Veciana G et al (2009) Across-layer approach to energy efficiency for adaptive MIMO systems exploiting spare capacity. IEEE Trans Wireless Commun 8(8): 4264–4275 16. Peng J, Hong P, Xue K (2015) Energy-aware cellular deployment strategy under coverage performance constraints. IEEE Trans Wireless Commun 14(1):69–80

Research on Knowledge Mining Algorithm of Spacecraft Fault Diagnosis System Lianbing Huang(&), Wenshuo Cai, Guoliang Tian, Liling Li, and Guisong Yin Institute of Manned Space System Engineering, Beijing 100094, China [email protected]

Abstract. The change of telemetry data of spacecraft is usually caused by telecommand or fault, which conforms to the causality model of remote-control input and telemetry output under different conditions of spacecraft. Traditional expert system relies on static knowledge of experts to diagnose telemetry parameters. In order to solve the problem of rule-based expert system knowledge acquisition and less manual intervention, considering the characteristics of spacecraft telemetry, this paper proposes an expert knowledge acquisition algorithm based on successful data envelope line and conditional probability from two dimensions of analog and digital quantities respectively. Through data mining of historical telemetry, this algorithm achieves the threshold of analogue quantities and automatic extraction of causal rules at different stages of product life cycle. The experimental results show that the algorithm is effective and the simulation value is more accurate than the product design index and the redundancy of causal rules is less. After knowledge mapping, the algorithm can be applied in the spacecraft fault diagnosis expert system. Keywords: Data mining

 Knowledge acquisition  Causal rule

1 Introduction In recent years, with the characteristics of many parallel missions, short development cycle, high launch density and long on-orbit operation time, the downlink telemetry data volume will increase exponentially compared with the previous spacecraft, both during the comprehensive test period and in-orbit operation phase. How to use data mining technology to mine useful information from mass downlink telemetry data and apply it to practical projects to improve the quality and development efficiency of spacecraft products is an urgent problem to be considered and solved. At present, most of the current fault diagnosis modes of spacecraft in China rely on expert systems [1]. The telemetry data is sent down to the expert system, and the inference engine infers, analyses and judges the telemetry according to the knowledge and rules in the knowledge base, and outputs the diagnosis results to the users through the man-machine interface. The knowledge of expert system is the basis for expert system to draw conclusions. The level of knowledge determines the success or failure of system diagnosis [2]. At this stage, expert knowledge acquisition still depends on expert manual editing rules in advance. This semi-automatic diagnosis mode is not © Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 20–27, 2020 https://doi.org/10.1007/978-981-13-9409-6_3

Research on Knowledge Mining Algorithm

21

intelligent enough, and has the problems of low efficiency and poor flexibility. Therefore, this paper aims to solve the problem of automatic knowledge acquisition and engineering application in spacecraft expert system based on data mining technology.

2 Background Using data mining technology to analyze and intelligently learn historical telemetry data, find fault signs in time and take effective preventive measures, it is possible to avoid serious failures or accidents of spacecraft. NASA has developed some data driver applications for spacecraft fault detection and diagnosis, such as Orca system for space shuttle, IMS system for International Space Station, and so on. At present, NASA has developed some data driver applications for spacecraft fault detection and diagnosis. The research in this field is still in its infancy.

3 Introduction of Expert System At present, there are many mature expert system products and tools that can be used in the rapid development of spacecraft fault diagnosis expert system, such as CLIPS, EXSYS, G2, etc. [3]. The United States, Russia, Japan and other countries have developed a number of expert systems for spacecraft fault diagnosis. Typical expert system structure, as shown in Fig. 1, mainly includes inference engine, knowledge base, interpreter, man-machine interface and other modules. Reasoning engine is responsible for translating knowledge into internal executable computer language according to certain rules of knowledge base knowledge, interpreter is responsible for translating knowledge into internal executable computer language according to specific grammar rules, domain experts are responsible for editing diagnostic knowledge and submitting it to the database, and users are responsible for reviewing the diagnostic results of daily telemetry data and processing the diagnostic results in time. Obviously, the knowledge base is the mapping of experts’ knowledge in the computer, and the inference engine is the mapping of the ability of using knowledge to reasoning in the compute [4].

Fig. 1. Composition of typical expert system

22

L. Huang et al.

4 Requirement Analysis Spacecraft telemetry information can be roughly divided into analog and digital quantities. Analog quantity refers to the quantity that changes continuously in a certain range, such as current, voltage, power, etc. Physical quantities which are discrete in time and quantity are called digital quantities, such as valve switch state, bus communication state, etc. At present, in spacecraft fault diagnosis expert system, the diagnosis of analog signals mainly depends on threshold values, such as maximum and minimum currents; for digital diagnosis, it mainly depends on event-driven, such as judging the change of digital signals when sending remote control instructions. The current model has the following shortcomings: (1) The design index range of analog threshold is large. If it is directly converted into knowledge, there is a risk of missing fault information. (2) It is difficult to identify the identity of experts. There are some gaps in the knowledge compiled by different technicians, which affects the diagnostic ability of the system. (3) The internal correlation of telemetry data is easy to neglect, and the deep-seated potential faults are difficult to find; the change of digital quantity caused by a single remote control command is easy to find, and the causal relationship between remote control command chain and telemetry is difficult to excavate. (4) The mode of manual editing knowledge is inefficient, and it is difficult to meet the needs of mass telemetry data downlink and long-term on-orbit operation of subsequent spacecraft.

5 Overall Design The core idea of fault diagnosis system based on data mining is to acquire the operation status and knowledge of spacecraft subsystem equipment by analyzing historical data, so as to solve the problem of knowledge acquisition. In the design of data mining algorithm, the characteristics of spacecraft telemetry and rule-based expert system are fully considered, and the knowledge that can be translated directly is mined to realize engineering application. This paper divides the database telemetry data into two parts: analog and digital. Time and remote control instructions are used as auxiliary variables to mine the data from two dimensions. The general framework is as follows (Fig. 2). 5.1

Analog Telemetry Information Mining

The core idea of analog data mining is to extract specific thresholds by using successful historical data and corresponding data mining methods. Data envelope analysis, such as longitudinal test and test, is carried out for similar or similar products with successful flight experience. Successful envelope is constructed and threshold is extracted. Data Envelopment Line (DEL) analysis steps: Firstly, the analysis object is determined, and the normality of telemetry parameters is tested by Epps-Price method. Then, outliers in successful data are screened and eliminated by Grubbs test method.

Research on Knowledge Mining Algorithm

23

Fig. 2. Overall framework of expert knowledge mining

Finally, the envelope of data is obtained by using the principle of single-valued control chart. The envelope range consists of the upper bound of envelope UE and the lower bound of envelope LB. The calculation formulas are shown in 1 and 2. UB ¼ minfx þ 3  r; maxfxðiÞ ; i ¼ 1; 2;    mgg

ð1Þ

LB ¼ minfx  3  r; minfxðiÞ ; i ¼ 1; 2;    mgg

ð2Þ

Including: x ¼ m1 5.2

m P i¼1

sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi m P xðiÞ ; r ¼

ðxðiÞ xÞ

i¼1

m1

.

Digital Telemetry Information Mining

Telemetry commands and digital changes have a time-dependent relationship, and there is a causal relationship between them. It can be understood that if the premise of event B is to send remote control command event A, then event A must occur before event B, and this logical relationship conforms to the thought of conditional probability. The knowledge obtained from mining can not be directly applied to practical projects. This paper focuses on the use of conditional probability statistics to discover the relationship between remote control commands and digital changes, and to form causal knowledge. Definition 1 Remote control command sequence (CS). A set of remote control instructions arranged in chronological order is called the sequence of remote control instructions, which is recorded as CS, CS = , satisfying ei.timestamp
0.3 the curves of BER increases at a faster rate when ρt increases.

Performance Analysis of SSK in AF Relay

33

Fig. 2. BER performance with different correlation coefficient versus SNR.

Fig. 3. The BER performance versus correlation coefficients.

5

Conclusion

In this paper, we have investigated the error performance of the transmit correlated dual-hop AF-SSK system. A closed form average BER expression is obtained. Computer simulation validates the accuracy of the theoretical analysis.

34

Q. Li et al.

From the simulations we observe that the theoretical BER curves are consistent with the simulation curves at high SNR. Besides, the transmit coefficient ρt has a great impact on the BER performance of the system especially when ρt > 0.3.

References 1. Nosratinia A, Hunter TE, Hedayat A (2004) Cooperative communication in wireless networks. IEEE Commun Mag 42(10):74–80 2. Yang P, Di Renzo M, Xiao Y, Li S, Hanz L (2015) Design guidelines for spatial modulation. IEEE Commun Surveys Tuts 17(1):6–26 3. Som P, Chockalingam A (April 2013) End-to-end BER analysis of space shift keying in decode-and-forward cooperative relaying. In: Proceedings of the IEEE wireless communications network conference (WCNC), Shanghai, China 4. Altin G, Aygolu U, Basar E, Celebi ME (2017) Multiple-input-multiple output cooperative spatial modulation systems. IET Commun 11(15):2289–2296 5. Mesleh R, Ikki SS (2013) Performance analysis of spatial modulation with multiple decode and forward relays. IEEE Wireless Commun Lett 2(4):423–426 6. Mesleh R, Ikki SS, Alwakeel M (2011) Performance analysis of space shift keying with amplify and forward relaying. IEEE Commun Lett 15(12):1350–1352 7. Mesleh R, Ikki SS (2015) Space shift keying with amplify-and-forward MIMO relaying. Trans Emerg Telecommun Technol 26(4):520–531 8. Koca M, Sari H (September 2012) Performance analysis of spatial modulation over correlated fading channels. In: Proceedings of the IEEE VTC-Fall, (2012) Quebec City, Canada, pp 1–5 9. Gradshteyn IS, Ryzhik IM (March 2007) Table of integrals, series, and products, 7th ed. In: Jeffrey A, Zwillinger D (eds) Academic Press 10. Abramovitz M, Stegun IA (1974) Handbook of mathematical function with formulas, graphs, and mathematical tables. Dover publications, New York 11. Proakis JG (2007) Digital communications, 5th edn. McGraw-Hill, New York

The JSCC Algorithm Based on Unequal Error Protection for H.264 Jiarui Han1, Jiamei Chen1(&), Yao Wang2, Ying Liu1, Yang Zhang1, and Liang Qiao1 1

College of Electrical and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China [email protected], [email protected] 2 Communication Department, Shenyang Artillery Academy Company, No. 31 Dongdaying Avenue, Dongling Area, Shenyang 110161, China

Abstract. Joint source and channel coding (JSCC) optimizes the design of a communication system by considering source coding and channel coding together. First, an unequal error protection (UEP) scheme based on rate-compatible punctured turbo (RCPT) codes is proposed. The simulation results show that the UEP scheme is superior to the equal error protection (EEP) scheme when the channel rate is fixed: both the objective data and the subjective video show that UEP outperforms EEP without increasing channel redundancy, achieving a channel coding design based on source characteristics. Second, using UEP schemes with different bit rates, a channel-adaptive JSCC system is designed. The system can adaptively adjust the bit-rate allocation between source and channel according to channel conditions, realizing joint source and channel coding. Experiments show that the video restoration quality of this scheme is better than that of a single UEP scheme.

Keywords: JSCC · UEP · RCPT · H.264

1 Introduction

In wireless and mobile network environments, the demand for multimedia applications keeps growing, and wireless video transmission technology is being used more and more widely. The coding efficiency of video signals is also increasing. With the introduction of many new access technologies, wireless channels now offer higher throughput, which makes the transmission of video signals over the radio channel possible. However, wireless networks are time-varying, bandwidth-limited and have a relatively high bit error rate, and the channel not only varies greatly with the position and orientation of the base station and terminal but also suffers severe and sudden error bursts. Therefore, video coding methods for wireless communication should have not only high compression capability but also good error resilience.

The objective of source coding is to minimize the coding distortion under a target bit rate [1], while the objective of channel coding is to transmit data as reliably as possible within the permissible channel capacity. In a bandwidth-limited multimedia communication system, these two objectives are


contradictory. Therefore, if the source encoder and channel encoder are designed separately, efficient and reliable transmission of the information cannot be achieved. The solution is to consider source coding and channel coding jointly [2-4], that is, joint source and channel coding (JSCC) [5]. JSCC cascades the source encoder with the channel encoder while maintaining the independence between them, and jointly optimizes their coding parameters. By reasonably allocating the transmission bandwidth between the multimedia data and the protection data, a source-channel coding system with jointly optimized parameters can be realized. JSCC can, on the one hand, reduce system complexity and, on the other hand, achieve joint coding by assigning the rate between source and channel [6]. Based on the H.264 video standard [7] and the principle of rate-compatible punctured turbo (RCPT) codes, this paper proposes an unequal error protection JSCC method for H.264 video information. First, the H.264 video information is compressed and classified into different importance levels, and then the different levels of video information are protected by RCPT codes with different rates.

2 H.264 Data Segmentation
In this section, the data segmentation mode of H.264 is used to classify the three data types A, B and C into two categories: type A as the first (most important) class, and types B and C as the second class. As long as the bandwidth allows, video data transmission does not need to constrain the code rate or quality. However, in a practical channel environment, when the channel quality is poor, the current channel state cannot support the transmission rate required by the video stream. Under these circumstances, the encoding and sending rate of the video stream must be controlled. The rate control algorithm dynamically adjusts the encoder parameters to meet the target bit number. In H.264, quantization follows the DCT; the quantized DCT coefficients contain many zero values, which helps the VLC achieve a higher compression ratio. The more zero coefficients there are, the fewer bits are required after VLC and the lower the code rate. H.264 uses a scalar (non-vector) quantizer; its definition and implementation are complicated by the need to avoid floating-point operations. The basic forward quantizer operation is

Z_{ij} = \operatorname{round}\left( Y_{ij} / Q_{\text{step}} \right)   (1)

where Y_{ij} is a transform coefficient, Q_{\text{step}} is the quantization step size, and Z_{ij} is the quantized coefficient. Changing the quantization factor controls the code rate over a large range and therefore allows adaptive rate adjustment, but it also has a side effect: besides affecting the rate, the quantization factor has a significant impact on video quality, so the quantization parameter and the bit rate are in conflict.
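As a rough illustration of Eq. (1), the following Python sketch (not the JM reference implementation; the coefficient block and the Qstep values are made-up examples) applies the forward quantizer to a block of transform coefficients and counts the non-zero levels that the VLC would have to encode:

import numpy as np

def forward_quantize(coeffs, q_step):
    """Forward quantizer of Eq. (1): Z_ij = round(Y_ij / Qstep)."""
    return np.round(coeffs / q_step).astype(int)

# Hypothetical 4x4 block of transform coefficients (illustrative values only).
Y = np.array([[52, -12,  4, 1],
              [-9,   6, -2, 0],
              [ 3,  -2,  1, 0],
              [ 1,   0,  0, 0]], dtype=float)

for q_step in (2.5, 10.0, 40.0):          # a larger Qstep quantizes more coarsely
    Z = forward_quantize(Y, q_step)
    print(q_step, np.count_nonzero(Z))    # fewer non-zero levels -> fewer bits after VLC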


3 Design and Implementation of Unequal Error Protection Scheme
As shown in Fig. 1, the basic process of the unequal error protection scheme is based on data segmentation: the channel encoder selectively applies the UEP mechanism to the different classes of data, whereas under the EEP mechanism a single uniform-rate error-control channel code is applied to all of the data.

Fig. 1. Flow chart of the unequal error protection scheme: the original *.yuv video is encoded with JM10.2, the bit stream is segmented into an important part (low-bit-rate RCPT coding) and an unimportant part (high-bit-rate RCPT coding), the coded bits are merged and MQAM-modulated, transmitted over the channel, then MQAM-demodulated, RCPT-decoded per class, merged, and finally JM10.2-decoded to produce the processed *.yuv file.

In order to compare the performance of UEP and EEP, the channel bit rate is fixed at 3 Mbps and the rate-control function of the JM10.2 encoder is enabled: RateControlEnable is set to 1 and the source bit rate is set to 1.5 Mbps, so that the total channel bit rate is fixed and the total rate of the Turbo code must also be fixed when the stream is transmitted over the channel. The total code rate is given by

r_{\text{overall}} = \frac{\sum_{i=1}^{c} k_i}{\sum_{i=1}^{c} k_i / r_i}   (2)


Here r_{\text{overall}} is the total rate of the RCPT code, k_i is the length of each data-class frame, and r_i is the code rate assigned to each data class.
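For reference, Eq. (2) can be evaluated with the short Python sketch below; the partition lengths in the example call are illustrative values, not data from the paper:

def overall_rate(lengths, rates):
    """Eq. (2): r_overall = sum(k_i) / sum(k_i / r_i) for partition lengths k_i and code rates r_i."""
    return sum(lengths) / sum(k / r for k, r in zip(lengths, rates))

# Illustrative call with made-up partition lengths and the UEP3 rates of Table 1.
k = [1200, 900, 900]        # hypothetical frame lengths of partitions A, B, C in bits
r = [1/3, 5/8, 5/8]         # RCPT code rates assigned to A, B, C
print(overall_rate(k, r))   # overall code rate of the cascaded RCPT code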

4 Performance Evaluation

4.1 Comparison Between UEP and EEP

According to the formula, we design the rate matching shown in Table 1. Although the four configurations assign different sub-rates to the individual data classes, it can be verified that the total code rate is always 1/2, so the channel bit rate is 1.5 Mbps × 2 = 3 Mbps in every case.

Table 1. Distribution of UEP and EEP schemes at the same total bit rate

Match    Encoding method   Channel rate
                           Partition A   Partition B   Partition C
Match1   EEP               1/2           1/2           1/2
Match2   UEP1              4/9           10/19         10/19
Match3   UEP2              2/5           5/9           5/9
Match4   UEP3              1/3           5/8           5/8

In the H.264 configuration, the maximum motion search range is 16 pixels and the maximum number of reference frames is set to the previous 5 frames. The entropy coding type is CAVLC, the image size is 176 × 144, and the frame rate is 30 fps. The encoding structure is I-P-P-P with no B frames inserted, the quantization parameter is uniformly set to 35, and the channel is a Gaussian channel. The PSNR-Y values of the video reconstructed with UEP and EEP under different channel conditions are shown in Fig. 2. From the graph we can see that the three UEP schemes improve the PSNR-Y of the luminance component without any additional channel redundancy. UEP3 outperforms UEP2, and UEP2 outperforms UEP1, because the code rates assigned to class A in the three schemes satisfy 1/3 < 2/5 < 4/9; that is, the protection of the important data decreases in that order. Finally, because the channel redundancy is the same and the source code rate is fixed, the PSNR-Y no longer rises once the signal-to-noise ratio exceeds a certain level; this ceiling is determined by the source compression ratio. Figure 3 shows the restoration quality of the video under the different channel protection methods. The video was taken at a construction site. As Fig. 3 shows, when EEP is used, the person and the scene cannot be seen clearly because of the poor channel condition.


Fig. 2. PSNR-Y values of the video reconstructed with UEP and EEP (PSNR-Y in dB versus Eb/No in dB for the EEP, UEP1, UEP2 and UEP3 schemes).

However, the quality of the recovered video improves in turn from UEP1 to UEP3, and the image recovered with UEP3 shows only a small amount of mosaic artifacts around the person's chin and in the background. From the PSNR curves, the PSNR-Y of UEP3 is 6.66 dB higher than that of EEP, and even UEP1 improves the PSNR-Y by 3.58 dB over EEP. Thus the visual impression of the restored images and the objective PSNR-Y statistics are consistent: both show that the improvement in recovered image quality obtained with the UEP method is very significant.

Fig. 3. UEP and EEP video recovery: (a) the second frame of the original video; (b) the second frame recovered with EEP; (c) the second frame recovered with UEP1, UEP2 and UEP3.

4.2 Performance Analysis of UEP with Different Bit Rates

From the previous section it can be concluded that the UEP method effectively improves the quality of the restored video without increasing the channel redundancy. To further study the relationship between source and channel coding, we examine the performance of several UEP schemes with different code-rate allocations and their relationship with the source coding parameter QP. In DP mode, the H.264 video coding standard uses three different types of data partitions, and the source coding and channel coding can be matched effectively according to the channel state. The code rates assigned to the data classes are shown in Table 2.

Table 2. UEP allocation schemes with different bit rates

Match    Encoding method   Channel rate
                           Partition A   Partition B   Partition C
Match1   UEP1              1/3           1/2           1/2
Match2   UEP2              2/5           1/2           1/2
Match3   UEP3              4/9           1/2           1/2

From Table 2 we can see that the channel protection for partition A decreases from UEP1 to UEP3, while the rates for partitions B and C remain the same. The simulation results show that the bit error rate is highest at low signal-to-noise ratio and decreases as the channel condition improves. The ordering of the three curves arises because UEP1 has the lowest code rate and therefore the most channel redundancy: with the channel rate fixed at 3 Mbps, more redundancy means that less data information can be carried, and the PSNR-Y drops accordingly. Hence, the higher the channel redundancy, the better the video quality can be protected; but under a limited channel bandwidth, more channel redundancy also forces a larger source compression ratio through the coding parameters. When the source is compressed too heavily, the video quality cannot be restored no matter how well the channel protection performs (Fig. 4).

Fig. 4. Comparison of PSNR-Y values for the UEP schemes with different bit rates (PSNR-Y in dB versus Eb/No in dB for UEP1, UEP2 and UEP3).


5 Conclusion
This paper proposes a JSCC protection method for H.264 video information based on the idea of unequal error protection. Firstly, UEP and EEP are configured at the same total bit rate, and extensive simulations are carried out in MATLAB over a Gaussian white noise channel. Without increasing redundancy, the important information is effectively protected, so the system enhances its channel protection capability under high bit error rates; compared with the EEP protection mode, the quality of the decoded video is significantly improved. Then, the protection obtained with different code-rate allocations is studied. The simulation results show that JSCC can ensure reliable transmission of video information and effectively utilize the limited bandwidth resources of the channel. By using limited redundancy to achieve effective error protection, a better trade-off between redundancy consumption and error protection is obtained.
Acknowledgements. This research was supported by the National Natural Science Foundation of China (Grant No. 61501306) and the Doctoral Scientific Research Foundation of Liaoning Province (Grant No. 20170520228).

References
1. Chen YM, Wu FT, Li CP, Varshney PK (2019) An efficient construction strategy for near-optimal variable-length error-correcting codes. IEEE Commun Lett 23(3):398–401
2. Köken E, Tuncel E (2017) Joint source-channel coding for broadcasting correlated sources. IEEE Trans Commun 65(7):3012–3022
3. Deka S, Sarma KK (2017) Joint source channel coding with MC-CDMA in capacity approach. In: 4th international conference on signal processing and integrated networks (SPIN), pp 489–493, Feb 2017
4. Mamatha AS, Sagar K, Sumanth J, Tharun Thejus JP, Varun R, Singh V (2018) Joint source channel coding for hyperspectral imagery. In: IEEE India council international conference (INDICON), Oct 2018
5. Chen C, Wang L, Liu S (2018) The design of protograph LDPC codes as source codes in a JSCC system. IEEE Commun Lett 22(4):672–675
6. He J, Li Y, Wu G, Qian S, Xue Q, Matsumoto T (2017) Performance improvement of joint source channel coding with unequal power allocation. IEEE Wireless Commun Lett 6(5):582–585
7. Zhu X, Chen CW (2016) A joint source-channel adaptive scheme for wireless H.264/AVC video authentication. IEEE Trans Inf Forensics Secur 11(1):141–153

Mean-Field Power Allocation for UDN
Yanwen Wang1, Jiamei Chen1(✉), Yao Wang2(✉), Qianyu Liu1, and Yuying Zhao1
1 College of Electrical and Information Engineering, Shenyang Aerospace University, No. 37 Daoyi South Avenue, Shenbei New Area, Shenyang, China
[email protected]
2 Department of Air Defense Forces, Noncommissioned Officer Academy, Institute of Army Artillery and Air Defense Forces, No. 31 Dongdaying Avenue, Shenhe Area, Shenyang, China
[email protected]

Abstract. Ultra Dense Network (UDN) is an effective solution to the explosive growth of traffic in future 5G networks. In this paper, a mean-field power allocation algorithm is proposed for UDN. It embeds the power allocation decision problem into a Dynamic Stochastic Game (DSG) model, and then finds the optimal decision by reducing the model to a mean-field game. The simulation results show that, compared with other methods, the proposed method achieves better performance in terms of the CDF and the utility EE, and can also guarantee the Quality of Service (QoS).
Keywords: Mean-field theory · Dynamic stochastic game · Ultra dense network · Power control

1 Introduction
UDN greatly improves the system capacity and the flexibility of service sharing among various access technologies and coverage levels [1, 2]. However, it still faces a number of challenges. Many micro base stations (MiBSs) in UDN are deployed by users themselves without planning, which makes the network topologies and features extremely complex [3]. The unplanned and dense deployment of MiBSs brings a large energy consumption to the network, which makes power control more challenging than in traditional sparse network deployments. Therefore, it is necessary to find an efficient power allocation scheme [4]. For the power allocation problems in traditional networks, convex optimization theory can provide the best solution. However, it needs global network information and centralized control, thus producing significant signaling overhead and computational complexity [5, 6]. To avoid this limitation, game theory has attracted wide attention: it can describe rational behaviors, analyze the dynamic equilibrium, and support the design of distributed control algorithms [7]. But the huge number of access points in UDNs leads to the well-known curse of



dimensionality in game theory. Because of the large number of participating devices, the methods used in small-scale networks are insufficient to explore the power optimization of UDNs [8, 9]. In this paper, a joint resource allocation method for energy-efficient power control in UDNs is proposed. The strategy transforms the difficult dynamic stochastic game problem into a mean-field equilibrium problem of relatively low complexity. The ultimate goal is to optimize the power control strategy and thereby improve the energy efficiency.

2 Problem Description of Dynamic Stochastic Game
In the UDN, suppose B BSs share the spectrum of bandwidth \omega, and \mathcal{B} = \{1, 2, \ldots, b, \ldots, B\} is the set of BSs. These BSs serve a total of M users, expressed as the set \mathcal{M} = \mathcal{M}_1 \cup \ldots \cup \mathcal{M}_B, where \mathcal{M}_b is the set of users served by BS b \in \mathcal{B}. The channel gain between user m \in \mathcal{M}_b and BS b is denoted h_{bm}(t). Under the hypothesis of additive Gaussian white noise with zero mean and variance \sigma^2, the instantaneous data rate of user m is

r_{bm}(t) = \omega \log_2\left( 1 + \frac{p_b(t)\,|h_{bm}(t)|^2}{I_{bm}(t) + \sigma^2} \right)   (1)

where p_b(t) \in [0, p_b^{\max}] is the transmission power of BS b, and I_{bm}(t) = \sum_{b' \in \mathcal{B}\setminus\{b\}} p_{b'}(t)\,|h_{b'm}(t)|^2 is the interference to BS b from the other BSs. Suppose E_b(t) is the energy available for BS b at time t. Thus, we can define X_b(t) = [E_b(t), h_b(t)]^{T} as the system state at time t, with state space X = (X_1(t) \cup \cdots \cup X_B(t)), where h_b(t) = [h_{bm}(t)]_{m \in \mathcal{M}}. With the above assumptions, the utility of BS b at time t can be defined as

u_b(p_b(t)) = r_b(t, X_b(t)) / (p_b(t) + p_0)   (2)

where u_b(\cdot) represents the packet success rate and also the energy efficiency. Our goal is to determine the control strategy of each base station that maximizes the utility function u_b(\cdot) while guaranteeing the QoS of the users. The limiting time-average expectation of the control variable p_b(t) can be expressed as \bar{p} = \lim_{t\to\infty} \frac{1}{t} \sum_{s=0}^{t-1} p(s). The maximum-utility problem of BS b can then be written as

\max_{\hat{p}_b}\; u_b(\hat{p}_b, \hat{p}_{-b})
\text{s.t.}\;\; \forall m \in \mathcal{M}_b: (1), (2);\quad y_b(t) \in y_b(t, x)\;\; \forall t   (3)
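To make the notation concrete, the following Python sketch evaluates the rate of Eq. (1) and the energy-efficiency utility of Eq. (2) for a single BS; the bandwidth, channel gain, powers and circuit power p_0 are illustrative assumptions rather than values used in the paper:

import numpy as np

def rate_bps(bandwidth_hz, p_b, h_bm, interference_w, noise_var_w):
    """Eq. (1): r_bm = w * log2(1 + p_b |h_bm|^2 / (I_bm + sigma^2))."""
    sinr = p_b * abs(h_bm) ** 2 / (interference_w + noise_var_w)
    return bandwidth_hz * np.log2(1.0 + sinr)

def utility_ee(rate_bps_value, p_b, p0):
    """Eq. (2): energy-efficiency utility u_b = r_b / (p_b + p0)."""
    return rate_bps_value / (p_b + p0)

# Illustrative (assumed) numbers: 10 MHz bandwidth, 1 W transmit power, 0.5 W circuit power.
w_hz, p_b, p0 = 10e6, 1.0, 0.5
h = 1e-4 * (np.random.randn() + 1j * np.random.randn()) / np.sqrt(2)
r = rate_bps(w_hz, p_b, h, interference_w=1e-11, noise_var_w=1e-12)
print(utility_ee(r, p_b, p0))   # bits per second per watt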


In this way, the power control problem can be expressed as a dynamic stochastic game

g = \left( \mathcal{B},\, \{P_b\}_{b\in\mathcal{B}},\, \{X_b\}_{b\in\mathcal{B}},\, \{C_b\}_{b\in\mathcal{B}} \right)   (4)

where C_b is the average utility of participant b \in \mathcal{B} and depends on the state transition x(T) \to x(T'). When |\mathcal{B}| > 2, solving the |\mathcal{B}| coupled equations is very complex.

3 Mean-Field Solution to the Problem
The DSG becomes more and more difficult to analyze as |\mathcal{B}| increases. Fortunately, our problem has a special structure which simplifies it when |\mathcal{B}| is large. Indeed, from the standpoint of an SBS, what matters in terms of utility is a weighted sum of the actions: the relevant quantity affecting the utility of SBS b, namely the impact of the other SBSs on the given SBS b \in \mathcal{B}, appears in the form of the interference

I_{bm}(t, x(t)) = \sum_{b' \in \mathcal{B}\setminus\{b\}} p_{b'}(t, x(t))\,|h_{b'm}(t)|^2   (5)

It can be proven that if I_{bm}(t, x(t)) converges, the DSG converges to a mean-field game. As a result, the mean-field equilibrium solution \left( C(t, \hat{x}(t)),\, q(t, \hat{x}(t)) \right) is the solution of the dynamic stochastic game of |\mathcal{B}| SBSs, where C(t, \hat{x}(t)) is obtained by solving the HJB equation and q(t, \hat{x}(t)) is derived by solving the FPK equation. Therefore, the optimal transmission power is given by

p^{*}(t) = \arg\max_{p(t)} \left[ X_t\,\frac{\partial}{\partial x} C(t, \hat{x}(t)) + f(t)\,\frac{\partial}{\partial x} C(t, \hat{x}(t)) + \frac{1}{2}\,\mathrm{tr}\!\left(X_z^2\right) \frac{\partial^2}{\partial x^2} C(t, \hat{x}(t)) \right]   (6)

4 Simulation Results
The simulation evaluates the relative performance of the proposed MF-Game algorithm by comparing it with an existing Stackelberg-Game algorithm from the literature [10] for high and low loads λ. Here the load λ = M/B represents the number of UEs served by an SBS: we use λ = 3 UEs per SBS for low load and λ = 8 UEs per SBS for high load. The SBS density λ_s of the system is the number of base stations per square kilometer. From Fig. 1, it can be seen that, for a dense network, our proposed method improves the utility EE by about 4.2% compared to the existing baseline model with λ = 3. However, as the user load increases to λ = 8, this EE improvement reaches up


Fig. 1. Utility EE for high and low loads λ under different densities of SBSs

to 20.1%. In dense scenarios, our proposed method has a larger advantage in EE because it adapts better to the dynamic characteristics of the network. The two methods have in common that they both try to optimize the transmission power to maximize EE. Nevertheless, when the number of users is huge, the conventional game-theoretic Stackelberg-Game algorithm faces the well-known curse of dimensionality, whereas the MF-Game algorithm, as an advanced game-theoretic method, is well suited to analyzing the control policy. Figure 2 shows that the total throughput of the system increases with the number of available base stations. This is because when the total number of base stations increases, the number of base stations available to the users increases, that is, the

Fig. 2. Total throughput of users for high and low loads λ under different densities of SBSs


constraint set of the original optimization problem is enlarged, so the total throughput of the system increases to a certain extent. When the SBS density is large, the interference is also large and the throughput grows more slowly. As the user density increases, the system throughput keeps increasing, but the rate of increase becomes more and more gentle and eventually levels off, which is due to the limited total capacity of the system.

5 Conclusion
A power control method based on the mean field is proposed in this paper. In order to improve the energy efficiency of the network, the original DSG problem is transformed into a mean-field problem that is easier to solve, and the optimal power control strategy is sought. The simulation results show that the mean-field method proposed in this paper is superior to the classical game-theoretic method in terms of energy efficiency and network throughput.
Acknowledgements. This research was supported by the National Natural Science Foundation of China (Grant No. 61501306), the Doctoral Scientific Research Foundation of Liaoning Province (Grant No. 20170520228), and the College Students' Innovation and Entrepreneurship Training Program (Grant No. 110418092).

References
1. de Mari M, Calvanese Strinati E, Debbah M, Quek TQS (2017) Joint stochastic geometry and mean field game optimization for energy-efficient proactive scheduling in ultra dense networks. IEEE Trans Cogn Commun Netw 3(4):766–781
2. Aziz M, Caines PE (2017) A mean field game computational methodology for decentralized cellular network optimization. IEEE Trans Control Syst Technol 25(2):563–576
3. Samarakoon S, Bennis M, Saad W, Debbah M, Latva-aho M (2015) Energy-efficient resource management in ultra dense small cell networks: a mean-field approach. In: 2015 IEEE global communications conference (GLOBECOM), San Diego, CA, pp 1–6
4. Yang C, Li J, Sheng M, Anpalagan A, Xiao J (2018) Mean field game-theoretic framework for interference and energy-aware control in 5G ultra-dense networks. IEEE Wirel Commun 25(1):114–121
5. Yang C, Li J, Guizani M (2016) Cooperation for spectral and energy efficiency in ultra-dense small cell networks. IEEE Wirel Commun 23:64–71
6. Al-Zahrani AY, Yu FR, Huang M (2016) A joint cross-layer and colayer interference management scheme in hyperdense heterogeneous networks using mean-field game theory. IEEE Trans Veh Technol 65(3):1522–1535
7. Park J, Jung SY, Kim SL, Bennis M, Debbah M (2016) User-centric mobility management in ultra-dense cellular networks under spatio-temporal dynamics. In: Proceedings of IEEE global communications conference (GLOBECOM), pp 1–6
8. Xiao Y, Niyato D, Han Z, DaSilva LA (2015) Dynamic energy trading for energy harvesting communication networks: a stochastic energy trading game. IEEE J Sel Areas Commun 33(12):2718–2734


9. de Mari M, Calvanese Strinati E, Debbah M, Quek TQS (2017) Joint stochastic geometry and mean field game optimization for energy-efficient proactive scheduling in ultra dense networks. IEEE Trans Cogn Commun Netw 3(4):766–781
10. Shafigh AS, Mertikopoulos P, Glisic S (2016) A novel dynamic network architecture model based on stochastic geometry and game theory. In: 2016 IEEE international conference on communications (ICC), Kuala Lumpur, pp 1–7

Design of Gas Turbine State Data Acquisition Instrument Based on EEMD
Zhonglin Wei(✉), Pengyuan Liu, Feng Wang, and Tianhui Wang
Shijiazhuang Campus, Army Engineering University, No. 97, Heping West Road, Shijiazhuang 050003, Hebei, China
[email protected]

Abstract. In order to carry out condition monitoring during the working process of a gas turbine, a multi-channel data acquisition instrument was designed based on a high-speed AD converter and an FPGA, which can collect temperature, rotational speed and vibration signals in real time. The data are transmitted to the PC through a USB interface; the PC then uses EEMD to analyze the vibration data and LabVIEW software to process and display the data. At the same time the instrument has both an on-line data processing module and a storage module, so it can also analyze data offline in special working environments. The instrument is characterized by good communication with the host computer and strong anti-interference ability, so it can provide reliable state data for fault detection and analysis of the gas turbine, and it is feasible and practical for data acquisition and condition monitoring in a complex environment.
Keywords: Data acquisition · Empirical mode decomposition · Condition monitoring

1 Introduction
In order to monitor the condition of a gas turbine, a high-precision real-time acquisition instrument was designed, which is mainly composed of sensors and signal acquisition and conditioning circuits, with LabVIEW as the software platform. Considering the necessary conditions and the special working environment of gas turbine fault detection, the acquisition instrument can collect high-speed, high-precision data from two vibration signals, two temperature signals, one speed signal and two switching signals, transfer the data to the host computer through the USB interface, and analyze the data in real time.

2 Hardware Design
The hardware of the gas turbine state data acquisition instrument consists of signal acquisition and conditioning, data processing, data transmission and power processing circuits; its composition is shown in Fig. 1. The external sensor signals are digitized by the A/D converter after the conditioning circuits, and dual SDRAM is used as a data cache for the FPGA. The cached data are sent to the PC through the USB


Fig. 1. Hardware block diagram: the vibration, speed, temperature and start signals pass through their conditioning stages (operational amplifiers, comparator, optocoupler level conversion) into the high-speed AD converter and the FPGA with SDRAM caches; the FPGA connects to the DSP (with FLASH and SDRAM), and to the SATA, CAN bus and USB interfaces.

interface, where they are processed, displayed and stored by the PC. The advantage of this approach is its strong data processing capability: it can carry out an in-depth state analysis and fault detection of the gas turbine, display the results intuitively, and store a large amount of data. When the working environment of the gas turbine is not suitable for this mode, the cached data can be transmitted to the DSP module through the FPGA, processed online by the DSP, and then stored in the FLASH chip. The data in the FLASH chip can be transmitted over a long distance through the CAN bus, or read out through the USB interface for offline analysis.

2.1 Temperature Signal Measuring Circuit

The instrument uses a K-type thermocouple temperature measuring circuit with cold-junction compensation. The AD sampling circuit of the thermocouple adopts the MAX31855 converter, which converts the K-thermocouple signal into a digital quantity and outputs 14-bit signed data in a read-only format through an SPI-compatible interface. The thermocouple measuring circuit is shown in Fig. 2.
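As an illustration of how the host side could interpret the converter output, the Python sketch below decodes a 32-bit MAX31855 read-out frame; it assumes the standard frame layout of that part (14-bit signed thermocouple field at 0.25 °C per LSB plus a fault flag), and the example frame value is fabricated for the demonstration:

def decode_max31855(frame: int) -> float:
    """Decode a 32-bit MAX31855 frame into a thermocouple temperature in deg C.

    Assumes the usual layout of this part: bits 31..18 hold the 14-bit signed
    thermocouple reading (0.25 deg C per LSB) and bit 16 is the fault flag.
    """
    if frame & (1 << 16):
        raise RuntimeError("MAX31855 reports a thermocouple fault")
    raw = (frame >> 18) & 0x3FFF          # extract the 14-bit field
    if raw & 0x2000:                      # sign-extend negative temperatures
        raw -= 0x4000
    return raw * 0.25

# Fabricated example frame: a reading of 100 deg C corresponds to raw = 400 in bits 31..18.
print(decode_max31855(400 << 18))         # -> 100.0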

2.2 Speed Signal Measuring Circuit

A speed-measuring motor (tachogenerator) is used to measure the speed signal. When the gas turbine works normally, the speed-measuring motor outputs an AC voltage signal whose frequency is proportional to the rotational speed, so only the frequency information needs to be collected. In order to ensure that the signal amplitude meets the input requirements of the following circuit, a peak-clipping circuit is designed in the input stage of the speed signal measuring circuit. The input stage circuit is shown in Fig. 3.
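The sketch below shows one possible way of recovering the frequency information on the processing side: count the rising edges of the sampled TTL square wave over a window and convert to rotational speed. The pulses-per-revolution constant is a hypothetical calibration value, not a figure from the paper:

import numpy as np

def speed_rpm(ttl_samples, fs_hz, pulses_per_rev=60):
    """Estimate rotational speed from a sampled TTL square wave.

    ttl_samples are 0/1 samples of the comparator output, fs_hz is the sampling
    rate; pulses_per_rev is an assumed tachogenerator calibration constant.
    """
    rising_edges = np.flatnonzero(np.diff(ttl_samples.astype(int)) > 0)
    freq_hz = len(rising_edges) / (len(ttl_samples) / fs_hz)
    return 60.0 * freq_hz / pulses_per_rev

# Illustrative test: a 1 kHz square wave sampled at 100 kHz for 0.1 s -> about 1000 rpm.
fs, f0 = 100e3, 1e3
t = np.arange(0, 0.1, 1 / fs)
ttl = (np.sign(np.sin(2 * np.pi * f0 * t)) + 1) / 2
print(speed_rpm(ttl, fs))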

Fig. 2. Thermocouple measuring circuit (MAX31855 thermocouple-to-digital converter with SPI-compatible output).

Fig. 3. Input stage of the speed signal measuring circuit (clipping/limiting stage between the BNC input and the limited output, supplied from ±15 V).


In order to ensure effective and undistorted transmission of the signal, a gain-adjustable operational amplifier circuit is used to amplify and shape it. Because the signal has already been processed by the op-amp and its DC component has been filtered out, a zero-crossing comparator is used in the comparator stage to obtain a standard TTL square wave. The output signal is level-converted, and the resulting frequency signal is sent to the FPGA. The transmission stage of the speed signal measuring circuit is shown in Fig. 4.

Fig. 4. Transmission stage of the speed signal measuring circuit (gain-adjustable operational amplifier followed by a zero-crossing comparator that outputs a TTL square wave).

R96 is a matching resistor; choosing an appropriate resistance value effectively eliminates ringing. D10 is a germanium diode that limits the reverse input signal of the comparator to −300 mV and protects the chip from damage.

2.3 Vibration Sensor Protection Circuit

A built-in IC piezoelectric accelerometer is used to measure the vibration signal. In order to ensure normal operation of the sensor and protect it at the same time, a vibration sensor protection circuit with open-circuit and short-circuit protection is designed. The sensor protection circuit is shown in Fig. 5.

Fig. 5. Sensor protection circuit (open-circuit and short-circuit protection on the +24 V sensor supply, with an LED fault indicator).

2.4 Data Transmission Circuit

The USB interface adopts the CY7C68013, which supports the USB 2.0 protocol; all USB interface operations are coordinated by the FPGA. The CAN bus interface uses a CTM1050T transceiver with an SJA1000 controller, and the FPGA is directly connected to the SJA1000, which facilitates modification and debugging.

3 Software Design
The acquisition software was developed in LabVIEW and is used to process the uploaded signals. In the working process of the acquisition instrument, the signals are collected by the sensors, processed by the FPGA and sent to the PC for data analysis and judgment. The software user interface is shown in Fig. 6.

Fig. 6. Software user interface diagram

Temperature and speed data are processed so that the specific values are displayed intuitively. Vibration is an important indicator for gear fault diagnosis, so during data processing it must be analyzed in the time domain as well as in the amplitude and frequency domains.

3.1 EEMD Algorithm

When gears fail, their vibration signals show strong non-stationary characteristics due to amplitude and frequency modulation. To handle this feature while avoiding mode mixing, the core algorithm uses Ensemble Empirical Mode Decomposition (EEMD). EEMD is an efficient and adaptive signal decomposition method suitable for processing non-linear and non-stationary signals [1, 5]. It exploits the statistical property that Gaussian white noise has a uniform frequency distribution, so that the signal with added Gaussian white noise has continuity on different scales; this effectively solves the mode-mixing problem and at the same time improves the resolution. The decomposition process of EEMD is as follows [2–4]:
Step 1: Add a Gaussian white noise sequence to the target data.
Step 2: Decompose the sequence: the sequence with added Gaussian white noise is decomposed into a family of intrinsic mode functions (IMFs) using the EMD algorithm.
Step 3: Repeat the process: add a different Gaussian white noise sequence of the same amplitude each time, and repeat steps 1 and 2.
Step 4: Take the mean of each decomposed IMF as the final result, i.e.

C_j(t) = \frac{1}{N} \sum_{i=1}^{N} C_{ij}(t)   (1)

In (1), C_j(t) denotes the jth IMF component obtained by EEMD decomposition of the original signal, and N is the number of times white noise is added. Assuming that the given time signal is x(t), its Hilbert transform is

\hat{x}(t) = \frac{1}{\pi} \int_{-\infty}^{+\infty} \frac{x(\tau)}{t - \tau}\, d\tau = x(t) * \frac{1}{\pi t}   (2)

Then the analytic signal of x(t) is z(t) = x(t) + j\hat{x}(t). According to the properties of the analytic signal, the spectrum Z(j\Omega) of z(t) is obtained from the spectrum X(j\Omega) by setting the negative frequencies to zero and multiplying the positive frequencies by 2.
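A compact Python sketch of the ensemble-averaging steps 1–4 and of the Hilbert step in Eq. (2) is given below. The emd argument is a placeholder for any EMD routine (for example the one provided by the PyEMD package, if available); the 0.2× noise width and 200 trials mirror the settings quoted in Sect. 3.2, while the rest is an assumption-laden sketch rather than the instrument's actual LabVIEW code:

import numpy as np
from scipy.signal import hilbert

def eemd(signal, emd, trials=200, noise_width=0.2):
    """Ensemble EMD (steps 1-4): average the IMFs of noise-added realizations.

    `emd` is any function returning an array of shape (n_imfs, len(signal)),
    e.g. an EMD implementation from the PyEMD package (an assumption, not part
    of this paper). Noise width 0.2 x std and 200 trials follow Sect. 3.2.
    """
    sigma = noise_width * np.std(signal)
    runs = [emd(signal + sigma * np.random.randn(len(signal))) for _ in range(trials)]
    n = min(r.shape[0] for r in runs)              # keep the IMFs common to all runs
    return np.mean([r[:n] for r in runs], axis=0)  # Eq. (1): C_j(t) = (1/N) sum_i C_ij(t)

def hilbert_envelope_and_freq(imf, fs):
    """Analytic signal z(t) = x(t) + j*x_hat(t); return envelope and instantaneous frequency."""
    z = hilbert(imf)                               # scipy computes the analytic signal
    inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2.0 * np.pi)
    return np.abs(z), inst_freq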

3.2 Vibration Analysis

The accelerometer selected for the acquisition instrument has a sensitivity of 15 mV/g; the sampling frequency is 100 kHz and 10,240 points are sampled. The original time-domain signal measured during actual operation is shown in Fig. 7. The time-domain signal is decomposed by EEMD; the amplitude of the added white noise is 0.2 times the standard deviation of the original signal, and the ensemble number is set to 200. The extracted IMFs are shown in Fig. 8. It can be seen that a clear spectrum can be obtained by EEMD together with the Hilbert transform, and the mode-aliasing phenomenon is effectively resolved.


Fig. 7. Time-domain diagram of the original signal

Fig. 8. IMF diagram

4 Conclusions
The acquisition instrument achieves high-speed and accurate acquisition of temperature, rotational speed and vibration acceleration signals in the actual working environment, and provides reliable data for fault detection and analysis of the gas turbine. At the same time, the acquisition instrument communicates well with the host computer. It can be used as a fault detection instrument for equipment in special working environments, and it has good application flexibility and practicability.

References
1. Lin JS (2010) Gearbox fault diagnosis based on EEMD and Hilbert transform. J Mech Transm 34(5):62–64
2. Wu ZH, Huang NE (2008) Ensemble empirical mode decomposition: a noise assisted data analysis method. Adv Adapt Data Anal 1:1–41


3. Zhang J (2008) Analysis and improvement of modal aliasing in EMD algorithms. Master's thesis, University of Science and Technology of China, Hefei
4. Lei YG, He ZJ, Zi YY (2009) Application of the EEMD method to rotor fault diagnosis of rotating machinery. Mech Syst Signal Process 23(4):1327–1338
5. Huang NE, Shen Z, Long SR (1998) The empirical mode decomposition and the Hilbert spectrum for non-linear and non-stationary time series analysis. Proc R Soc Lond A 454:903–995

Cramér–Rao Bound Analysis for Joint Estimation of Target Position and Velocity in Hybrid Active and Passive Radar Networks
Chenguang Shi1,2, Wei Qiu2, Fei Wang2(✉), and Jianjiang Zhou2
1 Science and Technology on Electro-Optic Control Laboratory, Luoyang 471009, China
2 Key Laboratory of Radar Imaging and Microwave Photonics (Nanjing University of Aeronautics and Astronautics), Ministry of Education, Nanjing 210016, China
[email protected]

Abstract. This paper examines joint moving-target parameter estimation in hybrid active and passive radar networks with sensors placed on moving platforms, which are composed of one dedicated linear frequency modulated (LFM)-based active radar transmitter, multiple frequency modulated (FM)-based illuminators of opportunity, and multichannel radar receivers. Firstly, target returns contributed by the active radar transmitter and the multiple illuminators of opportunity are adopted to fulfill the radar purpose, resulting in a hybrid active and passive radar network. Then, the CRLB for joint target parameter estimation is derived as the performance metric for the underlying system. Finally, the numerical results show that the achievable CRLB can be decreased by exploiting the signals scattered off the target due to the illuminators of opportunity transmissions.
Keywords: Cramér–Rao lower bound (CRLB) · Fisher information matrix (FIM) · Hybrid radar network systems · Linear frequency modulated (LFM) signals · Frequency modulated (FM) signals

1 Introduction
During recent years, the distributed radar network system has attracted significant attention from researchers due to its obvious advantages over other radar systems [1, 2]; it is composed of several widely deployed radar nodes and can simultaneously emit multiple independent waveforms from different transmitting antennas. Research on target parameter estimation is becoming more and more popular [3–7]. It has been demonstrated that the signals scattered off the target due to illuminators of opportunity transmissions can be utilized to enhance the target detection performance and the parameter estimation accuracy of an active radar system. In this paper, we consider a hybrid active and passive radar network with sensors placed on moving platforms, which is composed of one dedicated linear frequency modulation (LFM)-based active



radar transmitter, multiple frequency modulation (FM)-based illuminators of opportunity, and multichannel radar receivers. For the sake of simplicity, the antenna locations are assumed to be known as prior knowledge. On the other hand, the signals transmitted from the illuminators of opportunity can be decoded and reconstructed at the multichannel radar receivers. Thus, the target returns received at the radar receivers due to the active radar transmission and the illuminators of opportunity transmissions can be employed jointly for target parameter estimation, forming a hybrid active and passive radar network. However, to the best of our knowledge, there is still no published literature that addresses the problem of joint moving-target parameter estimation performance in hybrid active and passive radar networks. This gap motivates this work. This paper aims to investigate the Cramér–Rao lower bound (CRLB) for joint target estimation in hybrid active and passive radar networks with sensors placed on moving platforms, composed of one dedicated LFM-based active radar transmitter, multiple FM-based illuminators of opportunity, and multichannel radar receivers. We compute the joint CRLB for the estimation of target location and velocity in the hybrid radar network system, in which the non-coherent processing mode is considered. Finally, numerical simulations are provided to verify the accuracy of the theoretical derivations. The rest of this paper is organized as follows. Section 2 describes the signal model for hybrid radar networks. In Sect. 3, the joint CRLB is computed for the non-coherent processing scenario by deriving closed-form expressions of the FIM. The numerical simulations are provided in Sect. 4. Finally, concluding remarks are drawn in Sect. 5.

2 Signal Model
Consider a hybrid radar network architecture comprising one dedicated LFM-based active radar transmitter, N_t FM illuminators of opportunity, and N_r multichannel receivers. Let the active radar transmitter and the ith (i = 1, \ldots, N_t) FM-based illuminator be located at p_t = [x_t, y_t] and p_{ti} = [x_{ti}, y_{ti}] respectively, in a 2-dimensional Cartesian coordinate system for simplicity. Similarly, the jth (j = 1, \ldots, N_r) radar receiver is located at p_{rj} = [x_{rj}, y_{rj}]. The target position and velocity are assumed to be deterministic unknowns, denoted by p = [x, y] and v = [v_x, v_y]. We define the unknown target state vector

\theta = [x, y, v_x, v_y]^{T}   (1)

2.1 LFM Signal Model in Active Radar Networks

It is assumed that the dedicated active radar transmitter transmits a sequence of LFM pulses or chirps. The complex envelope of the signal transmitted from the transmitter is \sqrt{P_t}\, s(t), where P_t is the transmitted power. The complex envelope of the transmitted unit-power signal is then given by [6]

s(t) = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} s_1(t - nT_R)   (2)

where

s_1(t) = \begin{cases} \frac{1}{\sqrt{\tau}}\, e^{j\pi k t^2}, & |t| \le \frac{\tau}{2} \\ 0, & \text{elsewhere} \end{cases}   (3)

N is the number of subpulses in each transmitted burst, T_R is the pulse repetition interval (PRI), and \tau is the duration of each pulse, with \tau < T_R/2. Moreover, k\tau^2 = B\tau is the effective time-bandwidth product of the signal and B is the total frequency deviation. Let \tau_{tj} denote the time delay corresponding to the jth path:

\tau_{tj} = \frac{\sqrt{(x - x_t)^2 + (y - y_t)^2} + \sqrt{(x - x_{rj})^2 + (y - y_{rj})^2}}{c} = \frac{\| p - p_t \| + \| p - p_{rj} \|}{c}   (4)

where c is the speed of light. The Doppler shift of the moving target corresponding to the jth path is the time rate of change of the total jth path length:

f_{tj} = \frac{f_c}{c} \left[ v_x \left( \frac{x - x_t}{\|p - p_t\|} + \frac{x - x_{rj}}{\|p - p_{rj}\|} \right) + v_y \left( \frac{y - y_t}{\|p - p_t\|} + \frac{y - y_{rj}}{\|p - p_{rj}\|} \right) + v_{tx} \frac{x - x_t}{\|p - p_t\|} + v_{ty} \frac{y - y_t}{\|p - p_t\|} + v_{xj} \frac{x - x_{rj}}{\|p - p_{rj}\|} + v_{yj} \frac{y - y_{rj}}{\|p - p_{rj}\|} \right]   (5)

where f_c denotes the carrier frequency of the radar transmitter.

2.2 FM Signal Model in FM-Based Passive Radar Networks

The complex envelope of the signal transmitted from the ith FM illuminator of opportunity is \sqrt{P_{ti}}\, s_i(t), where P_{ti} is the transmitted power of the ith FM transmitter. The transmitted signal s_i(t) is a unit-power pulse, that is [3]

s_i(t) = \begin{cases} \frac{1}{\sqrt{T_i}}\, e^{j \beta_i \sin(2\pi f_i t + \varphi_i)}, & |t| \le \frac{T_i}{2} \\ 0, & \text{elsewhere} \end{cases}   (6)

where T_i is the observation time, \beta_i is the modulation index, f_i is the instantaneous frequency, and \varphi_i is the signal phase [3]. It is worth pointing out that the signals from the different illuminators of opportunity can be estimated perfectly at each radar receiver from the direct-path reception and separated in some domain. Let \tau_{ij} and f_{ij} denote the bistatic delay and Doppler shift corresponding to the ijth path associated with the target located at \theta:

\tau_{ij} = \frac{\sqrt{(x - x_{ti})^2 + (y - y_{ti})^2} + \sqrt{(x - x_{rj})^2 + (y - y_{rj})^2}}{c} = \frac{\| p - p_{ti} \| + \| p - p_{rj} \|}{c}   (7)

f_{ij} = \frac{f_{ci}}{c} \left[ v_x \left( \frac{x - x_{ti}}{\|p - p_{ti}\|} + \frac{x - x_{rj}}{\|p - p_{rj}\|} \right) + v_y \left( \frac{y - y_{ti}}{\|p - p_{ti}\|} + \frac{y - y_{rj}}{\|p - p_{rj}\|} \right) + v_{xj} \frac{x - x_{rj}}{\|p - p_{rj}\|} + v_{yj} \frac{y - y_{rj}}{\|p - p_{rj}\|} \right]   (8)

where fci represents the carrier frequency of the ith FM transmitter.
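To make Eqs. (7)–(8) concrete, the short Python sketch below evaluates the bistatic delay and Doppler shift for one illuminator–receiver pair; the geometry, velocities and carrier frequency in the example are arbitrary illustrative numbers, and the sign convention simply follows the reconstructed Eq. (8):

import numpy as np

C = 3e8  # speed of light, m/s

def bistatic_delay(p, p_tx, p_rx):
    """Eq. (7): tau = (||p - p_tx|| + ||p - p_rx||) / c."""
    return (np.linalg.norm(p - p_tx) + np.linalg.norm(p - p_rx)) / C

def bistatic_doppler(p, v, p_tx, p_rx, v_rx, fc):
    """Eq. (8): rate of change of the bistatic range, scaled by fc / c."""
    u_tx = (p - p_tx) / np.linalg.norm(p - p_tx)   # unit vector from transmitter to target
    u_rx = (p - p_rx) / np.linalg.norm(p - p_rx)   # unit vector from receiver to target
    range_rate = v @ (u_tx + u_rx) + v_rx @ u_rx   # target motion plus receiver-platform motion
    return fc / C * range_rate

# Illustrative numbers only (metres, m/s, Hz), loosely inspired by the scene in Fig. 1.
p, v = np.array([3500.0, 2500.0]), np.array([30.0, 50.0])
p_ti, p_rj, v_rj = np.array([0.0, 0.0]), np.array([6000.0, 1000.0]), np.array([50.0, 50.0])
print(bistatic_delay(p, p_ti, p_rj), bistatic_doppler(p, v, p_ti, p_rj, v_rj, 100e6))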

3 Joint Cramér–Rao Lower Bound

3.1 Non-coherent FIM for LFM-Based Active Radar Networks

Under the non-coherent processing mode, the target is assumed to be made up of several individual isotropic scatterers [3, 5]. The attenuation coefficient corresponding to the jth path is modeled as a zero-mean complex Gaussian random variable \zeta_{tj} \sim \mathcal{CN}(0, \sigma^2), which is constant over the observation interval [5] and varies with the angle of view. The received signal at the jth receiver due to the signal transmitted from the dedicated radar transmitter is given by

r_{tj}(t) = \sqrt{P_t}\, \alpha_{tj}\, \zeta_{tj}\, s(t - \tau_{tj})\, e^{j 2\pi f_{tj} t} + n_{tj}(t)   (9)

where n_{tj}(t) is the additive noise corresponding to the jth path, which is a temporally white, zero-mean complex Gaussian random process with variance \sigma_n^2. The term \alpha_{tj} = \frac{1}{\|p - p_t\|\,\|p - p_{rj}\|} represents the variation in the signal strength due to path loss effects.

Note that the signals from the active radar transmitter are received and processed at the multichannel radar receivers. The joint log-likelihood ratio across all the transmitter–receiver pairs is

L(\theta; r(t)) = \sum_{j=1}^{N_r} \frac{\sigma^2 \alpha_{tj}^2 P_t}{\sigma_n^2 \left( \sigma^2 \alpha_{tj}^2 P_t + \sigma_n^2 \right)} \left| \int_{-\infty}^{\infty} r_{tj}(t)\, s(t - \tau_{tj})\, e^{j 2\pi f_{tj} t}\, dt \right|^2 + C   (10)

where r(t) = [r_{t1}(t), r_{t2}(t), \ldots, r_{tN_r}(t)]^{T} denotes the received signals from the entire set of receivers, and C is a constant that does not depend on \theta. The derivations for non-coherent MIMO radar in [6] express the MIMO FIM as a combination of the constituent bistatic FIMs. After lengthy algebraic manipulations, we can write the non-coherent FIM for the active radar network as

J^{A}_{\mathrm{non}}(\theta) = \sum_{j=1}^{N_r} \frac{8\pi^2 \sigma^2 \alpha_{tj}^2 P_t}{\sigma_n^2 \left( \sigma^2 \alpha_{tj}^2 P_t + \sigma_n^2 \right)}\, J^{A}_{ij}(\theta)   (11)

where the elements of the bistatic FIM J^{A}_{ij}(\theta) for the non-coherent processing mode corresponding to the jth transmitter–receiver pair are identical to the results in [6].

3.2 Non-coherent FIM for FM-Based Passive Radar Networks

Since it is assumed that the different transmitted FM signals can be separated at the radar receivers, the received signal at the jth receiver due to the signal transmitted from the ith FM illuminator of opportunity can be expressed in a similar way:

r_{ij}(t) = \sqrt{P_{ti}}\, \alpha_{ij}\, \zeta_{ij}\, s_i(t - \tau_{ij})\, e^{j 2\pi f_{ij} t} + n_{ij}(t)   (12)

where n_{ij}(t) represents the additive clutter-plus-noise corresponding to the ijth path, which is a temporally white, zero-mean complex Gaussian random process with variance \sigma_n'^2. The term \alpha_{ij} = \frac{1}{\|p - p_{ti}\|\,\|p - p_{rj}\|} denotes the ijth path loss. Further, the target attenuations \zeta_{ij} are zero-mean Gaussian distributed with variance \sigma'^2.

Nt X Nr X L h; r ðtÞ ¼ 0

i¼1 j¼1

0



r 2 a2ij Pti

r0n2 r0 2 a2ij Pti þ r0n2

Z



1 1

rij ðtÞsi





t  sij e

j2pfij t

2

0 dt

þ C : ð13Þ

0 where r ðtÞ ¼ ½r11 ðtÞ; r12 ðtÞ;    ; rNt Nr ðtÞy denotes the received signals from the entire 0 set of the receivers, and C is a constant that is not dependent on h. Similarly, the non-coherent FIM with respect to h in FM-based passive radar networks can be calculated as follows:

0 2 Nt X Nr 8p2 r 2 a2ij Pti X JPij ðhÞ; JPnon ðhÞ ¼ 02 02 2 02 i¼1 j¼1 rn r aij Pti þ rn

ð14Þ

where the elements of the bistatic FIM JPij ðhÞ for the non-coherent processing mode corresponding to the ijth transmitter-receiver pair are identical to the results in [3].

Cramér–Rao Bound Analysis for Joint Estimation

3.3

61

Non-coherent CRLB for Hybrid Radar Networks

Now, we will compute the CRLB for the non-coherent processing scenario in hybrid radar networks by deriving the hybrid FIM expression. Using the derivations above, the FIM for hybrid LFM-based active and FM-based passive radar networks obtained from (11) to (14) is given by: P Jnon ðhÞ ¼ JA non ðhÞ þ Jnon ðhÞ 2 0 2 Nt X Nr Nr 8p2 r2 a2tj Pt 8p2 r 2 a2ij Pti X X JA JPij ðhÞ: ¼ ij ðhÞ þ 02 02 2 02 2 2 2 2 j¼1 rn r atj Pt þ rn i¼1 j¼1 rn r aij Pti þ rn

ð15Þ

Furthermore, the non-coherent CRLB matrix for joint estimation of target locations and velocities is derived as: CRLBnon ðhÞ ¼ J1 non ðhÞ:

ð16Þ

The CRLBs for the estimates of the unknown target locations and velocities are determined by the four diagonal elements of the CRLB matrix:    1  ) y CRLBxnon ðhÞ ¼ J1 non ðhÞ 1;1 ; CRLBnon ðhÞ ¼ Jnon ðhÞ 2;2 :    1  vy x CRLBvnon ðhÞ ¼ J1 non ðhÞ 3;3 ; CRLBnon ðhÞ ¼ Jnon ðhÞ 4;4

ð17Þ

Remark 1 We can clearly notice from the expressions of the entries of the hybrid FIM in (15) that the target parameter estimation performance can be remarkably enhanced with the information obtained from FM-based passive radar networks, which is because that the use of passive radar networks results in increase of the signal-to-noise ratio (SNR) values, implying the decrease of the obtained CRLB. These expressions for CRLB can be utilized as an important performance metric to optimize the hybrid radar networks for a predetermined accuracy requirement with the minimum system cost.

4 Simulation Results and Analysis In the sequel, numerical simulation results are dedicated to compute the joint CRLB for hybrid active and passive radar network systems as well as verify the accuracy of the theoretical derivations. Here, we consider a hybrid radar network with one dedicated active radar transmitter, four FM-based illuminators of opportunity, and four multichannel radar receivers, as depicted in Fig. 1. We set the signal parameters as follows: N ¼ 256, B ¼ 50 MHz, TR ¼ 104 s, fc ¼ 10 GHz, s ¼ 106 s, r2 ¼ 1, r2n ¼ 1013 W, c ¼ 3  108 m=s, Ti ¼ 0:1 s, bi ¼ 10, ui ¼ p=2, fi ¼ 25 KHz, fci ¼ 100 MHz, 0 0 r 2 ¼ 1, rn2 ¼ 1013 W, Pti ¼ 10 KW.

62

C. Shi et al.

Define the SNR as: Nr X

SNR ¼ 10l g

j¼1

, ! r2 a2tj Pt

r2n :

ð18Þ

Figure 2 illustrates the square root of CRLB (RCRLB) in the x-position and yposition dimensions versus SNR for the non-coherent processing mode. Similarly, Fig. 3 shows the velocity RCRLB as a function of the SNR. The curves show that the RCRLB decreases with the increase of SNR. The RCRLB is lower in the y-dimension for both the position and velocity. Moreover, it should be pointed out that the RCRLB of hybrid radar networks is much lower than that of active radar networks, which shows that the target parameter estimation performance can be significantly improved with the use of information obtained from FM-based passive radar networks, although the resolution of the FM-based passive radar networks is much worse when compared with the resolution of active radar networks. This is due to the fact that the use of passive radar networks leads to increase of the SNR values, which leads to the decrease of the RCRLB.

7000

5000

FM Transmitters Radar Transmitter Radar Receiver Target

4000

(80, 20)m/s

Y position[m]

6000

(30, 50)m/s

3000

(10, 70)m/s

2000 1000

(50, 50)m/s

(30, 50)m/s (60, 50)m/s

0 -1000 -1000

0

1000

2000

3000

4000

5000

6000

7000

X position[m]

Fig. 1. Target and the hybrid radar network configuration used in the numerical simulations.

Cramér–Rao Bound Analysis for Joint Estimation 2

px (Hybrid radar network) py (Hybrid radar network) px (Active radar network) py (Active radar network)

1.8

RCRLB[m]

1.6 1.4 1.2 1 0.8 0.6 0.4 -1 10

0

10

SNR[dB]

Fig. 2. Non-coherent RCRLB in the target position dimensions versus SNR.

Fig. 3. Non-coherent RCRLB in the target velocity dimensions versus SNR (RCRLB in m/s for v_x and v_y, hybrid radar network versus active radar network).

Fig. 4. Non-coherent RCRLB in the target position dimensions versus SNR when [p_x, p_y] = [8500, 4000] m (RCRLB in m for p_x and p_y, hybrid radar network versus active radar network).

Furthermore, to investigate the dependence of the non-coherent CRLB on the geometry, we move the target position to [8500, 4000] m and recompute the hybrid RCRLB, as illustrated in Fig. 4. It is apparent that the RCRLBs differ from the earlier case, because the geometry between the target and the hybrid radar network significantly affects the derivatives of the delay–Doppler terms with respect to the Cartesian coordinates [3, 6]. For brevity, the non-coherent RCRLB curves in the target velocity dimensions versus SNR for [p_x, p_y] = [8500, 4000] m are omitted here.

5 Conclusion
In this paper, a new signal model has been built for hybrid active and passive radar networks, which accounts for the target-reflected signals contributed by both the active radar and the illuminators of opportunity transmissions. The moving-target parameter estimation problem and its CRLB have then been discussed and derived. Finally, simulation results were provided to illustrate that the joint target parameter estimation accuracy can be clearly improved by employing the signals obtained from the passive radar network. In future work, we will concentrate on the problem of optimizing the hybrid radar network for a predetermined accuracy requirement at the minimum system cost.
Acknowledgements. This work was supported in part by the National Natural Science Foundation of China (Grant No. 61801212), in part by the Natural Science Foundation of Jiangsu


Province (Grant No. BK20180423), in part by China Postdoctoral Science Foundation (Grant No. 2019M650113), in part by the Fundamental Research Funds for the Central Universities (Grant No. NT2019010), in part by the National Aerospace Science Foundation of China (Grant No. 20172752019, No. 2017ZC52036), in part by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PADA), in part by the Key Laboratory of Radar Imaging and Microwave Photonics (Nanjing Univ. Aeronaut. Astronaut.), Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China, and in part by the Key Laboratory of Dynamic Cognitive System of Electromagnetic Spectrum Space (Nanjing Univ. Aeronaut. Astronaut.), Ministry of Industry and Information Technology, Nanjing 210016, China.

References
1. Li J, Stoica P (2009) MIMO radar signal processing. Wiley, Hoboken, NJ, pp 1–20
2. Hack DE, Patton LK, Himed B et al (2014) Detection in passive MIMO radar networks. IEEE Trans Signal Process 62:2999–3012
3. Shi CG, Wang F, Zhou JJ (2016) Cramér-Rao bound analysis for joint target location and velocity estimation in frequency modulation based passive radar networks. IET Signal Process 10:780–790
4. Gogineni S, Rangaswamy M, Rigling BD et al (2014) Cramér-Rao bounds for UMTS-based passive multistatic radar. IEEE Trans Signal Process 62:95–106
5. He Q, Blum RS, Haimovich AM (2010) Noncoherent MIMO radar for location and velocity estimation: more antennas means better performance. IEEE Trans Signal Process 58:3661–3680
6. Shi CG, Salous S, Wang F et al (2016) Cramér-Rao lower bound evaluation for linear frequency modulation based active radar networks operating in a Rice fading environment. Sensors 16:1–17
7. Shi CG, Wang F, Sellathurai M et al (2016) Transmitter subset selection in FM-based passive radar networks for joint target parameter estimation. IEEE Sens J 16:6043–6052

A Hinged Fiber Grating Sensor for Hull Roll and Pitch Motion Measurement
Wei Wang(✉), Libo Qiao, Yuliang Li, Jingping Yang, and Chuanqi Liu
Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, 300387 Tianjin, China
[email protected]

Abstract. This paper introduces a novel fiber Bragg grating (FBG) sensor for hull roll and pitch motion measurement. The sensor is mainly composed of three parts: a differential hinge structure, fiber gratings and a mass block. When the hull is tilted by external forces, the fiber gratings fixed on the left and right sides are deformed by tensile and compressive forces, respectively. The relationship between the deformation of a fiber grating and its wavelength follows a proportional function. Using the compensation algorithm of the demodulator, the fiber wavelength and hence the inclination angle of the ship can be obtained.
Keywords: Fiber Bragg gratings · Dynamic monitoring · Differential angle hinge structure · Mechanical temperature compensation

1 Introduction
Among all transportation methods, ship transportation has the advantages of large load capacity and low cost [1]. However, due to the influence of wind, waves and currents, a hull at sea often undergoes periodic rolling, pitching and swaying [2]. These movements produce a series of negative consequences, such as the ship stalling at the same power, serious damage to the hull structure and crew seasickness [3]. Therefore, if these movements can be detected in time, the ship's service life can be prolonged and the discomfort of the crew caused by hull fluctuations can be reduced. The fiber Bragg grating is an important part of the fiber-optic sensor. It has excellent anti-electromagnetic interference capability and electrical insulation, and multiple gratings with different wavelengths can be connected in series on the same fiber [4]. It is corrosion resistant, physically stable, small in volume and easy to shape, so fiber-optic sensors are well suited to marine vessels. Nowadays, FBG sensors are extensively used to measure common parameters such as temperature, stress and strain, vibration, displacement and acceleration, and much progress has also been made in measuring angles. Ferdinand [5] used an optical fiber and a pendulum structure to measure the dip angle; however, the fiber grating remains in a dangling state for a long time and, not being well protected by the package, is easily broken. Xie [6] measured the angular change by utilizing the change of


buoyancy in a liquid. However, after long-term use the metal in the liquid is easily corroded, and the liquid itself is somewhat hazardous, which complicates the experimental process.

2 Theory

2.1 FBG Basic Sensing Principle

The sensing principle of the FBG sensor is that a change in a physical quantity of the measured object (such as temperature, displacement or pressure) changes the wavelength of the FBG; this wavelength shift is modulated and demodulated by the demodulator, and the change in the physical quantity is thereby measured. The principle is shown in Fig. 1 [7].

Fig. 1. Sensing principle of FBG

According to fiber coupled-mode theory, when the fiber Bragg grating is affected by external factors, its center wavelength drifts. The fiber wavelength can be expressed as

\lambda_B = 2 n_{\mathrm{eff}} \Lambda   (1)

where \lambda_B is the FBG wavelength, n_{\mathrm{eff}} is the effective refractive index of the fiber core, and \Lambda is the period of the FBG. Since temperature and strain both have a direct effect on n_{\mathrm{eff}} and \Lambda, the wavelength shift is

\Delta\lambda_B = 2 \Delta n_{\mathrm{eff}} \Lambda + 2 n_{\mathrm{eff}} \Delta\Lambda   (2)

2.2 Temperature Compensation Method of FBG

The effect of temperature on the hull roll and pitch motion measurement is eliminated using mechanical compensation. Two identical fiber gratings are pasted on the two opposite sides where the strain changes occur, so that one is subjected to tensile strain and the other to compressive strain. Let the center wavelengths of the two fiber gratings be \lambda_1 and \lambda_2; their wavelength changes can be expressed as [8]

\Delta\lambda_1 = \alpha_\varepsilon \Delta\varepsilon_1 + \alpha_T \Delta T_1   (3)

\Delta\lambda_2 = \alpha_\varepsilon \Delta\varepsilon_2 + \alpha_T \Delta T_2   (4)

where \alpha_\varepsilon is the strain sensitivity coefficient of the fiber grating and \alpha_T is its temperature sensitivity coefficient. When the hull is tilted by rolling or pitching, the strain of the fiber gratings inevitably changes: one grating is subjected to tensile strain while the other is subjected to compressive strain, and the two strains are equal in magnitude and opposite in direction. Since the two gratings share the same temperature, it can be inferred that

\Delta T_1 = \Delta T_2   (5)

\Delta\lambda_1 - \Delta\lambda_2 = 2 \alpha_\varepsilon \Delta\varepsilon_1   (6)

Therefore, once the sensor is packaged, the change of the center wavelength difference of the fiber gratings is unaffected by temperature and is related only to the strain-induced wavelength shift caused by the hull roll and pitch. A temperature self-compensating fiber grating sensor is thereby achieved.
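The compensation in Eqs. (3)–(6) reduces to a difference operation on the two measured wavelength shifts, as the Python sketch below illustrates. The strain sensitivity and the strain-to-angle slope are hypothetical placeholders; the real values would come from calibration of the packaged sensor:

def compensated_strain(d_lambda_1, d_lambda_2, alpha_eps):
    """Eq. (6): temperature cancels in the difference, leaving 2 * alpha_eps * d_eps."""
    return (d_lambda_1 - d_lambda_2) / (2.0 * alpha_eps)

def tilt_angle_deg(d_lambda_1, d_lambda_2, alpha_eps, k_angle):
    """Assumed linear strain-to-angle calibration: theta = k_angle * d_eps."""
    return k_angle * compensated_strain(d_lambda_1, d_lambda_2, alpha_eps)

ALPHA_EPS = 1.2e-3   # nm per microstrain, a typical order of magnitude for FBGs (assumed)
K_ANGLE = 0.05       # degrees per microstrain, hypothetical calibration constant
print(tilt_angle_deg(0.24, -0.24, ALPHA_EPS, K_ANGLE))   # equal and opposite shifts -> 10 deg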

2.3 Arc Hinge Flexibility Theory

The flexible hinge is the most important part of the design structure, and Fig. 2 is a schematic diagram of the elliptical flexible hinge. This section mathematically models the flexible arc hinge and gives a method to solve its flexibility [9].

Fig. 2. Arc flexible hinge schematic

In solving the hinge flexibility, the deformation of the flexible hinge is decomposed into the accumulation of bending deformations of a number of micro-element segments, where each segment is treated as an equal-section rectangular beam of length


dx. Here R, a, w, t_0 and θ_m are the notch radius, half notch length, hinge width, minimum thickness and maximum central angle of the hinge, respectively. According to the geometric relationship it can be known that

t(x) = 2R + t_0 - 2\sqrt{R^2 - (x - a)^2} \qquad (7)

\theta_m = \arcsin\frac{a}{R} \qquad (8)

According to the basic formulas of material mechanics, the angular deformation α_z generated by the micro-elements around the z-axis under the action of the moment M_z is computed as

\alpha_z = \int_0^{l} \frac{M_z}{E I_z(x)}\, dx = \int_0^{l} \frac{12 M_z}{E w t^3(x)}\, dx \qquad (9)

where E is the elastic modulus of the selected material and I_z(x) = w t^3(x)/12 is the moment of inertia of the micro-element cross section about the z-axis. It should be noticed that dx = R\cos\theta\, d\theta and t(\theta) = 2R + t_0 - 2R\cos\theta. The flexibility under the action of the moment M_z is therefore

\frac{\alpha_z}{M_z} = \frac{12R}{Ew} N_1 \qquad (10)

N_1 = \int_{-\theta_m}^{\theta_m} \frac{\cos\theta}{t^3(\theta)}\, d\theta \qquad (11)
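To make Eqs. (10)–(11) concrete, the short sketch below evaluates N_1 numerically and returns the bending flexibility. The hinge dimensions and elastic modulus used are illustrative assumptions, not the values of the sensor designed in this paper.

```python
import numpy as np

def hinge_flexibility(R, t0, a, w, E, n=2001):
    """Bending flexibility alpha_z / M_z of an arc flexure hinge, Eqs. (7)-(11).

    R: notch radius, t0: minimum thickness, a: half notch length,
    w: hinge width, E: elastic modulus (SI units).
    """
    theta_m = np.arcsin(a / R)                      # Eq. (8)
    theta = np.linspace(-theta_m, theta_m, n)
    t = 2.0 * R + t0 - 2.0 * R * np.cos(theta)      # t(theta)
    f = np.cos(theta) / t ** 3
    n1 = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))  # Eq. (11), trapezoidal rule
    return 12.0 * R / (E * w) * n1                  # Eq. (10), rad per N*m

if __name__ == "__main__":
    # Illustrative beryllium-bronze hinge: R = 3 mm, t0 = 0.5 mm, a = 2 mm, w = 8 mm, E ~ 128 GPa.
    print(f"{hinge_flexibility(3e-3, 0.5e-3, 2e-3, 8e-3, 128e9):.3e} rad/(N*m)")
```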

3 Sensor Structure
3.1 Structure Description

The sensor structure is illustrated in Fig. 3. The sensor is composed of three parts: the hinge structure, the FBG and the mass block. The hinge structure is a differential symmetric hinge structure, which can greatly enhance the sensitivity and improve the measurement accuracy of the sensor. The hinge structure and the mass block are integrally formed to avoid the loss and maintenance problems caused by long-term use. The main body uses beryllium bronze, which has the advantages of high strength, high elastic limit, corrosion resistance and fatigue resistance. Hence, it is suitable for hull sensors. Combining the hinge structure and the mass block, the FBG sensitive component can dynamically measure the angular change caused by ship motion. Under the influence of gravity, the force that the mass block exerts on the hinge structure changes when the hull tilts. The fiber gratings fixed on the left and right sides are deformed by tension and compression, respectively, resulting in a change of the center wavelength of the fiber gratings. When the tilt


angle changes continuously, it exhibits an approximately linear relationship with the change of the FBG center wavelength. By means of the compensation algorithm of the demodulation instrument, the relationship between the fiber center wavelength and the change of the tilt angle can ultimately be obtained.

Fig. 3. Sensor structure

3.2 Ansys Analysis

The structural model of the FBG sensor designed in this paper can be directly modeled and analyzed in the Ansys Workbench 15.0 software. After applying the fixed constraints and meshing, the direction of gravity acting on the sensor is changed in turn to represent the change of the tilt angle during hull movement. Through static analysis of the sensor displacement along the x-axis, it can be clearly seen that the measurement range of the sensor is from −90° to 90°. As the tilt angle increases, the displacement of the stress point in the x-axis direction increases. The strain at the loading point can be obtained from the strain formula of the sensor. Figure 4 shows the Ansys static analysis at different angles. This subsection analyzes the structure of the sensor from two aspects, i.e., modal analysis and harmonic response analysis. Modal analysis is primarily used to determine the resonant frequencies and mode shapes of the sensor structure; resonance of the designed structure can be avoided by analyzing the frequency of the first resonance. The purpose of the harmonic response analysis is to calculate the response at several frequencies and to observe the stress corresponding to the peak frequency. According to the above dynamic analysis, the natural frequency of the FBG angle sensor is 105.07 Hz (Fig. 5). Moreover, the difference between the first-order modal frequency and the second- to fifth-order modal frequencies is rather large, which indicates that the cross-coupling of the structure is very small. Figure 6 is the harmonic response curve of the sensor from 0 to 150 Hz.

Fig. 4. Ansys static analysis: (a) θ = −90°, (b) θ = 90°, (c) θ = 0°

It can be seen from the diagram that there is an obvious resonance at 105 Hz. In reality, the frequency of wind waves is between 0.01 and 3 Hz. Thus, the sensor meets the practical requirements of measurement accuracy.

Fig. 5. Modal analysis: (a) first-order mode shape, (b) frequency diagram under different modes


Fig. 6. Harmonic response curve

4 Sensor Test Experiment
During the experiment, the fiber grating angle sensor is placed on standard angle blocks. The measurement range of the FBG sensor, i.e., −90° to 90°, is covered by placing different angle blocks. The demodulation software dynamically monitors the central wavelength of the fiber grating using the compensation algorithm. The center wavelength is recorded every 5° from −90° to 90° at room temperature. When the wavelength readings have stabilized, the wavelength values at each angle are recorded by the demodulator software: data are collected at 2000 samples per second for a total of 60 s, and the average value is calculated. The curve between the sine of the angle and the center wavelength of the fiber grating is then plotted, as shown in Fig. 7. It can be seen from the figure that the measuring range of the sensor reaches −90° to 90°. By further calculation, the linear fit between the angle sine value and the left fiber grating center wavelength reaches 99.64%, and that of the right grating reaches 99.61%. After taking the difference between the left and right fiber gratings, the linearity of the curve is obviously improved, and the calculated value reaches 99.89%. The slope of the fitted line, K = \Delta\lambda / \Delta\theta, is also the sensitivity of the FBG angle sensor, which is an important index of sensor performance.


Fig. 7. The linearity curve of sensor

According to the slope of the fitted line, the sensitivity of the left fiber is 14.26 pm/°, the sensitivity of the right fiber is 13.77 pm/°, and the sensitivity after taking the difference is 28.03 pm/°. This shows that the differential structure design improves the sensitivity to a great extent, which proves the validity and feasibility of this structure.
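As an illustration of how the slope and linearity reported above can be extracted from the recorded wavelengths, here is a minimal sketch. The angle grid and wavelength arrays are synthetic placeholders standing in for the averaged measurements, not the actual data of this experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: averaged center-wavelength shifts (pm) every 5 deg from -90 to 90 deg.
angles_deg = np.arange(-90, 95, 5)
x = np.sin(np.radians(angles_deg))
lam_left = 820.0 * x + rng.normal(0.0, 5.0, x.size)     # left grating, synthetic
lam_right = -790.0 * x + rng.normal(0.0, 5.0, x.size)   # right grating, synthetic

def fit_linearity(x, y):
    """Least-squares slope and linearity (R^2) of y against x."""
    slope, _ = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return slope, r ** 2

for name, lam in [("left", lam_left), ("right", lam_right), ("differential", lam_left - lam_right)]:
    slope, r2 = fit_linearity(x, lam)
    print(f"{name:12s}: slope = {slope:7.1f} pm per unit sin(theta), R^2 = {r2:.4f}")
```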

5 Conclusions
The FBG sensor designed in this paper can dynamically monitor the change of the tilt angle of the hull. The measurement range reaches −90° to 90°, and the sensitivity coefficient is high. The outer sealing shell of the sensor is machined from a single piece of aluminum alloy, which improves the water-tightness and robustness of the system. The side of the sensor uses a fiber-optic waterproof plug connector to improve the anticorrosion capability of the sensor. Once the sensor is packaged, the change of the FBG center wavelength is not affected by temperature, which provides a stable and reliable device for long-term detection and health monitoring of hull structures for ocean-going ships.


Acknowledgements. This paper is supported by the Natural Youth Science Foundation of China (61501326, 61401310). It is also supported by the Tianjin Research Program of Application Foundation and Advanced Technology (16JCYBJC16500).

References
1. Jia L (2011) Research and development of the real-time monitoring system of ship's motions and stresses. Harbin Engineering University
2. Li X (2013) Research on marine parallel stabilized platform. Jiangsu University of Science and Technology
3. Liu N (2018) Research on the vertical motions' control system of a fast multi-body ship based on longitude damping devices. Harbin Engineering University
4. Moody V (2004) Fiber theory and formation. Elsevier Inc., 2004-06-15
5. Ferdinand P, Rougeault S (2000) Optical fiber Bragg grating inclinometry for smart civil engineering and public works. Proc SPIE Int Soc Opt Eng 4185:13–16
6. Xie T, Wang X, Li C, Tian S, Zhao Z, Li Y (2017) Fiber Bragg grating differential tilt sensor based on mercury column piston structure. Acta Optica Sinica 37(03):170–176
7. Lee B (2003) Review of the present status of fiber sensors. Opt Fiber Technol: 3–4
8. Zhang Y (2016) Research of distributed inclination technology based on fiber Bragg grating and its application. Wuhan University
9. Li Y, Wu H, Yang X, Kang S, Cheng S (2018) Optimization design of circular flexure hinges. Opt Precis Eng 26(06):1370–1379

Natural Scene Mongolian Text Detection Based on Convolutional Neural Network and MSER Yunxue Shao(&) and Hongyu Suo College of Computer Science, Inner Mongolia University, Inner Mongolia, People’s Republic of China [email protected], [email protected]

Abstract. The Maximally Stable Extremal Region (MSER) algorithm is one of the most influential algorithms in text detection. However, due to the complex and varied background of Mongolian text in natural scene images, it is difficult to distinguish between text and non-text connected regions, which reduces the robustness of the MSER algorithm. Therefore, this paper proposes to extract the connected regions in natural scene pictures by applying MSER, then to use a convolutional neural network (CNN) to train a high-performance text classifier to classify the extracted connected regions, and finally to obtain the detection results. This paper evaluates the proposed method on the CSIMU-MTR dataset established by the School of Computer Science, Inner Mongolia University. The recall rate is 0.75, the precision rate is 0.83, and the F-score is 0.79, which is significantly higher than previous methods. This shows the effectiveness of the proposed Mongolian text detection method for natural scenes.
Keywords: Natural scene Mongolian text detection · Maximally stable extremal region (MSER) · Convolutional neural network (CNN)

1 Introduction
As the most direct representation of human high-level semantic information, text, especially in natural scene pictures, plays an indispensable role in image understanding and has a wide range of practical applications with broad prospects. In recent years, although plenty of work has been done on extracting Latin, Arabic or Chinese text from complex natural scene images and on recognizing printed Mongolian document images [1, 2], the detection of Mongolian text in complex natural scene images is still in its infancy. Mongolian text detection in natural scenes can play a key role in promoting the information management and control of Mongolian network information platforms and improving the quality of their content. It has broad application prospects in the development of information technology in Inner Mongolia. Natural scene text detection methods can be divided into two categories: sliding window based methods and connected component based methods [3]. The sliding window based method [4] uses a multi-scale sliding window search to traverse all image regions, and then distinguishes between textual and non-textual information through a



trained classifier. The classifier is typically trained with manually designed low-level features [5, 6] extracted from each sliding window, such as local binary patterns (LBP), scale-invariant feature transform descriptors (SIFT), and histograms of oriented gradients (HOG) [7]. Although the sliding window based method is simple and intuitive, a large number of sliding windows are usually required to obtain effective detection results, and the image region in each window needs to be classified, resulting in high computational complexity and slow speed. The connected component based natural scene text detection method [8–10] first generates a large number of candidate connected regions by aggregating pixels with similar attributes (color, texture, stroke width, etc.), then divides the extracted regions into text areas and non-text areas, and finally obtains the detection results. The stroke width transform (SWT) [11] and the maximally stable extremal region (MSER) [12, 13] are the two most widely used methods for extracting candidate connected regions. At present, the MSER-based method has a high ability to detect most text components in an image. The method is robust to viewing angle, character size and illumination variation, and is fast and stable. Therefore, this paper proposes a natural scene Mongolian text detection method based on MSER, which uses a deep convolutional neural network to learn deep features of the candidate connected regions and combines the CNN classifier with non-maximum suppression (NMS) [14] to select better candidates, which improves text detection performance. The structure of this paper is organized as follows: in Sect. 2, we discuss the details of the proposed method; in Sect. 3, the experimental results are discussed; in the last section, the conclusion is given.

2 Related Work

Fig. 1. The flowchart of the MSER-based Mongolian text detection method (input image, pre-processing, MSER, region filtering, CNN classification, detection results)

The method flow in this paper is shown in Fig. 1. The proposed detection method consists of three main steps. First, a candidate connected region is generated by applying an MSER detection method on the input image; then the candidate connected region is divided into a text region and a non-text region by the trained CNN classifier; finally, the text region is displayed on the input image. The details of the method are as follows.

2.1 Generating Candidate Connected Areas

MSER is one of the most influential algorithms in text detection. Its principle is similar to a flood-fill (water immersion) process, and it usually operates on a grayscale image. For a grayscale image, the pixel values range over (0, 1, …, 255). The image is binarized with a certain threshold: pixels with grayscale values less than the threshold are set to black, while pixels with values greater than or equal to the threshold are set to white. When the threshold gradually increases from 0 to 255, the generated binary image changes from all white to all black. At some point during this change, black dots or black areas appear in the image; these are regarded as local minimum regions of grayscale. As the threshold increases further, these areas become larger, until finally the entire image is one black area. During this process, some special local minimum regions remain essentially unchanged over a large interval of grey levels, i.e. they hardly follow the threshold change. A local minimum region that satisfies this requirement is a maximally stable minimum region. Accordingly, when the threshold is gradually decreased from 255 to 0, a series of maximally stable maximum regions are obtained. Typically, the result of the MSER is the union of the maximally stable minimum and maximum regions.

Fig. 2. MSER test results

For the Mongolian text detection system, our goal is to detect as many text connected areas as possible in the process of generating candidate connected areas, since it is difficult to recover lost text connected areas in subsequent processing. Therefore, the threshold of the MSER is set to 2, which makes it possible to detect nearly all the text connected areas; a minimal sketch of this candidate-generation step is given below. As shown in Fig. 2, although most of the detected regions are non-text connected areas, the real text connected areas are also correctly detected. At the same time, this makes a powerful classifier necessary to distinguish the large number of non-text connected areas from text connected areas. The next subsection describes a high-performance classifier based on convolutional neural networks.
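As a rough illustration of this step, the following sketch extracts candidate connected regions with OpenCV's MSER implementation. Interpreting the paper's "threshold of 2" as the MSER delta parameter is an assumption, and the file name is a placeholder.

```python
import cv2

def candidate_regions(image_path: str):
    """Extract MSER candidate connected regions from a natural scene image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    mser = cv2.MSER_create()
    mser.setDelta(2)  # small step -> keep as many candidate (text) regions as possible
    regions, bboxes = mser.detectRegions(gray)
    return regions, bboxes

if __name__ == "__main__":
    regions, bboxes = candidate_regions("scene.jpg")  # placeholder path
    print(f"{len(regions)} candidate connected regions")
```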

2.2 Training Text Classifier

In recent years, deep learning has accomplished many challenging tasks in the field of computer vision and made breakthroughs. Convolutional neural network (CNN) is a kind of feedforward neural network with convolutional computation and deep structure. It is one of the representative algorithms of deep learning. Traditional CNN networks


have achieved great success in digit and handwritten character recognition [15, 16]. Natural scene text detection is an advanced visual task that is difficult to solve with a set of low-level operations or manually designed features. Compared with previous methods that use heuristic features to classify text and non-text connected regions, this paper uses a convolutional neural network to train a text classifier [17] (Fig. 3) to robustly classify the generated candidate connected regions.

Fig. 3. CNN text classifier

The structure of the CNN text classifier is similar to that of the Deep Residual Network (ResNet) [18]. ResNets consist of many "residual units". Each unit can be expressed as

y_l = h(x_l) + F(x_l, w_l) \qquad (1)

x_{l+1} = f(y_l) \qquad (2)

where x_l and x_{l+1} are the input and output of the l-th unit, w_l is a set of weights (and biases) associated with the l-th unit, and F is the residual function. h(x_l) = x_l denotes an identity mapping and f denotes a ReLU activation function. If both h(x) and f(y) are identity mappings, i.e. h(x_l) = x_l and f(y_l) = y_l, then in the forward and backward propagation phases of training the signal can be passed directly from one unit to another, which makes training easier. That is, the above formula can be expressed as

x_{l+1} = x_l + F(x_l, w_l) \qquad (3)

By recursion, the feature of any deeper unit L can be obtained:

x_L = x_l + \sum_{i=l}^{L-1} F(x_i, w_i) \qquad (4)

This expression has two advantages: (1) the feature x_L of any deeper unit L can be expressed as the feature x_l of a shallower unit l plus a residual of the form \sum_{i=l}^{L-1} F, which indicates that the model is residual between any pair of units L and l; (2) for any deep unit L, its feature x_L = x_0 + \sum_{i=0}^{L-1} F(x_i, w_i) is the sum of the outputs of all preceding residual functions. The trained CNN text classifier gives a predicted value for each generated candidate connected region to determine whether it is a text connected region. In our experiments, when generating candidate connected regions, many connected regions contain or largely intersect other connected regions. Therefore, we use NMS to keep the connected regions with the highest scores and suppress those with low scores; a sketch of this step is given below. Since the threshold of the MSER is set to 2, the generated candidate connected areas have different sizes and shapes, and the screening process of the candidate connected areas is as shown in Fig. 4. Candidate areas of moderate size are resized to 32 × 32 and input to the CNN text classifier, which assigns a higher score to text connected areas and a lower score to non-text connected areas. The CNN text classifier exhibits strong robustness and high discriminating power in distinguishing between text and non-text connected regions.
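The non-maximum suppression step described above can be sketched as follows. The IoU threshold of 0.5 is an illustrative assumption, since the paper does not state the exact suppression criterion.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thr: float = 0.5) -> list:
    """Keep the highest-scoring boxes and suppress heavily overlapping ones.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) classifier scores.
    Returns the indices of the boxes that are kept.
    """
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the current box with the remaining ones
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thr]   # drop boxes that overlap too much
    return keep
```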

Fig. 4. Candidate connected area screening process. a Candidate connected areas having an area larger than 1300, b candidate connected areas having an area of less than 150, c candidate connected areas of a moderate area

3 Experimental Results and Analysis
We evaluated the proposed method on the CSIMU-MTR dataset [19] established by the School of Computer Science, Inner Mongolia University.
3.1 Data Sets and Evaluation Criteria

The CSIMU-MTR data set includes 560 color images, of which 460 images are used for the training set and 100 images for the test set. Since this paper needs to train a CNN text classifier, and only 5679 character-region positive samples and 8900 non-character-region negative samples can be extracted from the CSIMU-MTR training set, the training data would be too small and the model would over-fit. Therefore, we use 244,350 Mongolian character regions and 82,706 natural scene background images with no characters to synthesize 100,000 positive samples of Mongolian text in natural scenes, and use 2716 natural scene images without Mongolian characters to generate


190,000 negative samples of non-character regions. All samples are resized to 32 × 32, as shown in Fig. 5, and used to train the CNN text classifier.

Fig. 5. Training sample. a Positive sample, b negative sample

This paper adopts the database competition evaluation criteria proposed by Wolf et al. [20] to evaluate the proposed method. The precision index P, the recall index R and the F-score are respectively expressed as

\mathrm{Precision} = \frac{\sum_{i}^{N} \sum_{j}^{|D_i|} M_D(D_i^j, G_i)}{\sum_{i}^{N} |D_i|} \qquad (5)

\mathrm{Recall} = \frac{\sum_{i}^{N} \sum_{j}^{|G_i|} M_G(G_i^j, D_i)}{\sum_{i}^{N} |G_i|} \qquad (6)

\mathrm{F\text{-}score} = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \qquad (7)

The precision rate refers to the ratio of the correctly detected text connected areas to the total number of detected connected areas. The recall rate refers to the ratio of the correctly detected text connected areas to the real text connected areas, and the comprehensive index F is the harmonic mean of the precision rate and the recall rate.
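As a quick check of Eq. (7), substituting the precision and recall reported for the proposed method (0.83 and 0.75, see the abstract and Table 1) gives

\mathrm{F\text{-}score} = 2 \times \frac{0.83 \times 0.75}{0.83 + 0.75} \approx 0.79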

3.2 Experimental Results and Analysis

As shown in Table 1, the experimental comparison of the Mongolian text detection method proposed in this paper with other methods on the CSIMU-MTR dataset shows that the proposed method achieves the best results on the CSIMU-MTR dataset.

Table 1. Comparison of test results of different methods
Method        Recall  Precision  F-score
Our method    0.75    0.83       0.79
Edge + SVM    0.61    0.72       0.66
MSER + SVM    0.64    0.74       0.68


The recall, precision and F-score are improved by about 0.11, 0.09 and 0.11, respectively, compared to the next best method, indicating that the proposed method can correctly detect more real text. Our approach benefits from two aspects: on the one hand, MSER can extract most real text candidate regions; on the other hand, the CNN-based text classifier can robustly identify text connected regions from a large number of candidate connected regions, improving classification accuracy. Figure 6 shows the effect of the method in this paper. The characters in the figure differ in size and the backgrounds are complex and changeable, yet the detection results are ideal. This shows that the proposed method is robust for Mongolian text detection in different natural scenes.

Fig. 6. Example of successful detection method in this paper

Although the method in this paper can successfully detect Mongolian text in most cases, in some cases the text cannot be successfully detected. When the text exhibits phenomena such as uneven color, low resolution, uneven illumination, very low contrast or overexposure, it is difficult for MSER to detect the text area, as shown in Fig. 7.

Fig. 7. Example of failure of detection method in this article

4 Conclusion
Aiming at the complex and varied images in natural scenes, this paper proposes an MSER-based natural scene Mongolian text detection method. Compared with traditional methods of classifying text and non-text connected regions, this paper uses the high performance and high capacity of a deep learning model to improve the final text/non-text classification accuracy. The results on the standard dataset show that the proposed natural scene Mongolian text detection method has strong robustness and improves the final precision, recall rate and F-score.


Acknowledgements. This study was supported by the National Natural Science Foundation of China (NSFC) under Grant no. 61563039.

References 1. Gao G, Su X, Wei H et al (2011) Classical mongolian words recognition in historical document. In: International conference document analysis recognition, IEEE, pp 692–697 2. Wei H, Gao G (2014) A keyword retrieval system for historical mongolian document images. Int J Doc Anal Recognit (IJDAR) 17(1):33–45 3. Ye Q, Doermann D (2015) Text detection and recognition in imagery: a survey. IEEE Trans Pattern Anal Mach Intell 37(7):1480–1500 4. Jaderberg M, Vedaldi A, Zisserman A (2014) Deep features for text spotting. In: Computer vision—ECCV, pp 512–528 5. Chen X, Yuille AL (2004) Detecting and reading text in natural scenes. In: IEEE computer society conference on computer vision and pattern recognition, pp 366–373 6. Babenko B, Belongie S (2011) End-to-end scene text recognition. In: IEEE international conference on computer vision, pp 1457–1464 7. Dalal N, Triggs B (2005) Histograms of oriented gradients for human detection. IEEE Comput Soc Conf Comput Vision Pattern Recognit 1:886–893 8. Chen H, Tsai SS, Schroth G et al (2011) Robust text detection in natural images with edgeenhanced maximally stable extremal regions. In: 18th IEEE international conference on image processing, pp 2609–2612 9. Yin XC, Yin X, Huang K et al (2014) Robust text detection in natural scene images. IEEE Trans Pattern Anal Mach Intell 36(5):970–983 10. He T, Huang W, Qiao Y et al (2015) Text-attentional convolutional neural networks for scene text detection. IEEE Trans Image Process 25(6):2529–2541 11. Epshtein B, Ofek E, Wexler Y (2010) Detecting text in natural scenes with stroke width transform. In: IEEE computer society conference on computer vision and pattern recognition, pp 2963–2970 12. Nistér D, Stewénius H (2008) Linear time maximally stable extremal regions. In: European conference on computer vision-ECCV, pp 183–196 13. Huang W, Qiao Y, Tang X (2014) Robust scene text detection with convolution neural network induced MSER trees. In: Computer vision–ECCV, pp 497–511 14. Neubeck A, Gool L (2006) Efficient non-maximum suppression. In: 18th ICPR 15. Shao Y, Wang C, Xiao B (2013) Fast self-generation voting for handwritten Chinese character recognition. Int J Doc Anal Recognit (IJDAR) 16(4):413–424 16. Shao Y, Wang C, Xiao B (2015) A character image restoration method for unconstrained handwritten Chinese character recognition. Int J Doc Anal Recognit (IJDAR) 18(1):73–86 17. Wang T, Wu DJ, Coates A, Ng AY (2012) End-to-end text recognition with convolutional neural network. In: IEEE international conference on pattern recognition, pp 3304–3308 18. He K, Zhang X, Ren S et al (2016) Identity mappings in deep residual networks 19. Shao Y, Gao G, Zhang L et al (2015) The first robust mongolian text reading dataset CSIMU-MTR, pp 781–788 20. Wolf C, Jolion JM (2006) Object count/area graphs for the evaluation of object detection and segmentation algorithms. Int J Doc Anal Recognit 8(4):280–296

Coverage Probability Analysis of D2D Communication Based on Stochastic Geometry Model
Xuan-An Song1,2, Hui Li1,2(&), Zhen Guo1, and Xian-Peng Wang1
1 College of Information Science and Technology, Hainan University, Haikou 570228, China [email protected]
2 Engineering Research Center of Marine Communication and Network in Hainan Province, Haikou 570228, China

Abstract. Relaying is a common application of D2D communication, which optimizes system capacity and increases the coverage of mobile cellular networks on shared downlink resources. We establish a network model of cellular base-stations and adopt the theory of stochastic geometry. Based on the model, the coverage probability of the network is analyzed in order to select a specific user as the relay node, and the relay uses a decode-and-forward strategy. D2D communication can thus help the edge user to communicate with the base-station. The coverage probability of the downlink cellular network is defined, and then the coverage probabilities of the cellular link, the base-station-to-relay link and the relay-to-edge-user link are derived. Simulation results show that as the density of the macro base-stations increases, the coverage probability of the whole network increases and eventually saturates.
Keywords: Stochastic geometry · Relay · D2D communication · Coverage probability

1 Introduction
With the development of wireless networks, the challenges facing future cellular networks and their transmission reliability are enormous. The existing base-station deployment cannot meet users' requirements. In general, cellular network models use the Wyner model to analyze the performance of cellular links [1–3]. The Wyner model has two disadvantages. Firstly, it is an overly idealized model. In this model, the channel between the user and the base-station is considered an ideal channel, the interference within the base-station coverage is a constant, and the interference between the coverage areas of different base-stations is negligible. In reality, however, inter-base-station interference cannot be ignored, due to the increase of signal interference between cells and of mobile phone users in space. Moreover, as the density of users in the space changes, the interference inside the base-station coverage also changes, so it is also too idealized to assume that the interference signal in the base-station coverage is a constant.


On the other hand, the Wyner model assumes that all cells in space are strictly regular hexagons or circles. In the actual situation, however, the shape of a cell is random, so the traditional Wyner model is no longer suitable for analyzing modern, complex wireless communication systems [4, 5]. Because of these defects of the Wyner model, many researchers began to use stochastic geometric models to describe the topology of modern communication networks. The stochastic geometric model considers that all base-stations and users in the space are randomly distributed, so it matches well the actual situation that contemporary users are randomly distributed throughout the space [6]. References [7, 8] use a stochastic geometric model to analyze the signal-to-interference ratio of a randomly selected cell-edge user in the downlink. They use a series of effective tools provided by modern stochastic geometry theory to obtain simpler results than the Wyner model.
In recent years, mobile communication technologies have developed rapidly, and business demands such as high speed, low energy consumption, low latency and personalization have brought new challenges and promoted academic research on emerging communication technologies. Device-to-Device (D2D) communication technology is a promising technology for future mobile communication systems. It is listed as the first important technique in the Universal Mobile Telecommunications System (UMTS) project group. D2D communication has become a vital research topic of 5G mobile communications. This communication technology can directly establish a communication link between terminals that are close to each other and communicate using the licensed spectrum, thereby effectively reducing the load of the base-station and improving the utilization efficiency of the spectrum. At the same time, D2D communication, as a short-range communication method with low transmission power and high transmission rate, is conducive to improving energy efficiency, extending terminal life time and bringing convenience to users [9, 10]. Device-to-device communication under cellular networks is considered one of the most promising methods for dealing with spectrum resource shortages [11]; it allows mobile terminals to communicate with each other directly in the cellular network [12] and significantly mitigates the pressure on the base-station. D2D communication has recently attracted a lot of attention due to its advantages of improving spectrum utilization efficiency, increasing transmission rate, saving power and improving network coverage.
Based on the analysis of the coverage probability of the network and the stochastic geometric model, users are selected to be relaying stations (RSs). The RS uses a decode-and-forward strategy to carry out D2D communication for the purpose of helping the edge user equipment (UE) communicate with a base-station (BS). We define the coverage probability of the downlink relay cellular network, and then derive the coverage probabilities of the cellular link, the base-to-relay link and the relay-to-edge-user link. The conclusions on coverage probability under D2D communication based on the stochastic geometric model are beneficial to practical applications. The rest of this paper is organized as follows. In Sect. 2, we present the system model and the methodology of the analysis. In Sect. 3, the downlink performance of the proposed mechanism is analyzed using tools from stochastic geometry. Simulation results are presented in Sect. 4 and, finally, conclusions are drawn in Sect. 5.


2 System Model
Consider a single-layer downlink network that contains only cellular base-stations and users, and use stochastic geometry to construct a network model of the cellular network and its users. The locations of the BSs follow a homogeneous Poisson point process (PPP) Φ_B with density λ_B, and the D2D users also follow a PPP Φ_R with density λ_R [13]. D2D communication is a scenario in which the D2D link multiplexes the downlink resources of the cell, as shown in Fig. 1, where the red area represents the communication area of the cellular link. UE2 outside the red area is only allowed to connect to the BS through the relaying link. Users outside the gray area can communicate with the relay or base-station of a neighboring cell while treating other links as interference. Therefore, there are two types of links in the system: cellular links and D2D links, where the latter include the BS-RS (base-to-relay) link and the RS-UE (relay-to-user) link. The forwarding policy adopted by the relay user is decode-and-forward, the working mode is half-duplex, a specific RS communicates with its geometrically closest base-station, and the coverage of a relay user is a circle with radius R.

Fig. 1. System relay model, where red area represents the communication area of the cellular link and other area represents D2D communication area


It is assumed that the distance between a user and its serving base-station is r. Since the base-station deployment is a PPP with density parameter λ_B, the probability density of r is f_r(r) = 2πλ_B r exp(−πλ_B r²) [8]. The D2D users also obey an independent PPP. However, the edge users served by a specific relay obey an independent stationary point process in the circular area around the relay point [9]. Therefore, the probability density function of the distance d between the edge user and the relay point is f(d) = 2d/R².
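The system model above can be checked numerically. The following sketch draws a homogeneous PPP of base-stations in a square window and compares the empirical nearest-BS distance with the density f_r(r) = 2πλ_B r exp(−πλ_B r²); the window size, number of users and densities are illustrative values only.

```python
import numpy as np

rng = np.random.default_rng(0)

def ppp(density, side):
    """Homogeneous PPP with the given density on a side x side window."""
    n = rng.poisson(density * side ** 2)
    return rng.uniform(0.0, side, size=(n, 2))

lam_b, side = 1e-5, 10_000.0                                    # BS density (per m^2), window side (m)
bs = ppp(lam_b, side)
users = rng.uniform(0.25 * side, 0.75 * side, size=(2000, 2))   # inner users, to limit edge effects

# Empirical nearest-BS distance for each user
d = np.min(np.linalg.norm(users[:, None, :] - bs[None, :, :], axis=2), axis=1)

# The analytic mean of f_r(r) = 2*pi*lam_B*r*exp(-pi*lam_B*r^2) is 1/(2*sqrt(lam_B)).
print(f"empirical mean nearest-BS distance: {d.mean():.1f} m")
print(f"analytic mean: {1.0 / (2.0 * np.sqrt(lam_b)):.1f} m")
```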

3 Performance Analysis of Downlink
Assume that all channels in the network experience Rayleigh fading, so that the channel power gain obeys an exponential distribution with unit mean, i.e. h ~ exp(1). Considering a downlink cellular network, the signal-to-interference-plus-noise ratio (SINR) [13] between a transmitter x (base-station or relay user) and a receiver y (relay user or edge user) is given by

\mathrm{SINR}(x \to y) = S/(I + N) \qquad (1)

where S is the useful signal power that the receiver y receives from the transmitter x, which can be rewritten as S = P h r^{-α}; P is the transmit power of x, h is the channel power gain caused by small-scale fading, α is the path-loss exponent, I is the interference from other transmitters in the same frequency band, and N is the noise. Assuming ideal additive white Gaussian noise, N = σ². Based on the above assumptions, the coverage probability is defined as the probability that the SINR of the receiver y is greater than or equal to a threshold β, i.e.

p = P(\mathrm{SINR} \ge \beta) \qquad (2)

3.1 Coverage Probability of Cellular Links

According to Eq. (2), when the BS transmits to the cellular user UE1, whose location follows the independent PPP distribution, the coverage probability of this cellular link can be written as

p_{cu} = P(\mathrm{SINR}_{cu} \ge \beta_{cu}) \qquad (3)

where SINR_cu = P_cu h_o r^{-α} / (I_cu + σ²), I_cu = Σ_{i ∈ Φ_C\{x_0}} P_cu h_i r_i^{-α}, P_cu is the power transmitted by the BS to the cellular user UE1, h_o is the channel gain of the serving link, and I_cu denotes the aggregate interference.


Theorem 1 The downlink coverage probability of the cellular (BS-UE1) link is given by

p_{cu} = 2\pi\lambda_B \int_{r>0} \exp\big(-\pi\lambda_B r^2 (1 + \rho(\beta_{cu}, \alpha))\big) \exp\big(-\beta_{cu} r^{\alpha} \sigma^2 / P_{cu}\big)\, r\, dr \qquad (4)

where \rho(\beta_{cu}, \alpha) = \beta_{cu}^{2/\alpha} \int_{\beta_{cu}^{-2/\alpha}}^{\infty} \frac{1}{1 + h^{\alpha/2}}\, dh and h is an integration variable introduced through the Laplace transform. The proof is given in Appendix 1.
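Under the assumptions of Theorem 1, Eq. (4) can be evaluated numerically. The sketch below uses the parameter values of Table 1 (λ_B = 10⁻⁵ m⁻², β_cu = −10 dB, σ² = −60 dBm, P_cu = 43 dBm) together with α = 4, which is an illustrative choice; this is not a reproduction of the paper's simulation code.

```python
import numpy as np
from scipy.integrate import quad

def rho(beta, alpha):
    """rho(beta, alpha) = beta^(2/alpha) * integral_{beta^(-2/alpha)}^inf dh / (1 + h^(alpha/2))."""
    tail, _ = quad(lambda h: 1.0 / (1.0 + h ** (alpha / 2.0)), beta ** (-2.0 / alpha), np.inf)
    return beta ** (2.0 / alpha) * tail

def coverage_cellular(lam_b, beta, alpha, sigma2, p_tx):
    """Numerical evaluation of the coverage probability in Eq. (4)."""
    rho_val = rho(beta, alpha)
    def integrand(r):
        return np.exp(-np.pi * lam_b * r ** 2 * (1.0 + rho_val)) \
               * np.exp(-beta * r ** alpha * sigma2 / p_tx) * r
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return 2.0 * np.pi * lam_b * val

def dbm_to_watt(x_dbm):
    return 10.0 ** (x_dbm / 10.0) * 1e-3

print(coverage_cellular(lam_b=1e-5, beta=10 ** (-10 / 10), alpha=4,
                        sigma2=dbm_to_watt(-60), p_tx=dbm_to_watt(43)))
```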

3.2 Coverage Probability of D2D Links

The analysis of the D2D link consists of two parts: one is the coverage probability analysis of the BS-RS link, and the other is the coverage probability analysis of the RS-UE2 link [14].
(1) Coverage probability analysis of the BS-RS link. It is assumed that the RS can decode when the SINR of the received BS signal is greater than or equal to β_cR, and that the power transmitted from the BS to the RS is P_cR. Then, the coverage probability of the BS-RS downlink can be defined as

p_{cR} = P(\mathrm{SINR}_{cR} \ge \beta_{cR}) \qquad (5)

where SINR_cR = P_cR g_o r^{-α} / (I_cR + σ²), I_cR = Σ_i P_cR g_i r_i^{-α}, g_o is the channel gain of the serving link, and I_cR denotes the aggregate interference.

where k = min(kB ; kR Þ; qðbcR ; aÞ¼bcR

R1

2=a

bcR

1 dh. 1 þ ha=2

The proof and derivation pro-

cess of Theorem 2 is given in Appendix 2. (2) Coverage probability of RS-UE2 links The edge user UE2 receives the information of the base-station through the assistance of the RS, and distributes it in a smooth point process in a circular area, in which each of the relay users RS is centered and R is a radius. It is assumed that the PRU is the transmission power sent by the relay user to the edge user, b3 is the threshold value received by the UE2. And the IRU is the sum of the interferences received by the UE2 to the remaining relay users by ignoring the mutual interference between the base-station and the UE2 and each UE2. Then the coverage probability of the RS-UE link is given by pRU ¼ PðSINRRU  bRU Þ

ð7Þ


where SINR_RU = P_RU m_o r^{-α} / (I_RU + σ²), I_RU = Σ_i P_RU m_i r_i^{-α}, m_o is the channel gain of the serving link, and I_RU denotes the aggregate interference.
Theorem 3 The coverage probability of the RS-UE2 link can be expressed as

p_{RU} = \frac{2}{R^2} \int_0^R \exp\big(-\pi\lambda r^2 p_{cR}\, \rho(\beta_{RU}, \alpha)\big) \exp\big(-\beta_{RU} r^{\alpha} \sigma^2 / ((1 + \beta_{RU}) P_{RU})\big)\, r\, dr \qquad (8)

where λ = min(λ_B, λ_R) and \rho(\beta_{RU}, \alpha) = \left(\frac{\beta_{RU}}{1 + \beta_{RU}}\right)^{2/\alpha} \int_0^{\infty} \frac{1}{1 + h^{\alpha/2}}\, dh. The proof of Eq. (8) is given in Appendix 3.
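As a rough cross-check of expressions of this kind, coverage probabilities can also be estimated by Monte Carlo simulation: drop interferers from a PPP, draw exponential (Rayleigh) fading gains, and count how often the SINR exceeds the threshold. The sketch below does this for the cellular link of Theorem 1; the window size and number of trials are illustrative choices, not the settings used for the figures in this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_coverage(lam_b=1e-5, beta_db=-10.0, alpha=4.0, p_tx_dbm=43.0,
                noise_dbm=-60.0, side=10_000.0, trials=2000):
    """Monte Carlo estimate of P(SINR >= beta) for a typical downlink user at the origin."""
    beta = 10.0 ** (beta_db / 10.0)
    p_tx = 10.0 ** (p_tx_dbm / 10.0) * 1e-3
    noise = 10.0 ** (noise_dbm / 10.0) * 1e-3
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam_b * side ** 2)
        bs = rng.uniform(-side / 2, side / 2, size=(n, 2))   # PPP of BSs around the user
        d = np.linalg.norm(bs, axis=1)
        h = rng.exponential(1.0, size=n)                     # Rayleigh fading power gains
        rx = p_tx * h * d ** (-alpha)
        serving = np.argmin(d)                               # connect to the nearest BS
        sinr = rx[serving] / (rx.sum() - rx[serving] + noise)
        covered += sinr >= beta
    return covered / trials

print(mc_coverage())
```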

4 Simulation Analysis
The simulation analysis is based on the coverage probability expressions derived in the previous section. The relevant parameters used in the simulation are listed in Table 1.

Table 1. System parameters
Symbol                Description        Value
λ_B                   Density of BSs     10^-5 BS/m²
λ_R                   Density of RSs     10^-4 RS/m²
P_cu/P_cR/P_RU        Transmit power     43 dBm/30 dBm/33 dBm
R                     Radius of RS       40 m
β_cu/β_cR/β_RU        SINR threshold     -10 dB
σ²                    Noise power        -60 dBm

Simulation analysis is performed in MATLAB according to the parameter settings. Since the cellular link is a special case of the BS-RS link, Fig. 2 shows the variation of the coverage probability p_cR of the BS-RS link versus the density of BSs λ_B for different α. The curve can be divided into two parts according to the relationship between λ_B and λ_R. (1) If λ_B < λ_R, the value of λ is λ_B. In this part, as the value of α increases, the curvature of the curve increases continuously. When α = 3, the coverage probability p_cR hardly changes with λ_B. When α = 4 or 5, the coverage probability p_cR increases as λ_B increases. (2) If λ_B > λ_R, the value of λ is λ_R, and the coverage probability increases slowly as λ_B increases. In addition, for different path-loss exponents α the ordering of p_cR is not fixed, which means that the relative magnitude of the transmitted signal and the interference received by the RS is uncertain for different base-station densities.



Fig. 2. Relationship between density of base-stations and BS-RS link coverage probability under different path loss

Figure 3 shows the relationship between p_RU and λ_B for different α. Similarly, the curve can be divided into two parts according to the relative magnitude of λ_B and λ_R:


Fig. 3. Relationship between density of base-stations and RS-UE2 link coverage probability under different path loss


(1) If λ_B < λ_R, the value of λ is λ_B. In this part, for each value of α, the coverage probability p_RU decreases approximately linearly as λ_B increases. (2) If λ_B > λ_R, the value of λ is λ_R. In this case, only p_cR (inside the p_RU expression) changes with λ_B, and the curve decreases slowly as λ_B increases.

5 Conclusions
Based on stochastic geometry theory, we analyzed downlink D2D communication networks and established a system relay model. According to the properties of the PPP, the relationship between the cellular BSs and the users is established using a mathematical stochastic geometric model. Then, based on the model, the coverage probability of the network is analyzed in order to select a specific user as the RS. The RS uses decode-and-forward, which is one of the D2D communication techniques; we derive the coverage probabilities of the cellular link, the BS-RS link and the RS-UE link. The simulation results show that as the density of the macro base-stations increases, the coverage probability of the whole network increases and eventually saturates. In future work, we will study the network performance of the cellular network in a stochastic geometric model and the performance of direct device-to-device communication without considering the RS.
Acknowledgements. This work was supported by the High and New Technology Project of Hainan Province Key R. & D. Plan (ZDYF2018012) and the National Natural Science Foundation of China (No. 61661018). Hui Li is the corresponding author.

Appendix 1: Proof of Theorem 1
According to the definition of Eq. (3) and the SINR, the coverage probability of the BS-UE1 link can be expressed as

p_{cU} = E_r\big[P(\mathrm{SINR}_{cU} \ge \beta_{cU} \mid r)\big] = \int_{r>0} P\big(h_o \ge \beta_{cU} r^{\alpha} (I_{cU} + \sigma^2)/P_{cU} \mid r\big) f_r(r)\, dr \qquad (9)

where f_r(r) is the probability density function (PDF) of the serving-BS distance [8]. Since h_o ~ exp(1), we can rewrite the inner probability as

P\big(h_o \ge \beta_{cU} r^{\alpha} (I_{cU} + \sigma^2)/P_{cU} \mid r\big) = E_{I_{cU}}\big[\exp\big(-\beta_{cU} r^{\alpha} (I_{cU} + \sigma^2)/P_{cU}\big)\big] = \exp\big(-\beta_{cU} r^{\alpha} \sigma^2 / P_{cU}\big)\, \mathcal{L}_{I_{cU}}\big(\beta_{cU} r^{\alpha}/P_{cU}\big) \qquad (10)

where \mathcal{L}_{I_{cU}} is the Laplace transform of I_cU. By the definition of the Laplace transform and h_i ~ exp(1), it can be written as

\mathcal{L}_{I_{cU}}(s) = E\big[\exp(-s I_{cU})\big] = E_{\Phi_C}\Big[\prod_{i \in \Phi_C \setminus \{x_0\}} \frac{1}{1 + s P_{cU} r_i^{-\alpha}}\Big] = \exp\Big(-2\pi\lambda_B \int_r^{\infty} \Big(1 - \frac{1}{1 + s P_{cU} u^{-\alpha}}\Big) u\, du\Big) \qquad (11)

The final step of the above derivation follows from the probability generating functional of the PPP, which satisfies E\big[\prod_{x \in \Phi} g(x)\big] = \exp\big(-\lambda \int_{\mathbb{R}^2} (1 - g(x))\, dx\big) [14]. Substituting s = β_cU r^α / P_cU, the interference term can be further derived as

\mathcal{L}_{I_{cU}}(\beta_{cU} r^{\alpha}/P_{cU}) = \exp\Big(-2\pi\lambda_B \int_r^{\infty} \Big(1 - \frac{1}{1 + \beta_{cU} r^{\alpha} u^{-\alpha}}\Big) u\, du\Big) = \exp\Big(-2\pi\lambda_B \int_r^{\infty} \frac{\beta_{cU}}{\beta_{cU} + (u/r)^{\alpha}}\, u\, du\Big) = \exp\big(-\pi\lambda_B r^2 \rho(\beta_{cU}, \alpha)\big) \qquad (12)

where \rho(\beta_{cU}, \alpha) = \beta_{cU}^{2/\alpha} \int_{\beta_{cU}^{-2/\alpha}}^{\infty} \frac{1}{1 + h^{\alpha/2}}\, dh, with the change of variable h = u^2/(r^2 \beta_{cU}^{2/\alpha}). Eq. (4) is obtained by combining Eqs. (9)–(12).

Appendix 2: Proof of Theorem 2
The proof of Theorem 2 is similar to that of Theorem 1, except that the subscripts differ, where λ = min(λ_B, λ_R) and its value depends on p_cR.

Appendix 3: Proof of Theorem 3
By the definition of Eq. (7), p_RU can be converted into

p_{RU} = P\Big(\frac{P_{RU} m_o r^{-\alpha}}{I_{RU} + \sigma^2} \ge \beta_{RU}\Big) = \int_0^R P\big(m_o \ge \beta_{RU} r^{\alpha} (I_{RU} + \sigma^2)/P_{RU} \mid r\big) f_R(r)\, dr \qquad (13)

In order to simplify the derivation, let I_r = I_{RU} + P_{RU} m_o r^{-\alpha} denote the total received power including the signal transmitted from the RS, so that p_RU can be further rewritten as

\int_0^R P\Big(m_o \ge \frac{\beta_{RU} r^{\alpha} (I_{RU} + \sigma^2)}{(1 + \beta_{RU}) P_{RU}} \Big| r\Big) f_R(r)\, dr = \frac{2}{R^2} \int_0^R E\Big[\exp\Big(-\frac{\beta_{RU} r^{\alpha} (I_{RU} + \sigma^2)}{(1 + \beta_{RU}) P_{RU}}\Big)\Big] r\, dr = \frac{2}{R^2} \int_0^R \exp\Big(-\frac{\beta_{RU} r^{\alpha} \sigma^2}{(1 + \beta_{RU}) P_{RU}}\Big)\, \mathcal{L}\Big(\frac{\beta_{RU} r^{\alpha}}{(1 + \beta_{RU}) P_{RU}}\Big) r\, dr \qquad (14)

The above derivation uses f_R(r) = 2r/R² and m_o ~ exp(1). The Laplace transform term can be expressed as

\mathcal{L}\Big(\frac{\beta_{RU} r^{\alpha}}{(1 + \beta_{RU}) P_{RU}}\Big) = \exp\big(-\pi r^2 \lambda\, p_{cR}\, \rho(\beta_{RU}, \alpha)\big) \qquad (15)

where ρ(β_RU, α) is defined as in Theorem 3 and λ = min(λ_B, λ_R). Combining Eqs. (13)–(15), we obtain Eq. (8).

References 1. Wyner AD (1975) The wiretap channel. Bell Labs Tech J 54(8):1355–1387 2. Somekh O, Zaidel B, Shamai S (2007) Sum rate characterization of joint multiple cell-site processing. IEEE Trans Inf Theory 53(12):4473–4497 3. Jing S, Tse DNC, Soriaga JB et al (2008) Multicell downlink capacity with coordinated processing. EURASIP J Wireless Commun Netw: 586878 4. ElSawy H, Hossain E, Haenggi M (2013) Stochastic geometry for modeling, analysis and design of multi-tier and cognitive cellular wireless networks: a survey. IEEE Commun Surv Tutorials 15(3):996–1019 5. Haenggi M, Andrews J, Baccelli F et al (2009) Stochastic geometry and random graphs for the analysis and design of wireless networks. IEEE J Sel Areas Commun 27(7):1029–1046 6. Lee CH, Shih CY, Chen YS (2013) Stochastic geometry based models for modeling cellular networks in urban aeas. Wireless Netw 19(6):1063–1072 7. Ganti RK, Bacelli F, Andrews JG (2011) A new way of computing rate in cellular networks. In: IEEE international conference communications (ICC), June 2011 8. Andrews JG, Baccelli F, Ganti RK (2011) A tractable approach stochastic geometry for wireless networks coverage and rate in cellular networks. IEEE Trans Commun 59 (11):3122–3134 9. Universal Mobile Telecommunications System (UMTS), Selection procedures for the choice of radio transmission technologies of the UMTS, UMTS 30.03, version 3.2.0 10. Guidelines for evaluation of radio interface technologies for IMT-advanced, report ITU-R M.2135 11. Fodor G et al (2012) Design aspects of network assisted device-to-device communications. IEEE Commun Mag 50(3):170–177 12. Peng T, Lu Q, Wang H, Xu S, Wang W (2009) Interference avoidance mechanisms in the hybrid cellular and device-to-device systems. In: Proceedings of IEEE international symposium on personal indoor and mobile radio communications, pp 617–621 13. Al-Hourani A, Kandeepan S, Jammalipour A (2016) Stochastic geometry study on deviceto-device communication as a disaster relief solution. IEEE Trans Veh Technol 65(5):3005– 3017


14. Yu H, Li Y, Xu X, Wang J (2014) Energy harvesting relay-assisted cellular networks based on stochastic geometry approach. In: 2014 international conference on intelligent green building and smart grid (IGBSG), Taipei, pp 1–6 15. Stoyan D, Kendall WS, Mecke J et al (1995) Stochastic geometry and its application, Chichester. Wiley Chichester, UK

Study of Fault Pattern Recognition for Spacecraft Based on DTW Algorithm Guoliang Tian(&), Lianbing Huang, and Guisong Yin Institute of Manned Space System Engineering, 100094 Beijing, China [email protected]

Abstract. A time series analysis method for spacecraft telemetry data is presented in this paper. For spacecraft testing and on-orbit flight, this method can monitor the changes of telemetry data automatically and identify the failure modes of the spacecraft. Using the dynamic time warping (DTW) algorithm and combining historical data samples as well as fault cases, the method analyzes the similarity of telemetry data transformed into time series. By comparing the DTW distance calculation results with decision thresholds, the relative deviation of the data is measured and abnormal data corresponding to fault modes are identified. The results show that the telemetry data analysis method based on the DTW algorithm can effectively detect data anomalies and realize fault identification, and has a certain application prospect.
Keywords: DTW · Spacecraft · Data analysis · Fault recognition

1 Introduction
At present, Euclidean distance is the main method used to measure the similarity of time series; it directly calculates the distance between corresponding points on the time axis [1]. This algorithm is simple and fast, but it can only be applied to time series of equal length, and it is sensitive to shifts and mutations of the sequence along the time axis. In the early stage, Euclidean distance or Euclidean-like distances (such as the Lp distance) were widely used in time series similarity comparison [2]. Such distances have good mathematical properties and are easy to calculate. By computing a one-to-one correspondence between points, good results can be obtained when the amplitude of the time series does not change much. However, when Euclidean distance is used to measure the difference between time series, the requirements on the sequences are stringent and large differences are easily produced, so that originally similar sequences end up being judged dissimilar and the result of the similarity comparison becomes inaccurate [3].
The dynamic time warping (DTW) distance is widely used in speech recognition. Based on the theory of dynamic programming, it is a non-linear alignment technique that combines time warping with distance measurement. The DTW distance was first introduced into time series data mining by Berndt and Clifford to measure the similarity between two time series [4]. It does not require one-to-one matching between the points of two time series, and allows sequence points to be replicated and then matched. DTW is a distance that allows time series to bend in the direction of the time axis. It is not a point-to-point calculation, but can skip several points in the matching sequence within a


local area, so that the two sequences can be matched in a more "coordinated" way. DTW not only maintains the matching between most point pairs, but also avoids the shortcomings of Euclidean-like distances.

2 Principle of DTW Algorithm
The DTW distance can be calculated not only on the original sequences but also on dimension-reduced sequences. It is assumed that the lengths of the two time series after dimensionality reduction are m and n:

q[1:m] = \{q_1, q_2, \ldots, q_m\} \qquad (1)

c[1:n] = \{c_1, c_2, \ldots, c_n\} \qquad (2)

The DTW distance of q and c can be calculated recursively according to the original definition of the DTW distance, but this involves many repeated calculations. Generally, an m × n distance matrix, called the bending (warping) matrix, is constructed, as shown in Fig. 1. The square (i, j) (1 ≤ i ≤ m, 1 ≤ j ≤ n) in the matrix corresponds to the matching between the data points q_i and c_j; its value is d(q_i, c_j), which is called the base distance. Consistent with reference [2], the squared point distance d(q_i, c_j) = (q_i - c_j)^2 is used as the base distance.

Fig. 1. Similar distance matrix

After constructing the bending matrix, the correspondence between the points of q and c is transformed into a bending path from square (1, 1) to square (m, n) in the bending matrix:

W = w_1, w_2, \ldots, w_l, \quad \max(m, n) \le l \le m + n \qquad (3)

Define a mapping function f: (q, c) → W that maps q, c point pairs to a square in the bending path, that is:


w_k = f_w(q_i, c_j), \quad 1 \le i \le m,\ 1 \le j \le n,\ 1 \le k \le l \qquad (4)

In practical calculation, bending paths generally have the following properties:
(a) Path continuity. The adjacent (including diagonal) squares in a bending path are continuous, i.e.

w_k = f_w(q_i, c_j),\ w_{k+1} = f_w(q_{i'}, c_{j'}) \;\Rightarrow\; i' \le i + 1,\ j' \le j + 1 \qquad (5)

(b) Monotonicity. The bending path moves monotonically along the time axis, i.e.

w_k = f_w(q_i, c_j),\ w_{k+1} = f_w(q_{i'}, c_{j'}) \;\Rightarrow\; i \le i',\ j \le j' \qquad (6)

The DTW distance between q and c is thus transformed into finding the bending path with the minimum cumulative distance in the bending matrix:

\mathrm{DTW}(q, c) = \min\Big(\sum_{i=1}^{l} w_i\Big) \qquad (7)
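The dynamic-programming computation of the DTW distance described above can be sketched as follows, using the squared base distance d(q_i, c_j) = (q_i − c_j)² and the continuity and monotonicity constraints. This is a generic illustration, not the authors' implementation.

```python
import numpy as np

def dtw_distance(q, c):
    """DTW distance between two 1-D sequences using the squared base distance."""
    q, c = np.asarray(q, dtype=float), np.asarray(c, dtype=float)
    m, n = len(q), len(c)
    acc = np.full((m + 1, n + 1), np.inf)   # accumulated-cost (bending) matrix
    acc[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            base = (q[i - 1] - c[j - 1]) ** 2              # d(q_i, c_j)
            # continuity + monotonicity: a step comes from the left, below, or diagonal
            acc[i, j] = base + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[m, n]

if __name__ == "__main__":
    a = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0]
    b = [0.0, 1.0, 1.0, 2.0, 3.0, 2.0, 1.0]   # same shape, slightly shifted in time
    print(dtw_distance(a, b))                  # small value despite unequal lengths
```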

3 Test Data Analysis Method Based on DTW Algorithms
In traditional spacecraft testing, the main method of parameter interpretation is to extract the current value of the telemetry data stream and then interpret the extracted data according to the upper and lower limits given for the telemetry parameter. There are some shortcomings in this interpretation method. Firstly, if the design range of a telemetry parameter is wide (such as a current value or temperature value), it is not easy to find abnormal changes within the parameter range. Secondly, because the interpretation mechanism only checks the correctness of single-point telemetry, it is impossible to analyze changes in the overall trend of the data flow. Thirdly, the information provided by the single-point interpretation criterion is insufficient, so it is not easy to carry out further analysis of abnormal phenomena.
The interpretation method introduced in this paper is as follows. The method transforms the spacecraft telemetry data into time series, quantifies the data changes by using the similarity between the data and historical samples computed with the DTW distance, and then analyses whether the data drift exceeds the design


requirements, thereby realizing data interpretation. This method overcomes the shortcomings of traditional interpretation methods in detecting data anomalies and trend changes. Spacecraft testing, for example the whole-spacecraft power-on start-up process and subsystem power-on start-up processes, has a certain degree of repeatability. Therefore, sufficient historical data can be used as samples for similarity calculation. Since the calculation results are supported by a large amount of data, the trend of data change can be found more accurately from the similarity calculation results.
3.1 Data Analysis Workflow

The flow chart of the interpretation application based on DTW is shown in Fig. 2. The specific workflow is as follows:
(1) Initialize the sample library and the related decision threshold database, in which the samples are iteratively revised as data accumulate.
(2) Sample the telemetry needed for spacecraft analysis to form time series.
(3) Find the corresponding telemetry samples in the sample library and calculate the DTW distance.
(4) Compare the distance with the associated decision threshold to generate the interpretation result. The decision threshold is corrected by feedback according to the statistical results of the accumulated data samples.

Fig. 2. DTW data interpretation process

The system is designed to be automated. The telemetry samples and the threshold settings are all self-learning and are revised iteratively so that they accurately reflect the calculation results. By using a KNN classification method for optimization, the optimal classification point and its upper offset are found, and the threshold is automatically corrected. After this processing, the decision threshold reflects the calculated results more accurately and reveals telemetry deviations. The data sampling period should be set according to specific needs, for example the analysis and comparison of a device start-up process in a short time after a command is sent; it can also be used to analyze the trend of the same test item over the whole life cycle.

3.2 Analysis of Test Results

In order to illustrate the application of DTW in test interpretation, the author selected the start-up current of a certain device after power-up as the telemetry parameter and sampled five sets of 800 s data to form time series. The DTW distance between each time series and the historical sample is calculated, and the change of the data is assessed by comparing the calculated distance with the decision threshold. The data sample curves are shown in Fig. 3.

Fig. 3. Telemetry samples and data curves

The calculation results are shown in Table 1. The DTW distances between the data sample and telemetry samplings 1, 2, 3 and 4 are similar, with values between 3 and 4. The result for telemetry sampling 5 is quite different, which shows that sampling 5 deviates greatly from the sample. Combined with the curves in Fig. 3, we can see that telemetry sampling 5 has an obvious downward jump point. The experimental results show that, by using the DTW algorithm to calculate similarity, deviations in the test data can be found: if jump points occur, the DTW distance increases significantly. The algorithm is sensitive to jump variations in otherwise steady data and can reflect the impact of data deviations.


Table 1. DTW distance between telemetry sampling and reference sample

Category              DTW
Telemetry sampling 1  3.584
Telemetry sampling 2  3.403
Telemetry sampling 3  3.313
Telemetry sampling 4  3.420
Telemetry sampling 5  7.879

3.3 Threshold Determination Method

Using the DTW algorithm for similarity judgment requires comparing the calculated results with a preset similarity decision threshold. When the comparison result is within the prescribed scope, the sequences are considered similar and the data deviation meets the requirements. Therefore, the validity of the set threshold has a great influence on the whole interpretation. The determination of the threshold is mainly based on experience and data accumulation, and an appropriate range needs to be set for the decision threshold of each parameter. The threshold setting should be self-learning and corrected as data accumulate.

4 Fault Recognition Method Based on DTW Algorithms

The author further uses the DTW algorithm to add a fault recognition function on the basis of the interpretation system. Firstly, a fault mode database is established according to historical data samples or the theoretical characteristics of abnormal data. The fault mode database contains many kinds of fault samples formed from faults that have occurred and the telemetry data they affected. During spacecraft testing or on-orbit flight, if the collected telemetry data is judged to be abnormal, the system retrieves the fault mode samples, calculates the DTW distance between the sampled data and each fault mode sample, and then analyses the results to match the most similar fault mode (Fig. 4).

Fig. 4. DTW fault recognition process

4.1 Workflow of Fault Recognition System

The application flow of fault identification based on DTW is as follows: (1) After data are judged to be abnormal, the fault-feature part of the data is extracted as the analysis object. (2) The fault mode samples associated with the parameter are extracted from the fault mode database. (3) The DTW distance to each fault mode sample is calculated. (4) The distances are compared with the decision threshold, and the matching result of the fault mode is given. (5) If no fault mode matches, the fault is added to the fault mode database as a new fault pattern sample.
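As a sketch of steps (2)–(5), the snippet below (reusing the dtw_distance function sketched in Sect. 3.1) ranks the fault mode samples by DTW distance; the mode names, sample values and threshold are illustrative assumptions.

```python
# Rank fault mode samples by DTW distance to the abnormal segment;
# report the closest mode if it is within the decision threshold,
# otherwise flag the segment as a candidate new fault pattern sample.
def match_fault_mode(abnormal_segment, fault_mode_db, threshold):
    ranked = sorted((dtw_distance(sample, abnormal_segment), name)
                    for name, sample in fault_mode_db.items())
    best_dist, best_name = ranked[0]
    matched = best_name if best_dist <= threshold else None
    return matched, ranked

# Hypothetical fault mode database keyed by mode name.
fault_mode_db = {
    "current sag":   [1.0, 0.4, 0.4, 1.0],
    "current surge": [1.0, 1.8, 1.8, 1.0],
}
```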

4.2 Analysis of Test Results

In order to make a comparative analysis, the fifth telemetry sampling of the current telemetry parameter in the previous section is selected. The sampled data is real-time data with a jump point at around 300 s. See Fig. 5 for details.

Fig. 5. Telemetry sample and fault sample curve

The DTW distances between the telemetry sampling and the normal data sample, and between the telemetry sampling and each fault mode sample, are calculated by the algorithm program. The results are shown in Table 2. From the calculation results, the DTW distance between the telemetry sampling and fault mode 2 is the smallest, which indicates that fault mode 2 is the most likely to have occurred. The analysis result is consistent with the state shown in Fig. 5. According to the characteristics of the parameter changes caused by faults, similar samples can be obtained by similarity calculation, and the most probable fault modes can be found.


Table 2. DTW distances between fault telemetry sampling and fault mode samples

Category          DTW
Telemetry sample  21.9258
Failure mode 1    6.785
Failure mode 2    32.331
Failure mode 3    26.160
Failure mode 4    21.972

Fault samples can be obtained from historical fault cases, or the parameter changes caused by simulated faults, such as current sag, surge and overload jumps, can be modeled and simulated theoretically. Using the DTW algorithm for fault identification avoids the need to find feature points required by traditional fault identification methods, and does not require the number of points in the fault samples to be consistent with the number of points in the telemetry samples. As long as the fault samples reflect the fault characteristics, this method can accurately measure the similarity of the fault and achieve fault identification.

5 System Performance Optimization

The DTW algorithm uses dynamic programming to calculate the similarity between two time series, and its complexity is O(N × M). When both time series are relatively long, the efficiency of the DTW algorithm is low. Therefore, in order to improve recognition efficiency, a lower bound function of the DTW distance can be calculated first. Preliminary screening directly removes time series that do not meet the lower-bound condition, so as to narrow the scope of determination. The LB_Keogh lower bound function is used in this paper. The sequences q_L and c_U are constructed, and the part of the target sequence that exceeds the boundaries [q_L, c_U] is accumulated; the result is taken as a lower bound of the DTW distance, where q_L and c_U are the sequences of minimum and maximum values of the data in a sliding window of width 2W + 1, respectively. Similar candidates can be screened quickly by the LB_Keogh lower bound function. This method reduces the amount of calculation and improves system performance.
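The screening step can be sketched as below; this is the standard LB_Keogh formulation for band-constrained DTW with an L1 local cost, assuming equal-length sequences and a warping-window half-width W.

```python
# LB_Keogh lower bound: build the candidate's upper/lower envelope over a
# sliding window of width 2*W + 1 and accumulate only the part of the
# query that falls outside the envelope.
def lb_keogh(query, candidate, W):
    total, n = 0.0, len(candidate)
    for i, q in enumerate(query):
        lo, hi = max(0, i - W), min(n, i + W + 1)
        upper, lower = max(candidate[lo:hi]), min(candidate[lo:hi])
        if q > upper:
            total += q - upper
        elif q < lower:
            total += lower - q
    return total

# Pre-screening: candidates whose lower bound already exceeds the decision
# threshold are removed before running the full O(N*M) DTW computation.
def prune_candidates(query, candidates, W, threshold):
    return [c for c in candidates if lb_keogh(query, c, W) <= threshold]
```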

6 Conclusion

During the whole development cycle of a spacecraft, a large amount of operational data is generated, which provides a good foundation for data applications. For these data, the DTW distance algorithm is used as the analysis tool. By calculating the deviation between historical data samples and newly sampled data, abnormal changes can be effectively found and the trend of data changes can be analyzed.


At the same time, on the basis of data interpretation, fault pattern recognition and other applications are introduced to enhance the ability of fast fault location and processing. This provides automatic and intelligent monitoring means for spacecraft testing and on-orbit flight.

References 1. Li J, Wang Y (2007) EA_DTW: early abandon to accelerate exactly warping matching of time series. In: Proceedings of international conference on intelligent systems and knowledge engineering (ISKE) 2. Keogh E, Ratanamahatana C (2005) Exact indexing of dynamic time warping. Knowl Inf Syst 7(3):358–386 3. Eamonn J, Michael J (2001) Derivative dynamic time warping. In: The first SIAM international conference on data mining, IEEE. Washington, pp 1–11 4. Berndt DJ, Clifford J (1996) Finding patterns in time series: a dynamic programming approach. In: Weld D, Clancey B (eds) Advances in knowledge discovery and data mining, AAAI/MIT, The MIT Press, Oregon, Portland, pp 229–248

A Joint TDOA/AOA Three-Dimensional Localization Algorithm for Spacecraft Internal

Yin Long(&), Ke Zhu, and Cai Huang

Institute of Manned Space System Engineering, China Academy of Space Technology, 100094 Beijing, China
[email protected]

Abstract. Considering the lack of a three-dimensional localization scheme for the spacecraft interior, a joint TDOA/AOA three-dimensional localization algorithm based on a Wireless Sensor Network (WSN) is proposed in this paper. The WSN deployed in the spacecraft is composed of reference nodes and unknown nodes; the positions of the reference nodes are known, which helps to locate the unknown nodes. Theoretically, only six reference nodes are enough for the proposed method to localize all the unknown nodes within the WSN in three dimensions, and synchronization of the network is not necessary, satisfying the low-complexity requirement of the WSN. TDOA (Time Difference of Arrival) is adopted to estimate AOA (Angle of Arrival), and the angle is estimated by the hierarchical deployment of the reference nodes, by which the complicated antenna arrays usually required for AOA are avoided. A three-dimensional coordinate system is established by setting the plane of the reference nodes as plane XOY, and the z coordinate is computed according to the angle estimated by AOA. Finally, the unknown node is projected onto the plane XOY, and the x and y coordinates are computed by the trilateration localization strategy.

Keywords: TDOA · AOA · ZigBee · Spacecraft internal · Three-dimensional localization

1 Introduction

With the rapid development of spacecraft technology, spacecraft are getting larger and will be able to accommodate more astronauts in the future. The astronauts work and rest inside the spacecraft, and how to localize them needs to be studied. The GPS method can locate fast and precisely; however, it can only be employed outside, as the GPS signal is badly attenuated inside the spacecraft. At present, the wireless sensor network (WSN) is widely applied for indoor localization, and the WSN can be based on ultrasonic, infrared, WiFi, Bluetooth, ZigBee and RFID. According to the localization theory, localization methods can be divided by whether the distance is measured. The range-based localization methods, which include TOA [1] (Time of Arrival), TDOA [2] (Time Difference of Arrival), AOA [3] (Angle of Arrival), RSSI [4] and fingerprinting, are more precise than the range-free ones. However, these methods are only used for two-dimensional localization and cannot be applied to three-dimensional localization directly.


For three-dimensional localization, the Landscape-3D [5] and SLBS [6] schemes have been proposed; their shortcoming is that they rely on moving reference nodes which broadcast range-measuring signals periodically. The APIT [7] method is based on spherical coordinate computation, which places a high requirement on the computing ability of the reference nodes. Therefore, a joint TDOA/AOA three-dimensional localization algorithm based on WSN is proposed, in which moving reference nodes are not necessary and the complexity of the algorithm is acceptable.

2 Localization Algorithm

The WSN is composed of reference nodes and unknown nodes, in which the positions of the reference nodes are known while those of the unknown ones are not. Assume that the reference nodes, such as A, B and C in Fig. 1, are deployed in the plane XOY, and that the unknown node, labeled D, is projected onto XOY as D′. Suppose the angle between AD and AD′ is θ1, between BD and BD′ is θ2, and between CD and CD′ is θ3, and set the length of DD′ as h. A three-dimensional coordinate system is established in which the plane holding the reference nodes A/B/C is set as the plane XOY. Supposing the coordinates of D/D′ are (x, y, z)/(x′, y′, 0), the lengths of AD/BD/CD are m1/m2/m3, and the lengths of AD′/BD′/CD′ are r1/r2/r3, then the following formulas hold:

Fig. 1. The localization 3D coordinate

\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} x' \\ y' \\ h \end{bmatrix} \quad (1)

h = m_1 \sin\theta_1 \quad (2)

h = m_2 \sin\theta_2 \quad (3)

h = m_3 \sin\theta_3 \quad (4)

r_1 = m_1 \cos\theta_1 \quad (5)

r_2 = m_2 \cos\theta_2 \quad (6)

r_3 = m_3 \cos\theta_3 \quad (7)

Supposing the coordinates of the reference nodes A/B/C in the plane XOY are (x1, y1)/(x2, y2)/(x3, y3), then the following formulas hold:

r_1 = \sqrt{(x_1 - x')^2 + (y_1 - y')^2} \quad (8)

r_2 = \sqrt{(x_2 - x')^2 + (y_2 - y')^2} \quad (9)

r_3 = \sqrt{(x_3 - x')^2 + (y_3 - y')^2} \quad (10)

\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 2(x_1 - x_3) & 2(y_1 - y_3) \\ 2(x_2 - x_3) & 2(y_2 - y_3) \end{bmatrix}^{-1} \begin{bmatrix} x_1^2 - x_3^2 + y_1^2 - y_3^2 + r_3^2 - r_1^2 \\ x_2^2 - x_3^2 + y_2^2 - y_3^2 + r_3^2 - r_2^2 \end{bmatrix} \quad (11)

\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} 2(x_1 - x_3) & 2(y_1 - y_3) \\ 2(x_2 - x_3) & 2(y_2 - y_3) \end{bmatrix}^{-1} \begin{bmatrix} x_1^2 - x_3^2 + y_1^2 - y_3^2 + r_3^2 - r_1^2 \\ x_2^2 - x_3^2 + y_2^2 - y_3^2 + r_3^2 - r_2^2 \end{bmatrix} \\ m_1 \sin\theta_1 \end{bmatrix} \quad (12)
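As an illustration of Eqs. (11), (12) and (2), the sketch below solves the linearized trilateration for the projected coordinates (x′, y′) and recovers z from the slant range m1 and elevation angle θ1; the function and variable names are our own.

```python
import math

# Solve Eq. (11) for the projected position (x', y') from three coplanar
# reference nodes and the projected ranges r1, r2, r3, then recover the
# height z from Eq. (2): z = m1 * sin(theta1).
def locate_3d(refs, ranges, m1, theta1):
    (x1, y1), (x2, y2), (x3, y3) = refs
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x1 - x3), 2 * (y1 - y3)
    a21, a22 = 2 * (x2 - x3), 2 * (y2 - y3)
    b1 = x1**2 - x3**2 + y1**2 - y3**2 + r3**2 - r1**2
    b2 = x2**2 - x3**2 + y2**2 - y3**2 + r3**2 - r2**2
    det = a11 * a22 - a12 * a21          # non-zero if the nodes are not collinear
    x_p = (b1 * a22 - b2 * a12) / det    # Cramer's rule for the 2x2 system
    y_p = (a11 * b2 - a21 * b1) / det
    z = m1 * math.sin(theta1)
    return x_p, y_p, z
```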

The unknown node D is equipped with an ultrasonic and an RF transmitter, and it periodically broadcasts ranging signals in both ultrasonic and RF. Each reference node is equipped with ultrasonic and RF receivers, and the distance between a reference node and the unknown node can be estimated from the time difference between the two kinds of received signals as follows:

d = v \cdot t_{TDOA} \quad (13)

Here d denotes the distance, v denotes the speed of the ultrasonic signal, and tTDOA denotes the time difference between the reception of the ultrasonic signal and the RF signal. Figure 2 shows the estimation method of the angle by TDOA/AOA. S denotes the position of a reference node while P denotes the unknown node; suppose the coordinates of S/P in the plane XOY are (xs, ys)/(xu, yu). S can both send and receive ranging signals while A can only receive. Assuming that the length of AS is L, PA is almost equal to PQ when d is much longer than L. Under this condition, the length of SQ equals d − d1, and the following formulas hold:


Fig. 2. The TDOA/AOA algorithm

d - d_1 = L \cos\theta \quad (14)

\theta = \arccos\frac{d - d_1}{L} \quad (15)

In these formulas, d and d1 can be obtained by the TDOA method, and L is known. The estimated θ may be ambiguous, as it could lie in [0, π] as well as [−π, 0]; this ambiguity can be resolved by a priori information.
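A small sketch of Eqs. (13)–(15): the TDOA measurement is converted to a range and the angle is recovered from the two ranges and the inter-plane spacing L. The assumed ultrasonic speed and the clamping of numerical noise are our own additions; the sign of the angle is left to a priori knowledge, as noted above.

```python
import math

ULTRASONIC_SPEED = 340.0  # m/s, assumed propagation speed of the ultrasonic signal

def tdoa_range(t_tdoa, v=ULTRASONIC_SPEED):
    # Eq. (13): distance from the arrival-time difference between the
    # ultrasonic signal and the much faster RF signal.
    return v * t_tdoa

def aoa_angle(d, d1, L):
    # Eq. (15): angle from the two ranges and the plane spacing L,
    # valid under the far-field assumption d >> L.
    ratio = (d - d1) / L
    ratio = max(-1.0, min(1.0, ratio))  # guard against measurement noise
    return math.acos(ratio)             # sign ambiguity resolved a priori
```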

3 Localization Scenario

A typical wireless sensor network consists of a large number of small, battery-powered nodes with short-range radios, low-cost processors and specific sensing functions. WSNs are widely applied in the military, industrial and agricultural fields. The communication of a WSN relies on a wireless network protocol such as ZigBee, Bluetooth, WiFi or IrDA; a comparison of these protocols is listed in Table 1.

Table 1. The comparison of common WSN protocols

Protocol   Load of system  Battery life  Nodes afforded  Maximum distance  Data rate  Frequency
Bluetooth  Heavy           Short         7               10 m              1 Mbps     2.4 GHz
WiFi       Heaviest        Shortest      30              100 m             11 Mbps    2.4 GHz
IrDA       Light           Long          2               1 m               16 Mbps    980 nm
ZigBee     Lightest        Longest       255             1–100 m           250 kbps   2.4 GHz


We can see that ZigBee has the advantages of low power and large scale, and its data rate meets the requirement of localization. The spacecraft is composed of several cabins, and inside each cabin a WSN is deployed separately. The WSN is deployed in such a way that all the reference nodes, whose positions are already known, lie on the same plane, and ultrasonic and RF receivers are equipped in each reference node. The astronauts wear the unknown nodes, which are equipped with ultrasonic and RF transmitters. The reference nodes compute the position of an astronaut by receiving the signals transmitted periodically by the astronaut, and the position information is transmitted to the gateway linked to the 1553B bus, so that the ground station can obtain the position of the astronaut (Fig. 3).

Fig. 3. The localization scenario in spacecraft

The WSN is deployed in the way below to avoid using an antenna array. The inside of the spacecraft is divided into several planes, and the distance between adjacent planes is set as L. Reference nodes are deployed in each plane; the number of reference nodes in each plane must not be less than 3, and these 3 nodes must not lie on one line. Furthermore, each reference node is mapped directly onto the adjacent plane, which means it has the same (x, y) coordinates in each plane (Fig. 4). The reference nodes of each plane are numbered as in Table 2. The localization is divided into two steps. In the first step, the astronaut receives the information broadcast by a reference node, which contains the ID of the sender, such as A. The astronaut then chooses the reference node in the adjacent plane which has the same (x, y) coordinates as A. Finally, the z coordinate of the astronaut is figured out by the TDOA/AOA method described in Sect. 2. In the second step, the astronaut is projected onto the plane of the reference nodes, and the distances between the projection and each reference node in the plane are figured out. Finally, 3 reference nodes are chosen according to their distances from the projection of the astronaut in the plane XOY, and the trilateration localization strategy is used to estimate the x and y coordinates of the astronaut.


Fig. 4. The hierarchical deployment of the reference nodes

Table 2. The allocation of the node ID

Node  Plane  Node coordinates  Node ID
A     1      x1, y1, z1        1
B     1      x2, y2, z1        2
C     1      x3, y3, z1        3
D     2      x1, y1, z2        4
E     2      x2, y2, z2        5
F     2      x3, y3, z2        6

To improve the precision of the localization, the localization result should be declared invalid once the condition L ≪ d is not satisfied. By judging the time difference between the received ultrasonic signal and RF signal (tTDOA), we can tell the distance between the reference node and the unknown node. Once tTDOA < tmin, the localization result should be treated as invalid.

4 Conclusion

In this paper, a joint TDOA/AOA three-dimensional localization algorithm based on a Wireless Sensor Network (WSN) is introduced, which has the advantages of requiring no moving anchors, low node density and low complexity. Furthermore, there are some problems that should be solved in the future: the multi-range effect should be studied, the performance should be studied with respect to node density and power consumption, and a precision model should be set up by taking these factors into account.


References 1. Qiangmao G, Fidan B (2009) Localization algorithms and strategies for wireless sensor networks. In: HerShey information science reference, New York 2. Lu Xiaofeng, Hui Pan, Towsley Don et al (2010) Anti-localization anonymous routing for delay tolerant network. Comput Netw 54(11):1899–1911 3. Niculescu D, Nalh B (2003) Ad hoc positioning system (APS) using AOA. In: Proceedings of the 22nd annual joint conference of the IEEE computer and communications societies (INFOCOM’03). IEEE, New York, pp 1734–l743 4. Patwari N, Hero AO, Perkins M et al (2003) Relative location estimation in wireless sensor networks. IEEE Trans Signal Process 51(8):2137–2148 5. Zhang LQ, Zhou XB, Cheng Q (2006) Landscape-3D: a robust localization scheme for sensor networks over complex 3D terrains. In: Proceedings of the 31st IEEE conference. IEEE, New York, pp 239–246 6. Dai Guilan, Zhao Chongchong, Qiu Yan (2008) A localization scheme based on sphere for wireless sensor network in 3D. Acta Electronica Sinica 36(7):1297–1303 (in Chinese) 7. Liangbin Lü, Yang Cao, Xun Gao et al (2006) Three dimensional localization schemes based on sphere intersections in wireless sensor network. J Beijing Univ Posts Telecommun 29 (z1):48–51 (in Chinese)

A Study on Lunar Surface Environment Long-Term Unmanned Monitoring System by Using Wireless Sensor Network

Yin Long(&) and Zhao Cheng

Institute of Manned Space System Engineering, China Academy of Space Technology, 100094 Beijing, China
[email protected]

Abstract. An idea for a lunar surface environment exploration system using a WSN (wireless sensor network) is proposed for long-term unmanned monitoring, in which the large temperature difference between day and night, the loose soil structure of the lunar surface and the space radiation intensity are considered. The system is composed of the WSN, a lunar relay satellite, an Earth relay satellite and an Earth station. An energy-balanced routing protocol is proposed to prolong the network lifetime. The communication protocol stack for the lunar surface, lunar relay satellite, Earth relay satellite and Earth station is designed. An earth–moon communication technique based on relay satellites is proposed to guarantee real-time data transmission. Compared with traditional techniques, the idea proposed in this paper has the advantages of more detection objects, a larger detection range, longer detection time, higher reliability and lower cost.

Keywords: Lunar surface environment long-term unmanned monitoring system · Wireless sensor network · Node · Protocol stack

1 Introduction

Lunar exploration has important strategic significance. The major international space powers and organizations regard lunar exploration as a starting point for deep space exploration and have launched a series of lunar activities. In more than 50 years of scientific exploration of the moon, detection methods have developed from early lunar flyby and hard landing to soft landing, lunar rovers and astronaut field trips, and detection technology has developed from early visible light and infrared to full-moon microwave detection [1, 2]. The detection work covers the data processing and mapping of lunar image maps, the use of hyperspectral remote sensing and radar remote sensing to detect lunar minerals, and the launch of lunar rovers to achieve soft landing [3–5], live images and lunar sampling. The above means obtain a certain amount of information on the moon and provide an important reference for human understanding of the moon, but they have their own shortcomings. Detection of the lunar environment by visible light, infrared and microwave cannot obtain actual sample parameters at close range, which leads to observation error.


With the lunar landing method, the detection range and the accessible terrain are limited, returning the detection parameters is very difficult, and the risk of a single point of failure is high. With astronaut site visits, adapting to the lunar environment is difficult, the detection time and area are limited, and the cost and risk are high. This paper presents a lunar-surface wireless sensor network [6–9], deployed by a lunar lander, to achieve long-term unmanned monitoring of the moon. The monitoring data of all the sensor nodes are collected through the gateway node and fused, and the fusion result is finally sent to the lunar lander and returned to the earth.

2 Design of Lunar Surface Environment Detection System

2.1 The Impact of the Lunar Environment

The lunar regolith has outstanding adhesion, abrasiveness and permeability, which may cause sensor nodes to be buried and affect their solar power supply and communication. The lunar day and night each last about fourteen and a half Earth days, and the temperature difference between day and night on the lunar surface is large, requiring the wireless sensor nodes to adapt to high- and low-temperature working environments. Space radiation on the lunar surface is severe, so the radiation resistance of the wireless sensor nodes must be improved. The atmospheric pressure at the lunar surface is extremely low, on the order of 10^-14, i.e. a super-vacuum, which subjects the sensor node structure to an additional internal pressure of about 0.1 MPa. Sensor nodes working on the ground are generally battery-powered and keep working through battery replacement. As the nodes on the lunar surface are difficult to reach, a new type of unattended power supply must be designed to achieve long-term wireless sensor network monitoring.

2.2 System Design

This paper presents a lunar environment detection system using a wireless sensor network (Fig. 1), which can achieve real-time, long-term unmanned monitoring of the lunar environment. A large number of wireless sensor nodes (including ordinary nodes and cluster head nodes, the yellow and green nodes in Fig. 1) are randomly distributed on the lunar surface by the lunar lander. The nodes form a self-organizing network; according to their configured sensors, they monitor the lunar environment parameters in real time and pass the monitoring results to the lunar lander (sink node) in multi-hop mode. Ordinary nodes are used for data acquisition and sending, while cluster head nodes are only used for data forwarding. Ordinary nodes and cluster head nodes have the same configuration, and the cluster head nodes are selected from the ordinary nodes through dynamic election.


When the sensor nodes on the lunar surface form a larger sensor network, the sensing data often need to be forwarded by multiple cluster head nodes to reach the lunar lander. The lunar lander performs data fusion on the received data and sends the processed data over the wireless link to the lunar relay satellite. The lunar relay satellite forwards the data to the Earth relay satellite, which relays them to the ground receiving equipment, providing real-time environmental parameters of the lunar surface. Similarly, the ground sends control information to the lunar wireless sensor network over the reverse link. The lunar lander has rich hardware and software resources and strong data processing and communication ability, so it can be used as the gateway between the wireless sensor network and the lunar relay satellite. Lunar relay satellites and Earth relay satellites have the advantages of high coverage and high communication link bandwidth, and can therefore be used as the means of communication. The lunar-surface wireless sensor network uses the IEEE 802.15.4 communication protocol; following the integrated space-ground design idea, the lunar lander, lunar relay satellite, Earth relay satellite and ground receiving equipment adopt IP over CCSDS protocols, while the ground receiving equipment and the terrestrial network use the TCP/IP protocol. To realize the lunar wireless sensor network described in this paper, three difficulties have to be solved: (1) designing wireless sensor network nodes adapted to the lunar surface environment; (2) designing an energy-balanced, low-power networking method for the wireless sensor network; and (3) designing a protocol stack for the wireless sensor network that supports lunar-to-Earth space communication.

Fig. 1. Architecture of wireless sensor network


Figure 2 shows the protocol stack of an ordinary node, which, from top to bottom, includes the application framework layer, application support layer, network layer, data link layer and physical layer. The application framework layer, application support layer and network layer conform to the ZigBee specification, while the data link layer and physical layer conform to the IEEE 802.15.4 standard.

Fig. 2. Protocol of wireless sensor network

The protocol stack of a cluster head node only includes the network layer, the data link layer and the physical layer, and conforms to the same standards as the ordinary node.

2.3 Energy Balance Routing Technology

As the energy of wireless sensor network nodes is limited, an energy-balanced routing technology needs to be adopted to prolong the life cycle of the entire network and ensure its long-term stability. When the wireless sensor network is large, the data collected by a node must be forwarded through other nodes to reach the sink node. A node that implements the forwarding function is called a cluster head node, and its energy consumption is usually large. Therefore, the cluster head role must be rotated periodically to avoid exhausting a fixed node. By periodically assessing the residual energy of the nodes, the cluster head nodes are dynamically selected, realizing the energy balance of the whole network and improving the life cycle and robustness of the wireless sensor network.
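A minimal sketch of the periodic, residual-energy-based cluster-head rotation described above; the node structure, the reporting of residual energy and the fraction of cluster heads are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SensorNode:
    node_id: int
    residual_energy: float      # latest reported residual energy
    is_cluster_head: bool = False

# Re-elect cluster heads each round from the nodes with the highest
# residual energy so that the forwarding burden rotates through the network.
def elect_cluster_heads(nodes, head_fraction=0.1):
    k = max(1, int(len(nodes) * head_fraction))
    for node in nodes:
        node.is_cluster_head = False
    for node in sorted(nodes, key=lambda n: n.residual_energy, reverse=True)[:k]:
        node.is_cluster_head = True
    return [n for n in nodes if n.is_cluster_head]
```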

3 Feasibility and Advantage Analysis

The lunar surface environment detection method based on the wireless sensor network has advantages over traditional detection means such as hyperspectral remote sensing, radar remote sensing, lunar rovers and astronaut inspection in terms of detection distance, detection objects, detection range, detection time, system reliability and cost, as shown in Table 1.

Table 1. Comparison of lunar surface exploration methods

Method                Range  Object   Terrain    Lifetime  Reliability  Cost
Radar remote sensing  Long   Single   Unlimited  Mean      Single       High
Lunar rover           Close  Single   Limited    Short     Single       High
Astronauts            Close  Single   Limited    Short     Single       High
WSN                   Close  Various  Unlimited  Long      Redundant    Low

4 Concluding Remarks

According to the analysis, compared with traditional lunar environment detection technology, this idea has advantages in detection distance, detection range, detection time, system reliability and cost, and can serve as a reference for future lunar exploration activities. This paper only presents a system design, proposing a possible system architecture and some key supporting technologies. In the future, detailed design and demonstration need to be carried out, especially in combination with micro-electromechanical technology and advanced radio-frequency technology, to determine the achievable volume and power consumption, how to adapt to the requirements of delivery and deployment, and whether the achievable wireless communication capability meets the requirements, in order to complete the feasibility study.

References 1. Wei Z, Yang L, Xin R et al (2012) Design and implementation of three-dimensional visualization of the moon based on Chang’E-1 data of CCD camera and laser altimeter. J Comput-Aided Des Comput Graph 24(1):37–42, 49 (in Chinese) 2. Li Yun, Jiang Jingshan, Wang Zhenzhan et al (2013) Lunar surface physical temperature retrieved from the measurements by CE-1 lunar microwave sounder. Eng Sci 15(7):106–112 (in Chinese) 3. Elphie RC (1998) Lunar Fe and Ti abundance comparison of lunar prospector and Clementine data. Science 281:1493–1500 4. Meditch JS (1964) On the problem of optimal thrust programming for a lunar soft landing. IEEE Trans Autom Contr 4:477–484 5. Shu JR, Saw AL (2002) Obstacle detection and avoidance for landing on lunar surface. Avoid Astronaut Sci 110(2):35–45 6. Feng H, Chu H-W, Jin Z-K et al (2010) Review of recent progress on wireless sensor network applications. J Comput Res Dev 47(zl):81–87 (in Chinese) 7. Renfa Li, Ye Wei, Fubin Hua et al (2008) A review of middleware for wireless sensor networks. J Comput Res Dev 45(3):383–391 (in Chinese) 8. Sensor Webs of SmartDust: Distributed signal processing/data fusion/inferencing in large microsensor arrays 9. Miu Shifu, Liang Huawei, Meng Qing et al (2007) Design of wireless sensor networks nodes under lunar environment. Trans Microsys Technol 26(8):117–120 (in Chinese)

A Study on Automatic Power Control Method Applied in Astronaut Extravehicular Activity

Yin Long(&), Pei Guo, and Yusheng Yi

Institute of Manned Space System Engineering, China Academy of Space Technology, Beijing 100094, China
[email protected]

Abstract. The space station mission faces data interaction requirements between the space station and multiple extravehicular astronauts. The traditional wireless communication mode with constant transmitting power will cause interference and incompatibility because the extravehicular astronauts are at different positions. In order to ensure the communication link stability of all extravehicular astronauts, an automatic power control method is proposed. The extravehicular communication equipment located in the space station receives the real-time data of all extravehicular astronauts, and the signal-to-noise ratio is estimated. According to the evaluation results, the power is automatically controlled in two ways, by an outer loop and an inner loop. Finally, the signal-to-noise ratios of all the astronauts received by the extravehicular communication equipment are the same, ensuring the quality of extravehicular communication. The method is verified by building a testbed and carrying out experiments, and the results show that the received signal-to-noise ratios of the multiple links are almost the same, so the reliability of multi-astronaut extravehicular activity is improved.

Keywords: Space station · Communication system of astronaut extravehicular activity · Reversed signal · Automatic power control

1 Introduction

With the vigorous development of manned space missions, EVA (extravehicular activity) has become their feature and identification. Communication technology is the key to EVA, as it supports the communication between the astronauts and the spacecraft or space station. Only a few countries, such as the USA, Russia and China, can carry out EVA. TDMA is adopted by the International Space Station to solve the problems mentioned above, but it has the disadvantages of low efficiency and high power consumption [1]. The Shenzhou-7 mission completed the first EVA of China [2–6], and its EVA system supports the transmission of telemetry and voice. The EVA communication equipment worn by the astronaut has a constant, unaltered transmission power, which can only support point-to-point communication. The space-to-space communication system, which supports transmitting data between two spacecraft, also lacks the ability of automatic power control [7, 8].


The space station mission faces the requirement of multi-astronaut EVA, and the area of EVA is increased. With the above means, the signal-to-interference ratios of the EVA communication links will differ when the astronauts move to different positions, and as a result the receiving and demodulation of the signals are affected. In order to improve the quality of communication for multi-astronaut EVA, a method based on automatic power control is proposed to ensure the same signal-to-interference ratio on all the links.

2 Communication System of EVA

The communication system of EVA is composed of the EVA communication equipment, the EVA communication antenna, the space-suit antenna and the space-suit communication equipment. The EVA communication equipment is installed in the space station while the EVA communication antenna is fixed on the shell. The space-suit antenna and space-suit communication equipment are built outside and inside the space-suit worn by the astronaut, respectively. The EVA communication equipment plays a key role in the system and is singly configured. The number of space-suit antennas and space-suit communication equipment is configured according to the task, normally from 1 to 3. For example, the communication system of EVA for 3 astronauts is shown in Fig. 1. A bi-directional communication link between the astronauts and the space station is established through the system, and information such as telemetry and voice is transferred.


Fig. 1. Communication system for astronaut extravehicular activity


The UHF band is used by the communication system of EVA. The forward link is defined as EVA communication equipment to space-suit communication equipment, while the backward link is defined as space-suit communication equipment to EVA communication equipment. FDD (Frequency Division Duplex) and CDMA (Code Division Multiple Access) are adopted to realize bi-directional and one-to-many communication. QPSK is applied for the modulation while convolutional coding is adopted for the channel coding. The system can support at least 3 astronauts carrying out EVA, and the error rate of the EVA communication is below 10⁻⁵.

3 Automatic Power Control Method for the Backward Signal

In order to cover the full scope of EVA wirelessly and ensure the quality of EVA communication, the transmission power of the backward signal should be controlled so that the near-far effect is suppressed and interference is reduced. The power of the backward link is controlled by orders transferred from the EVA communication equipment to the space-suit communication equipment, and the communication link remains continuous during the power adjustment. The EVA communication equipment transmits a signal with constant power Pt, while the transmitting power of the space-suit communication equipment is adjusted according to the orders of the EVA communication equipment.

Fig. 2. Flow of reversed signal automatic power control

According to the synchronization state of the backward link, open-loop or closed-loop power control is applied, as shown in Fig. 2.

3.1 Open-Loop Automatic Power Control

The forward and backward links are both in the synchronization process as soon as the space-suit communication equipment is turned on. The EVA communication equipment cannot control the transmitting power of the space-suit communication equipment until synchronization is achieved. Therefore, the space-suit communication equipment can only initialize its transmitting power with reference to the received forward power, because the quality of the backward link cannot yet be judged; the power control works in open-loop mode. After the forward synchronization is completed, the space-suit communication equipment calculates the received power Pr according to the AGC. The power loss over the forward link, Ploss, is calculated by Eq. (1):

P_{loss} = P_t - P_r \quad (1)

The expected power received by the EVA communication equipment is specified as Pexpect, and it is bounded by Eq. (2), where Pmin and Pmax represent the minimum and maximum received power, respectively:

P_{min} \le P_{expect} \le P_{max} \quad (2)

According to Pexpect, the initial transmitting power of the space-suit communication equipment is minimized to cause the minimum interference to the other EVA links. The initial transmitting power of the space-suit communication equipment is specified as Ptt and can be calculated through Eq. (3), where Psurplus is used to reduce the impact on other EVA links:

P_{tt} = P_{loss} + P_{expect} - P_{surplus} \quad (3)

After the initialization, the space-suit communication equipment confirms whether the backward link is established by the ACK from the EVA communication equipment. The transmitting power is increased by ΔP1 each time until the ACK is received. The system switches to closed-loop mode once the forward and backward synchronization is finished.
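The open-loop initialization can be summarized by the sketch below, with all powers in dBm; Pexpect, Psurplus, the step ΔP1 and the retry limit are illustrative assumptions rather than values from the paper, and ack_received stands for the ACK check described above.

```python
# Open-loop initialization of the space-suit transmitting power (dBm).
def open_loop_initial_power(p_t, p_r, p_expect, p_surplus):
    p_loss = p_t - p_r                    # Eq. (1): forward-link loss
    return p_loss + p_expect - p_surplus  # Eq. (3): initial backward power

# Raise the power by delta_p1 per attempt until the ACK from the EVA
# communication equipment confirms that the backward link is established.
def establish_backward_link(p_tt, ack_received, delta_p1=1.0, max_steps=20):
    power = p_tt
    for _ in range(max_steps):
        if ack_received(power):
            return power
        power += delta_p1
    return None  # link not established within the allowed power range
```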

3.2 Closed-Loop Automatic Power Control

After synchronization, data is transferred bi-directionally and the power control method works in closed loop. Closed-loop power control is divided into an outer-loop mode and an inner-loop mode. The outer-loop mode is used to obtain the minimum signal-to-interference ratio (SIR) which can maintain the link, while the inner-loop mode is used to make sure that the SIRs of all EVA links received by the EVA communication equipment are almost the same.

3.2.1 Outer-Loop Automatic Power Control
The SIR required to maintain the EVA communication link, specified as St, is calculated according to the bit error rate (BER) measured by the EVA communication equipment. As the environment is stable, the data rate of the EVA communication equipment is invariable. The period of the outer-loop automatic power control is set as a multiple of T_TTI (Transmission Timing Interval).


Fig. 3. Outer-loop procedure for power control

St is obtained in every period and passed to the inner-loop automatic power control module. Since the channel environment is stable, the period is set as 1 s, and St is set according to the BER calculated in one period. The flow of the outer loop is shown in Fig. 3: St is calculated periodically and the transmitting power of the space-suit communication equipment is adjusted accordingly.

3.2.2 Inner-Loop Automatic Power Control
Inner-loop automatic power control means that the power is controlled periodically according to the difference between the actually measured SIR, called Sa, and St. The control command, specified as TPC, is carried in the physical layer frame shown in Fig. 4. The frame content includes business data, pilot and TPC. TPC occupies 2 bits: 00 means increasing power, 01 stands for decreasing power, 11 means maintaining power, and 10 is reserved.

Fig. 4. Frame format for physical layer

Inner-loop automatic power control is carried out as follows. If Sa > St, TPC is set to 01, and the transmitting power of the space-suit communication equipment is expected to be reduced in one period. If Sa < St, TPC is set to 00, and the transmitting power is expected to be increased in one period. If Sa = St, TPC is set to 11, and the transmitting power is expected to be maintained. TPC is acquired when the frame is received and analyzed, and the transmitting power of the space-suit communication equipment is adjusted step-by-step according to TPC. The current power is specified as P(n), while the previous one is specified as P(n − 1). The equation below shows the relationship.


P(n) = P(n-1) + \Delta P_2 \cdot T_{PC} \quad (4)

ΔP2 stands for the step value adjusted each time, and T_PC is the coefficient determined by the TPC command: T_PC = 1 when TPC equals 00, T_PC = −1 when TPC equals 01, and T_PC = 0 when TPC equals 11. The near-far effect is reduced by this adjustment of the power and the resulting equalization of the SIRs.
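One inner-loop iteration following Eq. (4) can be sketched as below; the TPC bit patterns follow the frame definition above, while the step ΔP2 shown is an assumed value.

```python
# Map the measured/target SIR comparison to a TPC command, then apply
# Eq. (4) on the space-suit side to update the transmitting power (dBm).
TPC_COEFF = {"00": +1, "01": -1, "11": 0}  # "10" is reserved

def make_tpc(sir_measured, sir_target):
    if sir_measured > sir_target:
        return "01"   # request a power decrease
    if sir_measured < sir_target:
        return "00"   # request a power increase
    return "11"       # maintain the current power

def inner_loop_update(p_prev, tpc, delta_p2=0.5):
    return p_prev + delta_p2 * TPC_COEFF.get(tpc, 0)  # Eq. (4)
```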

4 Conclusion

According to the demand of multi-astronaut EVA, an automatic power control method is proposed. At first, the initial power is set in open-loop mode when the backward link is not yet synchronized. The closed-loop mode works once the forward and backward links are set up. St is acquired in the outer-loop mode, which ensures the maintenance of the EVA communication, and the power is controlled step-by-step through the closed loop to reduce the interference and the near-far effect. The experiment only considered free-space fading, while the near-field effect and the shell of the spacecraft were ignored; further study should be carried out.

References 1. Yutao Hao, Liu Baoguo, Wang Ruijun et al (2014) Research on TT&C system in international space station. Manned Spaceflight 20(2):165–172 (in Chinese) 2. Zhi S, Bainan Z, Teng P et al (2009) Research and development of Shenzhou-7. Manned Spaceflight 15(2):16–21, 48 (in Chinese) 3. Chen Jindun, Liu Weibo, Chen Shanguang (2009) The system design and flight application of astronaut EVA in Shenzhou VII mission. Manned Spaceflight 15(2):1–9 4. Xiao Yu, Ma Xiaobing, Zhongqiu Gou (2010) Failure mode and countermeasure design and implement for Shenzhou spaceship’s extravehicular activity. Spacecraft Eng 19(6):56–60 (in Chinese) 5. Zhihao Pang (2008) Development of technologies of extravehicular activities. Sci Technol Rev 26(20):21–27 (in Chinese) 6. Zhu Guangchen, Shijin Jia (2009) The ground verification of spacecraft EVA functions. Manned Spaceflight 15(3):48–53 (in Chinese) 7. Shi Yunchi (2011) Space to space communication subsystem manned spaceflight and its key technology. Aerosp Shanghai 28(6):38–42 (in Chinese) 8. Cheng Qinglin, Liang Hong, Wu Yijie et al (2014) The design and implementation of multimode receiver for rendezvous and docking in space. Manned Spaceflight 20(1):58–64 (in Chinese)

Design of EVA Communications Method for Anti-multipath and Full-Range Coverage

Yin Long(&), Kewu Huang, and Xin Qi

Institute of Manned Space System Engineering, China Academy of Space Technology, 100094 Beijing, China
[email protected]

Abstract. Considering the large scale of manned spacecraft and the increasing scope of EVA, a full-range, anti-multipath communications method for EVA is proposed to solve the problems of low coverage and severe multipath effect which cannot be solved by traditional methods. Multiple antennas are evenly distributed around the manned spacecraft to ensure full communication coverage of EVA. FDD (Frequency Division Duplex) is adopted, and different frequencies are assigned to the forward link and backward link respectively. DS-CDMA (Direct-Sequence Code Division Multiple Access) is applied: diverse spreading codes are distributed to each astronaut of EVA, and the problem of EVA communication interference for multiple astronauts is solved. In order to weaken the multipath effect brought by shielding and reflection from the manned spacecraft, a communication method based on combination is proposed. The time diversity technique is applied: the manned spacecraft transmits the forward message through multiple antennas in time-staggered mode, and the astronaut of EVA searches for the maximum correlation point in a limited time by sliding-window correlation. The remaining peaks are found near the original one, and maximal ratio combining is carried out according to the judgement of the peak values. The space diversity technique is also used: the manned spacecraft receives the backward information of the astronauts by multiple antennas, all the peaks are found by sliding-window correlation, and maximal ratio combining is implemented according to the estimation. A simulation is made, and the result shows that, with the whole-scope communications method for EVA, the signal-to-noise ratio required to realize a BER (Bit Error Rate) of 10⁻⁵ can be reduced by 1–4 dB compared with other methods, and full-range EVA communication without interruption is realized.

Keywords: Full-range · EVA · Extravehicular communication · CDMA · Multipath effect · Time diversity · Space diversity

1 Introduction

With the fast development of manned space technology, EVA has become a key technology for human exploration of space. Communication methods that realize the information exchange between the extravehicular astronaut and the spacecraft have become a significant part of EVA.


With the quick development of space technology and the growing complexity of space missions, future missions including the space station and manned lunar landing have fundamental requirements for EVA, which must consider the demand of multiple extravehicular astronauts, the increasing scope of activity, the growing bandwidth, and the multipath effect caused by shielding and reflection from the shell of a large-scale spacecraft. At present, only a few countries, such as the USA, Russia and China, have mastered the communications technique of EVA. TDMA, which has the benefits of anti-multipath capability and high rate, was adopted by the International Space Station [1], but it has the disadvantages of high power consumption and low efficiency. Shenzhou-7 [2–6] realized the communication of a single astronaut's EVA in two ways: a communications method through an umbilical cord and a wireless communications method based on FDMA. The umbilical-cord-based method has the advantages of high reliability and good anti-interference performance, but it is not fit for a wide range of EVA as its length is limited. The FDMA communications method has good reliability, but it can only be used for a single astronaut's EVA once the frequency is restricted. In order to meet the needs of future EVA, an anti-multipath communications method is proposed to solve the problems of the multipath effect, single-astronaut-only EVA support and low communication efficiency, and the range of EVA is extended remarkably in both angle and distance.

2 EVA Communications Method

The EVA communications system is composed of the EVA communication equipment, the EVA communication antenna, the space-suit antenna and the space-suit communication equipment. The EVA communication equipment is installed in the space station while the EVA communication antenna is fixed on the shell. The space-suit antenna and space-suit communication equipment are built outside and inside the space-suit worn by the astronaut, respectively. The EVA communication equipment plays a key role in the system and is often singly deployed. The number of space-suit antennas and space-suit communication equipment is configured according to the task, normally from 1 to 3. The multiple-antenna method realizes EVA communication through multiple antennas, and DS-CDMA is applied. With multiple antennas, the angular range is remarkably improved and can easily cover 360°. The EVA communication method proposed here is processed in the following steps, as shown in Fig. 1. Firstly, the system configuration is executed, which includes the deployment of the EVA communication antennas as well as DS-CDMA initialization. Then, the forward and backward links are built up synchronously. The forward link applies the time diversity technique while the backward link uses the space diversity technique.


Fig. 1. Flow of EVA communications method

2.1 Design of Antenna Array

EVA communication cannot be established between a single antenna and an astronaut behind the spacecraft, considering the shielding of the spacecraft and the limitation of the antenna pattern; a single antenna cannot cover 360° around the spacecraft. In order to improve the coverage of EVA communication, an antenna array is designed: k antennas are evenly deployed around the spacecraft, and the angle between two adjacent antennas equals 360°/k. Every antenna is connected to the EVA communication equipment by cables, and each antenna supports bi-directional communication. The antenna array realizes 360° coverage and the EVA range is improved. The EVA communication system is shown in Fig. 2.


Fig. 2. EVA communication system

2.2 DS-CDMA

Future space missions require multiple astronauts to carry out extravehicular activity simultaneously. DS-CDMA [7] is used to share both time and frequency, so the need for multiple astronauts executing EVA is met. Taking 3 astronauts as an example, the forward link uses 3 spreading codes to identify each astronaut's communication with the spacecraft, and the backward link also uses 3 spreading codes to realize all the communications simultaneously; in all, 6 spreading codes are applied. The physical layer frame is shown in Fig. 3, which includes the frame before convolution, the frame after convolution and the frame after spreading.

Fig. 3. Frame format for system

2.3 Time Diversity

Assuming all the forward signals are emitted simultaneously, the phases of the received signals will differ because the signal paths differ, which degrades the receiving performance. The signal channels could be distinguished by different spreading codes to avoid non-coherent phase combination, but then the space-suit communication equipment would have to receive and demodulate k channels of signal, which complicates the EVA communication system considerably. After spreading, the chips are uncorrelated with each other, which means that the fading of spreading signals delayed by a fixed number of chips is uncorrelated; the time diversity technique [8, 9] is based on this characteristic. The EVA communication equipment transmits signals in time-staggered mode, and the process is shown in Fig. 4. The spreading signal is transmitted through k channels by k antennas, and the interval between adjacent channels is n chips, so the longest interval among the k channels is (k − 1) · n chips. The space-suit communication equipment receives the k channels of signal and finds the peak value by sliding-window correlation. The correlation length is set as an integer multiple b of the frame length L, i.e. bL, and the peak value a1 is found by sliding-window correlation. The other k − 1 peak values, specified as a2, a3, …, ak, are found around peak a1 within (k − 1) · n chips. The number of signals to be combined is determined by whether each peak value exceeds the bound, and by the combination of signals the forward link becomes much more stable.

Fig. 4. The technology of time diversity

2.4 Space Diversity

The EVA communication equipment receives the backward signal through multiple antennas. As the fading characteristics of the backward signals are independent of each other, combination of the multiple signals is feasible by the space diversity technique.


The spreading codes of all the space-suit communication equipment are stored in advance in the EVA communication equipment, and all the backward signals are correlated by sliding window respectively to get the peak values. The steps, shown in Fig. 5, are as follows. Firstly, the backward signals are correlated over a limited length by sliding window. Secondly, the peak value is acquired within the designated time. Finally, all the peak values are estimated and the selected ones are combined [10].

Fig. 5. The technology of space diversity
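The backward-link processing can be sketched as follows: each antenna branch is despread by a sliding-window correlation to locate its peak, and the branches whose peaks exceed a decision floor are combined with weights proportional to their own estimated amplitudes (maximal ratio combining). The real-valued signal model and the decision floor are simplifying assumptions.

```python
# Sliding-window correlation of one antenna branch against the stored
# spreading code; returns the best offset and the peak correlation value.
def correlate_peak(received, code):
    n, m = len(received), len(code)
    best_off, best_val = 0, float("-inf")
    for off in range(n - m + 1):
        val = sum(received[off + i] * code[i] for i in range(m))
        if val > best_val:
            best_off, best_val = off, val
    return best_off, best_val

# Maximal ratio combining: keep the branches whose peaks exceed the
# decision floor and weight each kept peak by its own amplitude.
def combine_branches(branches, code, peak_floor=0.0):
    peaks = [correlate_peak(rx, code)[1] for rx in branches]
    kept = [v for v in peaks if v > peak_floor]
    return sum(v * v for v in kept)  # combined decision statistic
```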

3 Conclusion

This paper proposed an EVA communications method which has the advantages of full-range coverage and anti-multipath capability. Multiple antennas supporting both transmitting and receiving are evenly deployed around the shell of the spacecraft and cover 360°. Time diversity is applied in the forward link while space diversity is used in the backward link. By the combination of multiple signals, the performance of EVA communication is remarkably improved. Compared with traditional methods, the proposed method achieves better coverage, higher efficiency and a better signal-to-noise ratio. The multipath effect is weakened, and the method can be used in future EVA communication.

References 1. Yutao Hao, Baoguo Liu, Wang Ruijun et al (2014) Research on TT&C system in international space station. Manned Spaceflight 20(2):165–172 (in Chinese) 2. Zhi S, Bainan Z, Teng P et al. (2009) Research and development of Shenzhou-7. Manned Spaceflight 15(2):16–21, 48 (in Chinese) 3. Chen Jindun, Liu Weibo, Chen Shanguang (2009) The system design and flight application of astronaut EVA in Shenzhou VII mission. Manned Spaceflight 15(2):1–9 (in Chinese) 4. Xiao Yu, Ma Xiaobing, Zhongqiu Gou (2010) Failure mode and countermeasure design and implement for Shenzhou spaceship’s extravehicular activity. Spacecraft Eng 19(6):56–60 (in Chinese)


5. Zhihao Pang (2008) Development of technologies of extravehicular activities. Sci Technol Rev 26(20):21–27 (in Chinese) 6. Guangchen Zhu, Shijin Jia (2009) The ground verification of spacecraft EVA functions. Manned Spaceflight 15(3):48–53 (in Chinese) 7. Zhou Geqiang, Xuan Yong, Zou Yongzhong (2010) Application analysis on the novel CDMA technology in extravehicular communication. Manned Spaceflight 3:14–18 8. Guodong Zhao, Xiaoting Chen, Liu Huijie et al (2009) Channel model of LEO satellite and high resolution rake receiver. Aerosp Shanghai 26(5):52–55 9. Li Miao, Lv Shanwei, Zhang Jianglin et al (2004) A novel multistage blind space-time multiple receiver for DS/CDMA. Acta Electronica Sinica 32(9):1553–1555 10. Zhang Lin, Qin Jiayin (2007) New efficient methods for performance analysis of maximal ratio combining diversity receivers. Chin J Radio Sci 22(2):347–350

High Accurate and Efficient Image Retrieval Method Using Semantics for Visual Indoor Positioning

Jin Dai1, Lin Ma1(&), Danyang Qin2, and XueZhi Tan1

1 School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, People's Republic of China
[email protected]
2 Electronic Engineering College, Heilongjiang University, Harbin 150080, People's Republic of China

Abstract. Visual indoor positioning has wide application because of its good positioning performance without additional hardware requirements. However, as the indoor scenes and their complexity increase, the offline database inevitably becomes large and the online retrieval time becomes long, which makes visual indoor positioning impractical. To solve this problem, we propose a Semantic and Content-Based Image Retrieval (SCBIR) method. By dividing the offline database into semantic databases with different semantic types, the retrieval scope of the image is reduced, and the retrieval time is reduced. First, we use a semantic segmentation method to detect the semantics. Then we divide the different semantic scenes according to the image order and the basic pattern of the semantics in the scene. Finally, we use the images belonging to each semantic scene to build a semantic database, so as to achieve accurate and fast online image retrieval. The experimental results indicate that the proposed method is suitable for large-scale retrieval databases, and it can reduce the retrieval time in the online stage on the premise of ensuring the accuracy of image retrieval that is critical for visual indoor positioning.

Keywords: Visual positioning · Image retrieval · Semantic database · Database classification

1 Introduction Nowadays, location based services (LBS) receive extensive attention with the rapid development of smart devices [1]. Visual indoor positioning is highlighted by its unique advantages because it does not need additional hardware installation to complete image acquisition and positioning [2], and it is therefore a prominent method for future indoor positioning and navigation services. At present, vision-based indoor positioning systems have two stages: an offline stage and an online stage. The offline stage is a process of data acquisition and establishment of an offline database. The online stage is an image matching process for position estimation, in which the online query image is retrieved among the images in the offline database. At present, there are mainly two ways to reduce the online retrieval time. One


way is to find a new image feature extraction method with lower complexity and more accurate classification. The other way is to classify the offline database, so as to reduce the search area and speed up the image retrieval. For the former, much research has been done on low-complexity feature extraction. In [3–5], the Content-Based Image Retrieval (CBIR) method was used for retrieval in large collections of medical images, street graffiti images and satellite remote sensing images. For the latter, the offline database is classified to reduce the search scope while ensuring the accuracy at the same time. Kido et al. [6] used a convolutional neural network and a region-based convolutional neural network to classify pulmonary diseases and improved the performance of detailed classification. In [7], images in the database were roughly divided into four categories by a clustering method, and then accurately retrieved from each category. However, the clustering method requires many tests to determine the optimal number of clusters, which leads to poor retrieval accuracy. To sum up, the above methods either focus on retrieval accuracy but fail to provide retrieval efficiency, or improve the real-time performance but fail to offer accurate retrieval. Therefore, in view of the above problems, this paper improves the CBIR method and proposes a Semantic and Content-Based Image Retrieval (SCBIR) method. It makes up for the poor retrieval efficiency of CBIR caused by the accuracy requirement. It can not only ensure the number of classification categories but also accurately classify multiple landmarks in a single image. In addition, the pixel positions of the various semantics in the image can be clearly obtained, which allows the method to be applied in more scenes according to requirements.

2 System Model
2.1 Visual Indoor Positioning System Overview
A typical visual indoor positioning system has two stages: the offline stage and the online stage. The main task of the offline stage is data acquisition, which provides the required database for the online stage. The main task of the online stage is to provide users with location services, completing the retrieval of the user-provided image and the positioning. The main workflow is shown in Fig. 1.
Fig. 1. Flow chart of visual indoor positioning system


In the offline stage, the image acquisition is carried out, and the images with their associated positions are modeled into a database. Then feature extraction (or image classification) is carried out for the images in the database to form a new feature database or sub-databases, which is convenient for efficient and rapid retrieval in the online stage. In the traditional visual indoor positioning system, a large number of images needs to be collected in order to obtain higher positioning accuracy. For the feature extraction method, the online retrieval time then also increases substantially, which affects the real-time online positioning. For the database classification method, each sub-database is so large that the retrieval time cannot be reduced effectively. To solve this problem, this paper proposes an image retrieval method based on semantics and content, which not only improves the accuracy of classification but also increases the number of semantic databases when the database becomes large, so that the number of images in each semantic database decreases accordingly.
2.2 SCBIR Method Overview
In this paper, we propose a SCBIR method, whose frame is shown in Fig. 2.
Fig. 2. Frame of SCBIR method

As can be seen from Fig. 2, the core step of the method is the process of classifying the offline database into semantic databases, which can greatly reduce the retrieval range of the traditional retrieval algorithm and reduce the retrieval time. Within a semantic database, a high-complexity, high-precision method can be used to retrieve the results most consistent with the input image to be retrieved. The precise retrieval of the second step is premised on accuracy. It is assumed that the semantics contained in the positioning environment have c classes (excluding the background class), and the total semantic library is defined as S = [S_1, S_2, ..., S_c], where S_i (i ∈ {1, ..., c}) is the corresponding semantic. For each image in the image database, the contents may contain multiple semantics, so the combination of different semantics is required for the classification of each image. By permutation and combination, the number of semantic combination types is N = 2^c. We classify the images with the same semantic combination into one category, and finally get the semantic database D_S = [D_S^1, D_S^2, ..., D_S^N]. D_S^i (i ∈ {1, ..., N}) is the semantic sub-database, which is formed by collecting the images with the same semantics after semantic discrimination of the offline image database.


3 Proposed Method
3.1 Semantic Segmentation Network Framework
On the basis of Sect. 2, we analyze the semantic segmentation framework in detail in this section. For the semantic segmentation, we mainly employ the region-based fully convolutional network (R-FCN). The main flow diagram is shown in Fig. 3.
Fig. 3. Flow chart of R-FCN framework

According to Fig. 3, the R-FCN network is composed of the FCN (Fully Convolutional Network), the RPN (Region Proposal Network), and the ROI sub-network. In the RPN, because the input image contains location information and category information of various semantics, the overlap rate between the ground truth and the ROI needs to be calculated to judge the location of the real semantics. This overlap rate is defined as the Intersection over Union, which is a standard to measure the accuracy of detecting the object. It is often evaluated by the Jaccard coefficient:

J(A, B) = |A ∩ B| / |A ∪ B| = |A ∩ B| / (|A| + |B| − |A ∩ B|),   (1)

where A and B respectively represent the predicted range and the real range.
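For concreteness, Eq. (1) can be evaluated directly from two axis-aligned boxes. The short sketch below is our own illustration (box format and names are assumptions, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection over Union (Jaccard coefficient) of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)          # |A ∩ B|
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter                            # |A| + |B| - |A ∩ B|
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175 ≈ 0.143
```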

3.2 Precise Semantic Segmentation

In the ROI sub-network, a convolution operation is also carried out on the feature map output by the FCN [8]. The ROI sub-network uses the convolution operation to generate k × k position-sensitive score maps for each category on the entire image. The value on each position-sensitive score map represents the score of the category at that position in space. For a region proposal box of size R × S obtained by the RPN, the box can be divided into k × k sub-regions, and the size of each sub-region is R × S / k². Because too much data would interfere with the subsequent classification operations, it is necessary to compress the data with pooling operations. For any sub-region bin(i, j), 0 ≤ i, j ≤ k − 1, define the position-sensitive pooling operation as:


r_c(i, j | Θ) = (1/n) Σ_{(x,y) ∈ bin(i,j)} z_{i,j,c}(x + x_0, y + y_0 | Θ),   (2)

where r_c(i, j | Θ) is the pooled response of sub-region bin(i, j) to category c, z_{i,j,c} is the position-sensitive score map corresponding to sub-region bin(i, j), (x_0, y_0) represents the pixel coordinates of the upper left corner of the target candidate box, n is the number of pixels in the sub-region, and Θ represents all the parameters obtained by network learning. Finally, the mean value of the pooled responses r_c(i, j | Θ) over the k × k sub-regions is calculated, and the (c + 1)-dimensional feature map output by the ROI pooling layer is summed according to the dimensions to obtain a (c + 1)-dimensional vector:

r_c(Θ) = Σ_{i,j} r_c(i, j | Θ).   (3)
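The pooling in Eqs. (2)–(3) can be sketched as follows. This is a simplified NumPy illustration under our own assumptions (rectangular RoI, integer bin edges, toy score maps); it is not the R-FCN implementation itself:

```python
import numpy as np

def position_sensitive_pool(score_maps, roi, k):
    """score_maps: shape (k, k, C+1, H, W), one map per bin and class.
    roi: (x0, y0, w, h) region proposal. Returns the (C+1,) vector of Eq. (3)."""
    x0, y0, w, h = roi
    num_classes = score_maps.shape[2]
    r = np.zeros(num_classes)
    for i in range(k):
        for j in range(k):
            # sub-region bin(i, j) of size roughly (w/k) x (h/k); Eq. (2): average pooling
            xs, xe = x0 + int(j * w / k), x0 + int((j + 1) * w / k)
            ys, ye = y0 + int(i * h / k), y0 + int((i + 1) * h / k)
            r += score_maps[i, j, :, ys:ye, xs:xe].mean(axis=(1, 2))
    return r / (k * k)                 # mean over the k*k bins before classification

def softmax(r):
    """Eq. (4): class probabilities from the pooled (C+1)-dimensional score vector."""
    e = np.exp(r - r.max())
    return e / e.sum()

scores = np.random.rand(3, 3, 5, 64, 64)          # k = 3, C + 1 = 5 classes, toy maps
print(softmax(position_sensitive_pool(scores, (8, 8, 30, 30), k=3)))
```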

By plugging this vector into the Softmax formula, we can use the Softmax regression method to get the probability that the target in the search box belongs to each category and classify it according to the maximum probability:

s_c(Θ) = e^{r_c(Θ)} / Σ_{c'=0}^{c} e^{r_{c'}(Θ)}.   (4)

Each semantic category is accompanied by a four-dimensional vector, denoted {x, y, w, h}, which respectively represents the central abscissa, central ordinate, width and height of the current semantic ROI area. In the network, the loss function L is composed of the classification loss function L_cls and the position loss function L_reg:

L(s, t_{x,y,w,h}) = L_cls(s_{c*}) + λ sign(c*) L_reg(t, t*),  with sign(c*) = 1 if c* > 0 and 0 otherwise,   (5)

where c* stands for the ground truth, and c* = 0 means the classification is correct. λ represents the balance parameter, and λ = 1 means that the classification loss and the location loss are equally important. t represents the predicted semantic location, and t* represents the location of the ground truth. In order to learn from extreme cases, our method adopts OHEM (online hard example mining) [9].

3.3 Efficient Image Retrieval

The semantic information in the image is bound to the image in the form of a label. For an input image I_test, if the image contains semantic components, the semantic label may be S_test = [S_1, S_2, ..., S_k], where 1 ≤ k ≤ c. Therefore, we define a semantic discriminant vector X = [x_1, x_2, ..., x_c]^T, where:

x_i = 1 if S_i ≠ ∅ (i.e., semantic S_i is detected in I_test), and x_i = 0 otherwise, for i ∈ {1, ..., c}.   (6)

In this paper, the semantic information contained in each image is converted into a corresponding digital label. Define the transformation vector K = [2^0, 2^1, ..., 2^{c−1}] to convert the multiple semantic labels into a unique digital label:

l = K · X.   (7)

Each image in the offline database is input into the network to obtain its digital label l and is assigned to the corresponding semantic sub-database. In the online stage, the query image is processed in the same way, and the matching image is then found by applying the appropriate precise retrieval method in D_S^i.
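A minimal sketch of the label-to-database mapping of Eqs. (6)–(7), assuming the semantic classes are simply indexed 1...c (the class indices and names here are illustrative, not from the paper):

```python
def discriminant_vector(detected, c):
    """Eq. (6): x_i = 1 if semantic class i was detected in the image, else 0."""
    return [1 if i in detected else 0 for i in range(1, c + 1)]

def database_label(x):
    """Eq. (7): l = K . X with K = [2^0, 2^1, ..., 2^(c-1)], a unique sub-database index."""
    return sum(xi * (2 ** i) for i, xi in enumerate(x))

c = 9                                  # number of semantic classes in the scene
x = discriminant_vector({1, 4}, c)     # e.g. classes 1 and 4 detected in the query image
print(x, database_label(x))            # [1, 0, 0, 1, 0, 0, 0, 0, 0]  ->  9
```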

4 Implementation and Performance Analysis
4.1 Experiment Environment

In order to test the performance of our proposed method and compare it with the existing method, we used the 12th floor of Information Building of Harbin Institute of Technology as the experimental environment. The floor plan is shown in Fig. 4.


Fig. 4. Floor plan for the experimental scene

In this experiment, 0.5 m was taken as the data acquisition interval, and the images were collected in the forward and reverse directions respectively. After many experiments, we set the initial learning rate to 0.01 and the number of iterations to 7000.
4.2 Experiment Results

After every image in the offline database is semantically labeled, the images in the offline database can be classified according to semantic components. The classification accuracy is shown in Table 1.

Table 1. Different semantic recognition accuracy

Semantic category    Identified correctly   Identified wrongly   Identification accuracy (%)
Door                 531                    8                    98.52
Window               148                    1                    99.33
Heating              135                    3                    97.83
Poster               180                    6                    96.77
Exhibition board     314                    7                    97.82
Ashbin               19                     3                    86.36
Fire hydrant         65                     1                    98.48
Emergency exit       61                     2                    96.83
Vent                 25                     1                    96.15

As shown in Table 1, the recognition accuracy of each semantic conforms to the accuracy requirements of offline database classification, and the following offline database classification algorithm can be carried out when each image is given a correct semantic label. The database classification confusion matrix of SCBIR algorithm is shown in Fig. 5. The algorithm in this paper automatically divides the database into 35 categories according to semantic information. Each row in the figure represents the real category of each image, and each column represents the predicted category of the image after passing through the neural network.

Fig. 5. SCBIR algorithm classifies confusion matrices

As can be seen from Fig. 5, semantic segmentation network classifies images accurately in most offline databases. For a small number of categories, the neural network has the lowest classification accuracy of 67%, but the misclassification of image database will not have a great impact on the following retrieval work. In this paper, the proposed algorithm is compared with the Mean Average Precision (MAP) of


the traditional CBIR algorithm [10]. The result is shown in Fig. 6a, and this paper also compared the retrieval time costs of the two algorithms under different database capacities, as shown in Fig. 6b.

(a) MAP of two retrieval algorithms

(b) time cost of two retrieval methods

Fig. 6. MAP and time cost of two retrieval algorithms

In Fig. 6a, the average retrieval accuracy of the SCBIR method proposed in this paper fluctuates around 90% with the increase of retrieval times, while the average retrieval accuracy of the traditional CBIR algorithm fluctuates around 60%. As can be seen from Fig. 6b, when the retrieval database capacity is small, the CBIR algorithm has certain advantages. However, after retrieving more than 100 images in the database, the advantages of SCBIR algorithm are revealed. Therefore, the method proposed in this paper plays an important role in reducing the time cost and improving the retrieval accuracy for large-scale retrieval databases.

5 Conclusion In visual indoor positioning, the image retrieval time within the offline database increases when the offline database becomes large, which affects the real-time performance of online positioning. Therefore, this paper proposes an efficient retrieval method based on semantics and content. Simulation results show that this method can not only improve the accuracy of retrieval but also maintain good real-time performance in large-database retrieval. Acknowledgements. This paper is supported by the National Natural Science Foundation of China (61971162, 61771186, 41861134010) and the Heilongjiang Province Natural Science Foundation (F2016019).


References 1. Liu RP, Hedley M, Yang X (2013) WLAN location service with TXOP. IEEE Trans Comput 62(3):589–598 2. Yuda M, Xiangjun Z, Weiming S et al (2017) Target accurate positioning based on the point cloud created by stereo vision. In: International conference on mechatronics & machine vision in practice. IEEE 3. Parra A, Zhao B, Kim J, et al (2014) Recognition, segmentation and retrieval of gang graffiti images on a mobile device. In: IEEE international conference on technologies for homeland security. IEEE, pp 178–183 4. Bouteldja S, Kourgli A (2015) Multiscale texture features for the retrieval of high resolution satellite images. In: International conference on systems, signals and image processing. IEEE, pp 170–173 5. Pradhan J, Pal AK, Banka H (2017) A prominent object region detection based approach for CBIR application. In: Fourth international conference on parallel. IEEE 6. Kido S, Hirano Y, Hashimoto N (2018) Detection and classification of lung abnormalities by use of convolutional neural network (CNN) and regions with CNN features (R-CNN). In: 2018 international workshop on advanced image technology (IWAIT). IEEE 7. Xue H, Ma L, Tan X (2016) A fast visual map building method using video stream for visual-based indoor localization. In: International wireless communications and mobile computing conference. IEEE, pp 650–654 8. Girshick R (2016) Fast R-CNN. In: 2015 IEEE international conference on computer vision (ICCV). IEEE 9. Shrivastava A, Gupta A, Girshick R (2016) Training region-based object detectors with online hard example mining 10. Vikhar P, Karde P (2017) Improved CBIR system using Edge Histogram Descriptor (EHD) and Support Vector Machine (SVM). In: International conference on ICT in business industry & government. IEEE

Massive MIMO Channel Estimation via Generalized Approximate Message Passing
Muye Li1, Xudong Han1, Weile Zhang2, and Shun Zhang1(B)
1 Xidian University, Xi'an 710071, People's Republic of China
{myli 96,xdhan 1}@stu.xidian.edu.cn, [email protected]
2 Xi'an Jiaotong University, Xi'an 710049, People's Republic of China
[email protected]

Abstract. In this paper, we propose a channel estimation scheme for an off-grid massive MIMO channel model, with consideration of the carrier frequency offset at the BS antenna array. We first develop an off-grid channel model for the spatial sample mismatching problem. Then, an EM based sparse Bayesian learning framework is built to capture the model parameters, i.e., the off-grid bias and the CFO. In the learning process, a damped generalized approximate message passing algorithm is introduced to obtain the needed posterior statistics accurately. Finally, simulation results are exhibited to verify the performance of our proposed scheme.

Keywords: Massive MIMO · Off-grid · Carrier frequency offset · Sparse Bayesian learning · GAMP

1 Introduction

Having been a hot research topic for years, massive multiple-input multiple-output (MIMO) has become a critical technology for the 5th generation (5G) and beyond wireless networks, due to its high spectral and energy efficiency [1]. However, precise channel state information (CSI) is needed to exploit these advantages, while acquiring it causes tremendous training and feedback overhead in the frequency-division duplex (FDD) system [2]. To get over this bottleneck, [3] proposed a low-rank model with the help of antenna array theory, which can implement channel estimation without the acquisition of channel covariance matrices (CCMs). Based on this, several channel estimation schemes were proposed [4,5]. As the channel sparsity was conveyed by utilizing a normalized discrete Fourier transform (DFT) basis, it may cause serious spatial sample mismatching as well as energy leakage when considering


the randomness of direction of arrivals (DOAs). Furthermore, as the carrier frequency offset (CFO) exists at the transmitter as well as the receiver [6], the using of existing schemes will also incur some estimation errors. In this paper, we build an off-grid channel model with the consideration of CFO. Then, an expectation maximization (EM) based sparse Bayesian learning framework was proposed to simultaneously estimate channel model parameters as well as the CFO. In the expectation step, generalized approximate message passing (GAMP) algorithm was introduced to achieve needed posterior statistics and reduce the computation complexity.

2 System Model and Channel Characteristics

Consider a downlink massive MIMO network, where Nt >> 1 uniform linear array (ULA) antennas are equipped at the BS, and K single-antenna users are randomly distributed in the field. We adopt a geometric channel model with L emerging paths to the k-th user. Denote θ_{k,l,m} as the direction of departure (DOD) of the k-th user, l-th path and m-th block. The BS antenna array spatial steering vector can be defined as:

a(θ_{k,l,m}) = [1, e^{j(2πd/λ) sin(θ_{k,l,m})}, ..., e^{j(Nt−1)(2πd/λ) sin(θ_{k,l,m})}]^T,   (1)

where d ≤ λ/2 is the antenna spacing of the BS and λ is the carrier wavelength. It is assumed that the DOD of each path is quasi-static during a block of length Lc and changes from block to block, and that the DL channels of different users are statistically independent. The downlink channel g_{k,m} ∈ C^{Nt×1} from the BS to the k-th user during the m-th block can be written as [7]

g_{k,m} = Σ_{l=1}^{L} α_{k,l} a(θ_{k,l}).   (2)
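To make Eqs. (1)–(2) concrete, here is a small NumPy sketch (with toy parameter values of our own choosing, not the paper's simulation code) that builds the ULA steering vectors and the multipath channel of one user in one block, and then views it in the DFT (virtual) domain:

```python
import numpy as np

def steering_vector(theta, nt, d_over_lambda=0.5):
    """Eq. (1): ULA response a(theta) for DOD theta in radians, with d = lambda/2."""
    n = np.arange(nt)
    return np.exp(1j * 2 * np.pi * d_over_lambda * n * np.sin(theta))

def channel(thetas, alphas, nt):
    """Eq. (2): g = sum_l alpha_l * a(theta_l) over the L resolvable paths."""
    return sum(a * steering_vector(t, nt) for t, a in zip(thetas, alphas))

nt = 128                                                     # BS antennas, as in Sect. 4
thetas = np.deg2rad(np.random.uniform(-49, -43, size=4))     # DODs in the example range
alphas = (np.random.randn(4) + 1j * np.random.randn(4)) / np.sqrt(2)
g = channel(thetas, alphas, nt)
r = np.fft.fft(g) / np.sqrt(nt)                              # virtual channel r = F_Nt g
print(np.argsort(np.abs(r))[-8:])                            # energy concentrates on few bins
```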

As in [8], the virtual channel representation (VCR) can be utilized to exploit the sparsity of g_{k,m} as r_{k,m} = F_{Nt} g_{k,m}, where r_{k,m} is the downlink virtual channel and F_{Nt} is the normalized Nt × Nt discrete Fourier transformation (DFT) matrix. In the real transmission process, the DODs do not exactly impinge on the DFT basis, and direction mismatching emerges. Under such a circumstance, we define the bias vector ρ_k and derive a bias-added DFT matrix whose spatial index is shifted by ρ_k, i.e. p* = p + [ρ_k]_p. Furthermore, as scattering rings exist, there may be more than one range of DOD to the BS for a specific user. Before proceeding, we use A to represent F_{Nt} for simplicity; the channel vector g_{k,m} can be derived with the Taylor series expansion as

g_{k,m} = [A^H + B^H diag(ρ_k)]_{:,Q_k} [g̃_{k,m}]_{Q_k} = [Φ(ρ_k)]^H_{:,Q_k} [g̃_{k,m}]_{Q_k},   (3)

where [B^H]_{:,p} is obtained by taking the derivative of [A^H]_{:,p} with respect to p, while every element of ρ_k is the bias added on the corresponding predefined grid point. Φ(ρ_k) is the pre-described bias-added DFT matrix, and the variable g̃_{k,m} is a complex Gaussian Markov vector, g̃_{k,m} ~ CN(0, Γ_k), where Γ_k = diag(γ) = diag([γ_1, γ_2, ..., γ_{Nt}]) is the covariance matrix of g̃_{k,m}. The spatial signature [9] set is determined as

Q_k = { p | p + ρ_p = Nt (d/λ) sin(θ_{k,l,m}), p ∈ Z },  ρ_{k,l} ∈ [−0.5, 0.5].   (4)

As the DL channel model is constructed, the estimation of the channel is equivalent to learning the model parameters γ and ρ. Moreover, as the characteristics of the model change very slowly compared to the long coherence time block, the parameters can be seen as invariant in the following training phase.

3 Parameters Learning Through Generalized Approximate Message Passing Based EM

Without loss of generality, we assume that Lt symbol times are utilized in the training phase, and the channel is invariant during a long block. As the transmission of each user is the same, we take one user as an example for simplicity of illustration. Denote S as an Lt × Nt random training matrix with power σ_p^2 and zero mean, which is known at both the BS and the specific user. The received signal can be written as

y = E(ε) S Φ(ρ)^H g̃ + n = J g̃ + n,   (5)

where J ≜ E(ε) S Φ(ρ)^H, n ~ CN(0, σ_n^2 I) denotes the independent additive complex Gaussian noise and σ_n^2 is the noise variance, E(ε) = diag(1, e^{1×j2πε/Lt}, ..., e^{(Lt−1)×j2πε/Lt}) is the Lt × Lt CFO matrix generated by the timing offset at the BS, and the CFO ε is unknown. Define the parameter set Ξ = {γ, ρ, ε}. With the received signal y, the aim of model parameter learning is to estimate Ξ accurately. Thus, we employ an EM based sparse Bayesian learning (SBL) framework to capture the unknown parameter set Ξ, where GAMP is utilized in the expectation step.

3.1 EM-Based Sparse Signal Learning

The EM algorithm will produce a sequence of estimates of Ξ as the iteration runs, and each iteration is separated into two steps:
• Expectation step (E-step)

Q(Ξ, Ξ̂^(l−1)) = E_{g̃|y; Ξ̂^(l−1)} { ln p(y, g̃; Ξ) }.   (6)

• Maximization step (M-step)

Ξ^(l) = arg max_Ξ Q(Ξ, Ξ̂^(l−1)).   (7)

In the l-th iteration, the aim of E-step is to update those objective functions, while the M-step aims to update the estimation Ξ (l) by maximizing the current expectation function [10].
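The alternation of Eqs. (6)–(7) can be summarized by the following Python skeleton; `e_step` and `m_step` are placeholders supplied by the caller (here standing in for the GAMP-based expectation step and the closed-form updates derived below), not library functions:

```python
import numpy as np

def em_learn(y, params, e_step, m_step, max_iter=10, tol=1e-6):
    """Skeleton of the EM alternation in Eqs. (6)-(7): e_step returns posterior
    statistics of the virtual channel, m_step returns the updated parameter dict."""
    for _ in range(max_iter):
        stats = e_step(y, params)                     # E-step
        new_params = m_step(y, stats, params)         # M-step
        change = sum(np.sum(np.abs(np.asarray(new_params[k]) - np.asarray(params[k])))
                     for k in params)
        params = new_params
        if change < tol:                              # converged (cf. Fig. 1: ~7-8 iterations)
            break
    return params

# Toy usage with dummy steps (converges immediately); real steps follow Sects. 3.2-3.4.
demo = em_learn(y=None,
                params={"gamma": np.ones(4)},
                e_step=lambda y, p: None,
                m_step=lambda y, s, p: {"gamma": np.ones(4)})
print(demo["gamma"])
```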


3.2 E-Step

In this subsection, we will first carefully derive the objective functions of the parameters to be estimated, and then obtain the corresponding posterior statistics by employing GAMP. The objective function Q(Ξ, Ξ̂^(l−1)) can be derived as:

Q(Ξ, Ξ̂^(l−1)) = E_{g̃|y; Ξ̂^(l−1)} { ln p(y | g̃, Ξ) } + E_{g̃|y; Ξ̂^(l−1)} { ln p(g̃ | Ξ) }
             = E_{g̃|y; Ξ̂^(l−1)} { ln p(y | g̃, Ξ) } + E_{g̃|y; Ξ̂^(l−1)} { ln p(g̃ | γ) } + E_{g̃|y; Ξ̂^(l−1)} { ln p(γ) }.   (8)

It is easy to find that

p(y | g̃, Ξ̂^(l−1)) = CN(y; J g̃, σ_n^2 I),   (9)

p(g̃ | γ^(l−1)) = CN(g̃; 0, Γ).   (10)

Plugging (9) and (10) into (8), we can rewrite the expectation function as follows:

Q(Ξ, Ξ̂^(l−1)) = C − (1/σ_n^2) [ y^H y − E_{g̃|y; Ξ̂^(l−1)} { 2ℜ{y^H J g̃} } + E_{g̃|y; Ξ̂^(l−1)} { g̃^H J^H J g̃ } ]
               − ln |πΓ| − E_{g̃|y; Ξ̂^(l−1)} { g̃^H Γ^{−1} g̃ } + ln p(γ).   (11)

Define

ĝ̃^(l) = E_{g̃|y; Ξ̂^(l−1)} { g̃ },   Θ^(l) = E_{g̃|y; Ξ̂^(l−1)} { g̃ g̃^H },   (12)
Δ = [1, e^{1×j2πε/N}, ..., e^{(N−1)×j2πε/N}]^T,   (13)
Δ_1 = [0, 1×j2π/N, ..., (N−1)×j2π/N]^T,   Δ_2 = [0, (1×j2π/N)^2, ..., ((N−1)×j2π/N)^2]^T,   (14)

for further use. It is obvious that [Θ]_{j,j} = |ĝ̃_j|^2 + τ_{g̃_j}, where τ_{g̃_j} is the posterior variance of g̃_j. By employing the Taylor series expansion, Δ can be represented as:

Δ ≈ 1_N + Δ_1 ε + Δ_2 ε^2.   (15)

With the above operations, we can further derive the objective function for each parameter as:

Q(γ, Ξ̂^(l−1)) = ln |πΓ| + tr{ Γ^{−1} Θ } − ln p(γ) + C_1,   (16)

Q(ρ, Ξ̂^(l−1)) = ρ^T ℜ{ (B S^H S B^H)^* ⊙ Θ } ρ − 2ℜ{ diag(ĝ̃^*) B S^H E^H y − diag(B S^H S A^H Θ) }^T ρ + C_2,   (17)

Q(ε, Ξ̂^(l−1)) = 2ℜ{ Δ_1^T diag(y^*) S Φ^H ĝ̃ } ε + 2ℜ{ Δ_2^T diag(y^*) S Φ^H ĝ̃ } ε^2 + C_3,   (18)


where (15) and E^H E = I are utilized, and C_1, C_2, C_3 are the terms not related to γ, ρ, and ε, respectively. It can be found that these functions depend on the posterior statistics ĝ̃ and Θ, and we now turn to calculate these two terms.

3.3 GAMP for Posterior Statistics

In this subsection, our objective is to obtain the posterior statistics ĝ̃ and Θ under the channel model (3) and the observation equation (5). It is clear that the posterior joint PDF can be calculated with the Bayes rule as

p(g̃ | y; Ξ̂^(l−1)) = p(y | g̃; Ξ̂^(l−1)) p(g̃; Ξ̂^(l−1)) / p(y; Ξ̂^(l−1)).   (19)

However, it is difficult to directly approach these terms, as high dimensional integrals exist over the marginal distributions. To tackle this problem, and to embrace higher convergence performance, we resort to the damped GAMP to achieve the MMSE estimation of g̃, with the given prior knowledge p(g̃) and the likelihood function p(y_m | g̃). The details of the damped GAMP are summarized in Algorithm 1.

Algorithm 1. Damped GAMP for posterior statistics
1: Initialize: T ← |J|^2; τ_g̃^0, γ^0, (σ^2)^0 > 0; s^0; ĝ̃^0 ← 0.
2: for k = 1, 2, ..., Kmax do
3:   1/τ_p^k ← T τ_g̃^k.
4:   p^k ← s^{k−1} + τ_p^k J ĝ̃^k.
5:   τ_s^k ← τ_p^k g_s'(p^k, τ_p^k).
6:   s^k ← (1 − θ_s) s^{k−1} + θ_s g_s(p^k, τ_p^k).
7:   1/τ_r^k ← T^T τ_s^k.
8:   r^k ← ĝ̃^k − τ_r^k J^T s^k.
9:   τ_g̃^{k+1} ← τ_r^k g_g̃'(r^k, τ_r^k).
10:  ĝ̃^{k+1} ← (1 − θ_g̃) ĝ̃^k + θ_g̃ g_g̃(r^k, τ_r^k).
11:  Θ^{k+1} = diag(|ĝ̃^{k+1}|^2 + τ_g̃^{k+1}).
12:  if ‖ĝ̃^{k+1} − ĝ̃^k‖^2 / ‖ĝ̃^{k+1}‖^2 < η then
13:    BREAK.
14:  end if
15: end for
16: return ĝ̃^{k+1}, Θ^{k+1}.

In the algorithm, |J|^2 and |ĝ̃^{k+1}|^2 are component-wise operations. θ_s, θ_g̃ ∈ (0, 1] are damping factors, Kmax is the maximum allowed number of GAMP iterations, and η is the threshold parameter. With the help of the sum-product algorithm,


the MMSE estimation problem is modified into a sequence of scalar MMSE estimates with intermediate variables p and r, by using the output function and the input function. The two functions are separately defined as:

[g_s(p, τ_p)]_m = ∫ z_m p(y_m | z_m) CN(z_m; p_m/τ_{p_m}, 1/τ_{p_m}) dz_m / ∫ p(y_m | z_m) CN(z_m; p_m/τ_{p_m}, 1/τ_{p_m}) dz_m,   (20)

[g_g̃(r, τ_r)]_n = ∫ g̃_n p(g̃_n) CN(g̃_n; r_n, τ_{r_n}) dg̃_n / ∫ p(g̃_n) CN(g̃_n; r_n, τ_{r_n}) dg̃_n,   (21)

where p and r denote the noise-influenced approximations of z = J g̃ and g̃, with covariances τ_p and τ_r, respectively. Further, for the parameterized prior imposed on g̃, we can derive:

g_s(p, τ_p) = (p/τ_p − y) / (σ_n^2 + 1/τ_p),   (22)

g_g̃(r, τ_r) = γ / (γ + τ_r) · r.   (23)
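Because the likelihood is Gaussian and the prior is CN(0, Γ), the closed forms (22)–(23) are simple element-wise operations. A NumPy rendering (function names are ours) might look as follows:

```python
import numpy as np

def g_s(p, tau_p, y, sigma2_n):
    """Output function, Eq. (22): scalar step on the noisy estimate of z = J g_tilde."""
    return (p / tau_p - y) / (sigma2_n + 1.0 / tau_p)

def g_g(r, tau_r, gamma):
    """Input function, Eq. (23): MMSE shrinkage of r under the CN(0, gamma) prior."""
    return gamma / (gamma + tau_r) * r

# Tiny element-wise example with made-up values:
r = np.array([0.2 + 0.1j, -0.3 + 0.0j])
print(g_g(r, tau_r=0.5, gamma=np.array([1.0, 2.0])))
```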

3.4 M-Step

In the following, we will update Ξ̂^(l) by maximizing (16)–(18).
(1) Updating γ: as the objective function (16) is only related to γ, by taking the derivative with respect to γ and setting it to zero, we can obtain:

γ^(l) = |ĝ̃^(l)|^2 + τ_g̃^(l).   (24)

(2) Updating ρ and ε: as ρ and ε are uncoupled, we can update them separately. In a similar way, the following updating equations can be derived:

ρ^(l) = ℜ{ (B S^H S B^H)^* ⊙ Θ }^{−1} ℜ{ diag(ĝ̃^*) B S^H E^H y − diag(B S^H S A^H Θ) },   (25)

ε^(l) = − ℜ{ Δ_1^T diag(y^*) S Φ^H ĝ̃ } / ℜ{ Δ_2^T diag(y^*) S Φ^H ĝ̃ }.   (26)

With the estimated model parameters, we can further acquire the estimation of virtual channel accurately.

4 Simulation Results

In this section, we will evaluate the performance of our proposed estimation scheme through numerical simulation. We consider a massive MIMO network where the BS is equipped with Nt = 128 antennas. Lt = 64 is the length of training sequences. We take the DOD range within [−49◦ , −43◦ ] as an example to show the perfect performance. The signal-to-noise ratio SNR = σp2 /σn2 . The



Fig. 1. Parameters MSE versus EM step with SNR = 20 dB

performance is measured as the average MSEs of the model parameters as well as the virtual channel, i.e., MSE_x = (1/τ) Σ_{i=1}^{τ} ‖x̂_i − x_i‖^2 / ‖x_i‖^2, with x = g̃, ρ, ε. First, we focus on the convergence of our method. Figure 1 presents the MSE of all the estimated parameters versus the number of EM iterations, with SNR = 20 dB. We can infer from Fig. 1 that the parameters converge within 7 or 8 iterations, which shows the fast convergence of our proposed scheme. Then we investigate the relationship between the performance and the SNR. To get the best performance, we run the EM algorithm for a saturated number of iterations for each SNR case. Figure 2 shows that, with the increase of the SNR, the MSEs of all parameters as well as the virtual channel decrease almost linearly. Furthermore, even when the SNR is low, the MSE of the virtual channel is still acceptable.

5 Conclusion

We proposed a novel channel estimation scheme for the off-grid massive MIMO channel model in this paper, where the carrier frequency offset is taken into consideration. First, an off-grid channel model was built. Then, an EM based sparse Bayesian learning framework was introduced to capture the off-grid bias and the CFO. In the learning process, a damped GAMP algorithm was introduced to acquire the needed posterior statistics. Numerical results showed the good performance of our proposed scheme.


Fig. 2. Parameters MSE versus SNR

References 1. Marzetta TL (2010) Noncooperative cellular wireless with unlimited numbers of base station antennas. IEEE Trans Wireless Commun 9(11):3590–3600 2. Noh S, Zoltowski MD, Love DJ (2016) Training sequence design for feedback assisted hybrid beamforming in massive MIMO systems. IEEE Trans Commun 64(1):187–200 3. Xie H, Gao F, Zhang S, Jin S (2017) A unified transmission strategy for TDD/FDD massive MIMO systems with spatial basis expansion model. IEEE Trans Veh Technol 66(4):3170–3184 4. Tan W, Matthaiou M, Jin S, Li X (2017) Spectral efficiency of DFT-based processing hybrid architectures in massive MIMO. IEEE Wireless Commun. Letters 6(5):586–589 5. Ma J, Zhang S, Li H, Gao F, Jin S (2018) Sparse Bayesian learning for the timevarying massive MIMO channels: acquisition and tracking. IEEE Trans Commun, pp 1–1 6. Wu L, Zhang X, Li P (2008) A low-complexity blind carrier frequency offset estimator for MIMO-OFDM systems. IEEE Signal Process Lett 15:769–772 7. You L, Gao X, Swindlehurst AL, Zhong W (2016) Channel acquisition for massive MIMO-OFDM with adjustable phase shift pilots. IEEE Trans Signal Process 64(6):1461–1476 8. Zhao J, Gao F, Jia W, Zhang S, Jin S, Lin H (2017) Angle domain hybrid precoding and channel tracking for millimeter wave massive MIMO systems. IEEE Trans Wireless Commun 16(10):6868–6880 9. Parvazi P, Gershman AB (2010) Direction-of-arrival and spatial signature estimation in antenna arrays with pairwise sensor calibration. In: 2010 IEEE international conference on acoustics, speech and signal processing, pp 2618–2621 10. Wipf DP, Rao BD (2004) Sparse Bayesian learning for basis selection. IEEE Trans Signal Process 52(8):2153–2164

Study of Key Technological Performance Parameters of Carbon-Fiber Infrared Heating Cage
Fei Xu(&), Yan Xia, Guoqing Liu, Yuzhong Li, Jinming Chen, and Chun Liu
Beijing Institute of Spacecraft Environment Engineering, Beijing 100094, China
[email protected]

Abstract. Using a thermal-vacuum test and Monte Carlo simulation analysis, this paper examined the key technical performance parameters of the carbon-fiber heating cage and compared them with those of the traditional nickel-chromium alloy heating cage. The results indicated that the heating capacity and temperature uniformity of the carbon-fiber heating cage for spacecraft were better than those of the traditional nickel-chromium alloy heating cage, and that the electro-thermal properties of the carbon-fiber infrared heating cage met the requirements of the spacecraft thermal-vacuum environment.

Keywords: Carbon fiber · Infrared heating cage · Heat-flow density · Thermal vacuum test · Thermal balance test

1 Introduction The traditional infrared heating cage uses a nickel-chromium alloy belt coated with black paint as the heating body. The production process involves cutting the nickel-chromium alloy into belts, processing the skeleton of the heating cage and the PTFE belts, drilling holes in both the heating and PTFE belts and connecting them with screws and springs, fixing the heating strips by spot welding, and finally spraying the cage with black paint and cleaning up. Most of these steps are manually performed and are labor- and time-intensive [10]. The use of carbon fiber as a heating material can avoid the above shortcomings. Carbon-fiber composites, which are widely used in the aerospace field [1, 9], are carbon materials with emissivity values similar to those of blackbodies. They have the advantages of high specific strength, a high specific modulus, and high electro-thermal radiation efficiency [2, 3, 7, 8]. At the same time, their thermal expansion coefficient in high and low temperature environments is almost zero [4], and they can adapt to the complex thermal radiation environment of space. Xu et al. [10] studied the feasibility of using carbon fibers as heating-cage electro-thermal materials. With respect to its convenience of assembly, improved thermal radiation efficiency, lightweight structure, and adaptability to a complex high and low temperature environment, the carbon-fiber heating cage has great advantages over the traditional infrared heating cage. In this paper, the design and heating performance



of the carbon-fiber heating cage were analyzed and experimentally studied. The temperature stability and heat-flow uniformity of the carbon-fiber heating cage were specifically tested, as these two characteristics were key technical indicators of the performance of the heating cage and were also important indicators for evaluating whether the carbon-fiber heating cage could be applied in a thermal-vacuum test for spacecraft [5, 6].

2 Structural Design of Carbon-Fiber Heating Cage and Layout of Heat-Flow Meter Used in Testing
2.1 Structural Design of Carbon-Fiber Heating Cage

With reference to the structure of the traditional infrared heating cage, we designed and tested the performance of the carbon-fiber heating cage shown in Fig. 1 in a thermalvacuum environment. The main difference between the carbon-fiber heating cage and the nickel-chromium-alloy heating cage was their different heating belts, which meant their connection methods differed. Because the thermal expansion coefficient of the carbon-fiber belt was very low, the length of the belt varied very little during a high and

Fig. 1. Carbon fiber heating cage


low temperature cycle. In the high temperature stage, there was no possibility of contact between the belt and the surface of the spacecraft or other equipment because of the length of the belt. Therefore, the tension spring could be omitted, as could the hanging spring link. In processing, holes were drilled directly on the PTFE boards above and below the skeleton of the heating cage, such that the strips could pass through the holes to form a loop, which greatly simplified the assembly process. In this study, we designed a cubic carbon-fiber heating cage to test its performance in a thermal-vacuum environment. This cage had a front, rear, left, right, and top side, with dimensions of 1000 mm × 1000 mm × 1000 mm. Each side was controlled independently, and 12 loops were attached to each side. The belt spacing was 40 mm, the belt width was 5 mm, and the coverage coefficient was 0.125. Figure 1 shows a photograph of the carbon-fiber heating cage.
2.2 Layout of Heat-Flow Meter Used in Testing

When the heating cage was more than 100 mm away from the satellite surface, the uniformity of the heat flow to the satellite surface was ensured by the use of a heating cage with a coverage coefficient greater than 0.06. Therefore, the carbon-fiber heating cage adopted the same parallel belt layout as that of the traditional heating cage, and its coverage coefficient was 0.125. The distance between the heat-flow meter and the heating-belt surface was 280 mm. This design met the uniformity requirements. On the plane 280 mm below the infrared cage at the top, eight heat-flow meters were arranged that point in different directions. As shown in Fig. 2, the layout consisted of four stainless steel square tubes with a rod length of 700 mm and a middle square edge length of 350 mm.

Fig. 2. Layout of heat-flow meter


3 Testing and Analysis of the Performance in a Thermal-Vacuum Environment
3.1 Test Preparation

Considering the testing cost and the actual situation to be provided by the space environment simulator, we selected the KM2F space environment simulator for this experiment. After installing the carbon-fiber heating cage and the test heat-flow meter in the vacuum container of the KM2F equipment, we connected the heating cable and measuring line. After performing a circuit conduction test to ensure its proper operation, we closed the door of the container. After evacuation, the vacuum vessel pressure reached 10−3 Pa, liquid nitrogen was applied to the heat sink, and the temperature of the heat sink reached 100 K. 3.2

Testing

When the environment of the KM2F simulated chamber met the testing requirement, that is, a vacuum pressure of 10−3 Pa and a heat sink temperature less than 100 K, the test was started. This test involved three aspects: measurement of heat-flow uniformity and heating capacity and a heating-capacity comparison test. 3.2.1

Measurement of Heat-Flow Uniformity in Carbon-Fiber Infrared Cage When the vacuum pressure in the simulated chamber was greater than 10−3 Pa and the heat sink temperature was lower than 100 K, the test began. When heating was applied, the 12 circuits of the top infrared cage were uniformly charged, that is, the applied current of each circuit was the same. When the heat flow recorded by the heat-flow meter became stable, the next current value was applied. The current values applied during the test were 1, 2, and 3 A. The standard definition used here for a stable heat flow was: within 1 min, the change in the heat-flow meter reading was within 0.1 °C. The following experimental results were based on this standard. 3.2.2 Testing the Heating Capacity of Carbon-Fiber Heating Cage As a satellite simulation specimen, we used a 1-mm thick aluminum plate to fabricate a five-sided box structure with no bottom panel. The dimensions of the box were 450 mm  450 mm  450 mm. The outer surface of the box was coated with black paint, and three thermocouple temperature-measurement points were fixed to each side. The five sides of the carbon-fiber infrared cage corresponded to the five sides of the box, with the belt surface 300 mm away from the surfaces of the simulated specimen. The 12 circuits of each infrared cage were uniformly charged, that is, the same current was applied to each circuit. When the heat-flow value measured by the heat-flow meter became stable, the next current value was applied. The current values applied during the test were 1, 2, and 3 A.


3.2.3 Comparison of Heating Capability with Traditional Nickel-Chromium Alloy Heating Cage
Using the same simulated specimen and temperature measurement points, we compared the above results with those of the traditional infrared cage, which has a coverage coefficient of 0.5 and envelope dimensions of 700 mm × 700 mm × 600 mm. The heating capacities of the two kinds of heating cages under different currents were compared using the same current ladder described above.
3.3 Test Results and Analysis of Heat-Flow Uniformity

Figure 3 shows the test results of the heat-flow meter, in which the abscissa is time and the ordinate is temperature in °C. As shown in Fig. 3, the heating temperature clearly increased as heating current changed from 1 to 2 A and then 3 A.

Fig. 3. Heat-flow meter data curve of carbon-fiber heating cage specimen

3.3.1 Calculation of Heat Flux
To analyze the test data recorded by the heat-flow meters, we used the Stefan-Boltzmann formula, as follows:

J = ε σ T^4,   (1)

where J is the heat-flux density, W/m²; ε is the radiation coefficient (emissivity) of the carbon-fiber heating belt, which is 0.91 [10]; σ is the Stefan constant, 5.67 × 10⁻⁸ W·m⁻²·K⁻⁴; and T is the absolute temperature, K.


3.3.2 Calculation of Heat-Flow Uniformity
The heat-flow uniformity could be calculated using Formula (2):

E = (J_max − J_min) / (J_max + J_min) × 100%,   (2)

where E is the heat-flow uniformity; J_max is the maximum heating heat flow, W/m²; and J_min is the minimum heating heat flow, W/m². In Formula (2), J_max and J_min were obtained from the data recorded in the experiment by the eight heat-flow meters after heat-flow stabilization (Table 1).

Table 1. Calculations of heat-flow data

Heating current, I (A)   Mean heat-flow density, J_mean (W/m²)   Heat-flow density uniformity, E (%)
1                        236.3                                   5.25
2                        713.2                                   7.61
3                        1458.9                                  8.76

In addition, we analyzed the data recorded by each heat-flow meter and their deviations from the mean, using the calculation method shown in Formula (3):

d_i = |J_i − J_mean| / J_mean × 100%,   (3)

where d_i is the deviation in heat flow of the No. i heat-flow meter, with i ranging from 1 to 8, and J_i is the heat-flow value of the No. i heat-flow meter, W/m². The degrees of deviation between the data and the mean value are shown in Fig. 4, where the abscissa identifies each of the eight heat-flow meters and the ordinate is d_i, the degree of heat-flow deviation of each meter. Figure 4a: the heat-flux balance at 1 A showed that, except for the second meter, the deviation in the heat flux remained below the 5% line. Figure 4b: the heat-flux balance at 2 A showed deviations at three points, meter Nos. 1, 2, and 8, which were above the 5% line. Figure 4c: the heat-flux balance at 3 A showed deviations at four points, meter Nos. 1, 3, 7, and 8, which were above the 5% line.
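The post-processing of Formulas (1)–(3) is simple enough to state directly. The sketch below uses our own helper names and hypothetical stabilized readings (ε = 0.91 as quoted above); it is only an illustration of the formulas, not the data-reduction code used in the test:

```python
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
EPSILON = 0.91           # emissivity of the carbon-fiber heating belt [10]

def heat_flux(temperature_k, eps=EPSILON):
    """Formula (1): J = eps * sigma * T^4, in W/m^2."""
    return eps * SIGMA * temperature_k ** 4

def uniformity(fluxes):
    """Formula (2): E = (Jmax - Jmin) / (Jmax + Jmin) * 100%."""
    j_max, j_min = max(fluxes), min(fluxes)
    return (j_max - j_min) / (j_max + j_min) * 100.0

def deviations(fluxes):
    """Formula (3): per-meter deviation from the mean, in percent."""
    j_mean = sum(fluxes) / len(fluxes)
    return [abs(j - j_mean) / j_mean * 100.0 for j in fluxes]

# Hypothetical stabilized readings (K) from the eight heat-flow meters:
temps = [251, 254, 249, 252, 250, 253, 251, 248]
fluxes = [heat_flux(t) for t in temps]
print(round(uniformity(fluxes), 2), [round(d, 1) for d in deviations(fluxes)])
```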

Fig. 4. Heat-flow density deviation curves for the eight heat-flow meters: (a) 1 A, (b) 2 A, (c) 3 A

3.4 Comparison and Analysis of Two Kinds of Heating Cage
3.4.1 Test Results and Analysis
Figure 5 shows the temperature data obtained for the carbon-fiber heating cage, where the abscissa is time and the ordinate is temperature in °C. As shown in the figure, there was an obvious increase as the current increased from 1 to 2 A and then 3 A.

Fig. 5. Heating temperature data curve of carbon-fiber heating cage


Fig. 6. Heating temperature data curves of nickel-chromium alloy heating cage

Based on temperature data recorded by 15 thermocouple thermometers, the average heating temperatures were 268.0 K, 358.9 K, and 426.7 K for the heating currents 1 A, 2 A, and 3 A, respectively. Figure 6 shows the temperature data obtained for the heating cage made of nickelchromium alloy, where the abscissa is time and the ordinate is temperature in °C. As shown in Fig. 6, there was an obvious increase as the currents progressed from 1 to 2 A and then 3 A. Based on the temperature data recorded by 15 thermocouple thermometers, the average heating temperatures were 236.1 K, 316.2 K, and 385.0 K for heating currents of 1 A, 2 A, and 3 A, respectively. Figure 7 shows the statistical results for the heating capacities of the two kinds of heating cage.

Fig. 7. Comparison of heating capacities of two kinds of heating cage


4 Simulation Analysis of Heat-Flow Uniformity
4.1 Monte Carlo Method

Using the Monte Carlo method, the radiation energy was considered to consist of energy beams or energy particles, with each particle carrying the same amount of energy. The radiation direction of each particle was determined based on the probability distribution of Lambert's law. In this calculation, only the heat flux was considered, and the absorption, reflection, or scattering of each particle was ignored. The basic principle of the Monte Carlo method was the establishment of a mathematical model. As such, the coordinates at which the particles emitted by the carbon-fiber heating cage strike the surface of the satellite were calculated from the geometrical relationship. Given the coordinates of the intersection points, a determination was made regarding whether the particles fall on the surface of the satellite. When enough particles were emitted, the number of particles in each grid cell on the surface of the satellite was used to visually express the relative heat flux on the satellite surface [11].
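A stripped-down version of such a Monte Carlo ray count might look as follows. This is our own simplification (a single belt segment emitting Lambertian rays toward a parallel target plane, with invented dimensions), not the simulation code used for Fig. 8:

```python
import numpy as np

def lambertian_hits(n_rays, belt_y, target_size, gap, rng=np.random.default_rng(0)):
    """Launch n_rays from random points on a heating-belt segment at height `gap`
    above a square target plane and histogram where they land (relative heat flux)."""
    # Lambert's law: polar angle theta has pdf proportional to cos(theta) * sin(theta),
    # realized by theta = arcsin(sqrt(u)) for u uniform in [0, 1).
    u, phi = rng.random(n_rays), rng.random(n_rays) * 2 * np.pi
    theta = np.arcsin(np.sqrt(u))
    x0 = rng.random(n_rays) * target_size          # start points spread along the belt
    y0 = np.full(n_rays, belt_y)
    # propagate each ray down by `gap` to reach the target plane
    t = gap / np.cos(theta)
    xh = x0 + t * np.sin(theta) * np.cos(phi)
    yh = y0 + t * np.sin(theta) * np.sin(phi)
    hist, _, _ = np.histogram2d(xh, yh, bins=20,
                                range=[[0, target_size], [0, target_size]])
    return hist / hist.sum()                       # relative heat-flux map on the target

flux_map = lambertian_hits(200_000, belt_y=0.5, target_size=1.0, gap=0.28)
print(flux_map.max(), flux_map.min())
```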

4.2 Parameter Setting

When setting the parameters, the heat-flow meter was regarded as being positioned on the satellite surface. In addition to making calculations for a belt spacing of 40 mm, belt spacings of 30 mm and 20 mm were also simulated. The parameters used in the simulation analysis were as follows:
(1) Emissivity of the carbon-fiber heating belt: 0.91 [10];
(2) Conversion efficiency of carbon-fiber heating: >98% [3];
(3) Distance of the heating belt from the heat-flow meter: 280 mm;
(4) Width of the carbon-fiber heating belt: 5 mm;
(5) Belt spacings: 40, 30, 20 mm.
4.3 Simulation Results

Figure 8 shows the results of the Monte Carlo simulation analyses, in which the color indicates the heat flow, where red is the highest, and yellow, green, and blue indicate a gradual respective decrease in heat flow. Simulation analyses were performed for the three belt spacings of 40, 30, and 20 mm to determine the influence of different belt spacing on heat-flow uniformity. According to the figures, the inhomogeneity of the 40-mm spacing was 12.52%, that of 30-mm was 11.79%, and that of 20-mm was 11.32%. Therefore, the uniformity of the 40-mm belt spacing was lowest, and that of the 20-mm belt spacing was better than the other two.


Fig. 8. Uniformity and average heat-flow location with 40/30/20-mm spacing

4.4 Comparison of Test and Simulation Analysis Data

For a belt spacing of 40 mm and a heating current of 3 A, Table 2 presents a comparison of the experimental results with those of the computer simulation analysis.

Table 2. Comparison of test and Monte Carlo simulation results

                                     Test     Monte Carlo simulation
Mean heat-flow density (W/m²)        1458.9   1428.8
Heat-flow density uniformity (%)     8.76     12.52

Table 2 shows that with respect to mean heat-flux density, the test results were basically consistent with the Monte Carlo simulation results. Because of the limitations of the test conditions, only eight heat-flow meters were used to obtain heat-flow measurements. The layout and mode of fixing the heat flow meters greatly influenced the measurement results. As such, the results of the heat-flux test deviated greatly from those of the Monte Carlo simulation analysis.

5 Conclusion Based on a comparison of experimental test results for a carbon-fiber heating cage and a traditional nickel-chromium heating cage in a thermal-vacuum environment, together with a Monte Carlo simulation analysis, the following conclusions were made:
(1) When the heating currents of the carbon-fiber heating cage were 1, 2, and 3 A, the heat-flow nonuniformity was less than 10%.
(2) The heating capacity of the carbon-fiber heating cage was much greater than that of the traditional nickel-chromium alloy heating cage.
(3) With respect to the mean heat flux, the results of our Monte Carlo simulation analysis were basically consistent with our experimental results.
(4) The electro-thermal characteristics of the carbon-fiber heating cage met the requirements of a spacecraft vacuum-thermal environment.


References 1. AliAl A, Philipp P, Michael S (2018) Eco-efficiency assessment of manufacturing carbon fiber reinforced polymers (CFRP) in aerospace industry. Aerosp Sci Technol 79:669–678 2. Cao WW, Zhu B, Cai X et al (2010) The simulation study of radiative heat flux intensity distribution with different assignments of carbon fiber electric heating elements. J Funct Mater 41:130–135 3. Cao WW, Zhu B, Wang CG (2007) Numerical simulation on the radiation intensity distribution of carbon fiber infrared electric heating radiator. Chin J Mech Eng-En 43:6–10 4. Dong K, Peng X, Zhang JJ et al (2017) Temperature-dependent thermal expansion behaviors of carbon fiber/epoxy plain woven composites: experimental and numerical studies. Compos Struct 176:329–341 5. Huang BC, Ma YL (2002) Space environment test technology of spacecraft. National Defense Industry Press, Beijing 6. Ji XY, Liu GQ, Wang J et al (2019) Experimental verification and comparison of different tailoring models for spacecraft electronics thermal cycling tests. Acta Astraunat 159:77–86 7. Qiu L, Guo P, Yang XQ et al (2019) Electro curing of oriented bismaleimide between aligned carbon nanotubes for high mechanical and thermal performances. Carbon 145:650– 657 8. Rani R, Suryasarathi B (2019) Electrodeposited carbon fiber and epoxy based sandwich architectures suppress electromagnetic radiation by absorption. Compos Part B-Eng 161:578–585 9. Sagar I, Nikhil T, Narayanamurthy V et al (2017) Analysis of CFRP flight interface brackets under shock loads. Mater Today Proc 4:2492–2500 10. Xu F, Li YZ, Yang WQ et al (2016) The feasibility of using carbon fiber as electro-thermal radiant material for infrared heating cage. Spacecraft Environ Eng 33:668–671 11. Yang XN, Sun YW (2008) Influence of infrared heating cage coverage coefficient on flux uniformity. Spacecraft Eng 17:38–41

Research on Switching Power Supply Based on Soft Switching Technology
Zhihong Zhang1(&) and Hong He2
1 Tianjin City Electrical Engineering Technology Research Institute, Tianjin 300232, China
[email protected]
2 Tianjin Key Laboratory for Control Theory & Applications in Complicated Systems, Tianjin University of Technology, Tianjin 300384, China
[email protected]

Abstract. Aiming at the problem that zero-voltage and zero-current switching (ZVZCS) is difficult to realize in current switching power supplies because the resonance energy of the lagging arm is insufficient, this paper designs the DC/DC part of a switching power supply with TMS320F2812 as the control core, using PWM phase-shift-controlled full-bridge ZVZCS technology with diodes in series with the lagging arm of the converter circuit. The soft switch is well implemented under load changes. The MATLAB simulation results show that the soft-switching power supply has the advantages of high output precision, fast dynamic response and small overshoot.

Keywords: Switching power supply · Digital signal processing · Soft switching · PID control technology

1 Introduction Energy plays an important role in many aspects of modern society. Switching power supplies have been widely used because of their high efficiency and high control precision. At present, due to the high switching frequency and large operating current of the power supply, switching loss and electromagnetic interference occur in the process of switching on and off. Therefore, soft-switching technology, realized by advanced control techniques and digital microprocessors, has been applied more and more in switching power supplies [1]. It is difficult to realize zero-voltage and zero-current switching (ZVZCS) in current switching power supplies because the resonance energy of the lagging arm is insufficient. In this paper, the DSP-based control chip TMS320F2812 is used to design the DC/DC part of the switching power supply around a phase-shifted full-bridge zero-voltage zero-current converter circuit, using PID control technology and PWM phase-shift-controlled full-bridge ZVZCS technology. Under the condition of load change, the soft switch can be realized well, with zero-voltage switching (ZVS) for the leading arm and zero-current switching (ZCS) for the lagging arm. The MATLAB simulation results show that the soft-switching power supply has the advantages of high output precision, fast dynamic response and small overshoot [2].


2 The Soft Switching Realization of Switching Power Supply
2.1 Soft-Switching Mode of the Phase-Shifted Full-Bridge Circuit

The soft-switching mode of phase-shifted full-bridge switching circuit is divided into zero voltage and zero current switch and their combination. The basic principle is to use the capacitance and inductance in the circuit to resonant to realize the current and voltage crossing. At this point, ZVS and ZCS. can be realized by switching on the switch [3]. The phase-shifted full-bridge ZVS forearm has enough resonant energy of inductance to realize zero voltage turn-on. However, because the primary current of transformer is small at turn on, the secondary rectifier diode forms a recurrent loop, which is similar to short circuit. Therefore, it is difficult to realize ZVS, with small inductance and large duty cycle loss. As shown in Fig. 1, the converter is composed of VD7 and VD8 diodes in series with the hysteresis arm to realize the ZVZCS. of the hysteresis arm. The leading arm is composed of VT1 and VT2, the lagging arm is composed of VT3 and VT4, the isolation capacitor is Cb, the primary current of transformer is Ip, and the secondary rectifier diode is VD5 and VD6.


Fig. 1. Switch power supply circuit diagram

The working waveform of the ZVZCS converter is shown in Fig. 2, where [t0–t6] is half a period and is divided into six switching modes. Switching mode 0 [t0]: at t0, VT1 and VT4 are on, and the transformer primary current Ip charges the isolation capacitor Cb. Switching mode 1 [t0, t1]: when VT1 is turned off, Ip transfers from VT1 to C1 and C2, charging C1 and discharging C2; the voltage of C1 increases linearly from zero and the voltage of C2 decreases linearly from Uin, so VT1 is turned off at zero voltage at t0. At t1, the voltage of C2 drops to zero and VT2's anti-parallel diode D2 naturally conducts, which ends mode 1.


Fig. 2. Working waveform of the ZVZCS converter

Switching mode 2 [t1, t2]: when D2 conducts, VT2 is turned on at zero voltage. Because D2 and VT4 are on at the same time, Uab = 0; the leakage inductance is small while the isolation capacitance is large, so the isolation capacitor voltage is basically unchanged, the primary current decreases linearly, and it drops to zero at t2. Switching mode 3 [t2, t3]: the primary current Ip is zero, the voltage from point A to ground is zero, the voltage from point B to ground is −Ucbp, and the load current freewheels through the secondary rectifier. Switching mode 4 [t3, t4]: when VT4 is turned off at t3, no current flows through VT4, so VT4 is turned off at zero current. After a very short delay, the primary current cannot change abruptly because of the leakage inductance, so VT3 is turned on at zero current. Switching mode 5 [t4, t5]: starting from t4, the primary side provides energy for the load and simultaneously recharges the isolation capacitor, whose voltage prepares the next zero-current turn-off and zero-current turn-on of the lagging arm.

2.2 TMS320F2812 Control Hardware Implementation

The block diagram of the TMS320F2812 control system of the switching power supply is shown in Fig. 3. It mainly consists of the TMS320F2812 controller and the main circuit.


Fig. 3. TMS320F2812 control system block diagram of switching power supply

The main circuit includes the inverter circuit, transformer, rectifier filter circuit, dc load, drive and protection circuits, and the current and voltage signal detection and adjustment module. The TMS320F2812 mainly performs A/D sampling of the voltage and current feedback signals, voltage and current regulation (PID computation), and PWM pulse output. The output voltage and current analog signals are detected and conditioned to an appropriate level by the detection and adjustment module and fed to the A/D port of the TMS320F2812. The digital PID algorithm compares the sampled voltage with the given reference voltage, and the result serves as the setpoint of the current regulator; the output of the current PID regulator is then written to the compare registers of the PWM pulse generator to adjust the relative duty ratio of the PWM pulses [4]. After amplification by the power amplifier in the drive circuit, each switching device is driven, realizing dynamic regulation of the output voltage and current through the phase-shifted pulses. At the same time, a fault signal entering the protection I/O of the TMS320F2812 pulls down the D0 port of the general-purpose input/output interface (GPIO) and triggers the protection-pin PDPINTA interrupt, forcing the PWM outputs of the PWM generator into a high-impedance state and thus realizing hardware protection.
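To make this control flow concrete, the following Python sketch mimics the cascaded regulation described above: a voltage outer loop feeds a current inner loop whose output is scaled into a PWM compare-register (CMPR) value. The class structure, gains, limits and the CMPR scaling are illustrative assumptions, not the authors' DSP firmware; only the 20 kHz sampling rate is taken from the text.

# Hypothetical sketch of the voltage/current double-loop regulation described above.
# Gains, output limits and the CMPR scaling are placeholder assumptions.
class PID:
    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, feedback, dt):
        err = setpoint - feedback
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(max(out, self.out_min), self.out_max)   # saturate the output

voltage_pid = PID(kp=0.5, ki=20.0, kd=0.0, out_min=0.0, out_max=1000.0)  # outer loop -> current set-point
current_pid = PID(kp=0.2, ki=10.0, kd=0.0, out_min=0.0, out_max=1.0)     # inner loop -> duty ratio

def control_step(v_ref, v_meas, i_meas, dt=1.0 / 20e3, pwm_period_counts=3750):
    """One 20 kHz control step: sampled voltage/current in, CMPR value out (counts assumed)."""
    i_ref = voltage_pid.update(v_ref, v_meas, dt)      # voltage PID gives the current reference
    duty = current_pid.update(i_ref, i_meas, dt)       # current PID gives the duty ratio (0..1)
    return int(duty * pwm_period_counts)               # value written to the compare register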

2.3 Software Implementation of TMS320F2812 Control

The flow chart of the program design is shown in Fig. 4. The program can be divided into four modules: TMS320F2812 initialization, real-time data acquisition, interrupt and PWM pulse generation, and PID regulation. • TMS320F2812 Initialization Module. This module includes system, constant and variable initialization, ADC module initialization, GPIO port initialization, watchdog (WD) initialization, event manager EVA/B and interrupt initialization, etc., to ensure that the TMS320F2812 controller works reliably and stably.


• Real-Time Data Acquisition Module. This module samples the output current and voltage signals in every cycle. To meet the accuracy requirement of the power supply and control the output voltage ripple, the A/D sampling frequency is set to 20 kHz, and the average of 3 consecutive samples is used as one sampling datum to increase accuracy. The computed compare-register (CMPR) values are stored in an array for the subsequent processing.

Fig. 4. Flow chart of program design

• Interrupt and PWM Pulse Generation Module. The voltage and current signals are collected in real time in the A/D interrupt. After filtering, rectification and the corresponding processing and calculation, the output is adjusted automatically and dynamically by changing the value of the compare register CMPR of the TMS320F2812 event manager EVA/B in the event-manager or period interrupt. The CMPR value is fed into the PWM circuit and compared with the symmetric or asymmetric waveform set by T1CON to generate square waves; these square waves pass through the dead-time circuit to produce two dead-time signals, and the output logic of each PWM channel is then configured through the output logic circuit to generate the required PWM signals. • PID Control Module. A double closed-loop control scheme with a current inner loop and a voltage outer loop is adopted; the PID computation operates on the difference between the sampled value and the reference value. The result is used as the input of the PWM module for pulse width modulation, and the relative duty ratio and phase-shift angle of the PWM signals are calculated in real time so that the converter outputs a stable dc voltage and the required current.


The control parameters were calculated in advance and determined in the Simulink simulation environment, and then adjusted through specific experiments to achieve PID control with good steady-state and dynamic characteristics [5, 6].

2.4 System Simulation Model

Simulink is used to simulate the system. The system simulation model is shown in Fig. 5.

Fig. 5. System simulation model

• PWM Module and PID Module Design. In Fig. 5, the output of the Repeating Sequence module is a triangular wave with amplitude from −1 to 1, as shown in Fig. 6.


Fig. 6. Triangular wave shape

The PID output value and its negative are compared with this triangular wave respectively. After a Boolean operation, the outputs are inverted separately; that is, two groups of PWM waves with the same frequency as the triangular wave are output, as shown in Fig. 7.

Fig. 7. PWM waveform

The traditional PID control is realized by an analog PID controller, whereas here the PID control is realized digitally, i.e., by a computer program. The main parameters of the PID module are set according to its control algorithm, where Kp = 0.01, Ki = 40 and Kd = 0.5e−1. • Power Supply Module and Transformer Module. In practical industrial applications, most power supplies are three-phase 380 V, and the output after rectification and filtering is about 513 V dc. Therefore, the dc voltage applied to the IGBT bridge is set to 513 V. The rated frequency of the transformer is set to 50 Hz, the rated power to 12000 W, the rated voltage of winding 1 to 256 V, and the rated voltage of windings 2 and 3 to 12 V.
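As a concrete illustration of the carrier comparison of Figs. 6 and 7, the short Python sketch below compares a PID output value and its negation with a unit triangular carrier and inverts the Boolean results to obtain two PWM groups; the carrier frequency, the sample grid and the constant PID output are arbitrary assumptions for the sketch.

import numpy as np

def triangular_carrier(t, f_tri):
    # Unit triangular wave between -1 and 1, as produced by the Repeating Sequence block.
    phase = (t * f_tri) % 1.0
    return 4.0 * np.abs(phase - 0.5) - 1.0

# Placeholder values: 10 kHz carrier sampled over one period, constant PID output 0.3.
t = np.linspace(0.0, 1e-4, 1000, endpoint=False)
carrier = triangular_carrier(t, f_tri=10e3)
u_pid = 0.3

# Compare the PID output and its negative with the carrier, then invert the Boolean
# results to obtain the two PWM groups at the carrier frequency.
pwm_group_1 = ~(u_pid > carrier)
pwm_group_2 = ~(-u_pid > carrier)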


3 Simulation Results and Analysis

According to the system simulation model and parameter settings in Fig. 5, the simulated output voltage waveform (Manual Switch set to the voltage side) is shown in Fig. 8, and the simulated output current waveform (Manual Switch set to the current side, with a different simulation time) is shown in Fig. 9.

Fig. 8. Simulation output voltage waveform

Fig. 9. Simulation output current waveform

As can be seen from the simulation results of the two figures and the system parameter settings, the output voltage range is 0–12 V with a voltage-stabilizing accuracy better than 1.8%, the output current range is 0–1000 A with a current-stabilizing accuracy better than 1.3%, the ripple is less than 2%, and the settling time is less than 0.001 s.


4 Conclusion

In this paper, the energy-saving and anti-interference characteristics of soft switching technology are combined with the intelligent control of the TMS320F2812 controller and the simplicity of PID control. With diodes connected in series with the lagging arm of the phase-shifted full-bridge zero-voltage zero-current converter, the MATLAB simulation results show that the designed soft-switching power supply system has high output precision, fast dynamic response and small overshoot.

Acknowledgements. This work is supported by the Tianjin Science and Technology Special Fund for major science and technology projects (14ZCDGSF00028). Thanks are also due to the Tianjin Key Laboratory for Control Theory and Application in Complicated Systems, Tianjin University of Technology, Tianjin 300191, China.

References

1. Hou Y, Xia R (2017) Power emergency integrated communication system based on soft switching technology. Electr Power Inf Commun Technol 4:60–64
2. Gupta AC, Kalita N, Gaur H et al (2016) Peak of spectral energy distribution plays an important role in intra-day variability of blazars. Mon Not R Astron Soc 462(2):1508–1516
3. Shah P, Agashe S (2016) Review of fractional PID controller. Mechatronics 38:29–41
4. Lin F, Wang Y, Wang Z et al (2016) The design of electric car DC/DC converter based on the phase-shifted full-bridge ZVS control. Energy Procedia 88:940–944
5. Johnson N, Fletcher JD, Humphreys DA et al (2017) Ultrafast voltage sampling using single-electron wavepackets. Appl Phys Lett 110(10):102–105
6. Grebennikov A (2016) High-efficiency class-E power amplifier with shunt capacitance and shunt filter. IEEE Trans Circuits Syst I Regul Pap 63(1):12–22

Grid Adaptive DOA Estimation Method in Monostatic MIMO Radar Using Sparse Bayesian Learning Yue Wang1,2 , Kangyong You1 , Dan Wang1 , and Wenbin Guo1,2(B) 1

Wireless Signal Processing and Network Laboratory, Beijing University of Posts and Telecommunications, Beijing 100876, China {wangxiaoyue, ykyyiwang, wangdan121, gwb}@bupt.edu.cn 2 Science and Technology on Information Transmission and Dissemination in Communication Networks Laboratory, Shijiazhuang, China

Abstract. In a monostatic Multi-Input Multi-Output (MIMO) radar system, Direction Of Arrival (DOA) estimation is important for target detection. However, conventional MIMO DOA estimation approaches suffer from the off-grid issue, in which the real DOAs deviate from the predefined grid points. In this paper, a grid adaptive DOA estimation method is proposed to address the off-grid error and the improper-initial-grid problem for monostatic MIMO radar systems. We construct a Bayesian learning framework with a Laplacian prior to adjust the grid and the observation dictionary adaptively. Simulation results show the superior performance of the proposed method in terms of high angle resolution and robustness against noise, compared with state-of-the-art DOA estimation methods for MIMO radar systems.

Keywords: Monostatic MIMO radar · DOA estimation · Off-grid · Compressed sensing

1 Introduction

Compared with conventional phased-array radar, MIMO radar has the advantages of high reliability and high precision. Moreover, it also has anti-interference and anti-stealth abilities [1]. According to the antenna configuration, MIMO radar can be classified into statistical MIMO radar and colocated MIMO radar. In statistical MIMO radar [2], the distance between the antennas is significant in order to obtain a spatial diversity gain. In colocated MIMO radar [3], which includes monostatic and bistatic MIMO radar, both the transmitting and receiving antenna elements are closely spaced. Colocated MIMO radar can achieve higher angle resolution because it introduces the idea of transmit waveform diversity to form a large virtual aperture. In this paper, a novel DOA estimation method is investigated for the monostatic MIMO radar system.


Many DOA estimation methods [4–11] have been proposed for MIMO radar based on conventional array parameter estimation methods, such as the Estimating Signal Parameters via Rotational Invariance Techniques (ESPRIT) method [5,6] and the Multiple Signal Classification (MUSIC) method [7]. In recent years, compressed sensing (CS) [12] has provided a new perspective for DOA estimation in MIMO radar [13,14]. By discretizing the angle domain into a number of grid points or cells, the spatial sparsity of the target DOAs makes it possible to apply compressed sensing methods. A large body of literature has studied DOA estimation models based on CS theory [15–17]. However, most of them assume that the target DOAs fall on the fixed grid points. When the real DOAs deviate from the fixed grid points, these algorithms suffer from degraded performance. In [18], the off-grid error is treated as a disturbance, and the grid is adjusted once based on a linear approximation. The idea of interpolation is then introduced and the DOA estimation problem is solved by a block-sparse compressed sensing algorithm in [19]. However, these methods still need a suitable initial grid granularity. A coarse grid may result in performance degradation or even loss of effectiveness, while a dense grid comes at the price of massive computation. Therefore, it is necessary and important for MIMO radar systems to develop a grid adaptive method for accurate DOA estimation, even in the case of a coarse grid. The Bayesian compressive sensing (BCS) [20,21] method has attracted growing interest because it is free of user parameters and provides the estimation variance. In this paper, we propose a grid adaptive DOA estimation method using sparse Bayesian learning. The core idea is to adjust the grid and the observation dictionary adaptively. We refer to our algorithm as the grid adaptive DOA estimation algorithm (GADE) in the body of the paper. Numerical simulations show that the proposed method outperforms the state-of-the-art methods. Notation used in this paper is as follows. Vectors and matrices are signified by boldface letters. x̄, x^T and x^H denote the complex conjugate, transpose and conjugate transpose of a vector x, respectively. ‖·‖ denotes the ℓ2-norm. Tr(·) and |·| denote the trace and determinant operators, respectively. x_j is the j-th entry of the vector x. A_i, A^j and A_{ij} are the i-th row, j-th column and (i, j)-th entry of the matrix A, respectively. diag(x) is a diagonal matrix with the vector x on its diagonal, and diag(A) denotes a column vector composed of the diagonal elements of the matrix A. x'(θ) is the derivative of x(θ) with respect to θ. ⊙, ⊗ and ◦ denote the Hadamard (element-wise) product, Kronecker product and Khatri-Rao product operators, respectively. The rest of the paper is organized as follows. The system model is described in Sect. 2. Section 3 explains the proposed method in detail. Simulation results are presented in Sect. 4. Section 5 concludes the paper.

2 System Model

2.1 MIMO Radar Signal Model

The monostatic MIMO radar [17] is one type of colocated MIMO radar, equipped with closely located transmitting and receiving antennas. Consider the narrow-band monostatic MIMO radar system shown in Fig. 1. Specifically, Mt transmitting antennas (mt = 1, . . . , Mt) are located on a line with spacing dt, and Mr receiving antennas (mr = 1, . . . , Mr) are located on a line with spacing dr. Mt orthogonal narrow-band waveforms, denoted by B, are transmitted simultaneously by the transmitting antennas. After being reflected by targets, the echoes are received by the receiving antennas. Assume that there are K far-field targets in the coverage area. Since all antennas are located in a small area, all the transmitting and receiving antennas view the k-th target from almost the same direction, i.e., the DOA and DOD (Direction Of Departure) are the same and can be denoted by θ_k, k = 1, . . . , K. The signal transmitted to the K targets and received by the receiver is

Z = \sum_{k=1}^{K} a_r(\theta_k)\, \xi_k e^{j2\pi f_{dk} t}\, a_t^T(\theta_k)\, B + W,   (1)

where ξ_k, f_{dk} and W are the reflection coefficient and the Doppler shift of the k-th target and the additive Gaussian white noise, respectively. The steering vectors of the transmitter and the receiver are respectively denoted as

a_t(\theta_k) = \left[1,\, e^{-j2\pi d_t \sin(\theta_k)/\lambda},\, \ldots,\, e^{-j2\pi (M_t-1) d_t \sin(\theta_k)/\lambda}\right]^T,   (2)

a_r(\theta_k) = \left[1,\, e^{-j2\pi d_r \sin(\theta_k)/\lambda},\, \ldots,\, e^{-j2\pi (M_r-1) d_r \sin(\theta_k)/\lambda}\right]^T,   (3)

where λ is the wavelength of the signal. After the matched filtering operation, the observed signal is

y(t) = (A_R \circ A_T)\, s(t) + w(t) = A\, s(t) + w(t),   (4)

where A_R = [a_r(θ_1), . . . , a_r(θ_K)] ∈ C^{M_r×K} consists of the K steering vectors of the receiver, A_T = [a_t(θ_1), . . . , a_t(θ_K)] ∈ C^{M_t×K} consists of the K steering vectors of the transmitter, s(t) = [s_1(t), . . . , s_K(t)]^T = [ξ_1 e^{j2πf_{d1}t}, . . . , ξ_K e^{j2πf_{dK}t}]^T is a column vector whose elements are the products of the reflection coefficients and the Doppler terms of the K targets, and w(t) is Gaussian white noise. The transmit–receive steering matrix A can be formulated as

A = [a_r(\theta_1) \otimes a_t(\theta_1), \ldots, a_r(\theta_K) \otimes a_t(\theta_K)].   (5)

Given T snapshots, the observed signal matrix can be expressed as Y = [y(t_1), y(t_2), . . . , y(t_T)], leading to the multiple measurement vector (MMV) observation model [1]

Y = (A_R \circ A_T)\, S + W = AS + W,   (6)


where S = [s(t_1), s(t_2), . . . , s(t_T)] ∈ C^{K×T} and W = [w(t_1), w(t_2), . . . , w(t_T)] ∈ C^{M_r M_t×T}, respectively.
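The following numpy sketch is only a sanity check of the model in (1)–(6): it builds the ULA steering vectors, forms the virtual transmit–receive array matrix A by column-wise Kronecker products, and draws a noisy observation Y = AS + W. The target angles, noise level and random source terms are arbitrary assumptions for illustration; the array dimensions follow Table 1.

import numpy as np

def ula_steering(m, d, theta_deg, lam=1.0):
    # Steering vector of an m-element ULA with spacing d (in units of the wavelength lam).
    k = np.arange(m)
    return np.exp(-1j * 2 * np.pi * k * d * np.sin(np.deg2rad(theta_deg)) / lam)

Mt, Mr, T = 4, 4, 100                    # antenna numbers and snapshots (Table 1)
dt, dr = 0.5, 2.0                        # spacings in wavelengths (Table 1)
thetas = np.array([-55.5, -5.5, 35.5])   # example DOAs, close to those used in Sect. 4.1
K = len(thetas)

# Virtual array columns a_r(theta_k) kron a_t(theta_k), Eq. (5).
A = np.column_stack([np.kron(ula_steering(Mr, dr, th),
                             ula_steering(Mt, dt, th)) for th in thetas])

# Source matrix (reflection coefficients times Doppler terms) and noise, Eq. (6).
S = (np.random.randn(K, T) + 1j * np.random.randn(K, T)) / np.sqrt(2)
W = 0.1 * (np.random.randn(Mt * Mr, T) + 1j * np.random.randn(Mt * Mr, T))
Y = A @ S + W                            # MMV observation model Y = AS + W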

2.2 Traditional On-Grid Model

The central idea of DOA estimation based on CS is to extend the observation dictionary A to form an overcomplete dictionary Φ,

\Phi = [a_r(\tilde{\theta}_1) \otimes a_t(\tilde{\theta}_1), \ldots, a_r(\tilde{\theta}_N) \otimes a_t(\tilde{\theta}_N)].   (7)

θ̃ = {θ̃_1, θ̃_2, . . . , θ̃_N} is a fixed and uniform grid over the angle range [−90°, 90°], where N denotes the number of grid points. The traditional on-grid model assumes that the real DOAs {θ_1, θ_2, . . . , θ_K} form a subset of {θ̃_1, θ̃_2, . . . , θ̃_N}. When K < Mt Mr 0 and Λ = diag(α) being the covariance matrix. It is proved that all columns of X are independent and share the same sparse prior [22]. (3) Off-Grid Offset Model: We assume Δ follows a uniform prior

\Delta \sim U\!\left(\left[-\tfrac{1}{2}r, \tfrac{1}{2}r\right]^N\right),   (19)

with r > 0 being the initialized grid interval. By combining all stages of the hierarchical Bayesian model, the joint PDF is formulated as

p(X, Y, \alpha, \beta, \Delta) = p(Y|X, \beta, \Delta)\, p(X|\alpha)\, p(\alpha)\, p(\beta)\, p(\Delta).   (20)

The distributions on the right side of the equation are defined by (16), (17), (18), (15) and (19), respectively. (4) Bayesian Inference: According to the chain rule,

p(X, \alpha, \Delta, \beta|Y) = p(X|Y, \alpha, \Delta, \beta)\, p(\alpha, \Delta, \beta|Y).   (21)

It is derived that the posterior probability distribution of X obeys a complex Gaussian distribution, i.e.,

p(X|Y, \alpha, \beta, \Delta) = \prod_{t=1}^{T} \mathcal{CN}(x(t)\,|\,\mu(t), \Sigma)   (22)

with

\mu(t) = \beta\Sigma\Phi^H y(t), \quad t = 1, \ldots, T, \qquad \Sigma = (\beta\Phi^H\Phi + \Lambda^{-1})^{-1}.   (23)
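A small numpy sketch of the posterior statistics in (22)–(23), treating the current dictionary Φ, the hyperparameters α, β and the observations Y as given; it is only meant to make the linear-algebra shapes explicit and is not part of the authors' implementation.

import numpy as np

def posterior_stats(Phi, Y, alpha, beta):
    # Sigma and the posterior means mu(t), stacked as the columns of U (Eq. (23)).
    Lambda_inv = np.diag(1.0 / alpha)                       # Lambda = diag(alpha)
    Sigma = np.linalg.inv(beta * Phi.conj().T @ Phi + Lambda_inv)
    U = beta * Sigma @ Phi.conj().T @ Y                     # mu(t) = beta * Sigma * Phi^H y(t)
    return Sigma, U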

Note that α, β and Δ are needed to calculate Σ and μ(t). However, p(X, α, Δ, β|Y) cannot be solved explicitly. Thus, we estimate α, β and Δ by their maximum a posteriori (MAP) estimate

(\hat{\alpha}, \hat{\beta}, \hat{\Delta}) = \arg\max_{\alpha,\beta,\Delta} p(\alpha, \beta, \Delta|Y).   (24)

Equation (24) is equivalent to the maximum likelihood (ML) estimation problem

(\hat{\alpha}, \hat{\beta}, \hat{\Delta}) = \arg\max_{\alpha,\beta,\Delta} p(\alpha, \beta, \Delta, Y).   (25)

Notice that (25) is equivalent to maximizing ln p(α, β, Δ, Y). We use the expectation maximization (EM) algorithm to iteratively maximize the lower bound of the marginal likelihood p(α, β, Δ, Y) by treating X as a hidden variable,

(\hat{\alpha}, \hat{\beta}, \hat{\Delta}) = \arg\max_{\alpha,\beta,\Delta} E\{\log p(Y, X, \alpha, \beta, \Delta)\},   (26)

where E{·} denotes the expectation with respect to the posterior of X as given in (22). Denote U = {μ(1), . . . , μ(T)} = βΣΦ^H Y, X̄ = X/√T, Ȳ = Y/√T, Ū = U/√T and ζ̄ = ζ/T. Then, the parameters are updated as follows:

\alpha_n^{new} = \frac{\sqrt{1 + 4\bar{\zeta}\,(\Sigma_{nn} + \|\bar{U}^n\|_2^2)} - 1}{2\bar{\zeta}},   (27)

\beta^{new} = \frac{(a-1) + T M_r M_t}{b + T \cdot E\{\|\bar{Y} - \Phi\bar{X}\|_2^2\}},   (28)

with E\{\|\bar{Y} - \Phi\bar{X}\|_2^2\} = \|\bar{Y} - \Phi\bar{U}\|_2^2 + \beta^{-1}\sum_{n=1}^{N}(1 - \alpha_n^{-1}\Sigma_{nn}). For details, the reader can refer to [22]. For Δ, maximizing E{log p(Y|X, β, Δ)p(Δ)} is equivalent to minimizing

E\left\{\frac{1}{T}\sum_{t=1}^{T}\|y(t) - \Phi x(t)\|_2^2\right\} = \frac{1}{T}\sum_{t=1}^{T}\|y(t) - \Phi\mu(t)\|_2^2 + \mathrm{tr}\{\Phi\Sigma\Phi^H\} = \Delta^T P\Delta + 2v^T\Delta + \mathrm{const},   (29)


where P is a positive semi-definite matrix,

P = \Re\{\tilde{B}^H\tilde{B} \odot (\Sigma + UU^H)\},   (30)

v = \Re\{\mathrm{diag}(\tilde{B}^H\tilde{A}\Sigma)\} - \frac{1}{T}\sum_{t=1}^{T}\Re\{\mathrm{diag}(\bar{\mu}(t))\,\tilde{B}^H(y(t) - \tilde{A}\mu(t))\}.   (31)

As a result, the optimization problem for the grid offset vector Δ is represented as

\Delta^{new} = \arg\min_{\Delta \in [-\frac{1}{2}r, \frac{1}{2}r]^N} \{\Delta^T P\Delta + 2v^T\Delta\}.   (32)

Let J denote the above objective; then

\frac{\partial J}{\partial \Delta} = 2(P\Delta + v).   (33)

When P is invertible, we have

\Delta^{new} = -P^{-1}v \in \left[-\tfrac{1}{2}r, \tfrac{1}{2}r\right]^N   (34)

by setting the partial derivative to zero. Otherwise, we update one δ_n at each step by fixing the other entries of Δ,

\hat{\delta}_n = \frac{-v_n - (P_n)^T_{-n}\Delta_{-n}}{P_{nn}},   (35)

where (P_n)^T_{-n} represents the n-th row of P with its n-th entry removed. In order to constrain δ_n ∈ [−r/2, r/2], we set

\delta_n^{new} = \begin{cases} \hat{\delta}_n, & \text{if } \hat{\delta}_n \in [-\frac{1}{2}r, \frac{1}{2}r]; \\ -\frac{1}{2}r, & \text{if } \hat{\delta}_n < -\frac{1}{2}r; \\ \frac{1}{2}r, & \text{if } \hat{\delta}_n > \frac{1}{2}r. \end{cases}   (36)

With the grid offset vector, the grid is refined as

\tilde{\theta}_n^{new} = \tilde{\theta}_n^{old} + \delta_n^{new}.   (37)
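The following is a minimal numpy sketch of the grid-refinement step in (32)–(37), assuming P, v, the current grid and the initial grid interval r are already available. When P is singular it falls back to the coordinate-wise update (35) with the clipping of (36); clipping the closed-form solution as well is an extra safeguard added in this sketch.

import numpy as np

def refine_grid(theta_grid, P, v, r):
    # One grid-offset update: Delta from (34) or (35)-(36), then the refinement (37).
    n = len(theta_grid)
    try:
        delta = -np.linalg.solve(P, v)                 # closed form, Eq. (34)
        delta = np.clip(delta, -r / 2, r / 2)          # keep offsets inside one grid cell
    except np.linalg.LinAlgError:                      # P not invertible
        delta = np.zeros(n)
        for i in range(n):                             # coordinate-wise update, Eqs. (35)-(36)
            rest = P[i, :] @ delta - P[i, i] * delta[i]
            d_hat = (-v[i] - rest) / P[i, i]
            delta[i] = np.clip(d_hat, -r / 2, r / 2)
    return theta_grid + delta                          # refined grid, Eq. (37)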

Through the above analysis, problem (13) is solved.

3.2 The Proposed GADE Algorithm

(1) GADE : The proposed grid adaptive DOA estimation (GADE) algorithm is shown in Algorithm 1.


Algorithm 1 GADE
Input: Y, K
Output: X̂, θ̃
1: Initialize θ̃, α, β, r; set a, b, o = 0;
2: while ‖Y − ΦX‖ < threshold or o < max_Outiter do
3:   o = o + 1;  // Update external loop iteration counter
4:   Δ = 0, i = 0;
5:   Calculate Ã, B̃ using the current θ̃;
6:   while ‖Y − ΦX‖ < threshold or i < max_Initer do
7:     i = i + 1;  // Update internal loop iteration counter
8:     Update Φ according to (9) using the current Δ;
9:     Calculate Σ, μ according to (23) using the current α, β, Φ;
10:    Update α, β, Δ according to (27), (28), (34), (35);
11:  end while
12:  Update θ̃ using the current θ̃ and Δ according to (37);
13: end while
14: Return X̂ ← U, θ̃;

(2) Implementation Details: In fact, we only need to update the grid offset vector on the support; in other words, we only adaptively adjust the grid points closest to the real DOAs. In this way, the N-dimensional calculation is reduced to a K-dimensional calculation. Note that, in our algorithm, the parameters are continually updated until the residual energy is less than the threshold or the number of iterations reaches the maximum. The algorithm adjusts the grid based on the last estimate and makes a more accurate estimate based on the new grid; therefore, the proposed algorithm has a strong grid adaptation capability. With the estimated support, the corresponding coefficients give the reflected powers, and the corresponding grid directions in θ̃ are the estimated DOAs.

4 Numerical Simulations

In this section, numerical simulation results are presented. The simulation parameters are given in Table 1. In all of the simulations, a narrowband monostatic MIMO radar system is considered. We compare the proposed method with state-of-the-art MIMO DOA estimation methods, i.e., MIMO MUSIC [8], MIMO CAPON [10], the MIMO Propagator Method (PM) [11] and MIMO OGSBI [18], which is regarded as one of the best methods for off-grid problems. In each trial, target DOAs are generated randomly within [−90°, 90°]. The evaluation metric is the root mean square error (RMSE), calculated over R = 500 trials:

\mathrm{RMSE} = \sqrt{\frac{1}{RK}\sum_{r=1}^{R}\sum_{k=1}^{K}\|\theta_k^r - \hat{\theta}_k^r\|_2^2},   (38)

where the superscript r refers to the r-th trial.

Fig. 2. The spatial spectrum for DOA estimation

Table 1. Simulation parameters

Parameter                                  Value
The number of snapshots T                  100
The detection DOA range                    [−90°, 90°]
The number of transmitting antennas Mt     4
The number of receiving antennas Mr        4
The transmitting antenna spacing dt        0.5 wavelength
The receiving antenna spacing dr           2 wavelengths

4.1 Spatial Spectrum

Firstly, to illustrate the accuracy of the proposed GADE method, we study the spatial spectrum. Consider three far-field targets located at θ1 = −55.5426°, θ2 = −5.5364° and θ3 = 35.5279° with SNR = 0 dB. The grid is set from −90° to 90° with a uniform interval of 10°. Figure 2 plots the normalized spatial spectra. From Fig. 2, we can see that the proposed GADE method achieves the best spatial spectrum, which can be explained by the off-grid model and its strong grid adaptation capability. More specifically, the spatial spectrum of the proposed GADE is sharply peaked at the true DOAs. In contrast, the MIMO MUSIC and MIMO PM methods only estimate the first DOA, at θ1 = −60°, and almost fail to estimate the remaining DOAs. Therefore, we can conclude that the proposed GADE method is not only effective for MIMO DOA estimation, but also enjoys a higher accuracy even when initialized with a coarse grid.

4.2 Robustness Against Measurement Noise

Secondly, we evaluate the impact of different measurement noise levels on the estimation performance. Consider three far-field signals impinging on the receiver from random directions. We compare our method with three classical methods (MIMO MUSIC, MIMO PM, MIMO CAPON) and MIMO OGSBI. The grid interval for all methods is set to 1°. Figure 3 presents the RMSE results when the SNR varies from 0 to 10 dB. It is obvious that the proposed method outperforms the classic MIMO DOA estimation methods and MIMO OGSBI. To be specific, MIMO MUSIC and MIMO CAPON show the same performance, and they are more accurate than MIMO PM under all noise levels. However, a performance plateau appears for all three classic MIMO DOA estimation methods, which can be explained by the bottleneck of the on-grid model. On the contrary, the methods based on the off-grid model, i.e., MIMO OGSBI and the proposed method, can break through the on-grid bottleneck and obtain more accurate estimates. Interestingly, although both MIMO OGSBI and the proposed GADE method adopt the off-grid model, the RMSE of MIMO OGSBI is nearly constant over the noise levels, while the RMSE of the proposed GADE method continuously decreases as the SNR increases. The reason is that MIMO OGSBI only refines the grid once, while the proposed GADE method allows continuous grid adjustment through its internal and external iterative grid adaptation procedures. Thus, we can conclude that the proposed GADE method achieves more accurate estimation and is more robust to measurement noise.

Fig. 3. The DOA estimation performance versus SNR

Fig. 4. The DOA estimation performance versus grid interval

4.3 Sensitivity to Initial Grid Granularity

Finally, we investigate the sensitivity of the proposed GADE method and MIMO OGSBI to the initial grid granularity. We set the DOAs the same as in Sect. 4.2 and set SNR = 5 dB. Figure 4 shows the results when the grid interval is changed from fine to coarse. It is observed from Fig. 4 that the proposed GADE method is tolerant to the grid granularity to some degree and outperforms the MIMO OGSBI method. In particular, the RMSE of MIMO OGSBI gradually increases as the grid interval increases, while the proposed GADE method is almost unaffected. With the grid interval increasing to 7◦ , the RMSE of MIMO OGSBI increases to 1.4070 while the RMSE of the proposed GADE method is still below 0.5. The reason is that MIMO OGSBI only adjusts grid once, which is only suitable for scenarios where the off-grid gap is relatively small, while the proposed GADE method continuously adjusts the initial grid which can continuously approach the real DOAs. Thus, it can be concluded that even with a coarse grid, the proposed GADE method can still achieve DOA estimation with a high angle resolution.

5 Conclusion

In this paper, in order to improve the estimation accuracy and solve the improper-initial-grid problem for DOA estimation in MIMO radar systems, we established a dynamic grid model in the angle domain. We constructed a hierarchical probability framework with a Laplacian sparse prior for this model and proposed a novel grid adaptive DOA estimation method (GADE) from the perspective of sparse Bayesian learning. By updating the grid and the observation dictionary iteratively in the grid refinement procedure, the grid points approach the real target DOAs adaptively.

Simulation results show that the proposed method achieves high estimation accuracy and good robustness against noise in MIMO DOA estimation. More importantly, the proposed method can still obtain a good angle resolution even when initialized with a coarse grid.

References

1. Li J, Stoica P (2009) MIMO radar signal processing. Wiley-IEEE Press, Hoboken
2. Haimovich AM, Blum RS, Cimini LJ (2008) MIMO radar with widely separated antennas. IEEE Signal Process Mag 25(1):116–129
3. Li J, Stoica P (2007) MIMO radar with colocated antennas. IEEE Signal Process Mag 24(5):106–114
4. Liu J, Liu Z, Xie R (2010) Low angle estimation in MIMO radar. Electron Lett 46(23):1565–1566
5. Duofang C, Baixiao C, Guodong Q (2008) Angle estimation using ESPRIT in MIMO radar. Electron Lett 44(12):770–771
6. Jinli C, Hong G, Weimin S (2008) Angle estimation using ESPRIT without pairing in MIMO radar. Electron Lett 44(24):1422–1423
7. Gao X, Zhang X, Feng G, Wang Z, Xu D (2010) On the MUSIC-derived approaches of angle estimation for bistatic MIMO radar. In: International conference on wireless networks and information systems, pp 343–346
8. Schmidt R (1986) Multiple emitter location and signal parameter estimation. IEEE Trans Antennas Propag 34(3):276–280
9. Zhang X, Xu D (2010) Angle estimation in MIMO radar using reduced-dimension Capon. Electron Lett 46(12):860–861
10. Yan H, Li J, Liao G (2008) Multitarget identification and localization using bistatic MIMO radar systems. Hindawi Publishing Corp
11. Zhang X, Wu H, Li J, Xu D (2012) Computationally efficient DOD and DOA estimation for bistatic MIMO radar with propagator method. Int J Electron 99(9):1207–1221
12. Eldar Y, Kutyniok G (2012) Compressed sensing: theory and applications. Cambridge University Press, Cambridge
13. Yu Y, Petropulu AP, Poor HV (2011) Measurement matrix design for compressive sensing based MIMO radar. IEEE Trans Signal Process 59(11):5338–5352
14. Rossi M, Haimovich AM, Eldar YC (2014) Spatial compressive sensing for MIMO radar. IEEE Trans Signal Process 62(2):419–430
15. Chen CY, Vaidyanathan PP (2009) Compressed sensing in MIMO radar. In: 2008 Asilomar conference on signals, systems and computers, pp 41–44
16. Tohidi E, Radmard M, Majd MN, Behroozi H, Nayebi MM (2018) Compressive sensing MTI processing in distributed MIMO radars. IET Signal Process 12(3):327–334
17. Shi W, Huang J, Zhang Q (2016) DOA estimation in monostatic MIMO array based on sparse signal reconstruction. In: IEEE international conference on signal processing, communications and computing, pp 1–4
18. Yang Z, Xie L, Zhang C (2013) Off-grid direction of arrival estimation using sparse Bayesian inference. IEEE Trans Signal Process 61(1):38–43
19. Abtahi A, Gazor S, Marvasti F (2018) Off-grid localization in MIMO radars using sparsity. IEEE Signal Process Lett 25(99):313–317
20. Ji S, Xue Y, Carin L (2008) Bayesian compressive sensing. IEEE Trans Signal Process 56(6):2346–2356
21. You K, Guo W, Liu Y, Wang W, Sun Z (2018) Grid evolution: joint dictionary learning and sparse Bayesian recovery for multiple off-grid targets localization. IEEE Commun Lett 22(99):2068–2071
22. Babacan S, Molina R, Katsaggelos A (2010) Bayesian compressive sensing using Laplace priors. IEEE Trans Image Process 19(1):53–63

Global Deep Feature Representation for Person Re-Identification Meixia Fu1,2,3(&), Songlin Sun1,2,3, Na Chen1,2,3, Xiaoyun Tong1,2,3, Xifang Wu1,2,3, Zhongjie Huang1,2,3, and Kaili Ni1,2,3 1

School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China [email protected], [email protected] 2 Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing University of Posts and Telecommunications, Beijing, China 3 National Engineering Laboratory for Mobile Network Security, Beijing University of Posts and Telecommunications, Beijing, China

Abstract. Person re-identification (re-ID) has attracted tremendous attention in the field of computer vision, especially in intelligent visual surveillance (IVS). The purpose of re-ID is to retrieve the person of interest across different cameras. There are still many challenges and difficulties, such as similar appearance (e.g., clothes), varying camera distance, various poses and different shooting angles, all of which influence the performance of re-ID. In this paper, we propose a novel architecture, called global deep convolutional network (GDCN), which applies a classical convolutional network as the backbone and calculates the similarity between query and gallery. We evaluate the proposed GDCN on three large-scale public datasets: Market-1501 with 92.72% Rank-1 and 88.86% mAP, CUHK03 with 60.78% Rank-1 and 62.47% mAP, and DukeMTMC-re-ID with 82.22% Rank-1 and 77.99% mAP. Besides, we compare the experimental results with previous work to verify the state-of-the-art performance of the proposed method, which is implemented on an NVIDIA GeForce GTX 1080Ti.

 Intelligent visual

1 Introduction Recently, person re-identification has increasingly become a popular task in academic world, which is also the basic research for high-level tasks such as action recognition, video understanding and pedestrian anomaly detection [1]. Re-ID aims to look for the pedestrian appearing under different cameras though given interest image. However, there are still improvement space due to the difficulties in this task, such as the similar appearance, various pose and illumination. Many prior methods have achieved well

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 179–186, 2020 https://doi.org/10.1007/978-981-13-9409-6_22

180

M. Fu et al.

GDCN backbone

FC Layer2

FC Layer1 2048

GAL

256

Query

1 1

share paremeters

1 1

Query

Gallery Rank

Distance Metric

...

Gallery backbone FC Layer2

2048

1 1

256

FC Layer1

GAL

1 1

Fig. 1. The structure of Global Deep Convolutional Network (GDCN) for re-ID

performance for re-ID [2]. There are two mainly categories about them: hand-crafted and deep learning. Hand-crafted applies the methods based-image processing into reID. However, we focus primarily on presenting the previous work of re-ID based on deep learning. Many prior methods have achieved well performance in person re-ID [3–7]. Barbosa et al. [3] proposed an inception-based network, SOMAnet capturing structural aspects of the human body, which showed robust performance under complex appearance. Zhao et al. [4] showed a novel Spindle Net to capture competitive fusion features though extracting body region features and global features. It adopted global pooling layer to transfer feature maps to feature vector. Su et al. [5] explicitly proposed a pose-driven Deep Convolutional (PDC) model to leveraged the global features based on a global network and local features based on a pose driven feature weighting network, in which global pooling was used. Zheng et al. [6] introduced a pedestrian alignment network (PAN) that could address misalignment problem with extra annotations, in which the global pooling was applied on the feature maps. Wei et al. [7] proposed a Global-Local-Alignment Descriptor (GLAD) that integrated global and local features and an efficient retrieval framework that gathered top-k groups in the gallery set to decrease large redundancy. GLAD still applied global average pooling to classification. The mentioned methods based on global-local features with the global pooling perform well in the task of re-ID. In this paper, we propose a novel GDCN, which represents the feature though a simple global deep convolutional network and calculate the similarity using cosine similarity function. Besides, the performance among four popular deep convolutional networks that achieved excellent results on ImageNet is compared to choose the backbone network. The reminder of this paper is organized as follows. In Sect. 2, we closely introduce the structure of the proposed GDCN and the evaluation method. In

Global Deep Feature Representation for Person Re-Identification

181

Sect. 3, we present the experimental results of the proposed method on three public datasets and compare with prior work. In Sect. 4, we give a conclusion about this paper and show the future plan.

2 The Proposed Method The structure of the proposed GDCN is shown in Fig. 1, which consists of two components that are the training structure and the testing structure. The training structure includes a X-net as the backbone network, the global average pooling (GAL) and two fully connection (FC) layers. Here, X-net is listed four popular deep convolutional networks: Inceptionv3 [8], Resnet50 [9], Resnet101 [9], Densenet121 [10]. We compare the performance among them for re-ID. For testing, the feature (batchsize, 2048) after GAL is the input of the distance metric. Query is the target dataset and gallery is the retrieval dataset. Then we calculate the similarity between them that share parameters in feature extraction and order gallery from large to small according the similarity score. For training, the loss function is the cross-entropy to compute one pedestrian image within one batch: Loss FunctionðX; labelÞ ¼

K X

Loss FunctionðX; labelÞk

k¼0

¼

K X

logðsoftmaxðXlabel ÞÞ

k¼0

¼

K X k¼0

exlabel log PN xn n¼0 e

ð1Þ

!

where X is the output of the network and label is the target class; X_label is the output corresponding to the label in the k-th loss term. N is the number of identities and K is the batch size. A similarity measure is a popular tool to evaluate the similarity between two vectors [11]. The function is introduced as follows:

\mathrm{similarity} = \frac{A \cdot B}{\|A\|\,\|B\|} = \frac{\sum_{i}^{n} A_i \times B_i}{\sqrt{\sum_{i}^{n}(A_i)^2}\,\sqrt{\sum_{i}^{n}(B_i)^2}}   (2)

Here, A and B are the two vectors, whose dimension is n, and ‖A‖ and ‖B‖ are their norms. The value of similarity lies between 0 and 1. In this paper, A and B are the feature vectors before the first FC layer. We use the Cumulated Matching Characteristics (CMC) and the mean average precision (mAP) to evaluate the performance of the proposed method on three public datasets [12]. For CMC, we report the cumulated matching accuracy at Rank-1, Rank-5 and Rank-10 to show the match accuracy between query and gallery. For mAP, we calculate the average match accuracy under the single-query setting, taking recall into account.
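To make the retrieval step concrete, the following numpy sketch ranks gallery features against each query feature with the cosine similarity of (2). The feature dimension, array names and random stand-in data are illustrative assumptions rather than the authors' evaluation code.

import numpy as np

def rank_gallery(query_feats, gallery_feats):
    # Cosine-similarity ranking (Eq. (2)); rows are feature vectors taken after global pooling.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sim = q @ g.T                           # (num_query, num_gallery) similarity scores
    order = np.argsort(-sim, axis=1)        # gallery indices, most similar first
    return sim, order

# Example with random stand-in features (2048-d, as produced after GAL).
sim, order = rank_gallery(np.random.rand(3, 2048), np.random.rand(10, 2048))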

3 Experiments

In this section, we present experiments of the proposed method on three large-scale datasets and compare with prior work. In our experiments, all images are resized to (384, 192) before being fed to the model. The batch size is set to 32, and training runs for 100 epochs. The initial learning rate is 0.1 and decays by a factor of 0.1 every 40 epochs.
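The following PyTorch-style sketch only illustrates the training schedule quoted above (384x192 input, batch size 32, 100 epochs, learning rate 0.1 decayed by 0.1 every 40 epochs). The backbone choice, the classifier head, the optimizer settings (SGD with momentum) and the identity count are assumptions, not the authors' implementation.

import torch
import torch.nn as nn
from torchvision import models, transforms

num_ids = 751                                    # e.g. Market1501 training identities (assumption)

backbone = models.densenet121()                  # pretrained ImageNet weights would normally be loaded
backbone.classifier = nn.Sequential(             # GAL output (1024-d for Densenet121) -> FC layers
    nn.Linear(backbone.classifier.in_features, 256),
    nn.Linear(256, num_ids),
)

train_tf = transforms.Compose([
    transforms.Resize((384, 192)),               # resize stated in the text
    transforms.ToTensor(),
])

criterion = nn.CrossEntropyLoss()                # cross-entropy of Eq. (1)
optimizer = torch.optim.SGD(backbone.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.1)

for epoch in range(100):
    # ... iterate over a training loader, compute criterion(backbone(images), labels),
    # backpropagate and step the optimizer ...
    scheduler.step()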

3.1 Datasets

We evaluate the performance of the proposed method on three large-scale public person re-identification datasets. The evaluation results are calculated in the single-query setting for these three datasets, with and without the re-ranking method in [13]. The three datasets are shown in Fig. 2. The first is Market1501 [12], which contains 36,036 images of 1501 identities captured by 6 cameras. It has 32,668 labeled bounding boxes and 3368 query images.

Fig. 2. Sample images of Market1501, CUHK03-detected, DukeMTMC-re-ID

This dataset is divided into two parts: a training set with 12,936 images of 751 identities, and a testing set with 19,732 gallery images and 3368 query images of another 750 identities. The second is CUHK03 [14], which contains 14,096 images of 1467 identities captured by 5 pairs of cameras. In order to obtain a test protocol like that of Market1501, Zhong et al. [13] organized the dataset into two categories: "detected" data and "labeled" data. In this paper, we choose the "detected" data, which provides 7364 images of 767 identities for the training set, and 1400 query images and 5332 gallery images of another 700 identities for the testing set. The third is DukeMTMC-re-ID [15], which has 36,411 images of 1812 identities from 8 cameras. This dataset is divided into a training set with 16,522 images of 702 identities, and a testing set with 2228 query images and 17,661 gallery images of another 702 identities plus 408 distractor identities.

3.2 Comparison of Four Backbone Networks on GDCN

In this section, we compare the performance of GDCN with four backbone networks on Market1501, CUHK03-detected and DukeMTMC-re-ID. Rank accuracy (%) and mAP (%) are listed in Table 1. On Market1501, Densenet121 achieves the best performance with a Rank-1 accuracy of 90.94% and mAP of 75.64%, which are 1.54% and 1.27% higher than Resnet101, 12.35% and 22.31% higher than Resnet50, and 25.56% and 40.57% higher than Inceptionv3. This means that as the network deepens, we achieve better performance on Market1501. Besides, we evaluate this dataset with Densenet121 plus re-ranking and obtain a Rank-1 accuracy of 92.72% and mAP of 88.86%.

Table 1. Comparison of GDCN with four backbone networks on Market1501, CUHK03-detected and DukeMTMC-re-ID. Rank accuracy (%) and mAP (%) are listed

Model               Market1501        CUHK03-detected    DukeMTMC-re-ID
                    Rank-1   mAP      Rank-1   mAP       Rank-1   mAP
Inceptionv3         65.38    35.07    10.18    3.61      64.18    41.99
Resnet50            78.59    53.33    41.71    37.02     66.38    46.69
Resnet101           89.81    74.37    50.57    44.93     68.13    45.74
Densenet121         90.94    75.64    41.78    38.33     77.42    58.49
Resnet101_rerank    92.48    88.30    60.78    62.47     76.61    68.98
Densnet121_rerank   92.72    88.86    51.28    53.93     82.22    77.99

On CUHK03-detected, Resnet101 achieves the best performance with a Rank-1 accuracy of 50.57% and mAP of 44.93%, which are 8.79% and 6.66% higher than Densenet121, a somewhat unexpected result compared with Market1501. The most likely reason is that the numbers of identities in the two datasets are close, but Market1501 contains about five times as many images as CUHK03-detected. Moreover, we evaluate this dataset with Resnet101 plus re-ranking and obtain a Rank-1 accuracy of 60.78% and mAP of 62.47%. On DukeMTMC-re-ID, Densenet121 again obtains the best performance with a Rank-1 accuracy of 77.42% and mAP of 58.49%. We also obtain a Rank-1 accuracy of 82.22% and mAP of 77.99% using Densenet121 with re-ranking.

The experimental results on the three datasets demonstrate the state-of-the-art performance of the proposed GDCN.

3.3 Comparison with Prior Methods

In this section, we compare our proposed method with prior methods based on deep learning on the three datasets. On Market1501, we list four methods: SOMAnet [3], Spindle [4], PAN [6] and AWTL [16]. AWTL integrates a convolutional neural network with an adaptive weighted triplet loss for re-ID. Our proposed GDCN exceeds AWTL, which achieved the highest score in the global-feature group, by 3.26% in Rank-1 accuracy and 13.19% in mAP. On CUHK03-detected, we present three models: PAN [6], SVDNet [17] and HA-CNN [18]. HA-CNN jointly learns soft pixel attention and hard regional attention for deep feature representations. GDCN achieves a Rank-1 accuracy of 60.78% and mAP of 62.47%, which are 16.38% and 21.47% higher than HA-CNN. On DukeMTMC-re-ID, three previous methods are shown: PAN [6], SVDNet [17] and AWTL [16]. GDCN obtains a Rank-1 accuracy of 82.22% and mAP of 77.99%, exceeding AWTL by 2.42% and 14.59%. Hence, we verify the simple and effective performance of our method on three large-scale datasets (Table 2).

Table 2. Comparison with the previous work on Market1501, CUHK03-detected and DukeMTMC-re-ID Market1501 Model SOMAnet [3] Spindle [4] PAN [6] AWTL [16] GDCN (ours) CUHK03-detected Model PAN [6] SVDNet [17] HA-CNN [18] GDCN (ours) DukeMTMC-re-ID Model PAN [6] SVDNet [17] AWTL [16] GDCN (ours)

Rank-1 73.87 76.90 82.81 89.46 92.72

Rank-5 88.03 91.50 93.53 – 96.08

Rank-10 92.22 94.60 97.06 – 97.20

mAP 47.89 – 63.35 75.67 88.86

Rank-1 36.29 41.50 44.40 60.78

Rank-5 55.50 – – 73.00

Rank-10 75.07 – – 80.07

mAP 34.00 37.30 41.00 62.47

Rank-1 71.59 76.70 79.80 82.22

Rank-5 83.89 86.40 – 89.90

Rank-10 90.62 89.90 – 92.91

mAP 51.51 56.80 63.40 77.99

4 Conclusion

In this paper, we have proposed a global deep convolutional network that achieves excellent performance with the Densenet121 and Resnet101 backbones compared with the other two shallower backbone networks. Besides, we compared the experimental results with previous work to verify the state-of-the-art performance of the proposed method. An adaptive model for feature extraction remains an important task for re-ID. In the future, we intend to focus on attention mechanisms and the correlation among local features.

Acknowledgements. This work is supported by the National Natural Science Foundation of China (Project 61471066) and the open project fund (No. 201600017) of the National Key Laboratory of Electromagnetic Environment, China.

References

1. Bedagkar-Gala A, Shah SK (2014) A survey of approaches and trends in person re-identification. Image Vis Comput 32(4):270–286
2. Zheng L, Yang Y, Hauptmann AG (2016) Person re-identification: past, present and future. arXiv preprint arXiv:1610.02984
3. Barbosa IB, Cristani M, Caputo B et al (2018) Looking beyond appearances: synthetic training data for deep CNNs in re-identification. Comput Vis Image Underst 167:50–62
4. Zhao H, Tian M, Sun S et al (2017) Spindle net: person re-identification with human body region guided feature decomposition and fusion. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1077–1085
5. Su C, Li J, Zhang S et al (2017) Pose-driven deep convolutional model for person re-identification. In: Proceedings of the IEEE international conference on computer vision, pp 3960–3969
6. Zheng Z, Zheng L, Yang Y (2018) Pedestrian alignment network for large-scale person re-identification. IEEE Trans Circ Syst Video Technol
7. Wei L, Zhang S, Yao H et al (2017) GLAD: global-local-alignment descriptor for pedestrian retrieval. In: Proceedings of the 2017 ACM on multimedia conference. ACM, pp 420–428
8. Szegedy C, Vanhoucke V, Ioffe S et al (2016) Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2818–2826
9. He K, Zhang X, Ren S et al (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770–778
10. Huang G, Liu Z, Van Der Maaten L et al (2017) Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4700–4708
11. Ye J (2011) Cosine similarity measures for intuitionistic fuzzy sets and their applications. Math Comput Model 53(1–2):91–97
12. Zheng L, Shen L, Tian L et al (2015) Scalable person re-identification: a benchmark. In: Proceedings of the IEEE international conference on computer vision, pp 1116–1124
13. Zhong Z, Zheng L, Cao D et al (2017) Re-ranking person re-identification with k-reciprocal encoding. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR). IEEE, pp 3652–3661
14. Li W, Zhao R, Xiao T et al (2014) DeepReID: deep filter pairing neural network for person re-identification. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 152–159
15. Ristani E, Solera F, Zou R et al (2016) Performance measures and a data set for multi-target, multi-camera tracking. In: European conference on computer vision. Springer, Cham, pp 17–35
16. Ristani E, Tomasi C (2018) Features for multi-target multi-camera tracking and re-identification. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 6036–6046
17. Sun Y, Zheng L, Deng W et al (2017) SVDNet for pedestrian retrieval. In: Proceedings of the IEEE international conference on computer vision, pp 3800–3808
18. Li W, Zhu X, Gong S (2018) Harmonious attention network for person re-identification. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2285–2294

Hybrid Precoding Based on Phase Extraction for Partially-Connected mmWave MIMO Systems Mingyang Cui1 , Weixia Zou1,2(B) , and Ran Zhang1 1

Key Laboratory of Universal Wireless Communications MOE, Beijing University of Posts and Telecommunications, Beijing 100876, People’s Republic of China [email protected] 2 State Key Laboratory of Millimeter Waves, Southeast University, Nanjing 210096, People’s Republic of China

Abstract. Millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) has been regarded as an attractive solution for the next generation of communications. Restricted by hardware and energy consumption, a hybrid analog and digital precoding structure is widely adopted. However, high computational complexity is a fundamental restriction of most existing hybrid precoding schemes. To overcome these limitations, this paper proposes a high-performance hybrid precoding algorithm for partially-connected mmWave MIMO systems. Due to the special partially-connected structure, we decompose the analog precoding problem into a series of optimization problems. For each subproblem, we use the method of phase extraction to optimize one column of the analog precoding matrix. Then the digital precoding matrix is obtained based on the least squares algorithm. Simulation results verify that the proposed algorithm outperforms the existing ones.

Keywords: mmWave · MIMO · Partially-connected · Phase extraction

1 Introduction

Wireless data traffic is projected to skyrocket 1000-fold by the year 2020 [1], promoting the development of the fifth generation (5G) concept to cope with the requirements of high-data-rate applications [2]. The explosive growth of mobile traffic has significantly aggravated the spectrum congestion of traditional frequency bands. In this context, millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) has drawn global attention due to its wide bandwidth and high transmission rate [3,4]. With the large antenna dimension of mmWave massive MIMO, a fully digital precoding structure, which requires an individual radio frequency (RF) chain for each data stream, is unfeasible due to the hardware constraints and power consumption.

The hybrid precoding architecture is more practical and cost-effective to deploy; it uses a combination of an analog precoder in the RF domain and a digital precoder in the baseband [5,6]. According to the mapping from RF chains to antennas, two hybrid precoding structures have drawn much attention: the fully-connected and the partially-connected structures. Since the first can obtain the full beamforming gain for each transceiver, it has become the focus of research [7–9]. In [7], exploiting the sparse structure of mmWave channels, the hybrid precoding problem is approximately solved by minimizing the Euclidean distance between the hybrid precoding matrix and the optimal digital precoding matrix. Based on the spatial sparsity of mmWave, [8] provides a new method of building the joint RF and baseband precoder that reduces the computational complexity and enables a highly parallel hardware architecture. In addition, the authors of [9] formulate the hybrid precoder design as a matrix factorization problem and propose an iterative algorithm based on manifold optimization. The partially-connected structure is also widely studied because of its simple structure and easy implementation [10,11]. The authors of [10] first decompose the hybrid precoding optimization problem into a series of sub-rate optimization problems and design a hybrid precoding algorithm based on successive interference cancelation (SIC), which achieves good results. Moreover, [9] proposes a semidefinite relaxation based AltMin (SDR-AltMin) algorithm, which is the first effort to directly optimize the hybrid precoders in such a structure. In this paper, we propose a hybrid precoding algorithm for the partially-connected structure based on phase extraction (HPP-PE). Firstly, we decompose the analog precoding problem into a series of subproblems. Secondly, we complete the design of the analog precoder by using phase extraction to optimize each column of the analog precoding matrix. Finally, we obtain the digital precoding matrix using the least squares algorithm. Simulation results show that the proposed algorithm outperforms the existing ones; in the case of imperfect channel state information (CSI), the proposed HPP-PE algorithm also adapts well. We use the following notation throughout this paper: A, a and a represent a matrix, a vector and a scalar, respectively. E(·) denotes expectation. angle(·) is the phase of a complex number. A_{i,j} is the entry on the i-th row and j-th column of A. A^T, A^H and |A| are the transpose, conjugate transpose and determinant of A, respectively. ‖A‖_F is its Frobenius norm. Tr(A) denotes the trace. I_N is the N × N identity matrix.

2 System Model

In this paper, we focus on the downlink of a single-user massive MIMO system using hybrid analog and digital precoding and present the system model and channel model.

2.1 System Model

Fig. 1. Block diagrams of mmWave single-user system

Consider a single-user mmWave MIMO system using the hybrid structure shown in Fig. 1 [10,11], where the transmitter is equipped with Nt = NRF × M antennas but only NRF independent RF chains to simultaneously transmit Ns data streams, subject to Ns ≤ NRF ≤ Nt. The transmitted symbols are first processed by an NRF × Ns digital precoder FBB and then pass through the RF chains. After that, the symbols are precoded by an Nt × NRF RF precoder FRF before transmission. Since FRF is implemented using analog phase shifters, its elements are constrained to satisfy |[F_{RF}]_{m,n}|^2 = 1. The normalized transmit power constraint is given by ‖F_{RF}F_{BB}‖_F^2 = N_s. The transmitted signal can then be written as

x = F_{RF}F_{BB}\, s,   (1)

ρ H H HFRF FBB FH BB FRF H |). Ns σ 2

(3)

190

2.2

M. Cui et al.

Channel Model

According to the mmWave channel measurement results, the large-scale antenna array structure transceiver leads to high correlation of the antenna, and the number of propagation paths is much smaller than that of transmission antennas. Therefore, the mmWave channel will not obey the Rayleigh distribution. In this paper, we adopt the Saleh-Valenzuela model with a finite scattering to characterize the mmWave MIMO channel [13]. If the antenna elements are modeled as being ideal sectored elements, the channel matrix is given as  Ncl N ray  Nt Nr  H αi,j ar (ϕri,j )at (ϕti,j ) , (4) H= Ncl Nray i=1 j=1 where Ncl and Nray represent the number of clusters and the number of rays per cluster, respectively. αi,j is the complex gain of the jth ray in the ith cluster, which is a random identically distributed random variable that follows the i.i.d 2 2 ). σα,i denotes the power of cluster and the powers of clusters are CN (0, σα,i Ncl 2 subject to i=1 σα,i = Ncl . In this paper, the half-wavelength uniform linear array (ULA) is used. The expression of the array response vector is given by 1 aULA (ϕ) = √ [1, ejπ sin(ϕ) , . . . ejπ(Nt −1) sin(ϕ) ]T . Nt

3

(5)
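A minimal sketch of how such a channel realization can be generated is given below; drawing the per-ray angle offsets from a Gaussian instead of a Laplacian and using unit cluster powers are simplifying assumptions of this illustration, not part of the model in [13].

```python
import numpy as np

def ula_response(n_ant, phi):
    """Half-wavelength ULA response vector, Eq. (5)."""
    k = np.arange(n_ant)
    return np.exp(1j * np.pi * k * np.sin(phi)) / np.sqrt(n_ant)

def sv_channel(Nt, Nr, Ncl=5, Nray=3, spread=np.deg2rad(7.5)):
    """Narrowband Saleh-Valenzuela channel, Eq. (4)."""
    H = np.zeros((Nr, Nt), dtype=complex)
    for _ in range(Ncl):
        phi_t0 = np.random.uniform(0, 2 * np.pi)      # mean AoD of the cluster
        phi_r0 = np.random.uniform(0, 2 * np.pi)      # mean AoA of the cluster
        for _ in range(Nray):
            alpha = (np.random.randn() + 1j * np.random.randn()) / np.sqrt(2)
            a_t = ula_response(Nt, phi_t0 + spread * np.random.randn())
            a_r = ula_response(Nr, phi_r0 + spread * np.random.randn())
            H += alpha * np.outer(a_r, a_t.conj())
    return np.sqrt(Nt * Nr / (Ncl * Nray)) * H        # enforce E[||H||_F^2] = Nr*Nt
```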

3 Hybrid Precoding for the Partially-Connected Structure Based on Phase Extraction (HPP-PE)

As shown in [7,9], the design of the precoder and decoder can be separated into two subproblems, i.e., the precoding and decoding problems. Therefore, we mainly focus on the precoder design in the remainder of this paper; the proposed algorithm can be equally applied to the decoder. The precoding problem can be expressed as

$$\begin{aligned} \arg\min_{\mathbf{F}_{RF},\mathbf{F}_{BB}} \quad & \|\mathbf{F}_{opt} - \mathbf{F}_{RF}\mathbf{F}_{BB}\|_F^2 \\ \text{s.t.} \quad & |(\mathbf{F}_{RF})_{i,j}| = 1, \ \forall i,j, \\ & \|\mathbf{F}_{RF}\mathbf{F}_{BB}\|_F^2 = N_s, \end{aligned} \qquad (6)$$

where $\mathbf{F}_{opt}$ stands for the optimal digital precoder.

3.1 Analog Precoder Design of HPP-PE

According to the special structure of the partially-connected architecture, the analog precoder $\mathbf{F}_{RF}$ can be written as

$$\mathbf{F}_{RF} = \begin{bmatrix} \mathbf{p}_1 & 0 & \cdots & 0 \\ 0 & \mathbf{p}_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{p}_{N_{RF}} \end{bmatrix}. \qquad (7)$$


For convenience of analysis, we write $\mathbf{F}_{BB}$ as a block matrix in which each block corresponds to a row of $\mathbf{F}_{BB}$, that is, $\mathbf{F}_{BB} = [\mathbf{q}_1, \mathbf{q}_2, \ldots, \mathbf{q}_{N_{RF}}]^T$. Similarly, we rewrite the optimal digital precoder as $\mathbf{F}_{opt} = [\mathbf{F}_1^T, \mathbf{F}_2^T, \ldots, \mathbf{F}_{N_{RF}}^T]^T$. We then obtain $\mathbf{F}_{RF}\mathbf{F}_{BB} = [\mathbf{p}_1\mathbf{q}_1, \mathbf{p}_2\mathbf{q}_2, \ldots, \mathbf{p}_{N_{RF}}\mathbf{q}_{N_{RF}}]^T$, so problem (6) can be reformulated into a series of optimization problems

$$\arg\min_{\mathbf{p}_i,\mathbf{q}_i} \ \|\mathbf{F}_i - \mathbf{p}_i\mathbf{q}_i\|_F^2, \quad 1 \le i \le N_{RF}. \qquad (8)$$

We first design the analog precoding matrix, assuming that the digital precoding matrix is fixed. Taking the $i$th optimization problem as an example, (8) can be converted as

$$\|\mathbf{F}_i - \mathbf{p}_i\mathbf{q}_i\|_F^2 = \mathrm{Tr}\big[(\mathbf{F}_i - \mathbf{p}_i\mathbf{q}_i)(\mathbf{F}_i - \mathbf{p}_i\mathbf{q}_i)^H\big] = \|\mathbf{F}_i\|_F^2 + M\|\mathbf{q}_i\|_F^2 - 2\,\mathrm{Tr}(\mathbf{p}_i\mathbf{q}_i\mathbf{F}_i^H). \qquad (9)$$

Since $\|\mathbf{F}_i\|_F^2$ and $\|\mathbf{q}_i\|_F^2$ are constants, the optimization problem (9) can be translated into

$$\arg\max_{\mathbf{p}_i} \ \mathrm{Tr}(\mathbf{p}_i\mathbf{q}_i\mathbf{F}_i^H), \quad 1 \le i \le N_{RF}. \qquad (10)$$

Therefore, one solution is obtained by the phase extraction method, namely

$$\mathbf{p}_i = e^{-j\times\mathrm{angle}(\mathbf{q}_i\mathbf{F}_i^H)}, \quad 1 \le i \le N_{RF}. \qquad (11)$$

After calculating all $\mathbf{p}_i$ $(1 \le i \le N_{RF})$, we obtain the analog precoding matrix $\mathbf{F}_{RF}$.

3.2 Digital Precoder Design of HPP-PE

For a given analog precoder, we can design the digital precoder to improve the spectral efficiency at the transmitter. The digital precoder problem can be written as

$$\arg\min_{\mathbf{F}_{BB}} \ \|\mathbf{F}_{opt} - \mathbf{F}_{RF}\mathbf{F}_{BB}\|_F^2. \qquad (12)$$

The objective function in (12) can be further recast as

$$\|\mathbf{F}_{opt} - \mathbf{F}_{RF}\mathbf{F}_{BB}\|_F^2 = \mathrm{Tr}\big[(\mathbf{F}_{opt} - \mathbf{F}_{RF}\mathbf{F}_{BB})^H(\mathbf{F}_{opt} - \mathbf{F}_{RF}\mathbf{F}_{BB})\big] = \|\mathbf{F}_{opt}\|_F^2 + \mathrm{Tr}\big(\mathbf{F}_{BB}^H\mathbf{F}_{RF}^H\mathbf{F}_{RF}\mathbf{F}_{BB} - \mathbf{F}_{opt}^H\mathbf{F}_{RF}\mathbf{F}_{BB} - \mathbf{F}_{BB}^H\mathbf{F}_{RF}^H\mathbf{F}_{opt}\big). \qquad (13)$$

Differentiating (13) with respect to $\mathbf{F}_{BB}^H$ and setting the result to zero, the closed-form solution of (12) is obtained as

$$\mathbf{F}_{BB} = (\mathbf{F}_{RF}^H\mathbf{F}_{RF})^{-1}\mathbf{F}_{RF}^H\mathbf{F}_{opt}. \qquad (14)$$

Combining the analog precoder design in Sect. 3.1 and the digital precoder design in Sect. 3.2, the HPP-PE algorithm proposed in this paper can be summarized as follows.


Algorithm 1. Hybrid Precoding for the Partially-Connected Structure Based on Phase Extraction (HPP-PE).
Require: $\mathbf{F}_{opt}$, $N_{RF}$, $M$.
1: for $i = 1$ to $N_{RF}$ do
2:   Obtain $\mathbf{F}_i$ and initialize $\mathbf{p}_i$;
3:   $\mathbf{q}_i = (\mathbf{p}_i^H\mathbf{p}_i)^{-1}\mathbf{p}_i^H\mathbf{F}_i$;
4:   $\mathbf{p}_i = e^{-j\times\mathrm{angle}(\mathbf{q}_i\mathbf{F}_i^H)}$;
5: end for
6: Reconstruct $\mathbf{F}_{RF}$ with the $\mathbf{p}_i$;
7: $\bar{\mathbf{F}}_{BB} = (\mathbf{F}_{RF}^H\mathbf{F}_{RF})^{-1}\mathbf{F}_{RF}^H\mathbf{F}_{opt}$;
8: $\mathbf{F}_{BB} = \sqrt{N_s}\,\bar{\mathbf{F}}_{BB}\,/\,\|\mathbf{F}_{RF}\bar{\mathbf{F}}_{BB}\|_F$;
Ensure: $\mathbf{F}_{RF}$, $\mathbf{F}_{BB}$.

As for the initialization of $\mathbf{p}_i$, the algorithm uses the phase of the corresponding column of $\mathbf{F}_i$, i.e., $\mathbf{p}_i = \mathbf{f}_{i,i}/|\mathbf{f}_{i,i}|$, where $\mathbf{F}_i = [\mathbf{f}_{i,1}, \mathbf{f}_{i,2}, \ldots, \mathbf{f}_{i,N_s}]$.
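For concreteness, the following NumPy sketch mirrors the steps of Algorithm 1. It assumes $\mathbf{F}_{opt}$ is stored as an $(N_{RF}M)\times N_s$ array; the choice of column used to initialize $\mathbf{p}_i$ when $i$ exceeds $N_s$ and the use of a pseudo-inverse for step 3 are illustrative assumptions rather than part of the original description.

```python
import numpy as np

def hpp_pe(F_opt, N_RF, M, Ns):
    """Sketch of Algorithm 1: returns the block-diagonal analog precoder F_RF
    and the digital precoder F_BB for a given optimal digital precoder F_opt."""
    p = []
    for i in range(N_RF):
        F_i = F_opt[i * M:(i + 1) * M, :]              # rows handled by the i-th RF chain
        f = F_i[:, min(i, Ns - 1)]                     # assumed column for initializing p_i
        p_i = f / np.abs(f)                            # unit-modulus initial phases
        q_i = np.linalg.pinv(p_i[:, None]) @ F_i       # step 3: (p_i^H p_i)^{-1} p_i^H F_i
        p_i = np.exp(-1j * np.angle(q_i @ F_i.conj().T)).ravel()   # step 4, Eq. (11)
        p.append(p_i)
    F_RF = np.zeros((N_RF * M, N_RF), dtype=complex)   # step 6: block-diagonal reconstruction
    for i, p_i in enumerate(p):
        F_RF[i * M:(i + 1) * M, i] = p_i
    F_BB = np.linalg.solve(F_RF.conj().T @ F_RF, F_RF.conj().T @ F_opt)   # step 7, Eq. (14)
    F_BB *= np.sqrt(Ns) / np.linalg.norm(F_RF @ F_BB, 'fro')              # step 8: power scaling
    return F_RF, F_BB
```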

3.3 Complexity Evaluation

In this subsection, we evaluate the complexity of the different algorithms in terms of the number of complex multiplications. For the proposed algorithm, the main complexity comes from steps 3 and 4 of Algorithm 1. The complexity of step 3 is $\mathcal{O}(M + MN_{RF})$, and that of step 4 is $\mathcal{O}(MN_{RF})$. Therefore, the complexity of the proposed HPP-PE algorithm is approximately $\mathcal{O}(N_t)$. According to [9], the complexity of the SDR-AltMin algorithm mainly comes from the preliminary calculations and the SDR; the complexity of computing $\mathbf{C}$ alone in the preliminary calculation is $\mathcal{O}(N_t^2)$, which is higher than the total complexity of the proposed algorithm. The authors of [10] point out that the complexity of the SIC algorithm is $\mathcal{O}((N_{RF}S + N_r)M^2)$, where $S$ is the number of iterations. It can be seen that the complexity of the proposed algorithm is slightly lower than that of the SIC algorithm.

4 Simulation Results

In this section, we show the performance achieved by the proposed and reference algorithms. The default simulation parameters are as follows. The transmitter is equipped with $N_t = 144$ antennas and sends $N_s = 3$ data streams to a receiver with $N_r = 36$ antennas, and both ends are equipped with ULAs. The channel includes $N_{cl} = 5$ clusters and each cluster contains $N_{ray} = 3$ rays. The azimuth and elevation AoDs and AoAs follow the Laplacian distribution with mean angles uniformly distributed over $[0, 2\pi)$ and an angular spread of $7.5^\circ$. All reported results are averaged over 500 random channel realizations.


In Fig. 2, we compare the spectral efficiency of the proposed HPP-PE algorithm, the SDR-AltMin algorithm [9] and the SIC algorithm [10] for $N_{RF} = N_s = 3$. Since the SIC algorithm can only design the hybrid precoder at the transmitter, we assume that the optimal digital decoder is adopted at the receiver, which is also employed for the other algorithms. It can be seen that in all cases the performance of the proposed algorithm exceeds that of the other two algorithms. Moreover, we evaluate the performance of the different algorithms with imperfect CSI. According to [14], the estimated channel matrix can be expressed as

$$\tilde{\mathbf{H}} = \delta\mathbf{H} + \sqrt{1-\delta^2}\,\boldsymbol{\Psi}, \qquad (15)$$

where $0 \le \delta \le 1$ denotes the reliability of the estimated channel and $\boldsymbol{\Psi}$ is a noise matrix whose entries follow i.i.d. $\mathcal{CN}(0,1)$. Figure 3 plots the spectral efficiency of the different algorithms under different CSI conditions for the mmWave MIMO system. Compared with Fig. 2, there is some performance loss for all algorithms due to the channel estimation error. It is observed that the proposed algorithm performs better than the other two algorithms under the same $\delta$, and the performance of all algorithms gradually deteriorates as $\delta$ decreases. Therefore, the CSI needs to be estimated accurately to ensure the performance.
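A one-line sketch of how such an imperfect channel estimate can be generated, under the assumption that $\boldsymbol{\Psi}$ is drawn independently for every realization:

```python
import numpy as np

def imperfect_csi(H, delta):
    """Estimated channel of Eq. (15): H_tilde = delta*H + sqrt(1-delta^2)*Psi,
    where Psi has i.i.d. CN(0,1) entries."""
    Psi = (np.random.randn(*H.shape) + 1j * np.random.randn(*H.shape)) / np.sqrt(2)
    return delta * H + np.sqrt(1.0 - delta ** 2) * Psi
```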


Fig. 2. Spectral efficiency of different algorithms given NRF = Ns = 3


Fig. 3. Spectral efficiency of different algorithms with different CSI conditions

5 Conclusion

In this paper, we propose a hybrid precoding algorithm for the partially-connected structure. Exploiting this special structure, we decompose the analog precoding problem into a series of optimization problems and optimize each column of the analog precoding matrix based on the phase extraction method. Then we use the least squares solution to design the digital precoder. The numerical results demonstrate significant performance gains of the proposed algorithm over existing hybrid precoding algorithms, and it remains effective even with imperfect CSI. In our future work, we will investigate multi-user MIMO scenarios and channel estimation.

Acknowledgements. This work was supported by NSFC (No. 61571055), the fund of SKL of MMW (No. K201815), and the Important National Science & Technology Specific Projects (2017ZX03001028).

References
1. Huang H, Song Y, Yang J, Gui G, Adachi F (2019) Deep-learning-based millimeter-wave massive MIMO for hybrid precoding. IEEE Trans Veh Technol 68(3):3027–3032
2. Xiao M, Mumtaz S, Huang Y, Dai L, Li Y, Matthaiou M, Karagiannidis GK, Björnson E, Yang K, Chih-Lin I, Ghosh A (2017) Millimeter wave communications for future mobile networks. IEEE J Sel Areas Commun 35(9):1909–1935
3. Heath RW, González-Prelcic N, Rangan S, Roh W, Sayeed AM (2016) An overview of signal processing techniques for millimeter wave MIMO systems. IEEE J Sel Top Signal Process 10(3):436–453


4. Chen K, Qi C (2019) Beam training based on dynamic hierarchical codebook for millimeter wave massive MIMO. IEEE Commun Lett 23(1):132–135
5. Han S, Chih-Lin I, Xu Z, Rowell C (2015) Large-scale antenna systems with hybrid analog and digital beamforming for millimeter wave 5G. IEEE Commun Mag 53(1):186–194
6. Molisch AF, Ratnam VV, Han S, Li Z, Nguyen SLH, Li L, Haneda K (2017) Hybrid beamforming for massive MIMO: A survey. IEEE Commun Mag 55(9):134–141
7. Ayach OE, Rajagopal S, Abu-Surra S, Pi Z, Heath RW (2014) Spatially sparse precoding in millimeter wave MIMO systems. IEEE Trans Wireless Commun 13(3):1499–1513
8. Lee Y, Wang C, Huang Y (2015) A hybrid RF/baseband precoding processor based on parallel-index-selection matrix-inversion-bypass simultaneous orthogonal matching pursuit for millimeter wave MIMO systems. IEEE Trans Signal Process 63(2):305–317
9. Yu X, Shen J, Zhang J, Letaief KB (2016) Alternating minimization algorithms for hybrid precoding in millimeter wave MIMO systems. IEEE J Sel Top Signal Process 10(3):485–500
10. Gao X, Dai L, Han S, Chih-Lin I, Heath RW (2016) Energy-efficient hybrid analog and digital precoding for mmwave MIMO systems with large antenna arrays. IEEE J Sel Areas Commun 34(4):998–1009
11. Du J, Xu W, Shen H, Dong X, Zhao C (2018) Hybrid precoding architecture for massive multiuser MIMO with dissipation: Sub-connected or fully connected structures? IEEE Trans Wireless Commun 17(8):5465–5479
12. González-Coma JP, Rodríguez-Fernández J, González-Prelcic N, Castedo L, Heath RW (2018) Channel estimation and hybrid precoding for frequency selective multiuser mmwave MIMO systems. IEEE J Sel Top Signal Process 12(2):353–367
13. Akdeniz MR, Liu Y, Samimi MK, Sun S, Rangan S, Rappaport TS, Erkip E (2014) Millimeter wave channel modeling and cellular capacity evaluation. IEEE J Sel Areas Commun 32(6):1164–1179
14. Zhang D, Wang Y, Li X, Xiang W (2018) Hybridly connected structure for hybrid beamforming in mmwave massive MIMO systems. IEEE Trans Commun 66(2):662–674

Research on the Fusion of Warning Radar and Secondary Radar Intelligence Information

Jinliang Dong¹, Yumeng Zhang¹, Baozhou Du², and Xiaoyan Zhang²

¹ Nanjing Research Institute of Electronics Technology, Nanjing, China
[email protected]
² Troops of No. 63850, Baicheng, China

Abstract. Based on research on multi-sensor fusion tracking and the working characteristics of warning radar and secondary radar, this paper proposes point fusion and track fusion structures suitable for engineering application, together with a specific fusion process. The track fusion algorithm proposed in this paper not only approaches the point fusion algorithm in tracking accuracy, but also retains the advantages of the distributed fusion structure, and therefore has broad application prospects. The effectiveness and superiority of the algorithm are verified by simulations.

Keywords: Warning radar · Secondary surveillance radar · Plot fusion · Track fusion · Fusion tracking

1 Introduction

Secondary radar is widely used in air traffic control, identification friend or foe (IFF), and target tracking, and its development has been almost parallel with that of primary radar. The fundamental difference between the two lies in the working method. A primary radar actively detects and locates a target by relying on the target's reflection of the electromagnetic waves emitted by the radar, while a secondary radar must rely on the cooperation of the interrogator and the target transponder. The secondary radar works in an interrogation-response mode, and the detection and localization of the target are accomplished by two actively radiated electromagnetic waves (one interrogation and one reply). With the cooperation of the target transponder, the secondary radar has many advantages that the primary radar does not have [1]: (1) The reply of the secondary radar is only attenuated over a one-way propagation path, and its interrogation range is proportional to the square root of the transmit power; for a given required range, the secondary radar transmit power can be much smaller than that of the primary radar, and its size and weight are also much smaller. (2) Since the interrogation and reply carrier frequencies are different, ground clutter, meteorological clutter and similar passive interference can be suppressed. (3) The secondary radar echo is independent of the effective reflection area and attitude of the target, so there is no target flicker. (4) The height data of the secondary radar is derived from the barometric altimeter, so a three-dimensional coordinate estimate can be obtained without complicated techniques. Its accuracy


is also higher than that of the primary radar. (5) Secondary radars can use coded signals to provide and exchange rich information such as identification, emergency status, and faults. The use of the target transponder gives the secondary radar these advantages while limiting its scope of use, i.e. the target is required to carry a transponder (it must be a cooperative target). To make up for this shortcoming, the secondary radar and the warning (primary) radar are often used together, which not only expands the coverage of the monitored airspace but also improves the tracking accuracy for cooperative targets in the common coverage area [2]. References [3] and [4] analyze the key technologies and system performance of radar-network data fusion systems. References [5] and [6] respectively introduce the application of plot fusion and track fusion algorithms in multi-sensor fusion. Combining the working characteristics of the warning radar and the secondary radar, this paper proposes a fusion tracking process for the warning radar and secondary radar suitable for engineering applications.

2 Point Fusion of Warning Radar and Secondary Radar

2.1 Plot Fusion Structure of Warning Radar and Secondary Radar

The plot data reported by the warning radar and the secondary radar to the fusion center are usually not synchronized, and since the warning radar mainly detects remote targets, its sampling data rate is not very high. Given these characteristics, in the centralized system we adopt a plot serial-merging algorithm, which is well suited to the fusion tracking of the warning radar and the secondary radar. The specific plot fusion processing structure is shown in Fig. 1.


Fig. 1. Plot fusion structure of warning radar and secondary radar

2.2 Point Fusion Process and Algorithm

(1) After the warning radar detects the target, the target echo is signal-processed to give raw plots. The raw plot reports are then pre-processed, e.g. by ambiguity resolution and plot combination, and the resulting condensed plot report is sent to the fusion center. After the secondary radar finds the target, it calculates the


distance and azimuth of the target according to its reply signal, decodes the target code and altitude data, and then sends the target plot report directly to the fusion center. (2) The fusion center spatially calibrates the plot data of the warning radar and the secondary radar, that is, it transforms them into the reference coordinate system of the information fusion center. (3) The plots sent to the fusion center are associated with the system tracks in the database, according to certain criteria, in the order of their detection time. In order to accurately complete the plot-track association task, the state of the system track must be predicted to the moment at which the plot occurred, in preparation for the plot-track association. (4) For associated plots, the idea of plot serial combination is adopted [7], and the system track is updated by Kalman filtering to keep the system track continuous. An unassociated plot is treated as a possible new target; if no subsequent plots are associated with it in the following cycles, it is considered a false plot and is deleted according to certain criteria; otherwise, a new system track is started.

3 Track Fusion of Warning Radar and Secondary Radar

3.1 Track Fusion Structure of Warning Radar and Secondary Radar

In the distributed system, based on the idea of plot serial merging used in the centralized fusion tracking system, an alternating fusion algorithm for asynchronous tracks is proposed, and the corresponding track fusion process for the warning radar and the secondary radar is given. The specific track fusion processing structure is shown in Fig. 2.


Fig. 2. Track fusion structure of warning radar and secondary radar

3.2 Track Fusion Process and Algorithm

(1) After the warning radar detects the target, the target echo is signal-processed to give raw plots. The raw plot reports are then pre-processed, e.g. by ambiguity resolution and plot combination, to generate local plots. The local plots are processed by the local tracker to form a radar track, which is sent to the fusion center.


After the secondary radar finds the target, parameters such as the distance and azimuth of the target are calculated from the reply signal, and the code and height data of the target are decoded. After processing by the local tracker, the secondary radar track is formed and sent to the fusion center. (2) The warning radar and secondary radar tracks are sent to the fusion center via the communication network and are spatially calibrated there, that is, they are transformed into the reference coordinate system of the information fusion center. (3) Each local track sent to the fusion center is compared against the system tracks taken from the sliding window of the system track library, in the order of its output time, and associated according to certain criteria. In order to accurately complete the track-track association task, the system track state needs to be extrapolated to the moment at which the next reported local track may occur. (4) For an associated local track, the following alternating asynchronous track fusion algorithm is used to update the system track; a local track that is not associated is entered into the system track library as a new system track. The alternating asynchronous track fusion timing diagram is shown in Fig. 3. Two local sensors are considered here, and the results can be directly extended to a multi-sensor fusion system. Local trackers 1 and 2 give the tracks of the same target observed by the two local sensors; the sampling period of each local track and the update period of the system track are not fixed. Each local track is first associated with the system track in the order of arrival at the fusion center, and then the system track is updated in turn. In most cases the sample timestamps of the local tracks are different, so they alternately update the system track. If at some time (such as time $t$ in Fig. 3) the sample timestamps of the two local tracks are equal (in practice, they can be considered equal when the time difference between the two local track samples is less than a certain threshold), we can first simply combine the two local tracks and then update the system track with the fused value [8].


Fig. 3. Alternating asynchronous track fusion timing diagram

It is assumed that each local tracker and fusion center uses Kalman filtering to estimate the target state. The track reports reported by each local tracker to the fusion center include: sample time stamp, target state estimation, and error covariance matrix.


The state estimate of each local track and its error covariance are used as the raw measurement and measurement error covariance when the system track is updated, and the measurement matrix at this time is $\mathbf{H}(k) = \mathbf{I}$. Let the system track update time be $kT_k$, abbreviated as $k$. If the local state estimate $\hat{\mathbf{X}}_i(k|k)$ of sensor $i$ ($i = 1$ or $2$) at time $k$ and its error covariance matrix $\mathbf{P}_i(k|k)$ are sent to the fusion center, the standard Kalman filter equations for the system track are:

$$\hat{\mathbf{X}}(k|k-1) = \boldsymbol{\Phi}(k,k-1)\hat{\mathbf{X}}(k-1|k-1) \qquad (1)$$

$$\mathbf{P}(k|k-1) = \boldsymbol{\Phi}(k,k-1)\mathbf{P}(k-1|k-1)\boldsymbol{\Phi}^T(k,k-1) + \mathbf{Q}(k,k-1) \qquad (2)$$

$$\mathbf{S}(k) = \mathbf{P}(k|k-1) + \mathbf{P}_i(k|k) \qquad (3)$$

$$\mathbf{K}(k) = \mathbf{P}(k|k-1)\mathbf{S}^{-1}(k) \qquad (4)$$

$$\hat{\mathbf{X}}(k|k) = \hat{\mathbf{X}}(k|k-1) + \mathbf{K}(k)\big[\hat{\mathbf{X}}_i(k|k) - \hat{\mathbf{X}}(k|k-1)\big] \qquad (5)$$

$$\mathbf{P}(k|k) = \mathbf{P}(k|k-1) - \mathbf{K}(k)\mathbf{S}(k)\mathbf{K}^T(k) = [\mathbf{I} - \mathbf{K}(k)]\mathbf{P}(k|k-1), \qquad (6)$$

where $\boldsymbol{\Phi}(k,k-1)$ is the state transition matrix and $\mathbf{Q}(k,k-1)$ is the process noise covariance.
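The update in (1)–(6) treats the incoming local-track estimate as a pseudo-measurement with $\mathbf{H}(k)=\mathbf{I}$. A minimal sketch of one such alternating-fusion step is given below; forming $\boldsymbol{\Phi}$ and $\mathbf{Q}$ from the time elapsed since the last system-track update is left to the caller, and the function name is our own.

```python
import numpy as np

def fuse_local_track(x_sys, P_sys, x_i, P_i, Phi, Q):
    """One alternating-fusion update, Eqs. (1)-(6): the local track estimate
    (x_i, P_i) plays the role of the measurement with H(k) = I."""
    # Prediction, Eqs. (1)-(2)
    x_pred = Phi @ x_sys
    P_pred = Phi @ P_sys @ Phi.T + Q
    # Innovation covariance and gain, Eqs. (3)-(4)
    S = P_pred + P_i
    K = P_pred @ np.linalg.inv(S)
    # Update, Eqs. (5)-(6)
    x_new = x_pred + K @ (x_i - x_pred)
    P_new = P_pred - K @ S @ K.T
    return x_new, P_new
```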

4 Simulation and Performance Analysis

In order to verify the performance of the algorithm, we use the Monte Carlo method to perform 500 simulation runs in the Matlab environment; the duration of each tracking simulation is 50 s. The Monte Carlo simulation flow chart is shown in Fig. 4, and the actual trajectories of the tracked targets are shown in Fig. 5. Each of the four targets moves with uniform acceleration in the X, Y, and Z directions; the initial state $\mathbf{X}_i(0)$ of each target specifies its initial position, velocity, and acceleration along each axis, with initial positions on the order of 30,000–40,000 m, speeds of 250–480 m/s, and accelerations of 5–20 m/s².

The multi-sensor fusion tracking system consists of a warning radar and a secondary radar and adopts a distributed structure. The root mean square errors of the distance, azimuth and elevation measurements of the warning radar are $\sigma_{\rho 1} = 130$ m, $\sigma_{\theta 1} = 10$ mrad and $\sigma_{\varepsilon 1} = 10$ mrad, with sampling start time $t_1 = 0.5$ s and sampling period $T_1 = 1$ s. The corresponding errors of the secondary radar are $\sigma_{\rho 2} = 150$ m, $\sigma_{\theta 2} = 13$ mrad and $\sigma_{\varepsilon 2} = 6$ mrad, with sampling start time $t_2 = 1$ s and sampling period $T_2 = 1$ s. The fusion center updates at times $T_f(k) = k$ ($k \ge 1$, $k$ an integer), the two radars are sampled asynchronously, and the CA (constant acceleration) model is used in the converted-measurement Kalman filter [9, 10].
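For reference, the following sketch generates one noisy spherical measurement with the RMS errors quoted above and converts it back to Cartesian coordinates, as required by a converted-measurement Kalman filter; the dictionary keys and function name are illustrative assumptions.

```python
import numpy as np

# RMS measurement errors quoted in the text (warning radar / secondary radar)
SIGMA = {"warning":   dict(rng=130.0, az=10e-3, el=10e-3),   # m, rad, rad
         "secondary": dict(rng=150.0, az=13e-3, el=6e-3)}

def noisy_measurement(pos_xyz, radar):
    """Add Gaussian range/azimuth/elevation noise to a true Cartesian position
    and convert the noisy spherical measurement back to Cartesian coordinates."""
    x, y, z = pos_xyz
    rng = np.hypot(np.hypot(x, y), z) + SIGMA[radar]["rng"] * np.random.randn()
    az = np.arctan2(y, x) + SIGMA[radar]["az"] * np.random.randn()
    el = np.arctan2(z, np.hypot(x, y)) + SIGMA[radar]["el"] * np.random.randn()
    return np.array([rng * np.cos(el) * np.cos(az),
                     rng * np.cos(el) * np.sin(az),
                     rng * np.sin(el)])
```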



Fig. 4. Monte Carlo simulation flow chart

Figures 6, 7, 8, 9 and 10 show the root mean square error (RMSE) curves of the radial distance, azimuth, elevation, radial velocity and acceleration of target 1 for the warning radar, the secondary radar, and the plot fusion and track fusion algorithms described above; the fusion results for the other targets are similar. It can be seen from the figures that both the plot fusion and the track fusion algorithms track target 1 more accurately than either single radar and provide a more detailed description of the trajectory. Because the centralized fusion structure places high demands on the communication bandwidth and on the processing power of the fusion center, the distributed fusion structure is usually selected when clutter or targets are dense, while a centralized fusion structure is used when clutter and targets are few and more precise tracking of the target is required. The track fusion algorithm given in this section


not only approaches the point fusion algorithm in tracking accuracy, but also retains the advantages of distributed fusion structure, so it has broad application prospects.


Fig. 5. The true trajectory of the target in Cartesian coordinates


Fig. 6. Radial distance RMSE curve


Fig. 7. Azimuth RMSE curve


Fig. 8. Elevation RMSE curve


Fig. 9. Radial velocity RMSE curve


Fig. 10. Acceleration RMSE curve

5 Conclusion

This paper first introduces the advantages of combining a warning radar and a secondary radar. Combined with the working characteristics of the warning radar and the secondary radar, it proposes a point fusion and a track fusion structure suitable for engineering application, together with a specific fusion process. Finally, the advantages and disadvantages of the point fusion and track fusion algorithms are analyzed through simulation.


References
1. Lynn PA (1987) Secondary radar. Radar systems. Springer, US
2. Tan Y, Yang J, Li L et al (2012) Data fusion of radar and IFF for aircraft identification. J Syst Eng Electron 23(5):715–722
3. Ma K, Zhang H, Wang R et al (2018) Target tracking system for multi-sensor data fusion. Technology, networking, electronic & automation control conference. IEEE
4. Wang Y, Scharf LL, Santamaría I et al (2017) Canonical correlations for target detection in a passive radar network. In: Conference on signals, systems & computers. IEEE
5. Garcia F, Cerri P, Broggi A et al (2012) Data fusion for overtaking vehicle detection based on radar and optical flow. IEEE intelligent vehicles symposium. IEEE
6. Zhang B, Luo X, Lin H et al (2015) Researches on multiple-radar multiple-platform plot data fusion. Syst Eng Electron 37(7):1512–1517
7. Lee EH, Song TL (2017) Multi-sensor track-to-track fusion with target existence in cluttered environments. IET Radar Sonar Navig 11(7):1108–1115
8. Belmonte-Hernandez A, Hernandez-Penaloza G, Alvarez F et al (2017) Adaptive fingerprinting in multi-sensor fusion for accurate indoor tracking. IEEE Sens J:1
9. Sobhani B, Paolini E, Giorgetti A et al (2017) Target tracking for UWB multistatic radar sensor networks. IEEE J Sel Top Sign Process 8(1):125–136
10. Belik BV, Belov SG (2017) Using of extended Kalman filter for mobile target tracking in the passive air based radar system. Procedia Comput Sci 103:280–286

Antenna Array Design for Directional Modulation

Bo Zhang¹, Wei Liu², and Cheng Wang¹

¹ Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin 300387, China
[email protected], [email protected]
² Communications Research Group, Department of Electronic and Electrical Engineering, University of Sheffield, Sheffield S1 4ET, UK
[email protected]

Abstract. Directional modulation (DM) has been applied to linear antenna arrays to increase the security of signal transmission. However, only the azimuth angle is considered in such designs, due to the inherent limitation of the linear array structure, which lacks the ability to scan in three-dimensional (3-D) space. To solve this problem, planar antenna arrays are introduced in the design, where both the elevation angle and the azimuth angle are considered. Moreover, a magnitude constraint on the weight coefficients is introduced. Design examples are provided to verify the effectiveness of the proposed design.

Keywords: Directional modulation · Magnitude constraint · Planar antenna array

1 Introduction

Directional modulation (DM) as a physical layer security technique was introduced to keep known constellation mappings in a desired direction or directions, while scrambling them for the remaining ones [1]. In [2], a four-element reconfigurable array was designed, and the DM design can be achieved by changing the active elements for each symbol. Then, a genetic algorithm based on a phased antenna array was introduced to DM [3], where the same carrier frequency is used for all antennas. By changing the weight coefficients properly for each symbol, DM can be achieved, and its low bit error rate (BER) range is narrower than that of a traditional beamforming design. In [4], directional antennas were used in the design to replace isotropic antennas, and the provided examples show that a narrower low-BER range is achieved. Moreover, to solve the problem that both the eavesdroppers and the desired users will receive the same signal when they


are in the same direction from the antenna array, two positional modulation (PM) schemes were proposed. One introduces a reflecting surface [5], where the multipath effect is exploited and signals via both the line of sight (LOS) and the reflected paths are combined at the receiver side. The other uses multiple antenna arrays [6], and the principle of the design is that an eavesdropper located in the same direction as the desired user for one antenna array may not be in the same direction for another antenna array. To increase the capacity of the DM system, a multi-carrier based phased antenna array design was proposed, employing an inverse discrete Fourier transform (IDFT) structure [7,8]. Another solution is to use a crossed-dipole antenna array as the transmitter [9], where DM and polarisation are combined in the proposed design. A method named dual-beam DM was introduced in [10]. Different from the traditional design, where the in-phase and quadrature (IQ) components of the signals are transmitted by the same antenna, the dual-beam DM design transmits these two components by different antennas. In [11], the BER was employed for DM transmitter synthesis by linking the BER performance to the settings of the phase shifters. A pattern synthesis approach was presented in [12,13], where the information pattern and interference patterns are created together to achieve DM, followed by an artificial-noise-aided zero-forcing synthesis approach in [14] and a multi-relay design in [15]. An eight-element time-modulated antenna array with constant instantaneous directivity in desired directions was proposed in [16]. The main idea of that design is that the array transmits signals without time modulation in the desired direction, while transmitting time-modulated signals in other directions. Recently, the introduction of artificial noise has further advanced directional modulation technology. Artificial noise (AN) can be divided into 'static' AN and 'dynamic' AN. Static AN means that the introduced AN vector is fixed, so that the constellation points of the received signal in undesired directions do not change with time; as a result, after a long period of observation, it is possible for eavesdroppers to crack the received signal. To solve this problem, 'dynamic' AN is introduced, where the AN vector is continuously updated and the constellation of the signal in the undesired directions changes constantly, increasing the difficulty for eavesdroppers to decode the signal correctly. For the construction of AN, two methods were introduced. One is the orthogonal vector method [17,18], where the added AN vector is orthogonal to the steering vector of the desired direction. The other is the AN projection matrix method [19,20], where, by designing an artificial-noise projection matrix, the AN vector is projected into the null space of the derivative of the desired direction. However, to the best of our knowledge, almost all existing studies focus on one-dimensional DM, which is normally achieved using linear antenna arrays, and these designs lack the ability to scan in the 3-D space. For effective DM in the 3-D space, in this work we introduce a planar antenna array based design for two-dimensional DM, where both the elevation angle and the azimuth angle are studied.


The remaining part of this paper is structured as follows. A review of planar antenna array based beamforming is given in Sect. 2. DM design for a uniform planar antenna array with the corresponding formulations is presented in Sect. 3. In Sect. 4, design examples are provided, with conclusions drawn in Sect. 5.

2 Review of Planar Antenna Array Based Beamforming

A narrowband planar antenna array for transmit beamforming is shown in Fig. 1, which consists of $N$ equally spaced omni-directional antennas along the x-axis and $K$ equally spaced omni-directional antennas along the y-axis. The spacings from the first antenna to its subsequent antennas along the x-axis and y-axis are represented by $d_{x,n}$ and $d_{y,k}$, respectively, for $n = 0, \ldots, N-1$ and $k = 0, \ldots, K-1$. The elevation angle is $\theta \in [0^\circ, 180^\circ]$ and the azimuth angle is $\phi \in [-180^\circ, 180^\circ]$. The weight coefficient for the antenna at the $n$-th position of the x-axis and the $k$-th position of the y-axis is denoted by $w_{n,k}$ ($n = 0, \ldots, N-1$ and $k = 0, \ldots, K-1$). The steering vector of the array, as a function of angular frequency $\omega$, elevation angle $\theta$ and azimuth angle $\phi$, is given by

$$\mathbf{s}(\omega,\theta,\phi) = \big[1,\, e^{j\omega(d_{x,0}\sin\theta\cos\phi + d_{y,0}\sin\theta\sin\phi)/c},\, \ldots,\, e^{j\omega(d_{x,0}\sin\theta\cos\phi + d_{y,K-1}\sin\theta\sin\phi)/c},\, \ldots,\, e^{j\omega(d_{x,N-1}\sin\theta\cos\phi + d_{y,K-1}\sin\theta\sin\phi)/c}\big]^T, \qquad (1)$$

where $\{\cdot\}^T$ is the transpose operation and $c$ is the speed of propagation. For a uniform planar array (UPA) with half-wavelength spacing ($d_{x,n} - d_{x,n-1} = \lambda/2$ and $d_{y,k} - d_{y,k-1} = \lambda/2$), the steering vector of the UPA is

$$\mathbf{s}(\omega,\theta,\phi) = \big[1,\, e^{j\pi(\sin\theta\cos\phi + \sin\theta\sin\phi)},\, \ldots,\, e^{j\pi(\sin\theta\cos\phi + (K-1)\sin\theta\sin\phi)},\, \ldots,\, e^{j\pi((N-1)\sin\theta\cos\phi + (K-1)\sin\theta\sin\phi)}\big]^T. \qquad (2)$$

All weight coefficients can be put together to form a vector

$$\mathbf{w} = [w_{x_0,y_0}, w_{x_0,y_1}, \ldots, w_{x_0,y_{K-1}}, \ldots, w_{x_{N-1},y_{K-1}}]^T. \qquad (3)$$

Then the beam response of the array is given by

$$p(\omega,\theta,\phi) = \mathbf{w}^H \mathbf{s}(\omega,\theta,\phi), \qquad (4)$$

where $\{\cdot\}^H$ represents the Hermitian transpose.
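The following NumPy sketch evaluates the half-wavelength UPA steering vector of (2) and the beam response of (4), assuming the standard element ordering in which the y-axis index varies fastest (matching the weight ordering in (3)); names and interfaces are our own.

```python
import numpy as np

def upa_steering(N, K, theta, phi):
    """Half-wavelength UPA steering vector, Eq. (2); angles in radians."""
    n = np.arange(N)[:, None]                     # x-axis antenna index
    k = np.arange(K)[None, :]                     # y-axis antenna index
    phase = np.pi * (n * np.sin(theta) * np.cos(phi) + k * np.sin(theta) * np.sin(phi))
    return np.exp(1j * phase).reshape(-1)         # length N*K vector, y-index fastest

def beam_response(w, theta, phi, N, K):
    """Beam response p = w^H s of Eq. (4)."""
    return np.vdot(w, upa_steering(N, K, theta, phi))
```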

3 DM Design for the Uniform Planar Antenna Array

The aim of DM design is to keep the received signal following known constellation mappings in a desired direction or directions, while scrambling the phase and making the magnitude as low as possible in the remaining directions. The method to achieve DM is to find the corresponding weight vector for each symbol. For M-ary signaling, such as multiple phase shift keying (MPSK), we assume the corresponding weight vector is given by

$$\mathbf{w}_m = [w_{m,x_0,y_0}, w_{m,x_0,y_1}, \ldots, w_{m,x_0,y_{K-1}}, \ldots, w_{m,x_{N-1},y_{K-1}}]^T, \qquad (5)$$


Fig. 1. An equally spaced planar array.

$m = 0, \ldots, M-1$. The desired response $p_m(\omega,\theta,\phi)$ for the $m$-th constellation point, as a function of $\theta$ and $\phi$, is split into two regions: the mainlobe response and the sidelobe response, represented by $\mathbf{p}_{m,ML}$ and $\mathbf{p}_{m,SL}$, respectively. Without loss of generality, we assume there are $R$ elevation angles sampled for each azimuth angle $\phi_v$ ($v = 0, 1, \ldots, V-1$), and the desired directions in the 3-D space are $\theta_0, \theta_1, \ldots, \theta_{r-1}$ at azimuth $\phi_0$. Then, we have

$$\begin{aligned} \mathbf{p}_{m,ML} &= [p_m(\omega,\theta_0,\phi_0), p_m(\omega,\theta_1,\phi_0), \ldots, p_m(\omega,\theta_{r-1},\phi_0)], \\ \mathbf{p}_{m,SL} &= [p_m(\omega,\theta_r,\phi_0), p_m(\omega,\theta_{r+1},\phi_0), \ldots, p_m(\omega,\theta_{R-1},\phi_0), p_m(\omega,\theta_0,\phi_1), \ldots, p_m(\omega,\theta_{R-1},\phi_1), \ldots, p_m(\omega,\theta_{R-1},\phi_{V-1})]. \end{aligned} \qquad (6)$$

As shown in (1), the steering vector of the array for fixed $\theta$ and $\phi$ is the same for all $M$ constellation points. Therefore, we have a steering matrix $\mathbf{S}_{SL}$ for the sidelobe regions and $\mathbf{S}_{ML}$ for the mainlobe directions,

$$\begin{aligned} \mathbf{S}_{ML} &= [\mathbf{s}(\omega,\theta_0,\phi_0), \mathbf{s}(\omega,\theta_1,\phi_0), \ldots, \mathbf{s}(\omega,\theta_{r-1},\phi_0)], \\ \mathbf{S}_{SL} &= [\mathbf{s}(\omega,\theta_r,\phi_0), \mathbf{s}(\omega,\theta_{r+1},\phi_0), \ldots, \mathbf{s}(\omega,\theta_{R-1},\phi_0), \mathbf{s}(\omega,\theta_0,\phi_1), \ldots, \mathbf{s}(\omega,\theta_{R-1},\phi_1), \ldots, \mathbf{s}(\omega,\theta_{R-1},\phi_{V-1})]. \end{aligned} \qquad (7)$$


For the $m$-th constellation point, its corresponding weight coefficients can be obtained by solving the following linearly constrained optimisation problem

$$\begin{aligned} \min \quad & \|\mathbf{p}_{m,SL} - \mathbf{w}_m^H \mathbf{S}_{SL}\|_2 \\ \text{subject to} \quad & \mathbf{w}_m^H \mathbf{S}_{ML} = \mathbf{p}_{m,ML}, \end{aligned} \qquad (8)$$

where $\|\cdot\|_2$ denotes the $l_2$ norm. The cost function in (8) keeps the difference between the desired and designed sidelobe responses to a minimum, and the equality constraint makes sure that the response in the mainlobe directions exactly takes the specified constellation values. Here, we set the desired phase response in the sidelobe regions randomly and the beam responses as low as possible ($\mathbf{p}_{m,SL}$) to keep the received signal scrambled in the IQ complex plane. Moreover, to restrain the maximum magnitude of the weight coefficients, we introduce the constraint

$$\|\mathbf{w}_m\|_\infty \le \beta, \qquad (9)$$

where $\|\cdot\|_\infty$ represents the $l_\infty$ norm, and $\beta$ is the pre-defined maximum value for the weight coefficients. Therefore, the DM design with the weight coefficient magnitude constraint is given by

$$\begin{aligned} \min \quad & \|\mathbf{p}_{m,SL} - \mathbf{w}_m^H \mathbf{S}_{SL}\|_2 \\ \text{subject to} \quad & \mathbf{w}_m^H \mathbf{S}_{ML} = \mathbf{p}_{m,ML}, \\ & \|\mathbf{w}_m\|_\infty \le \beta. \end{aligned} \qquad (10)$$

The above problem can be solved using CVX in MATLAB, a package for specifying and solving convex problems [21,22].
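The paper solves (10) with CVX in MATLAB; as an analogous sketch, the CVXPY formulation below sets up the same convex problem for one constellation point. The function name and the particular way the magnitude bound is expressed are our own choices.

```python
import numpy as np
import cvxpy as cp

def dm_weights(S_ML, S_SL, p_ML, p_SL, beta):
    """Solve problem (10) for one constellation point: match the mainlobe
    responses exactly, keep the sidelobe response close to the low-magnitude
    random-phase target, and bound every weight magnitude by beta."""
    n = S_ML.shape[0]                                  # number of antennas, N*K
    w = cp.Variable(n, complex=True)
    resp_SL = cp.conj(w) @ S_SL                        # w^H S_SL
    constraints = [cp.conj(w) @ S_ML == p_ML,          # mainlobe equality constraints
                   cp.max(cp.abs(w)) <= beta]          # ||w||_inf <= beta
    prob = cp.Problem(cp.Minimize(cp.norm(p_SL - resp_SL, 2)), constraints)
    prob.solve()
    return w.value
```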

4 Design Examples

In this section, we consider an N × K = 21 × 20 uniform planar antenna array with a half wavelength spacing between adjacent antennas. Without loss


Fig. 2. Resultant beam responses based on the UPA design in (10).



Fig. 3. Resultant phase responses based on the UPA design in (10).

of generality, the desired elevation angle is $0^\circ$ and the azimuth angle is $\phi = 90^\circ$. The sidelobe regions are $\theta_{SL} \in [5^\circ, 90^\circ]$ for $\phi = \pm 90^\circ$. The desired response in the mainlobe direction is a value of one (magnitude) with $90^\circ$ phase shifts (QPSK), i.e.,

$$\frac{\sqrt{2}}{2} + i\frac{\sqrt{2}}{2},\ -\frac{\sqrt{2}}{2} + i\frac{\sqrt{2}}{2},\ -\frac{\sqrt{2}}{2} - i\frac{\sqrt{2}}{2},\ \frac{\sqrt{2}}{2} - i\frac{\sqrt{2}}{2} \qquad (11)$$

for symbols '00', '01', '11', '10', and a value of 0.1 (magnitude) with random phase shifts over the sidelobe regions. The maximum value of the weight coefficients is $\beta = 0.1$. The bit error rate (BER) is calculated based on which quadrant of the IQ complex plane the received signal lies in,

$$\mathrm{BER} = \frac{\text{Error bits}}{\text{Total number of bits}}. \qquad (12)$$
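A minimal sketch of the quadrant decision behind (12), assuming the transmitted bits are stored as an $(L,2)$ array of 0/1 values with the Gray mapping of (11):

```python
import numpy as np

def qpsk_ber(received, bits):
    """Quadrant-based BER of Eq. (12) for the mapping of Eq. (11):
    00 -> (+,+), 01 -> (-,+), 11 -> (-,-), 10 -> (+,-)."""
    est = np.empty_like(bits)
    est[:, 0] = np.imag(received) < 0      # first bit flips the sign of the imaginary part
    est[:, 1] = np.real(received) < 0      # second bit flips the sign of the real part
    return np.mean(est != bits)            # error bits / total number of bits
```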


Fig. 4. BER based on the UPA design in (10).


Here the signal-to-noise ratio (SNR) is set to 12 dB in the mainlobe direction; with unit average power of all $10^6$ randomly generated transmitted bits in the mainlobe, the noise variance is $\sigma^2 = 0.0631$. We also assume that the additive white Gaussian noise (AWGN) level is the same for all directions, and a random noise with this power level is generated for each direction. The resultant beam pattern of (10) for each constellation point is shown in Fig. 2. Here we can see that all main beams point exactly to the desired direction $0^\circ$ with a low sidelobe level. As shown in Fig. 3, the phases in the desired direction $0^\circ$ are spaced $90^\circ$ apart, i.e., $45^\circ$, $135^\circ$, $-135^\circ$ and $-45^\circ$ for symbols '00', '01', '11' and '10', respectively, whereas the phase is random in the sidelobe directions. Moreover, Fig. 4 shows the BER for all transmission angles. It can be seen that in the desired direction the BER is down to $10^{-5}$, while it is around 0.5 in other directions, further demonstrating the effectiveness of the design.

5 Conclusions

Directional modulation has been applied to uniform planar antenna arrays for the first time, and two-dimensional directional modulation has been achieved effectively by the proposed design method. As shown in the provided design examples, the mainlobe points to the desired direction with a low power level in the remaining directions; simultaneously, the transmitted signal's phase in the desired direction follows the required constellations, whereas its values in other directions are scrambled. The BER result shows that the BER in the desired direction is the lowest, while in other directions it is about 0.5, indicating that it would be extremely difficult for eavesdroppers located in these regions to crack the information.

Acknowledgements. This work was supported by the Funding Program of Tianjin Higher Education Creative Team. The authors acknowledge the Natural Science Foundation of Tianjin City (18JCYBJC86000) and the Science & Technology Development Fund of Tianjin Education Commission for Higher Education (2018KJ153) for funding this work. C.W. acknowledges the Distinguished Young Talent Recruitment Program of Tianjin Normal University (011/5RL153).

References
1. Babakhani A, Rutledge DB, Hajimiri A (2009) Near-field direct antenna modulation. IEEE Microw Mag 10(1):36–46
2. Daly MP, Bernhard JT (2010) Beamsteering in pattern reconfigurable arrays using directional modulation. IEEE Trans Antennas Propag 58(7):2259–2265
3. Daly MP, Bernhard JT (2009) Directional modulation technique for phased arrays. IEEE Trans Antennas Propag 57(9):2633–2640
4. Shi HZ, Tennant A (2013) Enhancing the security of communication via directly modulated antenna arrays. IET Microw Antennas Propag 7(8):606–611


5. Zhang B, Liu W (2018) Antenna array based positional modulation with a two-ray multi-path model. In: Proceedings of the sensor array and multichannel signal processing workshop 2018 (SAM 2018), Sheffield, pp 203–207
6. Zhang B, Liu W (2019) Positional modulation design based on multiple phased antenna arrays. IEEE Access 7:33898–33905
7. Zhang B, Liu W (2018) Multi-carrier based phased antenna array design for directional modulation. IET Microw Antennas Propag 12(5):765–772
8. Zhang B, Liu W, Li Q (2019) Multi-carrier waveform design for directional modulation under peak to average power ratio constraint. IEEE Access 1–1
9. Zhang B, Liu W, Lan X (2019) Orthogonally polarized dual-channel directional modulation based on crossed-dipole arrays. IEEE Access 7:34198–34206
10. Hong T, Song MZ, Liu Y (2011) Dual-beam directional modulation technique for physical-layer secure communication. IEEE Antennas Wirel Propag Lett 10:1417–1420
11. Ding Y, Fusco V (2013) Directional modulation transmitter synthesis using particle swarm optimization. In: Proceedings of the Loughborough antennas and propagation conference, Loughborough, pp 500–503
12. Ding Y, Fusco V (2013) Directional modulation transmitter radiation pattern considerations. IET Microw Antennas Propag 7(15):1201–1206
13. Ding Y, Fusco V (2015) Directional modulation far-field pattern separation synthesis approach. IET Microw Antennas Propag 9(1):41–48
14. Xie T, Zhu J, Li Y (2017) Artificial-noise-aided zero-forcing synthesis approach for secure multi-beam directional modulation. IEEE Commun Lett PP(99):1–1
15. Zhu W, Shu F, Liu T, Zhou X, Hu J, Liu G, Gui L, Li J, Lu J (2017) Secure precise transmission with multi-relay-aided directional modulation. In: 2017 9th international conference on wireless communications and signal processing (WCSP), pp 1–5
16. Zhu QJ, Yang SW, Yao RL, Nie ZP (2014) Directional modulation based on 4-D antenna arrays. IEEE Trans Antennas Propag 62(2):621–628
17. Ding Y, Fusco V (2014) A vector approach for the analysis and synthesis of directional modulation transmitters. IEEE Trans Antennas Propag 62(1):361–370
18. Ding Y, Fusco V (2014) Vector representation of directional modulation transmitters. In: The 8th European conference on antennas and propagation (EuCAP 2014), pp 367–371
19. Hu J, Shu F, Li J (2016) Robust synthesis method for secure directional modulation with imperfect direction angle. IEEE Commun Lett 20(6):1084–1087
20. Hu J, Yan S, Shu F, Wang J, Li J, Zhang Y (2017) Artificial-noise-aided secure transmission with directional modulation based on random frequency diverse arrays. IEEE Access 5:1658–1667
21. Grant M, Boyd S (2008) Graph implementations for nonsmooth convex programs. In: Blondel V, Boyd S, Kimura H (eds) Recent advances in learning and control. Lecture notes in control and information sciences. Springer, pp 95–110. http://stanford.edu/~boyd/graph_dcp.html
22. CVX Research (2012) CVX: Matlab software for disciplined convex programming, version 2.0 beta. http://cvxr.com/cvx

Capturing the Sparsity for Massive MIMO Channel with Approximate Message Passing

Xudong Han¹,³, Shun Zhang¹,³, Anteneh Mohammed¹,³, Weile Zhang², Nan Zhao¹,³, and Yuantao Gu⁴

¹ Xidian University, Xi'an 710071, People's Republic of China
xdhan [email protected], [email protected], [email protected], [email protected]
² Xi'an Jiaotong University, Xi'an 710049, People's Republic of China
[email protected]
³ Dalian University of Technology, Dalian 116024, People's Republic of China
⁴ Tsinghua University, Beijing 100084, People's Republic of China
[email protected]

Abstract. In this work, we propose a low-overhead characteristic learning mechanism for time-varying massive MIMO channels. Specifically, we exploit the common sparsity and temporal correlation of the channel. Firstly, using the VCR and modeling the temporal correlation as an autoregressive process, we formulate the dynamic massive MIMO channel as a sparse signal model. Then, an expectation maximization (EM) based sparse Bayesian learning (SBL) framework is developed to learn the model parameters. To obtain the posteriors of the model parameters, approximate message passing (AMP) is utilized in the expectation step. Finally, we demonstrate the performance through numerical simulations.

Keywords: Massive MIMO · Sparse Bayesian learning · Expectation maximization · Approximate message passing

1 Introduction

Due to its enhanced capacity and energy efficiency, massive MIMO is a promising technology for future-generation wireless cellular networks [1]. However, harvesting its full benefit requires accurate channel state information (CSI). In the frequency-division duplex (FDD) scenario, this would lead to huge overhead and lessen the possible improvement. Fortunately, under some scenarios, the massive MIMO channel exhibits sparsity [2]. In [3], Wen et al. proposed a channel estimation scheme based on sparse Bayesian learning methods for a multicell environment to reduce the pilot contamination. In [4], Gao et al. designed an adaptive channel estimation and feedback scheme with low overhead to exploit the spatially common sparsity and


temporal correlation in the channel. Our previous work [5] proposed a scheme for time-varying massive MIMO channel tracking. However, the channel powers are closely related to the carrier frequency and cannot be perfectly inferred from the UL ones. This paper further exploits the spatially common sparsity and temporal correlation to design an effective channel estimation algorithm for FDD massive MIMO networks. Specifically, with the virtual channel representation, the time-varying massive MIMO channel is reformulated. Furthermore, a dynamical state space (DSS) model is constructed to capture the channel's temporal correlation. Then, we design an expectation maximization (EM) based sparse Bayesian learning (SBL) framework to capture the unknown parameters, where the powerful approximate message passing (AMP) algorithm is utilized to track the posterior in the expectation step.

2 System Model

In this work, we consider a single-cell massive MIMO system, where the BS is equipped with an $N_v \times N_h$ antenna array in the form of a uniform rectangular array (URA). $K$ single-antenna users are randomly distributed in the coverage area. We assume that the channel is quasi-static during a block of $L_c$ channel uses and changes from block to block. During the $m$-th time block, the physical DL channel from the BS to the $k$-th user can be written as

$$\mathbf{H}_{k,m} = \int_{-\infty}^{+\infty}\int_{\theta_k^{\min}}^{\theta_k^{\max}}\int_{\varphi_k^{\min}}^{\varphi_k^{\max}} \mathbf{A}(\theta,\varphi)\, e^{j2\pi\nu m L T_s}\, \varrho_k(\theta,\varphi,\nu)\, d\varphi\, d\theta\, d\nu, \qquad (1)$$


where $\varrho_k(\theta,\varphi,\nu)$ denotes the joint angle-Doppler channel gain function of the $k$-th user corresponding to the Doppler frequency $\nu$ and the vertical and horizontal directions of departure (DOD) $\theta$ and $\varphi$, and $\frac{1}{T_s}$ is the system sampling rate. Moreover, $\mathbf{A}(\theta,\varphi)$ denotes the BS's array response matrix with respect to the elevation angle $\theta$ and azimuth angle $\varphi$, and can be defined as

$$\mathbf{A}(\theta,\varphi) = \begin{bmatrix} 1 & e^{jh} & \cdots & e^{j(N_h-1)h} \\ e^{jv} & e^{j[v+h]} & \cdots & e^{j[v+(N_h-1)h]} \\ \vdots & \vdots & \ddots & \vdots \\ e^{j(N_v-1)v} & e^{j[(N_v-1)v+h]} & \cdots & e^{j[(N_v-1)v+(N_h-1)h]} \end{bmatrix}, \qquad (2)$$

k,m = FT ⊗ to facilitate the operation, we rewrite the virtual channel as h Nh

216

X. Han et al.



FNv vec Hk,m . Furthermore, we can adopt the simultaneously sparse signal k,m as [5] model to depict the dynamics of h k,m =diag(ck )rk,m , h (3) rk,m =αk rk,m−1 + υ k,m , where rk,m ∈ C Nh Nv ×1 represents one time-varying Gaussian Markov random processes, αk is the transmission factor, υ k,m ∼ CN (0, Λk ) is the noise vector, and the diagonal matrix Λk = diag([λ2k,1 , λ2k,2 , . . . , λ2k,Nh Nv ]). The spatial signature vector ck can be determined by the set Qk = {(p, q)| Nv λd (cos θk )min  ≤ p ≤  Nv λd (cos θk )max ,  Nh λd (sin θk cos ϕk )min  ≤ q ≤ d  Nh λ (sin θk cos ϕk )max , p and q ∈ Z} as 1 if i = (q − 1)Nv + p, (p, q) ∈ Qk , ck,i = (4) 0 otherwise. ck and rk,m can With (3) and (4), the conditional  be writ PDF of hk,m on

 ten as p(hk,m |ck , rk,m ) = i∈Qk δ hk,m,i − rk,m,i i ∈Q / k δ hk,m,i . Further Nv Nh more we can achieve the prior distribution for hk,m as p(hk,m ) = i=1 [(1 −     λ2k,i ρk )δ ]. hk,m,i + ρk CN hk,m,i ; 0, 1−α 2 k From the above equation, we can know that the statistics of the virtual k,m can be achieved through capturing the model parameter set Ξk = channel h {ρk , αk , Λk }.

3

Learning Sparse Virtual Channel Model Parameters Through DL Training

Without loss of generality, we use M channel blocks. During the m-th block, the BS transmit the training matrix Xm of size Nh Nv × P to all the users. Then, within the m-th block, the received training signal at user k can be collected into a P × 1 vector as

H k,m + nk,m , (5) yk,m = XTm FTNh ⊗ FNv h where nk,m ∼ CN (0, σn2 ) noise vector. Obviously, different users can independently learn their prior model parameters and the spatial signature vectors. To simplify the notation, in the following, we will ignore the subscript k. 3.1

Problem Formulation

The objective of learning is to estimate the best fitting parameters set Ξ with the given observation vector y. Theoretically, the ML estimator for Ξ can be formulated as  = arg Ξ

max

1≥α≥0, λi ≥0, 1≥ρ≥0

ln p(y; Ξ).

(6)

Capturing the Sparsity for Massive MIMO Channel

217

and Obviously, such estimator involves all possible combinations of the h is not feasible to directly achieve the ML solution due to its high dimensional search. Nonetheless, one alternative method is to search the solution iteratively via the classical EM algorithm. 3.2

Expectation Step

Since the received samples y are known, with some calculations, the objective

 (l−1) can be expressed as function Q Ξ, Ξ  Q

 Ξ, Ξ

(l−1)

 =

M  

 2α tr

m=2 2

−α tr +



−1

Λ

N  



E

1− E

  

−1

Λ



E



H

 rm−1 rm y, Ξ  

 rm−1 rm−1 y, Ξ H

 ci |y, Ξ

(l−1)



(l−1)

(l−1)





−tr

ln (1 −ρ) + E



Λ 1 − α2

 ci |y, Ξ

−1

(l−1)

  E r1 rH 1 

 ln ρ

i=1



M 



2

ln Λ + ln 1 − α

+ C1 ,

(7)

m=1

where C1 is the items not related with Ξ.    (l−1) is dependent on four posFrom (7) it can be found that Q Ξ, Ξ terior statistics. Now we turn to the calculations of these  terms. Before    (l−1) , calculating posterior statistics, let us define ˆ c(l−1) = E c|y, Ξ      (l−1)  −1)   (l  (l−1)  (l−1) , Θ  (l−1) , and Π ˆ rm = E rm |y, Ξ = E rm rH m m |yk ,Ξ m−1,m =    (l−1) for further use. E rm−1 rH m |y,Ξ 3.3

Deriving the Posterior Statistics with AMP

 (l−1) , our objective is to infer the posterior statistics ˆ With given y and Ξ c(l−1) , (l−1)  (l−1) (l−1)  ˆ rm , Θm , and Π m−1,m under the state-space model: ˆ (l−1) rm−1 + υ m , rm = α

ym = XTm F∗Nh ⊗ FH Nv diag(c)rm + nm , m = 1, 2, . . . , M,   

(8) (9)

Bm

where υ m = [υ T1,m , υ T2,m , . . . , υ Tτ,m ]T ∼ CN (0, Λ(l−1) ), and the elements of the sparse vectors are i.i.d. Bernoulli distributed with the parameter ρˆ(l−1) . As it is intractable to directly compute the required posterior statistics, we will resort to the factor graph and message passing algorithms. First, the posterior joint

218

X. Han et al.

PDF can be factorized as 

r|y; Ξ p c, h,

 (l−1)



P M N       m ∝ p ym,p |h p hm,i |ci , rm,i m=1 p=1

i=1

N      (l−1) ˆ (l−1) ˆ , λi p ci ; ρˆ(l−1) , p rm,i |rm,i−1 ; α

(10)

i=1

 r|y; Ξ  (l−1) ) Fig. 1. The factor graph representation of p(c, h,

m) = CN(ym,p ; [Bm ]p,: h m , σ 2), p( where p(ym,p |h hm,i |ci , rm,i) = δ( hm,i − ci rm,i ), n (l−1) ˆ (l−1) (l−1) ˆ (l−1) )2 ), and p(ci ; ρˆ(l−1) ) = ˆ , λi ) = CN(rm,i ; α ˆ rm−1,i , (λ p(rm,i |rm−1,i ; α i 1−ci (l−1) ci

ρˆ . 1 − ρˆ(l−1) r|y; Ξ  (l−1) ) can be denoted with a factor graph, as shown in Fig. 1, Then, p(c, h, Due to the belief cycles, BP cannot be directly applied for Fig. 1. Nonetheless, the proper message scheduling and approximate belief propagation algorithms can be adopted to effectively approximate the posterior distribution within the given allowable iterations [7]. Specially, the message scheduling can be implemented through four steps, i.e., the message passing into the time block m, the message exchanging within the time blockm,themessageflowingoutofthetimeblock m,andthemessage exchanging between the adjacent time blocks. 3.4

Maximization Step

 (l) . Due to the uncou (l−1) ) we will derive Ξ In this step, by maximizing Q(Ξ, Ξ pled structure in (7), we can break down the maximization into two subproblems: N      (l) (l−1) (l−1) 1−ˆ c ln (1 − ρ) + ˆ c ln ρ (11) ρ = arg max ρ

  α(l) ,Λ(l) = argmax α,Λ



i=1

 M   −1  (l−1) 2 −1  (l−1) 2α tr( {Λ Πm−1,m })−α tr(Λ Θm−1 )

m=2

Capturing the Sparsity for Massive MIMO Channel

219

  M  −1 M  



Λ (l−1) 2 2 . (12) − ln Λ+ln 1−α − −tr Θ1 ln Λ+ln 1−α 1−α2 m=1 m=1 Taking the derivatives of (11) and (12) with respect to ρ(l) , α(l) and Λ(l) , respectively, and equating them to zero, we can obtain the estimation of these parameters in the l-th iteration. Furthermore, the related posterior characteristics in (11)–(12) can be derived from the achieved results in the expectation step as follows.

Furthermore, the related posterior statistics in (11)–(12) can be derived from the results obtained in the expectation step as follows:

$$\hat{c}_i^{(l-1)}=\frac{\hat{\rho}^{(l-1)}\prod_{m=1}^{M}\pi_{f^B_{m,i}\to c_i}}{\hat{\rho}^{(l-1)}\prod_{m=1}^{M}\pi_{f^B_{m,i}\to c_i}+\big(1-\hat{\rho}^{(l-1)}\big)\prod_{m=1}^{M}\big(1-\pi_{f^B_{m,i}\to c_i}\big)},\qquad(13)$$

$$\hat{r}_{m,i}^{(l-1)}=\Bigg(\frac{1}{\nu_{f^C_{m+1,i}\to r_{m,i}}}+\frac{1}{\nu_{f^B_{m,i}\to r_{m,i}}}+\frac{1}{\nu_{f^C_{m,i}\to r_{m,i}}}\Bigg)^{-1}\times\Bigg(\frac{\mu_{f^C_{m+1,i}\to r_{m,i}}}{\nu_{f^C_{m+1,i}\to r_{m,i}}}+\frac{\mu_{f^B_{m,i}\to r_{m,i}}}{\nu_{f^B_{m,i}\to r_{m,i}}}+\frac{\mu_{f^C_{m,i}\to r_{m,i}}}{\nu_{f^C_{m,i}\to r_{m,i}}}\Bigg),\qquad(14)$$

$$\big[\hat{\Theta}_m^{(l-1)}\big]_{i,i}=\Bigg(\frac{1}{\nu_{f^C_{m+1,i}\to r_{m,i}}}+\frac{1}{\nu_{f^B_{m,i}\to r_{m,i}}}+\frac{1}{\nu_{f^C_{m,i}\to r_{m,i}}}\Bigg)^{-1}+\big|\hat{r}_{m,i}^{(l-1)}\big|^2,\qquad(15)$$

$$\big[\hat{\Pi}_{m-1,m}^{(l-1)}\big]_{i,i}=\frac{\hat{\alpha}^{(l-1)}\nu_{r_{m,i}\to f^C_{m,i}}\big(\bar{\nu}_{r_{m-1,i}}+\bar{\mu}_{r_{m-1,i}}^2\big)+\big(\hat{\lambda}_i^{(l-1)}\big)^2\bar{\mu}_{r_{m-1,i}}\,\mu_{r_{m,i}\to f^C_{m,i}}}{\nu_{r_{m,i}\to f^C_{m,i}}+\big(\hat{\lambda}_i^{(l-1)}\big)^2},\qquad(16)$$

where μ̄_{r_{m−1,i}}, ν̄_{r_{m−1,i}}, μ_{r_{m,i}}, and ν_{r_{m,i}} are the combined means and variances obtained from the Gaussian messages μ_{r_{m−1,i}→f^C_{m,i}}, ν_{r_{m−1,i}→f^C_{m,i}}, μ_{r_{m,i}→f^C_{m,i}}, and ν_{r_{m,i}→f^C_{m,i}} through the standard Gaussian message-combining rules.

To clearly present the main ideas of the EM-based parameter learning, we summarize the corresponding procedures in Algorithm 1 and Algorithm 2.

4 Simulation Results

In this section, we evaluate the performance of our proposed algorithms numerically. The antenna array size is N_h × N_v = 32 × 32. The signal-to-noise ratio (SNR) is defined as SNR = 10 log10(σ_p²/σ_n²) dB, and the variance of r_{k,m} in (3) is set as λ_i²/(1 − α²) = 1. We use the normalized MSE for both the model parameters and the virtual channel, defined as MSE_x = E[‖x̂ − x‖²/‖x‖²].


Algorithm 1 Parameter learning
1: Input: σ_n², y, B_m, N, M, and the maximum number of EM iterations L_max.
2: Output: Ξ̂ = {α̂, ρ̂, Λ̂}.
3: Initialize: for all m and i, α̂^(0) = 1, ρ̂^(0) = 0.25, [Λ̂]_{i,i} = 0.01, μ_{f^C_{1,i}→r_{1,i}} = 0, ν_{f^C_{1,i}→r_{1,i}} = 1, π_{c_i→f^C_{m,i}} = ρ̂^(0), μ_{f^C_{m+1,i}→r_{m,i}} = 0, ν_{f^C_{m+1,i}→r_{m,i}} = ∞, π_{f^B_{m,i}→c_i} = 0.5.
4: for l = 1, 2, ..., L_max do
5:   for m = 1, 2, ..., M do
6:     Pass the beliefs about c_i and r_{m,i} into the m-th time block, i.e., combine the messages π_{f^B_{m',i}→c_i} (m' ≠ m) into π_{c_i→f^B_{m,i}} and the Gaussian messages from f^C_{m,i} and f^C_{m+1,i} into (μ_{r_{m,i}→f^B_{m,i}}, ν_{r_{m,i}→f^B_{m,i}}).
7:     Run Algorithm 2.
8:     Pass the beliefs about c_i and r_{m,i} out of the m-th time block, i.e., update π_{f^B_{m,i}→c_i} and the message from f^B_{m,i} to r_{m,i} using the AMP outputs φ_{m,i}^(J) and χ̄_m^(J).
9:     Perform the forward inter-block message passing to obtain (μ_{f^C_{m+1,i}→r_{m+1,i}}, ν_{f^C_{m+1,i}→r_{m+1,i}}).
10:  end for
11:  for m = M − 1, M − 2, ..., 1 do
12:    Perform the backward inter-block message passing to obtain (μ_{f^C_{m+1,i}→r_{m,i}}, ν_{f^C_{m+1,i}→r_{m,i}}).
13:    Perform steps 6–8.
14:  end for
15:  Acquire the posteriors and the parameters: ĉ_i^(l−1), r̂_{m,i}^(l−1), [Θ̂_m]_{i,i}, [Π̂_{m−1,m}]_{i,i} ← (13)–(16); ρ̂^(l), α̂^(l), [Λ̂]^(l)_{i,i} ← (11)–(12).
16: end for
(The closed-form expressions of the individual message updates in steps 6, 8, 9, and 12 follow from the Gaussian and Bernoulli message-combining rules on the factor graph in Fig. 1.)


Algorithm 2 AMP algorithm
1: Input: the maximum number of AMP iterations J.
2: Initialize: ∀p: κ̄_{m,p}^(1) = y_{m,p}, μ_{h_{m,i}}^(1) = 0, χ̄_m^(1) = Σ_{i=1}^N ν_{r_{m,i}→f^B_{m,i}}.
3: for j = 1, 2, ..., J do
4:   φ_{m,i}^(j) = Σ_{p=1}^P [B_m]*_{p,i} κ̄_{m,p}^(j) + μ_{h_{m,i}}^(j).
5:   Compute the Bernoulli–Gaussian likelihood ratio τ_{m,i}^(j) from π_{c_i→f^B_{m,i}}, ν_{r_{m,i}→f^B_{m,i}}, φ_{m,i}^(j), and χ̄_m^(j).
6:   Update the posterior mean μ_{h_{m,i}}^(j+1) = [1/(1 + τ_{m,i}^(j))] · ν_{r_{m,i}→f^B_{m,i}} φ_{m,i}^(j)/(ν_{r_{m,i}→f^B_{m,i}} + χ̄_m^(j)).
7:   Update the posterior variance ν_{h_{m,i}}^(j+1) from τ_{m,i}^(j), ν_{r_{m,i}→f^B_{m,i}}, χ̄_m^(j), and μ_{h_{m,i}}^(j+1).
8:   χ̄_m^(j+1) = σ_n² + (1/P) Σ_{i=1}^N ν_{h_{m,i}}^(j+1).
9:   Update the residual κ̄_{m,p}^(j+1) = y_{m,p} − Σ_{i=1}^N [B_m]_{p,i} μ_{h_{m,i}}^(j+1) plus the Onsager correction term computed from κ̄_{m,p}^(j), P, and {ν_{h_{m,i}}}.
10: end for

We set the length of the training time as P = 150, the maximum number of EM iterations as L_max = 5, and the maximum number of AMP iterations as J = 50. Figure 2a gives the MSE performance of the model parameters versus SNR. We can see that the MSE curves decrease as the SNR increases. From Fig. 2a, it can also be found that, as the velocity increases, the MSE of λ decreases while that of α increases. This can be justified as follows. With the increase in velocity, the temporal correlation between different time blocks becomes weaker, so less information is available for estimating α. On the other hand, with the fixed steady-state variance, the real values of the elements in λ become larger, and the effect of noise on the estimation of this parameter becomes weaker. In Fig. 2b, we study the virtual channel estimation within the parameter learning phase at different SNRs. We set M = 10 and present the MSEs of h̃_10. Here, we adopt three values for the ASs, i.e., 5°, 10°, and 15°. From Fig. 2b, it can be seen that the MSE_{h10} curves become higher as the ASs increase, since the larger the ASs are, the more non-zero elements h̃_10 contains.

5 Conclusion

In this paper, we proposed a DL channel tracking scheme for massive MIMO systems. First, we formulated the time-varying channel model with the help of VCR and AR modeling. Then, we developed an EM-algorithm-based SBL framework to learn the model parameters.


Fig. 2. a The MSEs of the model parameters α and λ versus SNR (parameter learning, N = 1024, P = 150). b Performance of AMP-based virtual channel recovery at different ASs (N = 1024, P = 150)

To track the posteriors in the expectation step of the EM algorithm, we applied approximate message passing. Numerical results showed that the proposed scheme achieves a low estimation MSE.

References
1. Jungnickel V, Manolakis K, Zirwas W, Panzner B, Braun V, Lossow M, Sternad M, Apelfrojd R, Svensson T (2014) The role of small cells, coordinated multipoint, and massive MIMO in 5G. IEEE Commun Mag 52(5):44–51
2. Berger C-R, Zhaohui W, Huang J, Zhou S (2010) Application of compressive sensing to sparse channel estimation. IEEE Commun Mag 48(11):164–174
3. Wen C-K, Jin S, Wong K-K, Chen J-C, Ting P (2015) Channel estimation for massive MIMO using Gaussian-mixture Bayesian learning. IEEE Trans Wirel Commun 14(3):1356–1368
4. Gao Z, Dai L, Wang Z, Chen S (2015) Spatially common sparsity based adaptive channel estimation and feedback for FDD massive MIMO. IEEE Trans Signal Process 63(23):6169–6183
5. Ma J, Zhang S, Li H, Gao F, Jin S (2018) Sparse Bayesian learning for the time-varying massive MIMO channels: acquisition and tracking. IEEE Trans Commun
6. Zhang D, Wang X, Xu K, Yang Y, Xie W (2018) Multiuser 3D massive MIMO transmission in full-duplex cellular system. EURASIP J Wirel Commun Netw 2018(1):203
7. Ziniel J, Schniter P (2013) Dynamic compressive sensing of time-varying signals via approximate message passing. IEEE Trans Signal Process 61(21):5270–5284
8. Marzetta TL (2006) How much training is required for multiuser MIMO? In: Fortieth Asilomar conference on signals, systems and computers. IEEE, pp 359–363

An On-Line EMC Test System for Liquid Flow Meters

Haijiao An(&), Xin Shi, and Xigang Wang

Tianjin Institute of Metrological Supervision and Testing, No. 4, Keyanxi Road, Nankai District, Tianjin, China
[email protected]

Abstract. Electromagnetic interference degrades the metrological performance of intelligent flow meters because of their electronic components. Therefore, electromagnetic compatibility (EMC) tests are particularly important to evaluate the performance of flow meters under interference. Related test methods have been presented in some works. This paper proposes an on-line EMC test system for liquid flow meters. By using a compact liquid flowrate standard facility, the system can realize actual flow calibration under electromagnetic interference. Besides, simplicity is another advantage of the proposed system. Finally, contrast experiments are carried out, which reveal that the system has the clear advantage that the variation of the metrological performance of flow meters can be measured during electromagnetic interference.

Keywords: On-line EMC test method · Zero flow method · Actual flow calibration

1 Introduction

With the rapid development of electronic technology, the intelligentization of flow meters has been especially remarkable; however, intelligent flow meters are more susceptible to electromagnetic interference because of their electronic components. Therefore, the electromagnetic compatibility of flow meters, especially water meters, is specified in many technical standards [1–3]. Normally, EMC tests are implemented by the zero flow method or the actual flow calibration method. The zero flow method, which is adopted extensively, can only indicate whether the data-storing function of the meter is acceptable when electromagnetic interference is applied. Unfortunately, the variation of the metrological performance of the flow meter during the interference cannot be measured by this method. To realize actual flow calibration, some laboratories have built special equipment whose pipeline perforates the anechoic chamber: the main components such as the pump are placed outside the chamber, and the meter under test is installed inside it. However, this method may damage the chamber to a certain extent. An on-line EMC test system for liquid flow meters is proposed to avoid the defects of the aforementioned methods. Utilizing a compact liquid flowrate standard facility, the system can realize actual flow calibration under electromagnetic interference, and the variation of the metrological performance can be analyzed.


2 Design of Compact Liquid Flowrate Standard Facility

The compact liquid flowrate standard facility is composed of a water tank, a pump, a surge tank, a standard flow meter, and valves, as shown in Fig. 1. Because of the requirement of high compactness, the components should be designed to be as small as possible.

Fig. 1. Structure diagram of compact liquid flowrate standard facility

2.1 Design of Surge Tank

The surge tank, which buffers pressure fluctuations, can improve the flow stability of the standard facility. The plates placed in the tank and the compressed air inside make the flow smooth and steady [4, 5]. The most popular structure in engineering applications places both horizontal and vertical plates in the tank. A number of circular holes are evenly distributed on the horizontal plates, which reduces the flow velocity in the tank. The vertical plate is solid, which prevents the water from flowing directly from the inlet to the outlet. The flow area of the horizontal plate is generally designed as 5 times the sectional area of the inlet pipe. Because the pipe size is 50 mm, the flow area is 0.0098 m². The total area of the horizontal plate S_h is then 0.012 m² when the flow-area ratio is set to 0.8. The structure diagram for the radial cross section of the surge tank is shown in Fig. 2. There are three horizontal plates in the tank. Considering the mechanical strength and the difficulty of machining, the thickness of the plates is designed as 5 mm. The bore diameters of the plates from bottom to top are 12.5 mm, 8 mm, and 6 mm, respectively, and the hole center distances are 16 mm, 11 mm, and 8 mm, respectively. The vertical plate, which is perpendicular to the inlet pipe, is placed at 1/3 of the diameter of the tank, as shown in Fig. 2.


Fig. 2. Structure diagram for radial cross section of surge tank

The area of the horizontal plate S_h is approximately expressed as

$$S_h=\frac{2}{3}\,a\,b=\frac{4\sqrt{2}}{27}\,D^2.\qquad(1)$$

Then the diameter of the tank D is calculated as 0.24 m, and the width of the vertical plate b is 0.2 m. The structure diagram for the axial cross section of the surge tank is shown in Fig. 3. The height of the tank consists of four parts: the height of the compressed-air space h_s, the height from the lower edge of the compressed-air space to the top of the vertical plate h_v,

Fig. 3. Structure diagram for axial cross section of surge tank


the height from the top of the vertical plate to the center line of the inlet h_i, and the height from the center line of the inlet to the bottom of the tank h_t. The compressed air at the top of the tank buffers the fluctuation of the water level. The volume of this space is variable, but the minimal volume can be obtained from an empirical formula expressed as

$$V_{\min}=\frac{s\,\delta_q\,q_{\max}}{p\,\delta_p},\qquad(2)$$

where s is a time factor, δ_q is the maximal relative fluctuation of the pump's flow, δ_p is the maximal relative fluctuation of the tank's pressure, and q_max is the maximal flow of the standard facility. According to the design requirements, δ_q is not greater than 5% and δ_p is not greater than 0.1%. Substituting these parameters into (2) yields V_min = 0.0796 m³. The surge tank uses a spherical cap, and the cap height is 0.1 m. The volume of the cap is approximately calculated by (3), and the desired height h_s is obtained according to (4):

$$V_c=\frac{2}{3}\cdot\frac{\pi D^2}{4}\cdot h_c,\qquad(3)$$
$$h_s=\frac{4\,(V_{\min}-V_c)}{\pi D^2}+h_c.\qquad(4)$$

After flowing through the three horizontal plates, the water flows over the top of the vertical plate into the right side of the tank. To ensure that the water flows steadily to the outlet, the flow velocity v_f is designed as 0.15 m/s. The height h_v is calculated by

$$q_{\max}=v_f\,b\,h_v.\qquad(5)$$

Based on experience, the height from the top of the vertical plate to the center line of the inlet is suitably designed as 10 times the inlet pipe size. The height from the center line of the inlet to the bottom of the tank is not related to the performance of the surge tank; in this paper, the height h_t is set to 0.25 m. According to the above analysis, the minimum height of the tank is 1.04 m.
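The arithmetic of (1) and (5) is easy to check. The short sketch below recomputes the tank diameter and the height h_v; it takes the greatest reference flow of 28.5 m³/h quoted in Sect. 3 as q_max (an assumption here) and the vertical-plate width b = 0.2 m adopted in the text.

```python
import math

d_inlet = 0.050                        # inlet pipe diameter [m]
A_pipe  = math.pi * d_inlet**2 / 4     # pipe sectional area [m^2]
S_h     = 5 * A_pipe / 0.8             # horizontal-plate area for a flow-area ratio of 0.8 (~0.012 m^2)

# Eq. (1): S_h = (2/3)*a*b = (4*sqrt(2)/27)*D^2  ->  solve for the tank diameter D
D = math.sqrt(27 * S_h / (4 * math.sqrt(2)))   # ~0.24 m, as quoted in the text
b = 0.2                                        # vertical-plate width adopted in the text [m]

# Eq. (5): q_max = v_f * b * h_v  ->  height from the vertical plate to the air space
q_max = 28.5 / 3600                    # greatest reference flow [m^3/s] (assumption: used as q_max)
v_f   = 0.15                           # design flow velocity over the vertical plate [m/s]
h_v   = q_max / (v_f * b)
print(f"D = {D:.3f} m, h_v = {h_v:.3f} m")
```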

Design of Water Tank

Besides compactness, some other problems should be considered when we design a water tank. Firstly, the tank can store all water in the facility; Secondly, the water level in tank should not fluctuate too much in a test, normally less than 5% of the level; thirdly, the flow velocity in tank should less than 0.015 m/s in a test; finally, there should be enough space between the top of tank and water surface.


Considering all the above requirements, the length, width, and height of the tank are designed as 1 m, 0.6 m, and 0.6 m, respectively.

3 Analysis of Hydraulic Resistance

The hydraulic resistance includes frictional and local resistance. The pump should output the maximal flow while overcoming all of the resistance of the standard facility, so the analysis of the hydraulic resistance determines the choice of the pump. According to the technical standards, the usual flow parameters of water meters of different pipe diameters are given in Table 1, based on which the hydraulic resistance is analyzed as follows.

Table 1. Usual flow parameters of water meters of different pipe diameters

Pipe diameter | Minimum flow rate (m3/h) | Permanent flow rate (m3/h)
DN15          | 0.040                    | 4.0
DN20          | 0.063                    | 6.3
DN25          | 0.10                     | 10
DN32          | 0.16                     | 16
DN40          | 0.25                     | 25
DN50          | 0.4                      | 40

Fig. 4. Resistant coefficients on the flow path of standard facility


As shown in Fig. 4, the standard facility has only one flow path. The resistance increases as the flow velocity increases. Therefore, the maximal hydraulic resistance is obtained under the greatest reference flow, which is 28.5 m³/h. The head loss due to frictional resistance is calculated according to

$$h_w=\sum_i k_i\,\frac{l_i}{d_i}\,\frac{v_i^2}{2g},\qquad(6)$$

where k is the frictional resistance coefficient, l is the length of the pipeline, d is the internal diameter of the pipeline, v is the flow velocity, and g is the acceleration of gravity. By substituting the values in Table 2 into (6), the head loss due to frictional resistance is obtained as 0.29 m.

Table 2. Parameters for calculating the head loss due to frictional resistance

Sequence number | k     | d (m) | v (m/s) | l (m) | hw (m)
1               | 0.021 | 0.048 | 4.38    | 0.050 | 0.021
2               | 0.021 | 0.048 | 4.38    | 0.020 | 0.009
3               | 0.021 | 0.048 | 4.38    | 0.026 | 0.011
4               | 0.019 | 0.032 | 9.85    | 0.055 | 0.162
5               | 0.019 | 0.032 | 9.85    | 0.018 | 0.053
6               | 0.020 | 0.036 | 7.78    | 0.008 | 0.014
7               | 0.020 | 0.036 | 7.78    | 0.012 | 0.021

The local resistance coefficients for the standard facility are shown in Table 3. According to (7), the head loss due to local resistance is calculated as 17.76 m.

Table 3. Local resistance coefficients for the standard facility

Local resistance coefficients           | Value
ξ1, ξ2, ξ3, ξ5, ξ6, ξ8, ξ13, ξ16, ξ17   | 0.5
ξ4                                      | 1.0
ξ7                                      | 0.05
ξ9, ξ12, ξ15                            | 0.1
ξ10                                     | 0.01
ξ11                                     | 0.31
ξ14                                     | 2.06
ξ18                                     | 0.08

$$h_j=\sum_i \xi_i\,\frac{v_i^2}{2g}.\qquad(7)$$

The height between the inlet of the pump and the top of the facility is 1.1 m, so the maximal required head of the standard facility is 19.15 m, and the delivery lift of the pump should not be less than this value. On the basis of the above analysis, this paper chose a pump whose delivery lift is 36 m and whose maximal output is 30 m³/h.
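The head-loss bookkeeping of (6)–(7) can be reproduced with a few lines. The sketch below sums the frictional losses over the rows of Table 2 and adds the quoted local loss and elevation difference; it is only an illustration of the calculation, not the authors' design tool.

```python
g = 9.81
# Table 2 rows: (k, d [m], v [m/s], l [m]) for the seven pipe segments
segments = [
    (0.021, 0.048, 4.38, 0.050), (0.021, 0.048, 4.38, 0.020), (0.021, 0.048, 4.38, 0.026),
    (0.019, 0.032, 9.85, 0.055), (0.019, 0.032, 9.85, 0.018),
    (0.020, 0.036, 7.78, 0.008), (0.020, 0.036, 7.78, 0.012),
]

# Eq. (6): h_w = sum_i k_i * (l_i / d_i) * v_i^2 / (2 g)
h_w = sum(k * (l / d) * v**2 / (2 * g) for k, d, v, l in segments)

h_j = 17.76   # local-resistance loss from Eq. (7) and Table 3, as quoted in the text [m]
h_z = 1.1     # elevation difference between pump inlet and facility top [m]
print(f"frictional loss ~ {h_w:.2f} m, required head ~ {h_w + h_j + h_z:.2f} m")   # ~0.29 m and ~19.15 m
```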

4 Experimental Research on the On-Line EMC Test System

To verify the performance of the on-line EMC test system, a DN15 water meter was tested during the application of radiated electromagnetic fields, as shown in Fig. 5. The frequency range of the radiated electromagnetic fields is 26 MHz to 1 GHz, which is divided into 17 steps. At each step, the experimenter increases the carrier frequency to the next value and, at the same time, measures the error (actual flow calibration method) or records the stored value (zero flow method) of the meter.

Fig. 5. On-line EMC test system

The stored values and the errors of indication at each step obtained by the zero flow method and by the actual flow calibration method are given in Tables 4 and 5, respectively. By comparing these experimental results, it can be seen that the zero flow method can only indicate whether the data-storing function of the meter is acceptable when the interference is applied; normally, this function is not very susceptible to radiated electromagnetic fields. However, the errors of indication of the meter do vary compared with those before the field is applied. It is obvious that the zero flow method can barely illustrate the variations of the metrological characteristics during the test. On the contrary, the on-line EMC test system proposed in this paper can measure the change quantitatively.

Table 4. Experimental results obtained by the zero flow method

Stored value before applying the field (m3): 14

During the application of the field:
Step              | 1  | 2  | 3  | 4  | 5  | 6  | 7  | 8  | 9  | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17
Stored value (m3) | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14

Table 5. Experimental results obtained by the system proposed in this paper

Error of indication before applying the field (%): 1.3

During the application of the field:
Step      | 1   | 2   | 3   | 4   | 5   | 6   | 7   | 8   | 9   | 10  | 11  | 12  | 13  | 14  | 15  | 16  | 17
Error (%) | 1.2 | 1.3 | 1.2 | 1.4 | 1.2 | 1.4 | 1.5 | 1.3 | 1.5 | 1.4 | 1.5 | 1.5 | 1.6 | 1.6 | 1.7 | 1.6 | 1.5

References
1. Zhan ZJ, Zhao JL, Zhang LQ et al (2009) Cold water meter. General Administration of Quality Supervision, Inspection and Quarantine of the People's Republic of China, Beijing
2. Li MH, Ye XC, Chen HZ et al (2005) Measurement of water flow in fully charged closed conduits—meters for cold potable water and hot water. General Administration of Quality Supervision, Inspection and Quarantine of the People's Republic of China, Beijing
3. ISO 4064 (2014) Measurement of water flow in fully charged closed conduits—meters for cold potable water and hot water. The International Organization for Standardization, Switzerland
4. Wang J, Du F Jr, Pan RF (1987) Dynamic mathematical model of surge tank with compressed air. J Sci Instrum 8(2):135–142
5. Ma K (2004) Research and design on efficient combination type water flow standard facility. Tianjin University, Tianjin

Research on Kinematic Simulation for Space Mirrors Positioning 6DOF Robot

Zhang Yalin(&), Liang Fengchao, He Haiyan, Wang Chun, Tan Shuang, and Lin Zhe

Beijing Institute of Space Mechanics and Electricity, Beijing 100094, China
[email protected]

Abstract. The six-degree-of-freedom (6DOF) parallel robot for space mirror positioning is one of the effective ways to adjust the position and attitude of space mirrors and to improve the image quality of a space camera. In order to realize the 3D kinematic simulation of the space mirrors positioning 6DOF robot, this paper constructs the 6DOF kinematic model and algorithms; the human-computer interaction interface is then programmed based on the MFC framework, and the 3D simulation interface is implemented based on OpenGL. The experimental results show that the simulation system can display the movement of the space mirrors positioning 6DOF robot precisely and verify the dynamics algorithms with a friendly interface.

Keywords: 6DOF · Kinematics · OpenGL · Robot

 Kinematic  OpenGL  Robot

During the imaging process of the space camera, the position error caused by the tilt, which is affected by factors such as launch shock, vibration and on-orbit temperature environment changes and stress release, can be effectively offset if the posed of the space mirrors adjusted promptly and precisely meanwhile the degradation of the camera image quality is also avoided [1]. Stewart’s six-degree-of-freedom (6DOF) parallel robot has the advantages of high precision, strength, and stability, small error and friction, and good dynamic performance. It is the main tool for precisely adjustment of space camera mirrors [2–4]. However, the 6DOF parallel robot has many characteristics such as large number inputs and outputs, strong coupling of the poles and complicated control process. Therefore, real-time dynamic simulation can be used to further understand the robot and verify the kinematics algorithms of the robot. It can be visualized and observed. In 3D simulation tools, OpenGL is a high-performance open graphics library technology that provides basic 3D graphics elements and abundant graphics functions, powerful 3D modeling capabilities, frame buffer animation technology and real-time interactive operations. At the same time, OpenGL is well applied to the Windows environment and effectively integrated with Visual Studio, so that the 3D simulation function module can be easily integrated into the measurement and control program of the entire 6DOF parallel robot based on the MFC framework. In this paper, OpenGL simulation development tool is used to establish a real-time simulation platform based on the kinematics model of the space mirrors positioning 6DOF parallel robot. This simulation is of great significance for the research and application of the robot. © Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 231–238, 2020 https://doi.org/10.1007/978-981-13-9409-6_28


1 The Position Analysis of 6DOF Parallel Robot

1.1 6DOF Parallel Robot Model

The Stewart platform is a typical 6DOF parallel robot. This paper studies the 6-SPS mechanism shown in Fig. 1. The upper and lower platforms of the mechanism are connected by 6 poles. Each pole has a ball joint at both ends, and the middle part is a prismatic (sliding) pair. The actuator drives the prismatic pair so that its two parts move relative to each other, changing the length of each pole and thus the position and attitude of the upper platform in space.


Fig. 1. 6DOF parallel robot

The hinge points of the upper and lower platforms are denoted as b_i and B_i (i = 1, ..., 6), respectively. The points b_i are distributed on a circle of radius r; b_1, b_3, b_5 and b_2, b_4, b_6 form two equilateral triangles whose relative angle is α, as shown in Fig. 2a. The points B_i are distributed on a circle of radius R; B_1, B_3, B_5 and B_2, B_4, B_6 form two equilateral triangles whose relative angle is β, as shown in Fig. 2b. The upper platform is the moving platform, and a moving coordinate system P-xyz fixed to it is established. The lower platform is the fixed platform, on which the static coordinate system O-XYZ is established. The coordinates of a vector v in the P-xyz moving coordinate system are denoted as ᵖv, and as v in the O-XYZ reference coordinate system. The six drive poles are denoted as l_i and their lengths as l_i (i = 1, ..., 6).


Fig. 2. 6DOF parallel platform hinge points distribution

In the upper and lower platform coordinate systems, the coordinates of the hinge points are

$${}^{p}\mathbf{b}_i=r\,[\cos\alpha_i,\ \sin\alpha_i,\ 0]^{T},\quad i=1,\ldots,6,\qquad(1.1)$$
$$\mathbf{B}_i=R\,[\cos\beta_i,\ \sin\beta_i,\ 0]^{T},\quad i=1,\ldots,6.\qquad(1.2)$$

1.2 Inverse Solution of the Position of 6DOF Parallel Robot

After the coordinate systems are determined, the pose of the moving platform is represented by a generalized coordinate vector q = [q1, q2, q3, q4, q5, q6]^T, where [q1, q2, q3]^T is the coordinate vector of the center of the moving platform in the inertial coordinate system and [q4, q5, q6]^T represents the attitude angles (Euler angles) of the moving platform in the inertial coordinate system. These six parameters determine the spatial pose of the moving platform. By Euler's theorem, the finite rotation of a rigid body around an axis can be decomposed into three finite rotations around the coordinate axes in a certain order; in this paper, the rotation order is selected as x → y → z. The final rotation transformation matrix is obtained from the properties of rotation matrices:

$${}^{A}_{B}R=R(x,U)\,R(y,V)\,R(z,W)=\begin{bmatrix}cV\,cW & -cV\,sW & sV\\ sU\,sV\,cW+cU\,sW & -sU\,sV\,sW+cU\,cW & -cV\,sU\\ -cU\,sV\,cW+sU\,sW & cU\,sV\,sW+sU\,cW & cU\,cV\end{bmatrix},\qquad(1.3)$$

where cU = cos U, cV = cos V, cW = cos W, sU = sin U, sV = sin V, and sW = sin W. With the generalized coordinate vector of the moving platform q = [x, y, z, U, V, W]^T, the length of each pole is l_i = |l_i| = sqrt(l_i^T l_i), where l_i is the vector from b_i to B_i and |l_i| is its length, i = 1, 2, ..., 6.
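A minimal numerical sketch of the inverse solution described above is given below. The explicit leg-vector relation l_i = P + (ᴬ_B R) ᵖb_i − B_i is the standard Stewart-platform expression implied by the definitions here (the paper does not print it explicitly), and the hinge-circle geometry used is purely illustrative.

```python
import numpy as np

def rot_xyz(U, V, W):
    """Rotation matrix A_B_R = R(x,U)*R(y,V)*R(z,W), Eq. (1.3)."""
    cU, sU, cV, sV, cW, sW = np.cos(U), np.sin(U), np.cos(V), np.sin(V), np.cos(W), np.sin(W)
    return np.array([
        [cV*cW,             -cV*sW,             sV    ],
        [sU*sV*cW + cU*sW,  -sU*sV*sW + cU*cW, -cV*sU ],
        [-cU*sV*cW + sU*sW,  cU*sV*sW + sU*cW,  cU*cV ],
    ])

def hinge_points(radius, angles_deg):
    """Hinge coordinates on a circle, Eqs. (1.1)-(1.2); one row per hinge."""
    a = np.deg2rad(angles_deg)
    return np.stack([radius*np.cos(a), radius*np.sin(a), np.zeros(len(a))], axis=1)

def inverse_solution(q, b, B):
    """Pole lengths l_i for pose q = [x, y, z, U, V, W] (assumed leg relation l_i = P + R*b_i - B_i)."""
    P, R = np.asarray(q[:3]), rot_xyz(*q[3:])
    legs = P + b @ R.T - B
    return np.linalg.norm(legs, axis=1)

# illustrative geometry (not the paper's): alternating hinge angles on the two circles
b = hinge_points(0.10, [100, 140, 220, 260, 340, 20])
B = hinge_points(0.20, [80, 160, 200, 280, 320, 40])
print(inverse_solution([0.0, 0.0, 0.18, 0.0, 0.0, 0.0], b, B))
```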

1.3 Positive Solution of the Position of 6DOF Parallel Robot

The positive solution is more complicated than the inverse solution of the position of the 6DOF parallel robot. In the following, the positive solution is obtained using the Newton-Raphson method. Define a target vector function describing the difference between the estimated actuator length l_i and the measured value l̄_i:

$$\mathbf{f}=[f_1,\ldots,f_6]^{T}=\big[\,l_1^2-\bar{l}_1^2,\ \ldots,\ l_6^2-\bar{l}_6^2\,\big]^{T}.\qquad(1.5)$$

The Newton-Raphson method takes the minimization of the target vector function as its goal. The steps to solve the pose vector P of the 6DOF parallel robot are as follows:
1. Measure l and select an initial pose P of the moving platform;
2. Based on P, calculate l and form the vector function f;
3. If f^T f < ε1, then P is the desired pose; otherwise, proceed to the next step;
4. Calculate the Jacobian matrix J = ∂f/∂P;
5. Use J dP = f to calculate the pose correction value dP;
6. If dP^T dP < ε2, then P is the desired pose; otherwise, proceed to the next step;
7. Calculate P = P + dP and go to step 2.
The Jacobian matrix in step 4 is calculated as follows:

$$\begin{cases}J_{i,1}=2\,l_{ix}\\ J_{i,2}=2\,l_{iy}\\ J_{i,3}=2\,l_{iz}\\ J_{i,4}=2\big({}^{p}a_{iy}\,\mathbf{l}_i^{T}R_{col3}-{}^{p}a_{iz}\,\mathbf{l}_i^{T}R_{col2}\big)\\ J_{i,5}=2\big[R_{row3}\,{}^{p}\mathbf{a}_i\,(l_{ix}\cos\theta_z+l_{iy}\sin\theta_z)-l_{iz}\big({}^{p}a_{ix}\cos\theta_y+{}^{p}a_{iy}\sin\theta_y\sin\theta_x+{}^{p}a_{iz}\sin\theta_y\cos\theta_x\big)\big]\\ J_{i,6}=2\big(l_{iy}\,R_{row1}\,{}^{p}\mathbf{a}_i-l_{ix}\,R_{row2}\,{}^{p}\mathbf{a}_i\big)\end{cases}\qquad(1.6)$$

where l_i = [l_ix, l_iy, l_iz]^T is the coordinate vector of the i-th pole; ᵖa_i is the coordinate vector of the i-th hinge point in the o-xyz coordinate system of the moving platform, with components ᵖa_ix, ᵖa_iy, ᵖa_iz; R_col2 = [Jx, Jy, Jz]^T; R_col3 = [Kx, Ky, Kz]^T; R_row1 = [Ix, Jx, Kx]; R_row2 = [Iy, Jy, Ky]; and R_row3 = [Iz, Jz, Kz]. The 6DOF parallel mechanism generally moves near its neutral position, so the initial value of the positive solution can be set to the middle position to ensure the convergence of the solution method. This method can find the only feasible solution.
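The Newton-Raphson positive solution of steps 1–7 can be sketched compactly if the Jacobian is evaluated numerically instead of with the closed form (1.6). This is a simplification for illustration; the sketch reuses inverse_solution and the hinge points b, B defined in the previous sketch.

```python
import numpy as np

def forward_solution(l_meas, b, B, q0, eps=1e-10, max_iter=50):
    """Newton-Raphson positive solution: find the pose q whose pole lengths match l_meas."""
    q, l_meas = np.array(q0, dtype=float), np.asarray(l_meas, dtype=float)
    for _ in range(max_iter):
        f = inverse_solution(q, b, B)**2 - l_meas**2        # target vector, Eq. (1.5)
        if f @ f < eps:                                     # step 3
            break
        h, J = 1e-6, np.zeros((6, 6))                       # finite-difference Jacobian (stands in for (1.6))
        for k in range(6):
            dq = np.zeros(6); dq[k] = h
            J[:, k] = (inverse_solution(q + dq, b, B)**2 - inverse_solution(q - dq, b, B)**2) / (2*h)
        dP = np.linalg.solve(J, -f)                         # step 5 (Newton correction)
        q += dP                                             # step 7
        if dP @ dP < eps:                                   # step 6
            break
    return q

# round trip: recover a perturbed pose starting from the neutral position
q_true = [0.05, 0.0, 0.18, 0.0, np.deg2rad(10.0), 0.0]
l = inverse_solution(q_true, b, B)
print(forward_solution(l, b, B, q0=[0.0, 0.0, 0.18, 0.0, 0.0, 0.0]))
```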

2 Simulation of 6DOF Parallel Robot Based on OpenGL

The 6DOF parallel mirror platform is rendered dynamically in real time according to the 3D coordinates provided by the inverse and positive solutions of the kinematic model. The platform pose is computed by the inverse solution, but the pose parameters of each strut are not easy to obtain directly. Therefore, the rotation matrix of each strut is obtained


by using quaternions, which is simpler and faster than using Euler angles. Let φ be the angle between the new pose of the pole and the benchmark pole, and let the normal vector of the plane formed by the pole and the benchmark pole be the rotation axis a = [a1, a2, a3]^T. The normalized axis satisfies

$$\mathbf{a}^{T}\mathbf{a}=a_1^2+a_2^2+a_3^2=1.\qquad(2.1)$$

The quaternion parameter is written as q = [ε; η] with η² + ε1² + ε2² + ε3² = 1, where

$$\eta=\cos\frac{\phi}{2},\qquad \boldsymbol{\varepsilon}=\mathbf{a}\sin\frac{\phi}{2}=\begin{bmatrix}a_1\sin(\phi/2)\\ a_2\sin(\phi/2)\\ a_3\sin(\phi/2)\end{bmatrix}=\begin{bmatrix}\varepsilon_1\\ \varepsilon_2\\ \varepsilon_3\end{bmatrix}.\qquad(2.2)$$

Therefore, the normalized rotation matrix based on the quaternion is

$$R=\begin{bmatrix}1-2(\varepsilon_2^2+\varepsilon_3^2) & 2(\varepsilon_1\varepsilon_2+\varepsilon_3\eta) & 2(\varepsilon_1\varepsilon_3-\varepsilon_2\eta)\\ 2(\varepsilon_2\varepsilon_1-\varepsilon_3\eta) & 1-2(\varepsilon_3^2+\varepsilon_1^2) & 2(\varepsilon_2\varepsilon_3+\varepsilon_1\eta)\\ 2(\varepsilon_3\varepsilon_1+\varepsilon_2\eta) & 2(\varepsilon_3\varepsilon_2-\varepsilon_1\eta) & 1-2(\varepsilon_1^2+\varepsilon_2^2)\end{bmatrix}.\qquad(2.3)$$

According to the rotation matrix, the final 3D simulation image rendering is realized by OpenGL’s viewport transformation.
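For completeness, Eqs. (2.1)–(2.3) translate directly into a small helper that the rendering code could call. The sketch below is a language-agnostic check of the quaternion rotation matrix, not the MFC/OpenGL implementation itself.

```python
import numpy as np

def quat_rotation(axis, phi):
    """Rotation matrix of Eq. (2.3) from a rotation axis and angle phi, via Eq. (2.2)."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)                  # enforce a^T a = 1, Eq. (2.1)
    eta = np.cos(phi / 2.0)
    e1, e2, e3 = a * np.sin(phi / 2.0)
    return np.array([
        [1 - 2*(e2**2 + e3**2), 2*(e1*e2 + e3*eta),    2*(e1*e3 - e2*eta)],
        [2*(e2*e1 - e3*eta),    1 - 2*(e3**2 + e1**2), 2*(e2*e3 + e1*eta)],
        [2*(e3*e1 + e2*eta),    2*(e3*e2 - e1*eta),    1 - 2*(e1**2 + e2**2)],
    ])

R = quat_rotation([0.0, 1.0, 0.0], np.deg2rad(10.0))
print(np.round(R @ R.T, 6))                    # orthogonality check: prints the identity matrix
```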

3 Simulation Results

After the 3D geometric model and the motion model of the 6DOF parallel mirror platform are established, the human-computer interaction software platform and the 3D interface can be programmed with the OpenGL-based MFC framework to achieve precise control of the platform motion. The flow chart of the simulation system is shown in Fig. 3. Firstly, the target pose of the moving platform is input via the human-computer interaction interface. The target poses of the six poles are obtained from the target pose of the moving platform through the inverse solution algorithm, and the current poses of the six poles are updated; then, based on the current poses of the six poles, the current pose of the moving platform is obtained by the positive solution algorithm and updated, finally realizing the 3D simulation of the entire mirror platform (Figs. 4 and 5). The human-computer interaction interface and the 3D simulation interface of the 6DOF parallel mirror platform control system are shown in the figures. At the middle position, the platform height is 181.5941 mm, and the hinge coordinates of the static platform and the moving platform are


Fig. 3. Simulation system control flow chart

Fig. 4. Simulation interface


Fig. 5. Simulation interface of the new pose

Chart 1. Six hinge coordinates of the static platform (mm)

  | 1        | 2         | 3         | 4         | 5         | 6
X | 143.4754 | −5.4756   | −137.9997 | −137.9997 | −5.4757   | 143.4754
Y | 76.5128  | 162.5097  | 85.9969   | −85.9969  | −162.5097 | −76.5128
Z | 0.0000   | 0.0000    | 0.0000    | 0.0000    | 0.0000    | 0.0000

Chart 2. Six hinge coordinates of the moving platform (mm)

  | 1        | 2        | 3         | 4         | 5         | 6
X | 101.3365 | 36.6631  | −137.9997 | −137.9997 | 36.6631   | 101.3365
Y | 181.5941 | 181.5941 | 181.5941  | 181.5941  | 181.5941  | 181.5941
Z | 100.8416 | 138.1808 | 37.3391   | −37.3391  | −138.1808 | −100.8416

After moving 50 mm in the X direction and rotating 10° around the V axis, the simulation interface is as shown in Fig. 5. The simulation interface accurately reflects the new pose of the 6DOF parallel robot. By comparison, it can be seen that the two sets of data are basically consistent, which verifies that the kinematics model of the 6DOF parallel mirror platform is accurate.

4 Conclusion

This paper designs the kinematics model and establishes a simulation system for the space mirrors 6DOF parallel robot. According to the kinematic characteristics and requirements of the space mirrors, the human-computer interaction interface is designed and the simulation modeling of the control system of the space mirrors 6DOF parallel robot is completed. The motion control algorithm is programmed based on the MFC framework and OpenGL technology, and the real-time simulation of the system is realized. Practice proves that the simulation platform effectively verifies the


correctness of the kinematics model of the 6DOF parallel robot. The real-time simulation platform is very important for validating the working principle, algorithm and working space of the space mirrors 6DOF parallel robot.

References
1. Fengchao L, Gang H et al (2017) Simulation of kinematics and dynamics of Stewart platform for secondary mirror based on ADAMS. Res Explor Lab 36(2):107–112
2. Shuang T, Xiaoyong W et al (2015) Sensitivity analysis of position and pose adjustment of 6DOF parallel mechanism. Spacecraft Recovery Remote Sens 36(3):78–85
3. Yao R, Zhu WB, Qingge Y (2011) Dimension optimization design of the Stewart platform in FAST. In: International conference on advanced design and manufacturing engineering, Guangzhou, pp 2088–2091
4. QingLin W, Bing Q et al (2013) Secondary mirror control system design based on Stewart platform. Foreign Electronic Measur Technol 32(11):73–76

A Dictionary Learning-Based Off-Grid DOA Estimation Method Using Khatri-Rao Product

Weijie Tan1,2(B), Chenglin Zheng3, Judong Li1, Weiqiang Tan1, and Chunguo Li4

1 School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou 510006, China
[email protected]
2 School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an 710072, China
3 School of Electronic Information, Wuhan University, Wuhan 430072, China
[email protected]
4 School of Information Science and Engineering, Southeast University, Nanjing 210096, China
[email protected]

Abstract. Grid mismatch is the main drawback of grid-based sparse representation. For DOA estimation, the off-grid problem degrades the accuracy of angle estimation. In order to solve this problem, a dictionary learning-based off-grid DOA estimation method is proposed. Firstly, we calculate the sampling covariance matrix; then, based on the covariance matrix model, we formulate the DOA estimation as a sparse representation problem with a Khatri-Rao product dictionary. In the proposed method, a two-stage iteration strategy is utilized to address the off-grid problem. In the first stage, the coarse estimation is attained by grid-based sparse DOA estimation; in the second stage, the dictionary perturbation parameter is learned with the gradient descent method to improve the accuracy of DOA estimation. Simulation results verify the effectiveness of the proposed method.

Keywords: Grid mismatch · Dictionary learning · Khatri-Rao product · Gradient descent method

This work was supported in part by the National Undergraduate Training Program for Innovation and Entrepreneurship under Grant 201811078117 and the Natural Science Foundation of Guangdong Province of China under Grant 2018A030310338.

1 Introduction

Source localization using sensor arrays has played a fundamental role in many engineering applications such as communication, radar, sonar, seismology, smart home, and public safety, and it has received much attention in the signal processing field for many decades. Many powerful sparse DOA estimation algorithms have been proposed in recent years [1–3]. A great deal of them focus on signals with sparse representations in finite discrete dictionaries [4], provided that the grid is fine enough that every continuous parameter lies on (or practically close to) a certain grid point, so that the continuous parameter can be described by a set of discrete grid points. However, signals encountered in many practical applications are specified by parameters in a continuous domain; the parameters are almost surely not located exactly on the assumed grid and are not perfectly matched to the predefined basis. This leads to a grid mismatch that results in degradation of the recovery performance. To solve the grid mismatch problem, a great number of off-grid sparse methods have been proposed [6–16]. Interpolation is the basic strategy among these methods, which approximates the grid error by interpolating between grid points [6–9]. In the continuous basis pursuit (CBP) method [6], a novel polar interpolation approach is proposed to leverage the translation-invariant property of frequency-sparse signals in the frequency domain. In [7], the off-grid sparse Bayesian inference (OGSBI) method is presented, which models the mismatch error as a mismatch parameter, fits the grid mismatch to the observed data statistically, and estimates the parameter via an alternating descent algorithm. In [8,9], the sparse total least squares (STLS) method is proposed; STLS can yield a maximum a posteriori (MAP) optimal estimate, but its obvious drawback is the unrealistic model of Gaussian-distributed off-grid errors. In [10], a low-complexity simultaneous orthogonal matching pursuit least-squares (SOMP-LS) algorithm is proposed, which is an iterative alternating descent algorithm, but its performance appears questionable for closely spaced sources. In [11], the perturbed OMP method is proposed, which uses the OMP framework to obtain a selected set of dictionary atoms and exploits an off-grid solver to jointly solve for the signal support and the off-grid perturbations. Based on the covariance matrix model, an off-grid l1 covariance matrix reconstruction approach (OGL1CMRA) is proposed in [13,14]; this method can attain a closed form for the perturbation parameter. Based on the element domain, the researchers in [15,16] proposed a dictionary learning DOA estimation method for the single-snapshot case, which uses the manifold dictionary to learn the perturbation parameter. In this paper, motivated by [13–16], a dictionary learning-based off-grid DOA estimation method is proposed to address the grid mismatch problem. Based on the covariance matrix model, the DOA estimation is formulated as a sparse representation problem using a Khatri-Rao product dictionary. To approximate the off-grid error, a two-stage iteration strategy is utilized. In the first stage, grid-based sparse DOA estimation is used to attain the coarse estimate. In the second stage, the dictionary parameter is learned by the gradient descent


method to improve the accuracy of DOA estimation. Simulation results demonstrate the effectiveness of the proposed method. Notations: Matrices and vectors are denoted by upper- and lower-case boldface letters; (·)*, (·)^T, (·)^H, (·)† represent the conjugate, transpose, conjugate transpose, and pseudo-inverse operators. det(·) and diag(·) stand for the matrix determinant and a diagonal matrix, respectively. ∘ and ⊙ denote the Hadamard product and the Khatri-Rao product, respectively, and vec(·) denotes the vectorization operation, i.e., stacking the columns of a matrix. ⊗ represents the Kronecker product, and ‖·‖1, ‖·‖2 stand for the l1 norm and the l2 norm. I_M denotes the M × M identity matrix.

2 Covariance-Based Model for Sparse DOA Estimation

2.1 Array Model

We consider a uniform linear array (ULA) with M omnidirectional elements, in which the spacing d between adjacent elements is half a wavelength, d = λ/2. We assume that K independent narrowband uncorrelated signals from directions θ1, θ2, ..., θK impinge on this ULA. Then the received signal can be expressed as

$$\mathbf{x}(t)=\sum_{k=1}^{K}\mathbf{a}(\theta_k)s_k(t)+\mathbf{n}(t),\qquad(1)$$

where a(θk) = [1, e^{−j2πd/λ sin θk}, ..., e^{−j2π(M−1)d/λ sin θk}]^T denotes the steering vector, s_k(t) represents the transmitted signal, and n(t) is an additive white Gaussian noise vector with zero mean and variance σn². The covariance matrix is given by

$$\mathbf{R}=\mathrm{E}\{\mathbf{x}(t)\mathbf{x}^{H}(t)\}=\mathbf{A}(\boldsymbol{\theta})\mathbf{R}_s\mathbf{A}^{H}(\boldsymbol{\theta})+\sigma_n^2\mathbf{I}_M,\qquad(2)$$

where A(θ) = [a(θ1), a(θ2), ..., a(θK)] is the array manifold and R_s = (1/L)Σ_{t=1}^L s(t)s^H(t) denotes the signal covariance matrix.

2.2 Covariance-Based Sparse Representation Model

In practice, the covariance matrix R is estimated from a finite number of snapshots; that is, we use the sampling covariance matrix R̂ = (1/L)Σ_{t=1}^L x(t)x^H(t) instead of the covariance matrix R. There is thus a measurement error in addition to the noise, i.e., R̃ = R + ΔR. Therefore, by vectorizing R̃ − σn² I_M, we obtain

$$\tilde{\mathbf{r}}=\mathrm{vec}\big(\mathbf{A}(\boldsymbol{\theta})\mathbf{R}_s\mathbf{A}^{H}(\boldsymbol{\theta})+\Delta\mathbf{R}\big)=\mathrm{vec}\big(\mathbf{A}(\boldsymbol{\theta})\mathbf{R}_s\mathbf{A}^{H}(\boldsymbol{\theta})\big)+\boldsymbol{\xi},\qquad(3)$$

where the noise power σn² can be estimated as the minimum eigenvalue of R̃. When the signals are uncorrelated, problem (3) can be further expressed as

$$\tilde{\mathbf{r}}=\big(\mathbf{A}^{*}(\boldsymbol{\theta})\odot\mathbf{A}(\boldsymbol{\theta})\big)\mathbf{p}+\boldsymbol{\xi}=\mathbf{B}\mathbf{p}+\boldsymbol{\xi},\qquad(4)$$

where B(θ) = [a(θ1)* ⊗ a(θ1), ..., a(θK)* ⊗ a(θK)] and p = [p1, p2, ..., pK]^T contains the signal powers. In order to formulate the DOA estimation as a sparse representation problem, the spatial angle domain is uniformly sampled to obtain an equally spaced grid set φ = [φ1, φ2, ..., φN], N ≫ K, and the Khatri-Rao product dictionary B̃(φ) = [a(φ1)* ⊗ a(φ1), ..., a(φN)* ⊗ a(φN)] is constructed. The DOA estimation can then be formulated as the following sparse representation problem:

$$\min_{\tilde{\mathbf{p}}\succeq0}\ \|\tilde{\mathbf{p}}\|_1,\quad \text{s.t.}\ \|\tilde{\mathbf{r}}-\tilde{\mathbf{B}}\tilde{\mathbf{p}}\|_2\le\epsilon,\qquad(5)$$

where p̃ is the extension of p from θ to φ, with the non-zero entries indicating the true source locations. The constraint term is intended to fit the measurement error ξ, but its parameter is not easy to determine. From [17], the measurement error ξ follows a complex Gaussian distribution, i.e., ξ ∼ CN(0, W̃), where W̃ = (1/L)R^T ⊗ R; we can then conclude that W̃^{−1/2}ξ ∼ CN(0, I). Furthermore, it can be deduced that

$$\|\tilde{\mathbf{W}}^{-1/2}\boldsymbol{\xi}\|_2^2\sim\chi^2(M^2).\qquad(6)$$

Now, the sparse DOA estimation problem can be rewritten as

$$\min_{\tilde{\mathbf{p}}\succeq0}\ \|\tilde{\mathbf{p}}\|_1,\quad \text{s.t.}\ \|\tilde{\mathbf{W}}^{-1/2}(\tilde{\mathbf{r}}-\tilde{\mathbf{B}}\tilde{\mathbf{p}})\|_2\le\varepsilon.\qquad(7)$$

It is found that the regularization parameter ε can be uniquely determined from the χ²(M²) distribution: ε² ≥ ‖W̃^{−1/2}ξ‖2², which can be determined from the χ² distribution with M² degrees of freedom at a 99% probability [2]. Generally, we assume that the DOAs of the signals lie on the grid, but because of their continuous nature this assumption is not always satisfied. To address this problem, we propose a dictionary learning-based off-grid DOA estimation method.
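To make the construction of (3)–(7) concrete, the sketch below simulates a small two-source scenario, builds the vectorized sample covariance, the Khatri-Rao dictionary on the 2° grid, and the weighting matrix W̃. All scenario parameters are illustrative, and the constrained l1 problem in (7) is only set up here, not solved.

```python
import numpy as np

rng = np.random.default_rng(1)
M, L = 8, 200                                     # sensors and snapshots (illustrative)
grid = np.deg2rad(np.arange(-90.0, 91.0, 2.0))    # uniform sampling grid with a 2-degree step

def steering(theta):
    """ULA steering vectors for d = lambda/2, one column per angle."""
    theta = np.atleast_1d(theta)
    return np.exp(-1j * np.pi * np.outer(np.arange(M), np.sin(theta)))

# two uncorrelated unit-power sources plus white noise
doa = np.deg2rad([-5.3, 4.6])
S = (rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))) / np.sqrt(2)
noise = 0.3 * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))) / np.sqrt(2)
X = steering(doa) @ S + noise

R_hat = X @ X.conj().T / L                                  # sampling covariance matrix
sigma2 = np.linalg.eigvalsh(R_hat)[0]                       # noise power ~ smallest eigenvalue
r_tilde = (R_hat - sigma2 * np.eye(M)).flatten(order="F")   # column-wise vectorization, Eq. (3)

A_grid = steering(grid)                                     # manifold sampled on the grid
B_tilde = np.column_stack([np.kron(A_grid[:, n].conj(), A_grid[:, n])
                           for n in range(A_grid.shape[1])])  # Khatri-Rao dictionary, Eq. (4)
W_tilde = np.kron(R_hat.T, R_hat) / L                       # error covariance used for whitening, Eq. (6)
print(r_tilde.shape, B_tilde.shape, W_tilde.shape)
```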

3 Dictionary Learning-Based Off-Grid DOA Estimation Method

The proposed method consists of two steps. In the first step, the Khatri-Rao product dictionary B̃ is fixed, and the sparse power vector p̃ is estimated by a low-complexity sparse recovery algorithm such as orthogonal matching pursuit (OMP). By calculating the maximum correlation between all atoms in the dictionary and the residual signal, OMP selects one atom per iteration and is therefore guaranteed to yield a K-sparse representation after K iterations. In the second step, we fix the power vector p̃ and update the Khatri-Rao product dictionary B̃, or equivalently the angle vector θ. To update the Khatri-Rao dictionary, we propose to minimize the following cost function:

$$\min_{\boldsymbol{\phi}}\ \|\tilde{\mathbf{W}}^{-1/2}(\tilde{\mathbf{r}}-\tilde{\mathbf{B}}(\boldsymbol{\phi})\tilde{\mathbf{p}})\|_2^2.\qquad(8)$$

The cost function is defined as

$$\Xi(\boldsymbol{\phi})\triangleq\big(\tilde{\mathbf{W}}^{-1/2}(\tilde{\mathbf{r}}-\tilde{\mathbf{B}}(\boldsymbol{\phi})\tilde{\mathbf{p}})\big)^{H}\tilde{\mathbf{W}}^{-1/2}(\tilde{\mathbf{r}}-\tilde{\mathbf{B}}(\boldsymbol{\phi})\tilde{\mathbf{p}}).\qquad(9)$$


Let y = W̃^{−1/2}r̃ and C(φ) = W̃^{−1/2}B̃(φ); then Eq. (9) can be expressed as

$$\Xi(\boldsymbol{\phi})\triangleq(\mathbf{y}-\mathbf{C}(\boldsymbol{\phi})\tilde{\mathbf{p}})^{H}(\mathbf{y}-\mathbf{C}(\boldsymbol{\phi})\tilde{\mathbf{p}}),\qquad(10)$$

which can be expanded as

$$\Xi(\boldsymbol{\phi})=\mathbf{y}^{H}\mathbf{y}-\mathbf{y}^{H}\mathbf{C}(\boldsymbol{\phi})\tilde{\mathbf{p}}-(\mathbf{C}(\boldsymbol{\phi})\tilde{\mathbf{p}})^{H}\mathbf{y}+(\mathbf{C}(\boldsymbol{\phi})\tilde{\mathbf{p}})^{H}\mathbf{C}(\boldsymbol{\phi})\tilde{\mathbf{p}}=\mathbf{y}^{H}\mathbf{y}-2\,\mathrm{Re}\{\mathbf{y}^{H}\mathbf{C}(\boldsymbol{\phi})\tilde{\mathbf{p}}\}+(\mathbf{C}(\boldsymbol{\phi})\tilde{\mathbf{p}})^{H}\mathbf{C}(\boldsymbol{\phi})\tilde{\mathbf{p}}.\qquad(11)$$

Therefore, using the steepest descent method, we can iteratively estimate the true DOAs by learning the Khatri-Rao product matrix B̃(φ). Differentiating the function Ξ(φ) with respect to φ, the steepest descent iteration is

$$\boldsymbol{\phi}^{i+1}=\boldsymbol{\phi}^{i}-\mu\nabla_{\boldsymbol{\phi}}\Xi(\boldsymbol{\phi}),\qquad(12)$$

where

$$\nabla_{\boldsymbol{\phi}}\Xi(\boldsymbol{\phi})=2\,\mathrm{Re}\Big\{\big((\mathbf{C}(\boldsymbol{\phi})\tilde{\mathbf{p}})^{H}-\mathbf{y}^{H}\big)\frac{\partial(\mathbf{C}(\boldsymbol{\phi})\tilde{\mathbf{p}})}{\partial\boldsymbol{\phi}}\Big\},\qquad(13)$$

and

$$\frac{\partial(\mathbf{C}(\boldsymbol{\phi})\tilde{\mathbf{p}})}{\partial\boldsymbol{\phi}}=\tilde{\mathbf{W}}^{-1/2}\big((\mathbf{D}(\boldsymbol{\phi})\circ\mathbf{A}^{*}(\boldsymbol{\phi}))\odot\mathbf{A}(\boldsymbol{\phi})+\mathbf{A}^{*}(\boldsymbol{\phi})\odot(\mathbf{D}^{*}(\boldsymbol{\phi})\circ\mathbf{A}(\boldsymbol{\phi}))\big)\tilde{\mathbf{p}},\qquad(14)$$

where D(φ) = j2π(d/λ)(0 : M − 1)cos(φ). Let

$$\mathbf{G}(\boldsymbol{\phi})=(\mathbf{D}(\boldsymbol{\phi})\circ\mathbf{A}^{*}(\boldsymbol{\phi}))\odot\mathbf{A}(\boldsymbol{\phi})+\mathbf{A}^{*}(\boldsymbol{\phi})\odot(\mathbf{D}^{*}(\boldsymbol{\phi})\circ\mathbf{A}(\boldsymbol{\phi})).\qquad(15)$$

It is found that the final recursion for updating the estimated angle vector φ at iteration i + 1 is

$$\boldsymbol{\phi}^{i+1}=\boldsymbol{\phi}^{i}-\mu\,\mathrm{Re}\{\mathbf{e}^{H}\tilde{\mathbf{W}}^{-1/2}\mathbf{G}(\boldsymbol{\phi})\tilde{\mathbf{p}}\},\qquad(16)$$

where μ is the step-size parameter and e = C(φ)p̃ − y is the error. Therefore, the overall dictionary learning-based algorithm for DOA estimation based on the covariance matrix model is summarized in Algorithm 1.
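A compact sketch of the two-stage alternation of Algorithm 1 is shown below, with an OMP-style correlation step standing in for the constrained l1 problem and a backtracking step size replacing the fixed μ. It reuses steering, grid, M, R_hat, sigma2, and W_tilde from the earlier sketch in Sect. 2.2 and is meant only to illustrate the alternation, not to reproduce the paper's exact solver.

```python
import numpy as np
from numpy.linalg import cholesky, inv, lstsq

def kr_atom(th):
    a = steering(th)[:, 0]
    return np.kron(a.conj(), a)                    # Khatri-Rao atom a*(phi) ⊗ a(phi)

def kr_deriv(th, h=1e-6):
    return (kr_atom(th + h) - kr_atom(th - h)) / (2 * h)   # numerical d b(phi)/d phi

Wm12 = inv(cholesky(W_tilde))                      # whitening factor playing the role of W^(-1/2)
y = Wm12 @ (R_hat - sigma2 * np.eye(M)).flatten(order="F")

def fit(phi):
    """LS power estimate and residual for the current angle set (stands in for the l1 step)."""
    Cs = Wm12 @ np.column_stack([kr_atom(t) for t in phi])
    p = np.real(lstsq(Cs, y, rcond=None)[0]).clip(min=0.0)
    e = Cs @ p - y
    return np.real(e.conj() @ e), p, e

# Stage 1: coarse on-grid support via OMP-like correlation with the whitened dictionary
K, support, res = 2, [], y.copy()
C = Wm12 @ np.column_stack([kr_atom(t) for t in grid])
for _ in range(K):
    support.append(int(np.argmax(np.abs(C.conj().T @ res))))
    _, _, res = fit(grid[support])
phi = grid[support].astype(float)

# Stage 2: dictionary learning -- gradient steps on the off-grid angles, Eq. (16)-style
for _ in range(60):
    val, p, e = fit(phi)
    g = np.array([2.0 * np.real(e.conj() @ (Wm12 @ kr_deriv(t)) * pk) for t, pk in zip(phi, p)])
    step = 1.0
    while step > 1e-10 and fit(phi - step * g)[0] >= val:    # backtracking line search
        step *= 0.5
    phi = phi - step * g
print(np.rad2deg(np.sort(phi)))                    # refined off-grid angle estimates (degrees)
```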

4 Simulation and Analysis

In this section, we present simulation results to evaluate the estimation performance of our proposed method in comparison with several other state-of-the-art methods, including l1-singular value decomposition (L1-SVD) [1], l1-sparse representation of array covariance vectors (L1-SRACV) [2], L1CMRA [3], and OGL1CMRA [13], together with the Cramer-Rao lower bound (CRLB) [18]. In the following experiments, we consider a uniform linear array (ULA) with M = 8 elements. The signals are assumed to be mutually independent and Gaussian distributed. The reference point is set at the left end of the ULA. Unless otherwise stated, we use a uniform sampling grid from −90° to 90° with a step of 2 degrees.


Algorithm 1 Dictionary learning-based DOA estimation method using the Khatri-Rao product
Input: array output X ∈ C^{M×N}.
Initialization: p̃^(0) ← 0_{N×1}, φ^(1) ← φ.
Calculate the sample covariance matrix R̂.
Preprocessing: r̃ = vec(R̂ − σn² I_M), where σn² is the noise power.
Calculate the prewhitening matrix W̃ = (1/L)R^T ⊗ R.
for i = 1 to Iter_max do
    With φ fixed, update the power parameter p̃^i by solving
        min_{p̃ ⪰ 0} ‖p̃‖_1  s.t.  ‖W̃^{−1/2}(r̃ − B̃p̃)‖_2 ≤ ε.
    With p̃^i fixed, update the angle vector φ^{i+1} by the dictionary learning step
        φ^{i+1} = φ^i − μ Re{e^H W̃^{−1/2} G(φ)p̃}.
end for
Output: source power estimate p̂ ← p̃^(Iter_max); source direction parameter θ̂ ← φ^(Iter_max+1).

In our algorithm, the maximum number of iterations is empirically set as 60, the step parameter is μ = 10^{−4}, and the stopping criterion is defined as ‖p̂^{q+1} − p̂^{q}‖_2 < 10^{−3}. We calculate the noise power σn² as the minimum eigenvalue of R̂, and the regularization parameter ε is determined using the MATLAB function chi2inv(1 − γ, M²), where γ is set to 10^{−4}. In L1-SVD, we set the regularization parameter λ = 0.575. The root mean square error (RMSE) is defined as

$$\mathrm{RMSE}=\sqrt{\frac{1}{QK}\sum_{q=1}^{Q}\|\hat{\boldsymbol{\theta}}^{(q)}-\boldsymbol{\theta}\|_2^2},\qquad(17)$$

where Q denotes the number of trials, and θ̂^{(q)} and θ are the sets of estimated and true signal directions in the q-th trial, respectively. In the first experiment, we compare the RMSE of these methods with respect to different numbers of snapshots. We assume two signals impinge on the 8-element ULA from [−5° + υ, 4° + υ], where υ is a random variable uniformly chosen from the interval [−r/2, r/2]. The SNR is set to 0 dB and L varies from 20 to 200. The simulation results are shown in Fig. 1; we can see that the curve of the proposed method coincides with the CRLB curve when the number of snapshots is larger than 120. The performance of OGL1CMRA is better than that of L1CMRA, L1-SVD, and L1-SRACV, and the dictionary learning-based method has the best DOA estimation performance among all methods. We have also compared the CPU time of these methods and show the results in Fig. 2. Since the proposed method and OGL1CMRA start from a coarse estimate and involve an iteration


Fig. 1. The estimated RMSEs versus the number of snapshots in the first experiment for the fixed SNR = 0 dB

Fig. 2. The estimated CPU time versus the number of snapshots in the first experiment for the fixed SNR = 0 dB

procedure, more computations are required. From Fig. 2, we can see that the computational cost of the proposed method is about the same as that of L1CMRA and OGL1CMRA; in other words, the main computation lies in obtaining the coarse estimate, even though the estimation performance is better. We can also see that the time consumption is lower than that of L1-SVD and L1-SRACV. In the second experiment, we compare the RMSE of these methods with respect to different SNRs. We repeat the previous simulation except that the number of snapshots is set to 100 and the SNR varies from −9 to 15 dB. From Fig. 3,

Fig. 3. The estimated RMSEs versus the SNR in the second experiment for the fixed L = 100


Fig. 4. The estimated RMSEs versus the grid interval for the fixed L = 100 and SNR = 5 dB

we can observe that the dictionary learning method has the best estimation performance when the SNR is above −4 dB, and its performance curve is consistent with the CRLB when the SNR is larger than 0 dB. Among the other methods, OGL1CMRA shows suboptimal performance: it uses a first-order Taylor expansion to reduce the estimation error, but it inevitably introduces error by neglecting the higher-order terms. The proposed method utilizes the gradient descent method to learn the perturbation parameter, and its estimation performance is better than that of OGL1CMRA. In Fig. 4, the relationship between the RMSE and the grid size


is given. It can be seen that the proposed algorithm shows the best estimation performance under different grid sizes.

5 Conclusion

In this paper, a dictionary learning-based off-grid DOA estimation method was proposed to solve the off-grid DOA problem. Based on the covariance matrix model, the DOA estimation is formulated as a sparse representation problem with a Khatri-Rao product dictionary. A two-stage iteration strategy is utilized to approximate the off-grid error: the coarse estimate is obtained by a grid-based sparse DOA estimation method, and the Khatri-Rao product dictionary perturbation parameter is then learned by the gradient descent method. The accuracy of the DOA estimation is improved by alternating these two steps. Simulation results demonstrate the effectiveness of the proposed method compared with state-of-the-art methods.

References
1. Malioutov D, Cetin M, Willsky AS (2005) A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans Signal Process 53:3010–3022
2. Yin J, Chen T (2006) Direction-of-arrival estimation using a sparse representation of array covariance vectors. IEEE Trans Signal Process 59:4489–4493
3. Wu X, Zhu WP, Yan J (2016) Direction-of-arrival estimation based on Toeplitz covariance matrix reconstruction. In: 2016 IEEE international conference on acoustics, speech and signal processing. IEEE Press, Shanghai, pp 3071–3075
4. Tan W, Feng X, Ye X et al (2018) Direction-of-arrival of strictly non-circular sources based on weighted mixed-norm minimization. EURASIP J Wirel Commun Netw 225
5. Bernhardt S, Boyer R, Marcos S et al (2016) Compressed sensing with basis mismatch: performance bounds and sparse-based estimator. IEEE Trans Signal Process 64:3483–3494
6. Ekanadham C, Tranchina D, Simoncelli EP (2011) Recovery of sparse translation-invariant signals with continuous basis pursuit. IEEE Trans Signal Process 59:4735–4744
7. Yang Z, Xie L, Zhang C (2013) Off-grid direction of arrival estimation using sparse Bayesian inference. IEEE Trans Signal Process 61:38–43
8. Zhu H, Leus G, Giannakis GB (2011) Sparsity-cognizant total least-squares for perturbed compressive sampling. IEEE Trans Signal Process 59:2002–2016
9. Jagannath R, Leus G, Pribić R (2012) Grid matching for sparse signal recovery in compressive sensing. In: 2012 9th European radar conference, Amsterdam, pp 111–114
10. Gretsistas A, Plumbley MD (2012) An alternating descent algorithm for the off-grid DOA estimation problem with sparsity constraints. In: Proceedings of the 20th European signal processing conference, Bucharest, pp 874–878
11. Teke O, Gurbuz AC, Arikan O (2013) Perturbed orthogonal matching pursuit. IEEE Trans Signal Process 61:6220–6231
12. Camlica S, Yetik IS, Arikan O (2019) Sparsity based off-grid blind sensor calibration. Digital Signal Process 84:80–92
13. Wu X, Zhu WP, Yan J et al (2018) Two sparse-based methods for off-grid direction-of-arrival estimation. Signal Process 142:87–95
14. Zhang Z, Wu X, Li C et al (2019) An lp-norm based method for off-grid DOA estimation. Circuits Syst Signal Process 38:904–917
15. Zamani H, Zayyani H, Marvasti F (2016) An iterative dictionary learning-based algorithm for DOA estimation. IEEE Commun Lett 20:1784–1787
16. Tan W, Feng X, Tan W et al (2018) An iterative adaptive dictionary learning approach for multiple snapshot DOA estimation. In: 2018 14th IEEE international conference on signal processing (ICSP), Beijing, pp 214–219
17. Tan W, Feng X (2019) Covariance matrix reconstruction for direction finding with nested arrays using iterative reweighted nuclear norm minimization. Int J Antennas Propag
18. Stoica P, Nehorai A (1989) MUSIC, maximum likelihood, and Cramer-Rao bound. IEEE Trans Acoust Speech Signal Process 37:720–741

Radar Adaptive Sidelobe Cancellation Technique Based on Spatial Filtering

Yumeng Zhang(&), Jinliang Dong, and Huifang Dong

Nanjing Research Institute of Electronics Technology, Nanjing, China
[email protected]

Abstract. The electromagnetic environment of radar operation is increasingly complex, and active interference can have a great impact on radar performance. Sidelobe cancellation is an effective means of eliminating interference with auxiliary antennas. This paper introduces adaptive beamforming algorithms that form the cancellation weights based on the auxiliary antennas. The weight convergence speed and the cancellation ability of several algorithms are analyzed, and a normalized least mean square algorithm is proposed.

Keywords: Sidelobe cancellation · Adaptive sidelobe cancellation · Least mean square algorithm

1 Introduction

The space electromagnetic environment in the information age has become increasingly complex, and the struggle for electromagnetic space has intensified unprecedentedly, which has had a profound impact on military activities. Ground clutter and various kinds of active interference bring continuous challenges to the development of radar systems at this stage [1]. The sharp deterioration of the electromagnetic environment in which radars operate is a serious challenge for modern radar systems. In order to extract targets from strong ground clutter and interference, the radar also needs anti-jamming capabilities such as adaptive interference suppression and frequency agility, lower antenna side lobes, low-intercept-probability performance, and high maneuverability. In order to adapt to complex and varied application environments, the radar system must have higher mobility and flexibility and lower development and maintenance costs [2]. Therefore, a new radar anti-jamming technology that can balance the above requirements well is needed; sidelobe cancellation is a good choice in terms of both its anti-jamming effect and its implementation cost. The function of the sidelobe cancellation system is to cancel sidelobe interference. It places a certain number of auxiliary antennas around the main antenna of the radar, which form an adaptive array together with the main antenna. Through adaptive weighting of the auxiliary array, the zero points of the receiving pattern synthesized by the entire sidelobe cancellation system adaptively align with the interference directions so as to suppress the interference. In the sidelobe cancellation system, the weighting coefficient of the main antenna is always 1, and the weighting coefficient of the


auxiliary array is determined by the adaptive algorithm. The sidelobe cancellation system is therefore a special case of an adaptive beamforming system. The adaptive algorithms used in radar sidelobe cancellation can be divided into open-loop and closed-loop algorithms. Open-loop algorithms involve a large amount of computation and are difficult to implement in engineering practice, so this paper studies closed-loop adaptive algorithms. The closed-loop algorithms are mainly based on Wiener filtering with gradient descent: the optimal solution is sought on a selected performance surface, which leads to the least mean square (LMS) algorithm. Its convergence speed and error characteristics, however, are difficult to balance, and a high convergence speed brings a large steady-state error. The sample matrix inversion (SMI) algorithm derived from it eases the contradiction between convergence speed and steady-state error, but at the cost of a larger amount of computation. In addition, because the steepest-descent principle used by the LMS and SMI algorithms makes convergence depend on the eigenvalue spread of the correlation matrix, this paper also uses the conjugate gradient method (CGM), which updates the solution along mutually orthogonal search directions at each iteration. Finally, a normalized LMS algorithm is proposed to resolve the contradiction between the convergence speed and the steady-state error of the typical LMS algorithm.

2 Radar Sidelobe Cancellation Technology

Active interference can enter the radar not only through the main lobe of the antenna but also through its side lobes. One way to deal with co-channel interference entering the receiver through the side lobes is to use a very low sidelobe antenna. However, developing low-sidelobe antennas is difficult and costly, and only newly developed radar antennas reach low sidelobe levels. Moreover, as interference technology advances, the effective interference power keeps increasing, and an extremely low sidelobe antenna alone can hardly suppress strong sidelobe interference. The most effective countermeasure is adaptive sidelobe cancellation. The antenna pattern has a spatial filtering characteristic ("space selection"): the main lobe acts as a "passband" and the side lobes as a "stopband". By adding auxiliary antennas and weighting and summing the signals they receive, a new spatial filtering characteristic is formed that further removes the interference received through the side lobes of the main antenna [3]. Adaptive sidelobe cancellation places a null at the interference angle in the sidelobe region and thereby suppresses strong interference entering through the side lobes. An adaptive canceller generally consists of a high-gain main radar antenna and several low-gain auxiliary antennas whose gain is comparable to the first sidelobe gain of the main antenna. The adaptive processor computes a set of weight coefficients from the signals received by the main and auxiliary antennas, adjusts the amplitude and phase of the auxiliary channels, and adaptively forms a null in the direction of the active interference. The sidelobe cancellation schematic is shown in Fig. 1.


Fig. 1. Schematic diagram of sidelobe cancellation

In the schematic diagram, $k$ denotes the time index and $d(k)$ the main-channel signal, which contains the useful signal $s_0(k)$ and the interference $i_0(k)$ received through the radar main lobe [4]. For $N$ auxiliary antennas, the auxiliary-array snapshot is an $N \times 1$ vector $x(k)$ containing the useful signal $s(k)$ and the interference $i(k)$. The weight vector obtained by the adaptive sidelobe cancellation algorithm is an $N \times 1$ vector $w(k)$, and the output of the cancellation system is

$e(k) = d(k) - w^{H}(k)\,x(k)$.  (1)

Since the useful signal has zero mean, $e(k)$ is the cancellation residual, and its power characterizes the cancellation ability.

3 Adaptive Cancellation Algorithm

3.1 Least Mean Square Algorithm

The purpose of the least mean square algorithm is to minimize the mean of the squared error. Through algebraic calculation, the antenna-array weights are obtained by repeated iteration so as to reach the desired array performance [5]. The squared-error expression is

$|e(k)|^{2} = |d(k)|^{2} - 2\, d(k)\, w^{H}(k)\, x(k) + w^{H}(k)\, x(k)\, x^{H}(k)\, w(k)$.  (2)

For the quadric surface formed by the objective function, the iterative relationship of weights is obtained through its performance function, and the optimal search based on gradient information is realized. We use the steepest descent method (the opposite direction of the gradient) to iteratively obtain the gradient, and then derive the weight change relationship [6]. The recursive relationship of weights is:


$w(k+1) = w(k) + \tfrac{1}{2}\,\mu\,[-\nabla(k)]$  (3)

$w(k+1) = w(k) - \mu\,[R_{xx}\, w - r] = w(k) + \mu\, e^{*}(k)\, x(k)$  (4)

When the gradient vector is zero, i.e., when the Wiener solution is reached, the following relationship holds:

$E[e(k)\, x(k)] = 0$.  (5)

The implementation steps of the least mean square algorithm are:

(1) Weight initialization: the vector $w(k)$ starts as an $N \times 1$ all-zero vector.
(2) Define the signal received by the auxiliary antennas:

$x(k) = v_{s}\, s(k) + v_{s}\, i(k) + n$,  (6)

where $v_{s}$ is an $N \times 1$ steering vector and $n$ is noise.
(3) Define the auxiliary-channel output: $y(k) = w^{H}(k)\, x(k)$.
(4) Weight update: $w(k+1) = w(k) - \mu\,[R_{xx}\, w - r] = w(k) + \mu\, e^{*}(k)\, x(k)$.
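The following NumPy sketch illustrates these steps on simulated data. The jammer direction, signal powers and the step size mu are illustrative assumptions, not values taken from the paper; the loop implements the residual $e(k) = d(k) - w^{H}(k)x(k)$ and the weight update of Eq. (4).

```python
# Minimal complex-LMS sidelobe-canceller sketch (assumed scenario, not the paper's data).
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 2000                                   # auxiliary antennas, snapshots
theta_j = np.deg2rad(30.0)                       # assumed jammer direction
v_j = np.exp(1j * np.pi * np.arange(N) * np.sin(theta_j))   # lambda/2 ULA steering vector

jam = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) * 10.0   # strong jammer
noise = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
x = np.outer(v_j, jam) + noise                   # auxiliary-array snapshots, cf. Eq. (6)
d = 0.3 * jam + (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
                                                 # main channel: jammer via a sidelobe + noise

w = np.zeros(N, dtype=complex)                   # step (1): all-zero initial weights
mu = 1e-4                                        # assumed step size
for k in range(K):
    e = d[k] - np.vdot(w, x[:, k])               # residual e(k) = d(k) - w^H x(k)
    w = w + mu * np.conj(e) * x[:, k]            # weight update, Eq. (4)

print("residual power after convergence:",
      np.mean(np.abs(d[-200:] - w.conj() @ x[:, -200:])**2))
```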

3.2 Sampling Matrix Inversion Algorithm

A significant disadvantage of the least mean square algorithm is that it needs many iterations before reaching stable convergence. The sample matrix inversion algorithm instead forms a time-averaged estimate from a block of $K$ snapshots of the array; it requires no iteration and is an adaptive algorithm based on the maximum signal-to-interference-plus-noise ratio (SINR) criterion [7]. For this block-adaptive method, the $k$-th block of $K$ samples is

$X_{K}(k) = \begin{bmatrix} x_{1}(1+kK) & x_{1}(2+kK) & \cdots & x_{1}(K+kK) \\ x_{2}(1+kK) & x_{2}(2+kK) & \cdots & x_{2}(K+kK) \\ \vdots & & & \vdots \\ x_{M}(1+kK) & x_{M}(2+kK) & \cdots & x_{M}(K+kK) \end{bmatrix}$.  (7)

The implementation steps of the sampling matrix inversion algorithm are:

(1) Weight initialization: the vector $w(k)$ starts as an $N \times 1$ all-zero vector.
(2) Define the signal received by the auxiliary antennas:

$x(k) = v_{s}\, s(k) + v_{s}\, i(k) + n$,  (8)

where $v_{s}$ is an $N \times 1$ steering vector and $n$ is noise.

(3) Calculate the sample covariance matrix:

$R_{xx}(k) = \frac{1}{K}\, X_{K}(k)\, X_{K}^{H}(k)$.  (9)

(4) Calculate the correlation vector:

$r = \frac{1}{K}\, d^{*}(k)\, X_{K}(k)$.  (10)

(5) Calculate the weight vector:

$w_{SMI}(k) = R_{xx}^{-1}(k)\, r(k) = \left[X_{K}(k)\, X_{K}^{H}(k)\right]^{-1} d^{*}(k)\, X_{K}(k)$.  (11)
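A short block-wise sketch of Eqs. (9)–(11) follows, reusing the arrays x and d from the LMS sketch above (an assumption). The small diagonal loading is an extra numerical safeguard, not part of the paper's derivation.

```python
# Sample-matrix-inversion weights per Eqs. (9)-(11).
import numpy as np

def smi_weights(x_block, d_block, loading=1e-3):
    """x_block: N x K auxiliary snapshots; d_block: K main-channel samples."""
    K = d_block.size
    Rxx = x_block @ x_block.conj().T / K               # Eq. (9)
    r = x_block @ d_block.conj() / K                   # Eq. (10)
    # diagonal loading keeps the inverse well conditioned (assumption)
    return np.linalg.solve(Rxx + loading * np.eye(Rxx.shape[0]), r)   # Eq. (11)

w_smi = smi_weights(x, d)
print("SMI residual power:", np.mean(np.abs(d - w_smi.conj() @ x)**2))
```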

3.3 Conjugate Gradient Algorithm

Because the convergence speed of the previous steepest-descent-based algorithms is limited by the eigenvalue spread of the correlation matrix, the conjugate gradient method is used as an improvement: the solution is updated along mutually orthogonal search directions at each iteration, which gives the fastest convergence [8]. The goal of the CGM algorithm is to minimize, by repeated iteration, the quadratic cost function

$J(w) = \tfrac{1}{2}\, w^{H} A w - d^{H} w$,  (12)

where $A$ is a $K \times N$ matrix representing $K$ samples of the $N$-element auxiliary array. The gradient of the cost function is

$\nabla J(w) = A w - d$.  (13)

The iteration is driven by the residual, which reduces the number of iterations:

$r(1) = -J'(w(1)) = d - A\, w(1)$.  (14)

The conjugate direction is initialized from the residual:

$D(1) = A^{H} r(1)$.  (15)


The weight iteration is

$w(k+1) = w(k) - \mu(k)\, D(k)$.  (16)

The step size is chosen as

$\mu(k) = \frac{r^{H}(k)\, A A^{H}\, r(k)}{D^{H}(k)\, A^{H} A\, D(k)}$.  (17)

The residual vector and the direction vector are updated as

$r(k+1) = r(k) + \mu(k)\, A D(k)$,  (18)

$D(k+1) = A^{H} r(k+1) - \alpha(k)\, D(k)$,  (19)

$\alpha(k) = \frac{r^{H}(k+1)\, A A^{H}\, r(k+1)}{r^{H}(k)\, A A^{H}\, r(k)}$.  (20)
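Equations (12)–(20) describe a conjugate-gradient iteration for the least-squares weight problem. The sketch below uses the standard conjugate-gradient least-squares (CGNR) form of the same iteration rather than the paper's exact sign convention, and again reuses x and d from the LMS sketch (assumptions).

```python
# Conjugate-gradient (CGNR) solution of min_w ||d - A w||^2, in the spirit of Eqs. (12)-(20).
import numpy as np

def cgm_weights(A, d, n_iter=20):
    w = np.zeros(A.shape[1], dtype=complex)
    r = d - A @ w                                  # residual, cf. Eq. (14)
    p = A.conj().T @ r                             # initial search direction, cf. Eq. (15)
    g_old = np.real(np.vdot(p, p))
    for _ in range(n_iter):
        Ap = A @ p
        mu = g_old / np.real(np.vdot(Ap, Ap))      # step size, cf. Eq. (17)
        w = w + mu * p                             # weight update, cf. Eq. (16)
        r = r - mu * Ap                            # residual update, cf. Eq. (18)
        g = A.conj().T @ r
        g_new = np.real(np.vdot(g, g))
        p = g + (g_new / g_old) * p                # new conjugate direction, cf. Eqs. (19)-(20)
        g_old = g_new
    return w

A = x.conj().T                                     # K x N sample matrix (assumed variables)
w_cgm = cgm_weights(A, d)
print("CGM residual power:", np.mean(np.abs(d - A @ w_cgm)**2))
```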

3.4 Normalized Least Mean Square Algorithm

The normalized LMS (NLMS) algorithm is an improvement of the typical LMS algorithm. It avoids the disturbance caused by gradient-noise amplification and adapts the tracking step size, so that its tracking behaviour, iteration speed and error variation are all better than those of a typical LMS algorithm with a constant step size. The basic idea is to use a larger step size in the tracking phase so that the weights converge quickly, and then, to prevent the steady-state error caused by an overly large step, to adjust the step size adaptively throughout the process: fast convergence first, stability after convergence [9]. The step size is adjusted according to the input signal, whose power is proportional to the steady-state error, while the step size should be inversely proportional to it. The normalized LMS algorithm therefore normalizes the step size by the squared norm of the input signal, obtaining a signal-dependent step size that improves the performance of the LMS algorithm [10]. The variable-step-size LMS weight update can be written as

$w(k+1) = w(k) + \mu(k)\, e^{*}(k)\, x(k)$.  (21)

In order to achieve fast convergence, it is necessary to select the step value appropriately, reduce the instantaneous square error, and use the instantaneous square error as a simple estimate of the mean square error MSE, which is also the basic idea of the LMS algorithm. In order to speed up the convergence, it is appropriate to minimize the squared error, obtain the partial derivative of the variable coefficient, and make it zero, and find:

$\mu(k) = \frac{1}{x^{T}(k)\, x(k)}$  (22)

The resulting step value may make the instantaneous error variation negative. To control this offset, and considering that the derivative of the instantaneous squared error is not equal to the derivative of the mean squared error, the normalized LMS update is modified as

$w(k+1) = w(k) + \frac{\mu}{c + x^{T}(k)\, x(k)}\, e(k)\, x(k)$,  (23)

where $\mu$ is a fixed convergence factor whose purpose is to control the amount of offset, and the parameter $c$ prevents the step value from becoming too large when $x^{T}(k)\, x(k)$ is very small.
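A minimal NLMS variant of the earlier LMS loop is sketched below; the values of mu and c are assumptions, and the squared norm of the complex snapshot is used in place of $x^{T}(k)x(k)$.

```python
# Normalized-LMS canceller per Eq. (23), reusing x, d from the LMS sketch (assumed).
import numpy as np

def nlms_cancel(x, d, mu=0.5, c=1e-6):
    N, K = x.shape
    w = np.zeros(N, dtype=complex)
    e_out = np.zeros(K, dtype=complex)
    for k in range(K):
        xk = x[:, k]
        e = d[k] - np.vdot(w, xk)                                    # residual e(k)
        w = w + (mu / (c + np.real(np.vdot(xk, xk)))) * np.conj(e) * xk   # Eq. (23)
        e_out[k] = e
    return w, e_out

w_nlms, e_nlms = nlms_cancel(x, d)
print("NLMS residual power (last 200 samples):", np.mean(np.abs(e_nlms[-200:])**2))
```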

4 Simulation and Performance Analysis

The simulation uses a linear array of 8 auxiliary antennas with element spacing $0.5\lambda$. The desired signal arrives from 0° and has zero mean, and the interference arrives from an azimuth of 30°. The simulation compares the convergence speed and weight stability of the LMS, SMI and CGM algorithms, and finally compares the convergence performance of the proposed normalized LMS algorithm with that of the typical LMS algorithm (Fig. 2).

Fig. 2. Algorithm cancellation comparison chart

The LMS algorithm converges through continuous iteration, but its convergence is slow and the offset after convergence is large. The SMI algorithm obtains the optimal weights and pattern by a time-averaged estimate of the array correlation matrix over the sampled block; it needs no iteration, so it is fast, but it inverts a large amount of data and its hardware implementation is complicated. The CGM algorithm performs very well, with fast convergence and high stability, but the conjugate-gradient weight update makes it computationally complex. To examine the contradiction between convergence speed and steady-state error, step sizes of 0.1, 5e-3 and 2e-3 are compared. Figure 3 shows that the step size 0.1 converges fastest but fluctuates markedly around the convergence value, the step size 2e-3 converges slowest but is stable after convergence, and the step size 5e-3 lies in between in both convergence speed and stability.

Fig. 3. Weight magnitude iteration graph

In Fig. 4, the normalized LMS algorithm changes the step size in the weight-update formula dynamically. A fixed step size that is too large converges quickly but leaves a large steady-state error, while a small fixed step size is stable after convergence but converges very slowly. The improved algorithm normalizes the step by the error and signal values at the current point: early in the iteration a larger step accelerates convergence, and as convergence proceeds the normalized step becomes smaller, keeping the steady-state error small.


Fig. 4. Weight convergence speed characteristic diagram

5 Conclusion

In this paper we study radar sidelobe cancellation against active interference and analyze its principle. Adaptive cancellation based on auxiliary antennas is introduced, and the performance characteristics of several adaptive algorithms are analyzed. The LMS algorithm, attractive for its balance of performance and complexity, is improved, and an NLMS algorithm is proposed; the simulations verify that the normalized LMS algorithm resolves the contradiction between convergence speed and steady-state error.

References

1. Kulpa JS, Maslikowski L (2017) Filter-based design of noise radar waveform with reduced side-lobes. IEEE Trans Aerosp Electron Syst 1–1
2. Pengliang Y, Chenjiang G, Qi Z et al (2018) Sidelobe suppression with constraint for MIMO radar via chaotic whale optimisation. Electron Lett 54(5):311–313
3. Benesty J, Cohen I, Chen J (2017) Adaptive beamforming. In: Fundamentals of signal enhancement and array signal processing. Wiley, Singapore
4. Sibei C, Qingjun Z, Mingming B et al (2018) An improved adaptive received beamforming for nested frequency offset and nested array FDA-MIMO radar. Sensors 18(2):520
5. Elisa G, Piotr S, Maria-Pilar JA et al (2018) Recent advances in array antenna and array signal processing for radar. Int J Antennas Propag 2018:1–2
6. Cheng S, Wei Y, Chen Y et al (2017) A universal modified LMS algorithm with iteration order hybrid switching. ISA Trans 67:67–75
7. Horowitz L, Blatt H, Brodsky WG et al (1979) Controlling adaptive antenna arrays with the sample matrix inversion algorithm. IEEE Trans Aerosp Electron Syst AES-15(6):840–848
8. Nazareth JL (2009) Conjugate gradient method. Wiley Interdiscip Rev Comput Stat 1(3):348–353
9. Jo SE, Kim SW (2005) Consistent normalized least mean square filtering with noisy data matrix. IEEE Trans Signal Process 53(6):2112–2123
10. Sahu R, Mohan MR, Sharma MS (2013) Performance analysis of LMS adaptive beamforming algorithm

On the Spectral Efficiency of Multiuser Massive MIMO with Zero-Forcing Precoding
Chenglin Zheng1, Weijie Tan1,2, and Yazhen Chen2
1 School of Electronic Information, Wuhan University, Wuhan 430072, China
[email protected], [email protected]
2 School of Computer Science, Guangzhou University, Guangzhou 510006, China
[email protected]

Abstract. This paper investigates the spectral efficiency (SE) of downlink massive MIMO systems over Ricean fading channels with a zero-forcing precoder at the base station. An exact expression for the SE is derived, and tight lower and upper bounds are presented with the aid of a modified Jensen's inequality. Our results show that as the number of transmit antennas grows large, or in the high signal-to-noise-ratio regime, the lower and upper bounds coincide and are approximately equal to the exact expression for the spectral efficiency of the system. In addition, when the Ricean fading channels reduce to the Rayleigh fading case, a tractable lower bound of the SE is obtained, and our results are shown to cover a series of previous works as special cases. Finally, numerical results are presented to validate the theoretical analysis.

Keywords: Spectral efficiency · Zero-forcing precoding · Massive MIMO · Ricean fading · Space-division multiple-access

1 Introduction

With the rapid development of smart terminals and their applications, the demand for high-speed services increases explosively year by year. To meet this demand, massive multiple-input multiple-output (MIMO) has recently attracted great interest for future wireless communications [1] and is regarded as a promising technique for fifth-generation communication systems. It has been theoretically demonstrated that massive MIMO systems can significantly increase the achievable sum-rate of cellular communication systems while possibly reducing system energy consumption [2–4]. (This work was supported by the National Undergraduate Training Program for Innovation and Entrepreneurship and the Natural Science Foundation of Guangdong Province of China under Grant 2018A030310338.)


To further maximize the achievable sum-rate, massive MIMO systems with different precoding or detection schemes have been investigated broadly [3–7]. In particular, [3] proposed massive MIMO systems using simple linear algorithms, maximum ratio combining (MRC) for the uplink and maximum ratio transmission (MRT) for the downlink, and showed that the effect of fast fading vanishes when the BS deploys very large antenna arrays while simultaneously serving multiple users. The authors of [4–6] studied uplink performance with MRC, zero forcing (ZF) and minimum mean square error filters for massive MIMO: lower bounds on the achievable sum-rate were derived in [4], and novel upper and lower bounds on the achievable sum-rate of point-to-point MIMO systems with ZF receivers were given in [5, 6]. Rate analyses of massive MIMO with MRT and ZF precoders have produced closed-form formulas for the achievable rate in the high and low signal-to-noise-ratio (SNR) regimes. The authors in [7] investigated the achievable rate of downlink massive MIMO with MRT and ZF precoders, but did not give an exact expression. To the best of our knowledge, there are currently few closed-form analytical results on the achievable sum-rate of massive MIMO systems, and the results of [3–7] are limited to independent and identically distributed (i.i.d.) Rayleigh fading channels or to point-to-point MIMO systems, so the practically relevant case of massive MIMO in Ricean fading channels remains an open problem. Motivated by this fact, our work focuses on the spectral efficiency of multiuser massive MIMO with a ZF precoder and differs from the previous literature in the following aspects. A lower bound on the spectral efficiency is obtained for Ricean fading channels; the model is then reduced to Rayleigh fading, for which we derive an exact closed-form expression for the spectral efficiency and present tractable upper and lower bounds that are much tighter than the results in [7] for arbitrary SNR and cover a series of previous works as special cases. Finally, numerical results are presented to validate the theoretical analysis.

2 System Model

We consider downlink transmission in a massive MIMO system, where the BS is equipped with $N_t$ transmit antennas and transmits simultaneously to $M$ single-antenna users ($N_t \geq M$). The BS uses linear precoding to process the signal before transmission, which requires knowledge of the CSI at the BS. The received vector of the $M$ users can be written as

$r = \sqrt{\rho}\, H W s + n$,  (1)

where $\rho$ is the average SNR of the system, $W$ denotes the $N_t \times M$ precoding matrix, $n$ is the vector of additive white zero-mean Gaussian noise, and $H$ denotes the $M \times N_t$ fast-fading channel matrix between the BS and the


M users. The Ricean channel is considered in this paper; it consists of a deterministic component corresponding to the line-of-sight signal and a Rayleigh-distributed random component, and the Ricean K-factor is the ratio of the power of the deterministic component to the power of the scattered components. The Ricean channel matrix $H$ can be written as [8]

$H = \left[\Xi(\Xi + I_{N_t})^{-1}\right]^{1/2} \bar{H} + \left[(\Xi + I_{N_t})^{-1}\right]^{1/2} H_w$,  (2)

where $\Xi$ is the $N_t \times N_t$ diagonal matrix with $[\Xi]_{kk} = K_k$, the Ricean K-factor of the $k$-th user, $H_w$ is the random component whose entries are i.i.d. complex Gaussian random variables with zero mean and unit variance, and $\bar{H}$ is the channel mean matrix,

$\bar{H} = \left[\bar{h}_1, \bar{h}_2, \ldots, \bar{h}_M\right]$.  (3)

Assuming a uniform linear array (ULA) at the BS, the channel mean vector of the $k$-th user is

$\bar{h}_k = \left[1 \;\; e^{j k_0 d \cos(\varphi_k)} \;\; \ldots \;\; e^{j (N_t - 1) k_0 d \cos(\varphi_k)}\right]$,  (4)

where $k_0 = 2\pi/\lambda$, $\lambda$ is the carrier wavelength, $d$ is the inter-antenna spacing, and $\varphi_k$ is the angle of departure of the $k$-th user. To facilitate the analysis, the noise variance and the large-scale fading are set to one, since the large-scale fading can be known a priori. With zero-forcing (ZF) precoding at the BS, $W$ can be written as

$W = H^{H}\left(H H^{H}\right)^{-1}$.  (5)

In the following section, we circumvent this problem by deriving a closed-form expression and tractable lower and upper bounds on the spectral efficiency of massive MIMO systems with the ZF precoder.
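As a quick numerical illustration of the model (1)–(5), the NumPy sketch below builds a Ricean channel with a common K-factor, forms the ZF precoder $W = H^{H}(HH^{H})^{-1}$, and evaluates a per-user rate of the form $\log_2(1 + \rho/[(HH^{H})^{-1}]_{kk})$; with $H$ taken here as $M \times N_t$, $HH^{H}$ plays the role of the paper's $H^{H}H$. The antenna count, K-factor, SNR and departure angles are illustrative assumptions.

```python
# ZF precoding sketch for the Ricean downlink model (assumed parameter values).
import numpy as np

rng = np.random.default_rng(1)
Nt, M, K_factor = 30, 3, 10.0                       # transmit antennas, users, Ricean K
phi = np.deg2rad([20.0, 60.0, 100.0])               # assumed departure angles
n = np.arange(Nt)
H_bar = np.exp(1j * np.pi * np.outer(np.cos(phi), n))          # Eq. (4) with d = lambda/2
H_w = (rng.standard_normal((M, Nt)) + 1j * rng.standard_normal((M, Nt))) / np.sqrt(2)
H = np.sqrt(K_factor / (K_factor + 1)) * H_bar \
    + np.sqrt(1.0 / (K_factor + 1)) * H_w           # Eq. (2) with a common K-factor

G = H @ H.conj().T
W = H.conj().T @ np.linalg.inv(G)                   # ZF precoder, Eq. (5)
gain = 1.0 / np.real(np.diag(np.linalg.inv(G)))     # per-user effective channel gain
rho = 10.0                                          # assumed average SNR (linear)
print("per-user SE [bit/s/Hz]:", np.log2(1 + rho * gain))
```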

3 Spectral Efficiency Analysis of the System

In this section, we derive an exact closed-form expression for the spectral efficiency and present tight upper and lower bounds for the ZF precoder.

3.1 Spectral Efficiency Analysis in Ricean Fading

From (1), the spectral efficiency of the $k$-th user with the ZF precoder is [7]

$R_k^{ZF} = E\left\{\log_2\left(1 + \frac{\rho}{\left[(H^{H}H)^{-1}\right]_{kk}}\right)\right\}$,  (6)

where the expectation is taken over the ergodic realizations of the channel $H$.


Theorem 1. For Ricean fading channels, the lower bound on the spectral efficiency is given by

$R_{k,\mathrm{lower}}^{ZF} = \log_2\left(1 + \rho\, \frac{K_k N_t + (N_t - M)}{K_k + 1}\right)$.  (7)

Proof. We begin with the Jensen-type inequality $E\{\log(1 + X/Y)\} \geq \log\left(1 + E\{X\}/E\{Y\}\right)$. With its aid, the spectral efficiency of the $k$-th user can be lower bounded as

$R_{k,\mathrm{lower}}^{ZF} = \log_2\left(1 + \frac{\rho}{E\left\{\left[(H^{H}H)^{-1}\right]_{kk}\right\}}\right)$.  (8)

For the sake of simplicity, we let

$Y_k = E\left\{\left[(H^{H}H)^{-1}\right]_{kk}\right\}$.  (9)

Substituting (3) into (9), along with some basic manipulations, $Y_k$ can be further simplified as

$Y_k = E\left\{\left[\frac{K_k}{K_k+1}\, \bar{H}^{H}\bar{H} + \frac{1}{K_k+1}\, H_w^{H} H_w\right]^{-1}\right\}_{kk}$.  (10)

Since $E\{\bar{H}^{H}\bar{H}\} = N_t$ and $E\left\{\mathrm{trace}\left[(H_w^{H}H_w)^{-1}\right]\right\} = \frac{M}{N_t - M}$, we obtain

$Y_k = \left(\frac{K_k N_t}{K_k + 1} + \frac{N_t - M}{K_k + 1}\right)^{-1}$.  (11)

Substituting (11) into (8) yields the desired result. From Theorem 1, we see that $R_{k,\mathrm{lower}}^{ZF}$ depends on the SNR, the number of transmit antennas, the number of users and the Ricean K-factor. We now investigate the exact expression of the spectral efficiency under Rayleigh fading channels in the following subsection.

3.2 Exact Expression of the Spectral Efficiency

Theorem 2. For i.i.d. Rayleigh fading channels, the exact analytical expression of $R_k^{ZF}$ with the ZF precoder is

$R_k^{ZF} = \log_2(e)\, e^{1/\rho} \sum_{h=1}^{N_t - M + 1} E_h\!\left(\frac{1}{\rho}\right)$,  (12)

where $E_h(\cdot)$ denotes the exponential integral function of order $h$.
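The closed form (12) is easy to evaluate numerically, since scipy.special.expn provides the exponential integral $E_h(\cdot)$. The sketch below compares (12) with a Monte Carlo average of (6) under i.i.d. Rayleigh fading; $N_t$, $M$, $\rho$ and the number of trials are assumed values, and the $M \times N_t$ channel convention noted earlier is used again.

```python
# Numerical check of Eq. (12) against a Monte Carlo average of Eq. (6).
import numpy as np
from scipy.special import expn            # expn(h, x) = exponential integral E_h(x)

Nt, M, rho = 30, 3, 10.0
analytic = np.log2(np.e) * np.exp(1.0 / rho) * sum(
    expn(h, 1.0 / rho) for h in range(1, Nt - M + 2))          # Eq. (12)

rng = np.random.default_rng(2)
trials, acc = 2000, 0.0
for _ in range(trials):
    H = (rng.standard_normal((M, Nt)) + 1j * rng.standard_normal((M, Nt))) / np.sqrt(2)
    Xk = 1.0 / np.real(np.linalg.inv(H @ H.conj().T)[0, 0])     # X_k of Eq. (13), user 1
    acc += np.log2(1 + rho * Xk)
print("analytic:", analytic, " Monte Carlo:", acc / trials)
```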


Proof. For the sake of simplicity, we start by defining

$X_k = \frac{1}{\left[(H^{H}H)^{-1}\right]_{kk}}$.  (13)

We rewrite (6) as

$R_k^{ZF} = \log_2(e)\, E\left\{\ln\left(1 + \rho X_k\right)\right\}$,  (14)

and evaluate $R_k^{ZF}$ through

$R_k^{ZF} = \log_2(e) \int_0^{\infty} \ln(1 + \rho x_k)\, p(x_k)\, dx_k$.  (15)

According to random matrix theory, when the entries of the small-scale fading $H$ are i.i.d. Rayleigh random variables, the probability density function (p.d.f.) of $X_k$ is [9]

$p(x_k) = \frac{e^{-x_k}\, x_k^{N_t - M}}{(N_t - M)!}$.  (16)

Substituting (16) into (15) and applying the integration identity [10]

$\int_0^{\infty} \ln(1 + a\lambda)\, \lambda^{q-1} e^{-b\lambda}\, d\lambda = (q-1)!\, e^{b/a}\, b^{-q} \sum_{h=1}^{q} E_h\!\left(\frac{b}{a}\right)$,  (17)

we obtain the exact expression (12) after some basic manipulations. This completes the proof. From Theorem 2, we draw the interesting conclusion that $R_k^{ZF}$ is determined by the SNR and the number of transmit antennas. We now study the bounds on the spectral efficiency in the following subsection.

3.3 Tight Bounds on the Spectral Efficiency

Theorem 3. For i.i.d. Rayleigh fading channels, $R_k^{ZF}$ can be bounded by means of the modified Jensen's inequality as

$R_{\mathrm{lower}} \leq R_k^{ZF} \leq R_{\mathrm{upper}}$,  (18)

where

$R_{\mathrm{lower}} \triangleq \log_2(e)\, \psi(N_t - M + 1) + \log_2(e)\, \ln\!\left(\rho + e^{-\psi(N_t - M + 1)}\right)$  (19)

and

$R_{\mathrm{upper}} \triangleq \log_2(e)\, \psi(N_t - M + 1) + \log_2(e)\, \ln\!\left(\rho + \frac{1}{N_t - M}\right)$.  (20)
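The bounds (19)–(20) need only the digamma function $\psi(\cdot)$, available as scipy.special.digamma. A minimal evaluation, reusing Nt, M, rho and the variable analytic from the previous sketch (assumptions), is:

```python
# Evaluate the bounds (19)-(20) and compare with the exact expression (12).
import numpy as np
from scipy.special import digamma

psi = digamma(Nt - M + 1)
R_lower = np.log2(np.e) * (psi + np.log(rho + np.exp(-psi)))        # Eq. (19)
R_upper = np.log2(np.e) * (psi + np.log(rho + 1.0 / (Nt - M)))      # Eq. (20)
print("lower:", R_lower, " exact:", analytic, " upper:", R_upper)
```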


Proof. We start by re-expressing $R_k^{ZF}$ in (14) as

$R_k^{ZF} = \log_2(e)\left[E\{\ln X_k\} + E\left\{\ln\left(\rho + \frac{1}{X_k}\right)\right\}\right]$.  (21)

To evaluate the first term in (21), the required expectation of $\ln(X_k)$ is

$E\{\ln(X_k)\} = \int_0^{\infty} \ln x_k\, p(x_k)\, dx_k$.  (22)

Substituting the p.d.f. of $X_k$ in (16) into (22) and applying the integration identity [10]

$\int_0^{\infty} \lambda^{a-1} e^{-b\lambda} \ln\lambda\, d\lambda = \frac{1}{b^{a}}\, \Gamma(a)\left[\psi(a) - \ln(b)\right]$,  (23)

the average log function evaluates, after some basic manipulations, to

$E\{\ln(X_k)\} = \psi(N_t - M + 1)$.  (24)

With the help of Jensen's inequality, the second term in (21) can be upper and lower bounded as

$\ln\left(\rho + e^{-E\{\ln X_k\}}\right) \leq E\left\{\ln\left(\rho + \frac{1}{X_k}\right)\right\} \leq \ln\left(\rho + E\left\{\frac{1}{X_k}\right\}\right)$.  (25)

To evaluate the right-hand side of (25), the average value of $1/X_k$ is

$E\left\{\frac{1}{X_k}\right\} = \int_0^{\infty} \frac{1}{x_k}\, p(x_k)\, dx_k$.  (26)

Substituting (16) into (26) and applying the integration identity $\int_0^{\infty} \lambda^{a} e^{-b\lambda}\, d\lambda = a!\, b^{-a-1}$, the required expectation of $1/X_k$ becomes, after some basic manipulations,

$E\left\{\frac{1}{X_k}\right\} = \frac{1}{N_t - M}$.  (27)

Substituting these results into (25) and combining with (21) yields the result.

4 Numerical Results

This section provides numerical results to confirm the theoretical analysis. In the simulations, the channel mean vectors of the users are orthogonal to each other, the number of users is $M = 3$, the number of transmit antennas is $N_t = 30$, and the inter-antenna spacing is $d = \lambda/2$. For simplicity, every user has the same Ricean K-factor ($K_k = K$ for all $k$).

Fig. 1. The achievable sum-rate versus SNR for different Ricean K-factor cases.

Fig. 2. Achievable sum-rate versus the number of transmitter antennas.

In Fig. 1, Monte Carlo simulations and the exact analytical results of (12) are compared against the lower and upper bounds of (19) and (20). The lower and upper bounds remain tight across the entire SNR regime; furthermore, the exact analytical sum-rate and the lower and upper bounds coincide in the high-SNR regime and for larger numbers of transmit antennas. In addition, the analytical results of Theorem 3 are shown, and the achievable sum-rate hardly increases for a Ricean K-factor of $K_k = 15$ dB. This behaviour is expected since, with the ZF precoder and perfect CSI, the system experiences neither inter-user interference nor estimation error.

Fig. 3. Achievable sum-rate versus the Ricean K-factor.

Figure 2 compares the Monte Carlo simulations with the corresponding analytical approximation (7) for $\rho = 15$ dB and $\rho = 10$ dB. The simulations agree precisely with the analytical results. For the ZF precoder the achievable SE increases with the number of transmit antennas, consistent with the previous results in [7]; moreover, the achievable sum-rate at the higher SNR is much larger than at the lower one, indicating that a high SNR benefits the achievable sum-rate considerably. Figure 3 shows how the achievable sum-rate varies with the Ricean K-factor for $\rho = 10$ dB and $\rho = 5$ dB. The achievable SE changes little as the Ricean K-factor increases, because the system experiences no inter-user interference and no estimation error with the ZF precoder; on the other hand, as the Ricean K-factor grows, the channel matrix becomes identical to $\bar{H}$, whose singular values have a large spread. The achievable sum-rate at the higher SNR is again much larger than at the lower one for the same Ricean K-factor.

5 Conclusion

In this paper, we investigated the spectral efficiency of downlink multiuser massive MIMO systems with ZF precoding. An exact expression for the spectral efficiency was derived, together with tight lower and upper bounds.


Analytical results showed that the lower and upper bounds are approximately equal to the exact expression of the achievable ergodic sum-rate in the high-SNR regime or with large numbers of transmit antennas. In addition, a tractable lower bound of the spectral efficiency was derived for Ricean fading channels, and our results were shown to cover a series of previous works as special cases.

References

1. Boccardi F, Heath RW, Lozano A, Marzetta TL, Popovski P (2014) Five disruptive technology directions for 5G. IEEE Commun Mag 52(2):74–80
2. Rusek F, Persson D, Lau BK, Larsson EG et al (2013) Scaling up MIMO: opportunities and challenges with very large arrays. IEEE Signal Process Mag 30(1):40–60
3. Marzetta TL (2010) Noncooperative cellular wireless with unlimited numbers of base station antennas. IEEE Trans Wirel Commun 9(11):3590–3600
4. Ngo HQ, Larsson EG, Marzetta TL (2013) Energy and spectral efficiency of very large multiuser MIMO systems. IEEE Trans Commun 61(4):1436–1449
5. McKay MR, Collings IB, Tulino AM (2010) Achievable sum rate of MIMO MMSE receivers: a general analytic framework. IEEE Trans Inf Theory 56(1):396–410
6. Matthaiou M, Zhong CJ, Ratnarajah T (2011) Novel generic bounds on the sum rate of MIMO ZF receivers. IEEE Trans Signal Process 59(9):4341–4355
7. Yang H, Marzetta TL (2013) Performance of conjugate and zero-forcing beamforming in large-scale antenna systems. IEEE J Sel Areas Commun 31(2):172–179
8. Jin S, Gao XQ, You XH (2007) On the ergodic capacity of rank-1 Ricean fading MIMO channels. IEEE Trans Inf Theory 53(2):502–517
9. Grant A (2002) Rayleigh fading multi-antenna channels. EURASIP J Appl Signal Process 2002(3):316–329
10. Alfano G, Lozano A, Tulino AM, Verdú S (2004) Mutual information and eigenvalue distribution of MIMO Ricean channels. In: Proceedings IEEE ISIT
11. Gradshteyn IS, Ryzhik IM (2007) Table of integrals, series, and products, 7th edn. Academic Press
12. Abramowitz M, Stegun IA (1974) Handbook of mathematical functions. Dover, New York

A Signal Sorting Algorithm Based on LOF De-Noised Clustering
Zhenyuan Ji, Yan Bu, and Yun Zhang
Harbin Institute of Technology, Institute of Electronics, Harbin, China
[email protected]

Abstract. In this paper, an outlier removal algorithm is proposed for signals with low SNR. First, a coarse separation of the signals is performed with an isolated-point removal algorithm based on Euclidean distance, and the coarsely separated data are then finely separated with the density-based LOF algorithm. The remaining signal data after fine separation are clustered. Simulation analysis shows that the algorithm removes all isolated points at the cost of only a small loss of useful signal at low SNR, and the clustering of the residual signals works well.

Keywords: Outlier removal · LOF · Clustering

1 Introduction

Radar signal sorting is a vital part of the electronic reconnaissance system and is essential in electronic countermeasures. With the emergence of modern new-system radars, the traditional radar signal sorting methods based on the inter-pulse parameter PRI no longer meet current requirements [1]. In recent years, clustering algorithms have been used more and more for radar signal sorting in complex environments [2, 3], and the K-means clustering algorithm is widely used. The initial cluster centers and the number of clusters of the K-means algorithm normally have to be set manually. Zhang [4] uses the data-field potential function to obtain the cluster centers and the number of clusters automatically, but that method is only applicable to received signals with a high signal-to-noise ratio; for signals with low SNR, the noise isolated points must first be removed. The traditional approach is the distance-based isolated-point removal algorithm [5], but it discards a large amount of signal data while removing the isolated points, and in some cases the isolated points are still not completely cleared. In this paper, an algorithm combining Euclidean-distance-based outlier removal with density-based outlier detection is proposed: the distance-based algorithm first coarsely separates the signal data, and the density-based LOF algorithm [6] then finely separates the coarsely separated data. The algorithm removes all noise isolated points at the expense of only a small loss of signal.



2 Data Standardization

The radar signal parameters of different types are of different orders of magnitude, so the distance between parameters of large magnitude is much larger than that between parameters of small magnitude. To make the different parameter types comparable, the signal parameters must be standardized. The parameter data are normalized according to (1):

$x_i'(j) = \frac{x_i(j) - \min\left(x_i(j)\right)}{\max\left(x_i(j)\right) - \min\left(x_i(j)\right)}$,  (1)

where $i = 1, 2, \ldots, M$, with $M$ the number of signals, and $j = 1, 2, \ldots, N$, with $N$ the signal dimension, i.e., the number of signal parameter types. $x_i(j)$ is the signal sample in the $i$-th row and $j$-th column, and $x_i'(j)$ is the standardized sample. After this step, the different types of signal parameters are on the same order of magnitude and are comparable.
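A minimal NumPy sketch of the column-wise min-max normalization in Eq. (1); the parameter values are illustrative, not measured data.

```python
# Min-max standardization of a pulse-parameter matrix, Eq. (1).
import numpy as np

X = np.array([[3120.0,  9.5, 35.0],     # rows: pulses; columns: RF/MHz, PW/us, DOA/deg (assumed)
              [3010.0, 16.0, 40.1],
              [3400.0, 19.0, 38.2]])

X_min, X_max = X.min(axis=0), X.max(axis=0)
X_std = (X - X_min) / (X_max - X_min)   # each column scaled to [0, 1]
print(X_std)
```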

3 Outlier Removal Algorithm

3.1 Isolated Point Removal Based on Euclidean Distance

The K-means clustering algorithm is sensitive to isolated points in the signal parameters, so the isolated points must be removed. The standardized data form a data set $P$, and the Euclidean distance between every pair of parameter vectors is calculated using (2):

$d_{i,j} = \sqrt{\sum_{p=1}^{M} \left(x_{ip} - x_{jp}\right)^{2}}$,  (2)

where $1 \leq i, j \leq N$. The resulting $d_{i,j}$ form an $N \times N$ matrix $F$. Summing each row of $F$ gives $f_i$, $i = 1, 2, \ldots, N$, the sum of the distances between $x_i$ and the other parameter vectors; the average distance is

$\bar{f} = \sum_{i=1}^{N} f_i \big/ N$.  (3)

Each $f_i$ is compared with the average distance $\bar{f}$. If the parameters larger than $\bar{f}$ were simply moved into the set A, the isolated points might not be separated properly; therefore an appropriate weighting coefficient $m$ is chosen, and the parameters greater than $m \cdot \bar{f}$ are moved into the set A, which completes the coarse separation of the isolated points.

LOF is a density-based outlier detection algorithm that describes the degree of isolation of each parameter vector by assigning it an index: the larger the LOF value, the higher the degree of isolation and the more likely the point is an isolated point. The related definitions of the LOF algorithm are introduced first. Assume that the data remaining after coarse separation form a data set Q, and $q$ is a data object in Q:

1. $d(q, o)$ denotes the Euclidean distance between the two objects $q$ and $o$.
2. The k-th distance of $q$, denoted $k\text{-}dist(q)$, is the distance between $q$ and the object $o$ that is its k-th closest, where $o$ satisfies:
   - at least $k$ objects $o' \in Q \setminus \{q\}$ satisfy $d(q, o') \leq d(q, o)$;
   - at most $k - 1$ objects $o' \in Q \setminus \{q\}$ satisfy $d(q, o') < d(q, o)$.
3. The k-th distance neighborhood $N_k(q)$ of $q$ is the set of all objects within the k-th distance of $q$, including those at exactly the k-th distance.
4. The k-th reachable distance from object $o$ to object $q$ is

$reach\text{-}dist_k(q, o) = \max\{k\text{-}dist(o),\; d(q, o)\}$.  (4)

5. The local reachable density of $q$ is

$lrd_k(q) = 1 \Big/ \frac{\sum_{o \in N_k(q)} reach\text{-}dist_k(q, o)}{|N_k(q)|}$,  (5)

i.e., the reciprocal of the average reachable distance from $q$ to the objects in its k-th distance neighborhood.
6. The local outlier factor of $q$ is

$LOF_k(q) = \frac{\sum_{o \in N_k(q)} \frac{lrd_k(o)}{lrd_k(q)}}{|N_k(q)|}$,  (6)

the average ratio of the local reachable densities of the objects in $N_k(q)$ to the local reachable density of $q$. It characterizes the degree of isolation of $q$ within its k-neighborhood: the larger the value, the higher the degree of isolation and the more likely $q$ is an isolated point.
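The two-stage outlier removal can be sketched directly from Eqs. (2)–(6). In the NumPy sketch below, the synthetic data, the weighting coefficient m, the neighborhood size k and the LOF threshold are all illustrative assumptions.

```python
# Coarse separation by mean Euclidean distance (Eqs. (2)-(3)) followed by LOF (Eqs. (4)-(6)).
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.2, 0.02, (300, 3)),
               rng.normal(0.7, 0.02, (300, 3)),
               rng.uniform(0.0, 1.0, (20, 3))])          # two clusters plus isolated points

def coarse_separate(X, m=1.5):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # distance matrix, Eq. (2)
    f = D.sum(axis=1)
    keep = f <= m * f.mean()                                     # points above m*mean -> set A
    return X[keep], keep

def lof_scores(X, k=5):
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    idx = np.argsort(D, axis=1)[:, 1:k + 1]                      # k nearest neighbours
    k_dist = D[np.arange(n), idx[:, -1]]                         # k-th distance of each point
    reach = np.maximum(k_dist[idx], D[np.arange(n)[:, None], idx])   # Eq. (4)
    lrd = 1.0 / reach.mean(axis=1)                               # Eq. (5)
    return lrd[idx].mean(axis=1) / lrd                           # Eq. (6)

X_coarse, _ = coarse_separate(X)
lof = lof_scores(X_coarse, k=5)
X_fine = X_coarse[lof < 1.5]                                     # drop the most isolated points
```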

4 K-Means Clustering Algorithm Based on Data Field

4.1 Data Field

Data interacts with other data through the data field, and its influence function is a field strength function, which is shown as (7).

$f_y(x) = q\, e^{-\frac{d^{2}(x, y)}{2\sigma^{2}}}$,  (7)

where $\sigma$ is the radiation factor, $q$ is the data quantity carried by the data point (generally 1), and $d(x, y)$ is the Euclidean distance. The potential function of the data field follows from the field strength function:

$F(x_j) = \sum_{i=1}^{M} f_{x_j}(x_i) = \sum_{i=1}^{M} e^{-\frac{d^{2}(x_i, x_j)}{2\sigma^{2}}}$,  (8)

where $j = 1, 2, \ldots, M$ and $M$ is the number of data points. The larger the distance, the smaller the potential value; a high potential therefore indicates a dense data region.

4.2 Determination of the Initial Cluster Center

A line of equal potential in the data field is an equipotential line, and the center surrounded by equipotential lines is called a potential center $F_{max}$; it is a local maximum of the potential value. The number of clusters and the initial cluster centers can be determined from the potential centers, but a potential center does not necessarily coincide with a data sample, so the sample closest to the potential center is selected as the initial cluster center:

$d = \min_{q \in Q} d\left(F_{max}, Q(q)\right)$.  (9)

The object $q$ at which $d$ reaches its minimum is selected as the initial cluster center.

4.3 K-Means Algorithm

Each sample is assigned to the class of the nearest initial cluster center obtained from the data field, so that the similarity within a class is high and the similarity between classes is low. The cluster centers are then updated until the sum of the squared errors between all samples and their class means converges and the cluster centers no longer change. The sum of squared errors is

$J_N = \sum_{j=1}^{N} \sum_{x \in Q_j} \left\| x - m_j \right\|$,  (10)

where $m_j$ is the mean of the $j$-th sample set $Q_j$. The flow chart of the LOF de-noised clustering algorithm is shown in Fig. 1.
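The sketch below illustrates the data-field initialization followed by ordinary K-means. Selecting well-separated high-potential samples is used here as a simple stand-in for locating the equipotential-line maxima; the radiation factor sigma, the separation threshold and the cluster count are assumptions, and X_fine comes from the LOF sketch above.

```python
# Data-field potential (Eq. (8)), initial-center selection (Eq. (9)) and K-means (Eq. (10)).
import numpy as np

def potential(X, sigma=0.1):
    D2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    return np.exp(-D2 / (2.0 * sigma**2)).sum(axis=1)            # Eq. (8)

def kmeans(X, centers, n_iter=50):
    for _ in range(n_iter):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(len(centers))])
    return labels, centers

F = potential(X_fine)                                            # X_fine: assumed variable
order = np.argsort(F)[::-1]                                      # highest potential first
centers, n_clusters = [X_fine[order[0]]], 3
for i in order[1:]:
    if all(np.linalg.norm(X_fine[i] - c) > 0.2 for c in centers):
        centers.append(X_fine[i])                                # next well-separated maximum
    if len(centers) == n_clusters:
        break
labels, centers = kmeans(X_fine, np.array(centers))
```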


Fig. 1. The LOF de-noised clustering algorithm

5 Simulation Analysis

The radar signal parameters of complex systems are simulated as shown in Table 1.

Table 1. Simulation parameters

Source type | RF/MHz    | PW/us    | DOA/°       | Number of pulses
1           | 3100–3145 | 8.3–15.6 | 34.5–38     | 300
2           | 2800–3220 | 15–18.2  | 39.45–40.65 | 300
3           | 3270–3540 | 18–19.75 | 37–40       | 400

Isolated points are added to the signal data stream; the resulting data distribution is shown in Fig. 2.


Fig. 2. Original distribution of data

The signal is coarsely separated with the Euclidean-distance-based isolated-point removal, and the resulting data distribution is shown in Fig. 3.

Fig. 3. Data distribution after coarse separation

The coarsely separated data are then finely separated using the LOF algorithm; the obtained LOF values and the separated data are shown in Fig. 4.


(a) The value of the LOF

(b) Three-dimensional distribution of data after fine separation

Fig. 4. Data distribution after fine separation

The initial cluster center is determined by the data field potential function, as shown in Fig. 5. According to Fig. 5, the initial clustering center and the number of clusters were obtained, and then K-means clustering was performed. The final sorting accuracy was 99%.

(a) Equipotential distribution of RF and PW

(b) Equipotential distribution of RF and DOA

(c) Equipotential distribution of PW and DOA

Fig. 5. Signal parameter equipotential line distribution


The experimental results show that combining the Euclidean-distance-based outlier removal with the LOF detection algorithm separates all isolated points correctly at the cost of losing only a small amount of useful signal, which makes the final clustering-based sorting more accurate.

6 Conclusion

In this paper, an isolated-point removal algorithm is proposed for low-SNR signals. The Euclidean-distance-based isolated-point removal is first used for coarse separation, the density-based LOF algorithm then finely separates the data, and finally the signals are clustered and sorted. The simulation analysis shows that the algorithm removes all isolated points at the cost of losing only a small amount of useful signal, after which the data-field-based K-means algorithm sorts the signals well.

Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grants 61201304 and 61201308. The authors also thank the Key Laboratory of Marine Environmental Monitoring and Information Processing, Ministry of Industry and Information Technology.

References

1. Mei G (2011) Radar signal sorting algorithm on intensive signal condition. Harbin Engineering University, Harbin
2. Zhang W (2004) The application of clustering method in radar signal sorting. Radar Sci Technol 2:219–223
3. Zhu Z (2005) The clustering method of radar signals. Electron Countermeasures 6:6–10
4. Zhang R, Xia H (2015) Radar signal sorting algorithm of a new k-means clustering. Modern Def Technol 6:136–141
5. Chawla S, Sun P (2006) SLOM: a new measure for local spatial outliers. Knowl Inf Syst 9(4):412–429
6. Dutta H, Giannella C, Borne K et al (2007) Distributed top-k outlier detection in astronomy catalogs using the DEMAC system. In: Proceedings of 7th SIAM international conference on data mining

Design of a Small-Angle Reflector for Shadowless Illumination
Guangzhen Wang
Foundation Department, Tangshan University, Tangshan 063000, China
[email protected]

Abstract. An LED reflector for fully reflective shadowless illumination is designed by the flux compensation method, using the theories of geometrical optics and non-imaging optics. Based on the characteristics of the LED, the reflector produces a uniform illumination spot of 200 mm diameter at 1 m with an illuminance greater than 100,000 lx. The shadowless rate under occlusion is also studied. The reflector meets the special requirements of shadowless lighting and of signal transmission and coupling.

Keywords: Light-emitting diodes · Illumination design · Signal transmission

1 Introduction

With the development of LED packaging technology, the color uniformity and heat spreading of LEDs [1] have been greatly improved. Hsiao-Wen Lee proposed an advanced, ultra-thin, flexible light-emitting diode package technique in which different types of micro-lenses are applied to different lighting regions to investigate their lighting effects [2]. Multiple colored LEDs [3] are used more and more widely because of their high luminous flux [4, 5]. Large-size LED chip arrays can be used in shadowless lamps but need corresponding reflective or refractive devices to improve the lighting performance; this falls within the scope of small-angle lighting. In general, small-angle uniform illumination is achieved with PMMA lenses, and because it converges light of large angles the TIR lens is the most suitable lens type. The shadowless lamp is an important piece of equipment in the operating room and in signal transmission. Compared with traditional sources, the LED source produces less heat, emits no UV radiation and has a long lifetime. Most LED shadowless lamps are lens-type, whose design and manufacturing are relatively complex. Such lighting design belongs to non-imaging optics [6]; current design methods include the free-form surface method, optimization methods and others, with the free-form surface method the most popular because of its accuracy and efficiency. The device must satisfy several technical requirements at the same time and overcome many technical difficulties; the key is to distribute the total flux onto the designated site while the illumination uniformity, shadowless rate and illumination depth reach the same or even higher levels. In this paper, the theory of flux compensation and geometrical optics is used to design the reflector of an LED shadowless lamp. The surface of the reflector is obtained by rotating a curve. The total reflected flux from the LED is divided into two parts, which are


reflected to the operative site in two ways: one part by non-cross reflection and the other by cross reflection. The purpose of these two paths is to eliminate shadows.

2 Design and Simulation

Because the illuminated site is circular, a two-dimensional design is used here. The design idea is shown in Fig. 1, which represents half of the reflector surface.

Fig. 1. Light rays' paths from the LED source.

In Fig. 1, the total flux from the LED reaches the site in three parts: the directly incident part $\Phi_1$ (rays 0–1), the part $\Phi_{23}$ (rays 2–3) reflected to the site by non-cross reflection, and the part $\Phi_{34}$ (rays 3–4) reflected to the site by cross reflection. $h$ is the vertical distance from the LED to the operative site and $R_1$ is the radius of the site. In the spherical coordinate system of the LED, the total flux and $\Phi_{23}$ are, respectively,

$\Phi_T = \int I\, d\Omega = \int_0^{2\pi}\!\!\int_0^{\pi/2} I_0 \cos\theta \sin\theta\, d\theta\, d\varphi = \pi I_0 \sin^{2}\theta \Big|_0^{\pi/2} = \pi I_0$,  (1)

$\Phi_{23} = \pi I_0 \left(\sin^{2}\theta_3 - \sin^{2}\theta_2\right)$,  (2)

where $I_0$ is the light intensity in the direction $\theta = 0$. The flux collected by the reflector is therefore

$\Phi_r = \Phi - \Phi_2 = \pi I_0 \cos^{2}\theta_2$.  (3)


Ideally, if $\Phi_1$ and $\Phi_r$ all reach the operative site and produce uniform illumination, the average illuminance is

$E_{ave} = \Phi / (\pi R_1^{2}) = I_0\left(\sin^{2}\theta_1 + \cos^{2}\theta_2\right)/R_1^{2}$.  (4)

For $\theta \in [0, \theta_1]$, taking $d\theta$ as an infinitesimal section gives Eqs. (5) and (6):

$E(R)\, 2\pi R\, dR = 2\pi I_0 \cos\theta \sin\theta\, d\theta$,  (5)

$E(R) = I_0 \cos^{4}\theta / h^{2}$.  (6)

The compensation illuminance to be provided by the reflector is therefore

$E_{com} = E_{ave} - E_{dir}$,  (7)

and the compensation illuminance contributed by $\Phi_{23}$ is

$E_{23}(\theta) = \frac{\sin^{2}\theta_3 - \sin^{2}\theta_2}{\sin^{2}\theta_1 + \cos^{2}\theta_2}\, E_{com}$.  (8)

The calculation idea is shown in Fig. 2, in which Pm and Pm+1 are the arbitrary adjacent points. Parameter I and Nm are the light intensity and the normal vector at point Pm, respectively. According to the edge-ray theory, the two rays coming from the edge of LED must go to the edge of illuminated (or operative) site.

Fig. 2. Calculation of the reflected surface points.

The reflector-surface coordinates and the illuminated-surface coordinates can be written as $(r\cos\theta,\; r\sin\theta)$ and $(h,\; h\tan\theta')$. Dividing the reflector into many equal parts and taking the flux $d\Phi_{23}$ as an infinitesimal section gives

$d\Phi_{23} = \pi I_0\left(\sin^{2}\theta_{m+1} - \sin^{2}\theta_m\right) = \pi h^{2} E_{23}(\theta)\left(\tan^{2}\theta'_m - \tan^{2}\theta'_{m+1}\right)$.  (9)

Setting an initial point $P_1$, all the target-point coordinates can be calculated from Eqs. (8) and (9). The law of reflection (10) gives the relationship between the normal vector and the tangent vector [6]:

$N \cdot D = 0$,  (10)

where $D$ is the vector from $P_m$ to $P_{m+1}$. In this way the coordinates of all points on curve 2–3 are obtained, and the coordinates of the points on curve 3–4 are calculated similarly. The reflected ray follows the vector reflection law

$r_{fx}(i, n) = i - 2\,(i \cdot n)\, n$.  (11)
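Equation (11) is the standard vector form of the reflection law; a minimal sketch with assumed incident and normal vectors is:

```python
# Vector reflection law of Eq. (11): r = i - 2 (i . n) n.
import numpy as np

def reflect(i, n):
    """Reflect incident direction i about unit normal n."""
    return i - 2.0 * np.dot(i, n) * n

i = np.array([0.6, -0.8])          # assumed incident direction
n = np.array([0.0, 1.0])           # assumed surface normal
print(reflect(i, n))               # -> [0.6, 0.8]
```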

As an example, a reflector was designed using the above theory; its profile is shown in Fig. 3a and the simulated illumination system in Fig. 3b. The ideal light efficiency of the reflector is

$\eta = (\Phi_1 + \Phi_r)/\Phi = \sin^{2}\theta_1 + \cos^{2}\theta_2$,  (12)

where $\theta_1 = \arctan(1/10)$ and $\theta_2 = \arctan(1/2)$.

Fig. 3. Reflector profile and lighting system.

In the simulation, the illuminated site is replaced by an absorbing receiver. Two million rays were traced using the Monte Carlo method. The light-ray distribution is shown in Fig. 4 (not all rays are drawn); the light is seen to be directed to the small spot in an orderly manner.


Fig. 4. Reflector profile and lighting system.

In the simulation, an LED chip array of 5 mm × 5 mm with 4500 lm is used. Ignoring absorption and scattering losses, the reflector directs the light to the site well; the illuminance distribution is shown in Fig. 5.

Fig. 5. Illumination distribution of the illuminated site.


Figure 5a is a raster chart: 3723 lm of luminous flux reaches the receiver, i.e., the light efficiency is $\eta = 3723/4500 = 82.7\%$. Figure 5b is a line chart showing an illumination uniformity of 95%, where the illumination uniformity is defined as the ratio of the minimum illuminance to the average illuminance. The illuminance is uniform and the spot edge is sharp. If $D_{50}$ and $D_{10}$ denote the spot diameters at which the illuminance falls to 50% and 10% of the specified central illuminance, then $D_{50}/D_{10}$ is about 90%. The intensity distribution is shown in Fig. 6.

Fig. 6. Line chart of illumination and light intensity.

Figure 6 shows that the light is well controlled within the specified angular range, with a maximum intensity of 12,500 cd. Measuring the illuminance along the central optical axis gives an illumination depth of 700 mm. The energy distributions versus radius and angle are shown in Fig. 7a and b, respectively.

Fig. 7. Energy distribution corresponding to radius and degree.

Figure 7 demonstrates the control ability of the reflector: almost all of the energy is confined within a circle of radius 150 mm and within 40°.


To verify the shadowless performance of the LED reflector, the remaining illuminance is studied when an occlusion blocks light in the illumination system, as shown in Fig. 8.

Fig. 8. A system diagram with occlusion of one head and two heads.

The occlusion is modeled as an absorber. Figure 9 shows the change of the illuminance distribution over the illuminated site; because of the occlusion, the illuminance is reduced slightly.

Fig. 9. Illumination distribution with one occlusion.

The residual illuminance is about 100,000 lx, which is still well above 40,000 lx, and the shadowless rate is $\eta_2 = 100000/120000 = 83\%$. Thanks to the cross-reflection flux, the occlusion therefore has no large impact on the illuminance. A similar result is obtained from the


Fig. 10. Illuminance distribution with two occlusions.

simulation with two occlusions, shown in Fig. 10. The shadowless rate remains high, so this reflector is a good element for illumination and for signal transmission and coupling.

3 Conclusion

In this paper, a shadowless-lamp reflector for current high-power LED arrays is designed by flux compensation. A shadowless-lamp reflector must achieve small-angle illumination. The traditional shadowless lamp uses a lens array with a number of LEDs tilted at an angle to achieve the shadowless effect; compared with it, the fully reflective shadowless lamp is relatively simple and greatly reduces the cost. The designed reflector directs the cross rays and non-cross rays to the illuminated site to achieve the shadowless effect. Simulation proves that the reflector meets the illumination requirements of a shadowless lamp; because of its high energy utilization and illumination uniformity, it will find important applications in LED lighting and in signal transmission and coupling.

References 1. Tsai PY, Huang HK, Sung CM, Kan MC, Wang YH (2016) High-power led chip-on-board packages with diamond-like carbon heat-spreading layers. J Disp Technol 12(4):357–361 2. Lee HW, Lin BS (2015) Micro-lens array design on a flexible light-emitting diode package for indoor lighting. Appl Optics 54(28):E210–E215 3. Ying SP, Lin CY, Ni CC (2015) Improving the color uniformity of multiple colored lightemitting diodes using a periodic microstructure surface. Appl Optics 54(28):E75–E79 4. Chung SC, Ho PC, Li DR, Lee TX, Yang TH, Sun CC (2015) Effect of chip spacing on light extraction for light-emitting diode array. Optics Express 23(11):A640–A649 5. Kim Y, Kim S, Iqbal F, Yie H, Kim H (2015) Effect of transmittance on luminescence properties of phosphor-in-glass for LED packaging. Optics Express 23(3):A43–A50 6. Wang G, Wang L, Wang D et al (2011) Secondary optical lens designed in the method of source-target mapping. Appl Opt 50:4031–4036

Anti-interference Communication Algorithm Based on Wideband Spectrum Sensing

Minti Liu, Chunling Liu, Ran Zhang, and Yuanming Ding

Key Laboratory of Communication and Network, Dalian University, Dalian, Liaoning 116622, China
{liuchunling,dingyuanming}@dlu.edu.cn, [email protected]

Abstract. It is difficult to analyze and detect wideband Chirp interference signals, since the existing algorithms are constrained by hardware performance. Aiming at this problem, an anti-interference communication algorithm based on wideband spectrum sensing is proposed. Firstly, the signal is represented as a sparse signal by the discrete fractional Fourier transform (DFRFT), and a Gaussian observation matrix is applied to measure the sparse signal. Then, signal reconstruction is performed under the Bayesian framework. Finally, the frequency-domain information entropy is utilized to make a spectrum decision on the signal, and a non-interference frequency band is used for communication, so as to ensure safe and reliable transmission of information. The simulation results demonstrate that, in the case of fewer measurements and low signal-to-noise ratio (SNR), the proposed algorithm achieves higher signal reconstruction accuracy and better detection performance than the Bayesian compressive sensing energy detection algorithm.

Keywords: Anti-interference · Spectrum sensing · Compressed sensing · Entropy

1 Introduction

Cognitive Radio (CR) is a kind of intelligent radio technology [1]. Through real-time spectrum sensing, CR nodes detect non-interference frequency bands for communication and realize safe and reliable transmission of information. Among the numerous spectrum detection algorithms, the energy detection (ED) algorithm is the most widely used because of its simple implementation and low cost [2]. However, the ED algorithm is susceptible to noise uncertainty, and its detection performance drops sharply or even fails under low signal-to-noise ratio. In contrast, the spectrum detection algorithm based on frequency-domain information entropy is unrelated to signal power and therefore possesses good noise robustness [3]. In recent research, the application of compressed sensing theory to wideband spectrum detection has become a hotspot for improving spectrum utilization [4]. Wideband spectrum detection based on compressed sensing can achieve fast and efficient wideband spectrum detection by down-sampling. However, this approach uses the matching pursuit (MP) reconstruction algorithm, which is sensitive


to measurement uncertainty, so the signal reconstruction accuracy is low. The Bayesian compressive sensing (BCS) algorithm, applied to wideband spectrum sensing, can handle the uncertainty of the measurement process by placing priors on the sparse signal and the measurement noise, which reduces the reconstruction error and enables the underlying sparse signal to be recovered exactly [5]. In this research, an anti-interference communication algorithm based on wideband spectrum sensing is proposed. Our approach to anti-interference communication is mainly from the perspective of avoiding interfered frequencies, and the related research concerns spectrum sensing. The rest of the paper is organized as follows. The system model is described in Sect. 2, where the Chirp interference signal is represented on a DFRFT basis. Wideband spectrum sensing based on Bayesian compressed sensing and entropy is then presented in Sect. 3. Section 4 analyzes the performance of the proposed algorithm compared with an existing study. Finally, Sect. 5 concludes the whole research.

2 System Model

Suppose the communication channel is a Gaussian channel and the interference in the environment is mainly a Chirp interference signal. This signal x_W(t) is composed of W Chirp components, which are sampled at time intervals Δt:

y(n) = x_W(n) + e(n)    (1)

where n = 0, 1, ..., L − 1, e(n) is the Gaussian noise signal, and y(n) is the received signal. According to compressed sensing theory, the signal is first sparsely represented. In this paper, the decomposition-type discrete fractional Fourier transform (DFRFT) algorithm proposed by Ozaktas [6] is adopted to perform an L-point DFRFT on the signal, which can be expressed as:

Y(γ, m) = z(γ, m) + e(γ, m)    (2)

The parameter γ ∈ [−π, π] is the rotation angle of the DFRFT, m is the fractional sampling point index, z(γ, m) denotes the DFRFT of the multi-component Chirp interference signal, and e(γ, m) is the DFRFT of the noise signal. The subscripts are omitted for brevity in the following sections.
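As a concrete illustration of the model in Eq. (1), the following sketch builds a multi-component Chirp signal and adds white Gaussian noise. All parameter values (component start frequencies, chirp rates, SNR, function names) are illustrative assumptions, not the settings used later in Sect. 4.

```python
import numpy as np

def multicomponent_chirp(L=1024, fs=64e6, f0=(2e6, 5e6, 8e6),
                         k=(0.20e12, 0.25e12, 0.30e12)):
    """Sum of linear-FM (Chirp) components sampled at fs; parameters are illustrative."""
    t = np.arange(L) / fs
    return sum(np.cos(2 * np.pi * (f * t + 0.5 * rate * t ** 2))
               for f, rate in zip(f0, k))

def received_signal(x, snr_db=-5.0, seed=0):
    """y(n) = x_W(n) + e(n): add white Gaussian noise at the requested SNR."""
    rng = np.random.default_rng(seed)
    noise_power = np.mean(x ** 2) / 10 ** (snr_db / 10)
    return x + rng.normal(0.0, np.sqrt(noise_power), size=x.shape)

y = received_signal(multicomponent_chirp())
```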

3 Proposed Algorithm

3.1 Wideband Spectrum Sensing Algorithm Based on Compressed Sensing

Based on Bayesian compressed sensing theory [7], the Gaussian observation of the signal can be expressed by the following mathematical model:

g = Φy = ΦF(z + e) = Ψz + E    (3)

where Ψ = ΦF is a holographic dictionary, g ∈ R^M is the observation signal, z ∈ R^L is the DFRFT coefficient vector of the Chirp interference signal, F is the DFRFT basis matrix, Φ ∈ R^{M×L} is the Gaussian observation matrix, and E is the measurement noise vector. Given the observed data vector g, the Gaussian likelihood model can be expressed as:

p(g | z, β) = (2π/β)^{−M/2} exp(−(β/2) ‖g − Ψz‖₂²)    (4)

Based on the Bayesian hierarchical model, the prior distribution of the sparse vector z can be written as:

p(z | α) = ∏_{j=1}^{L} N(z_j | 0, α_j^{−1})    (5)

where z_j is the jth DFRFT coefficient, α_j is the jth element of α, and N(z_j | 0, α_j^{−1}) is a Gaussian probability density function with mean 0 and precision α_j. To promote the sparsity of z, Gamma priors are set on the hyperparameters:

p(β | a, b) = Ga(β | a, b) = (b^a / Γ(a)) β^{a−1} exp(−bβ)    (6)

p(α | c, d) = ∏_{j=1}^{L} Ga(α_j | c, d)    (7)

With the obtained observation vector, Bayes' theorem can be used to derive the posterior probability density function of the hyperparameters. Since the integral over the hyperparameter vector α is L-dimensional, the hyperparameters are estimated by maximizing the posterior probability density function in order to reduce the computational complexity:

α_MAP, β_MAP = argmax_{α,β} [ log p(β | a, b) + log p(α | c, d) + log ∫ dz p(g | z, β) p(z | α) ]    (8)

For simplicity, assuming a = b = c = d = 0, the estimate defined by (8) reduces to the maximum likelihood estimation of α and β:

α_ML, β_ML = argmax_{α,β} log ∫ dz p(g | z, β) p(z | α)    (9)

After estimating α and β, Bayes' theorem together with Eqs. (4) and (5) gives the maximum posterior probability density function of the sparse vector:

p(z | g, α, β) = p(g | z, β) p(z | α) / ∫ dz p(g | z, β) p(z | α) = N(z | μ, Σ)    (10)

Here the mean and covariance are μ = βΣΨᵀg and Σ = (βΨᵀΨ + A)^{−1}, respectively, where A = diag(α₁, α₂, ..., α_L); the estimated value of the sparse vector z is μ. From the above analysis, the estimation of the sparse vector z is converted into the estimation of the hyperparameters α and β. To accelerate the computation, the more efficient relevance vector machine (RVM) algorithm [8] is applied to reconstruct the signal. The values of α and β can be determined by the maximum likelihood method. By marginalizing out the sparse coefficients z, the log-likelihood function of the hyperparameters can be expressed as:

L(α, β) = log p(g | α, β) = log ∫ p(g | z, β) p(z | α) dz = −(1/2) [ M log 2π + log|C| + gᵀ C^{−1} g ]    (11)

where C = β^{−1}I + ΨA^{−1}Ψᵀ. Setting the derivative of the log-likelihood function to zero, the point estimates of α and β are obtained iteratively by:

α_j^{new} = s_j / μ_j², j = 1, 2, ..., L;  1/β^{new} = ‖g − Ψμ‖₂² / (L − Σ_{j=1}^{L} s_j);  s_j = 1 − α_j Σ_{jj}    (12)

where μ_j is the jth component of the posterior mean μ, g_j is the jth component of the observation vector g, s_j measures how well the corresponding component of z is determined by the observed data, and Σ_{jj} is the jth diagonal element of the posterior covariance Σ. Each hyperparameter corresponds to one component of the sparse vector. It is found that most of the hyperparameters tend to infinity; the corresponding components contribute nothing to mapping the sparse vector onto the observation vector, so the sparse vector is obtained automatically. After the sparse vector estimate is obtained, the original signal can be recovered by the inverse transform.

3.2 Spectrum Decision Algorithm Based on Frequency Domain Entropy

The energy-based spectrum decision method is modeled as a binary hypothesis test problem. However, it depends on the signal power and is therefore seriously affected by noise uncertainty, so its detection performance is poor under low SNR. To solve these problems, the entropy-based spectrum decision algorithm [9] is applied: z is recovered by compressed sensing reconstruction, and the detection technique is then used to make a final decision on the spectrum occupancy of the signal. Based on the concept of frequency-domain entropy in information theory, after the compressed-sensing recovery the original received signal vector is obtained, and its L-point discrete Fourier transform (DFT, corresponding to γ = π/2 in Eq. (2)) can be written as:

Y(m) = X(m) + e(m), m = 0, 1, ..., L − 1    (13)

where Y(m), X(m) and e(m) are the spectra of the received signal, the interference signal and the noise signal, respectively. Since the amplitude of the signal spectrum is random, it can be regarded as a random variable R, and the measured signal is characterized by estimating its probability density function. Therefore, signal detection based on frequency-domain information entropy (FDE) can be modeled as:

H_{J,0} versus H_{J,1}    (14)

where, under hypothesis H_k, H_J is the information entropy of the random variable estimated over J states (the probability space dimension). To reduce the computational complexity, the histogram method is used to estimate the probability of each state. According to the number of states, the random variable R is divided into J bins of size b = (R_max − R_min)/J. Let n_i denote the number of frequency points falling in the ith bin, with Σ_{i=1}^{J} n_i = L. The probability of a frequency point falling in the ith bin is then p_i = n_i/L, and the information entropy of the signal, used as the test statistic, is:

T(R) = H_J(R) = −Σ_{i=1}^{J} (n_i/L) log₂(n_i/L)    (15)

From the above analysis, the entropy-based spectrum decision rule can be written as:

H₀: H(R) > λ;  H₁: H(R) ≤ λ    (16)

where the threshold is defined as [10]:

λ = H̄_J + Q^{−1}(1 − P_fa) β^{−1/2}    (17)

with

H̄_J = ln( √(β^{−1}L) / (2b) ) + δ/2 + 1    (18)

Equation (18) is the theoretical noise entropy, where δ is the Euler-Mascheroni constant, Q^{−1}(·) is the inverse of the Q function, β is the precision (reciprocal of the variance) of H under H₀, and P_fa is the theoretical false alarm probability set by the Neyman-Pearson criterion. The probability of detection and the false alarm probability are defined respectively as:

P̄_d = L_D / L_T    (19)

P̄_fa = L_FA / L_T    (20)

where L_T is the total number of detections, L_D is the number of correct detections, and L_FA is the number of false alarms.
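As a minimal sketch of the entropy decision of Eqs. (13), (15) and (16), the code below estimates the spectrum-magnitude entropy with a J-bin histogram and compares it with a threshold λ. The bin number J and the function names are illustrative assumptions; λ would come from Eqs. (17)-(18) or from a noise-only calibration.

```python
import numpy as np

def entropy_statistic(x, J=15):
    """Frequency-domain entropy T(R) of Eq. (15), estimated with a J-bin histogram."""
    R = np.abs(np.fft.fft(x))            # spectrum magnitude, Eq. (13)
    counts, _ = np.histogram(R, bins=J)
    p = counts / counts.sum()            # p_i = n_i / L
    p = p[p > 0]                         # empty bins contribute nothing
    return -np.sum(p * np.log2(p))

def occupied(x, lam, J=15):
    """Eq. (16): declare H1 (interference present) when the entropy falls below lambda."""
    return entropy_statistic(x, J) <= lam
```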

4 Simulation Results

4.1 Chirp Interference Signal Sparse Representation

The Chirp signal parameters are set as follows: snapshot number L = 1024; signal pulse width T = 16 μs and sampling frequency f_s = 64 MHz; signal bandwidths B = [B₁, B₂, B₃, B₄, B₅, B₆] = [50, 55, 60, 65, 70, 75] MHz; and modulation frequency rates k = [k₁, k₂, k₃, k₄, k₅, k₆] = [3.125, 3.438, 3.75, 4.063, 4.375, 4.688] MHz/μs. According to the rotation angle γ = arccot(kL/f_s²), we obtain γ = [γ₁, γ₂, γ₃, γ₄, γ₅, γ₆] = [0.908, 0.861, 0.818, 0.778, 0.741, 0.706] rad. Figure 1a, b demonstrates that the noise level in the fractional domain is much smaller than the peak value of the Chirp signal spectrum: obvious peaks appear at only a few positions, while most positions have small values. The signal therefore shows good sparsity and can be processed subsequently using compressed sensing technology.


Fig. 1. DFRFT of multi-component Chirp signal. a DFRFT of multi-component Chirp signal. b Chirp signal's DFRFT amplitude on the (γ, m) plane

4.2 Signal Reconstruction Performance by Different Algorithms

To verify the signal reconstruction effect, the reconstruction accuracy is defined as Accuracy ≜ 1 − ‖z − ẑ‖₂ / ‖z‖₂, where z is the original signal and ẑ is the recovered signal. The number of Monte Carlo runs in the simulation is set to 500.
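This metric is a one-liner; the sketch below assumes NumPy arrays for the original and recovered coefficient vectors.

```python
import numpy as np

def reconstruction_accuracy(z, z_hat):
    """Accuracy = 1 - ||z - z_hat||_2 / ||z||_2, as defined above."""
    return 1.0 - np.linalg.norm(z - z_hat) / np.linalg.norm(z)
```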


Fig. 2. Reconstruction of signal of length L = 1024. a Original sparse signal. b Reconstruction signals by MP. c Reconstruction signals by BCS

To simulate measurement uncertainty, zero-mean Gaussian noise with standard deviation 0.1 is added to each of the M = 50 measurements. Figure 2b, c shows the reconstruction results of MP and BCS, respectively. Because of the measurement noise, MP cannot recover the original sparse signal exactly, whereas BCS recovers it better than MP. Furthermore, the error bars provided by BCS may be used to describe the confidence of the current reconstruction.


Fig. 3. Reconstruction accuracy of MP and BCS as a function of the increasing number of measurements

Figure 3 indicates that with an insufficient number of measurements, the reconstruction accuracy of both BCS and MP is low. However, compared with MP, BCS attains higher recovery accuracy. Under the same number of measurements, for example M = 50, the reconstruction accuracy of BCS approaches 94.18%, whereas MP reaches only 43.78%. In addition, to achieve accurate recovery, MP needs more than 65 measurements.

4.3 Comparison of Detection Performance

Figure 4a shows receiver operating characteristic (ROC) curves that compare the detection performance of Bayesian compressed sensing frequency-domain entropy detection (BCSFDE) and Bayesian compressed sensing energy detection (BCSED). Due to noise uncertainty, the detection probability of energy detection is clearly lower than that of frequency-domain entropy detection at the same false alarm probability and SNR. To make the comparison meaningful, the effect of the number of measurements on the detection performance under different SNRs is also studied. Figure 4b shows the probability of detection with respect to SNR for BCSFDE and BCSED, respectively. Because BCSFDE sufficiently takes into account both the uncertainty of the noisy measurements and the uncertainty of the signal power, it obtains better detection performance than BCSED with fewer measurements and at low SNR.


Fig. 4. Comparison of detection performance of BCSFDE and BCSED. a Receiver operating characteristic (ROC) curves at signal-to-noise ratios of −5 and −10 dB. b Probability of detection as a function of SNR under different numbers of measurements

5 Conclusions

In this paper, a wideband spectrum detection algorithm based on Bayesian compressed sensing and entropy is used to avoid wideband Chirp interference signals. The proposed algorithm mitigates the effect of measurement uncertainty to improve recovery accuracy, and suppresses the effect of noise uncertainty to improve detection performance. This research could be extended to other kinds of interference signals and to real wireless channels to enhance system flexibility.

Acknowledgements. This work was supported in part by the General Project of Domain Fund under Grant 61403110308.

References 1. Ding G, Jiao Y, Wang J (2018) Spectrum inference in cognitive radio networks: algorithms and applications. IEEE Commun Surv Tutorials 20(1):150–182 2. Chen Y, Oh H (2016) A survey of measurement-based spectrum occupancy model for cognitive radios. IEEE Commun Surv Tutorials 18(1):848–859 3. Zhang YL, Zhang QY, Melodia T (2010) A frequency-domain entropy-based detector for robust spectrum sensing in cognitive radio networks. IEEE Commun Lett 14(6):533–535 4. Ali A, Hamouda W (2017) Advances on spectrum sensing for cognitive radio networks: theory and applications. IEEE Commun Surv Tutorials 19(2):1277–1304 5. Arjoune Y, Kaabouch N (2018) Wideband spectrum sensing: a bayesian compressive sensing approach. Sensors 18(6):1839 6. Ozaktas HM, Arikan O, Kutay MA (1996) Digital computation of the fractional Fourier transform. IEEE Trans Signal Process 44(9):2141–2150 7. Ji S, Xue Y, Carin L (2008) Bayesian compressive sensing. IEEE Trans Signal Process 56 (6):2346–2356 8. Bishop CM (2006) Pattern recognition and machine learning. Springer, New York


9. Zhang Y, Zhang Q, Wu S (2010) Entropy-based robust spectrum sensing in cognitive radio. IET Commun 4(4):428–436 10. Zhao N (2013) A novel two-stage entropy-based robust cooperative spectrum sensing scheme with two-bit decision in cognitive radio. Wirel Pers Commun 69(4):1551–1565

A Multi-task Dynamic Compressed Sensing Algorithm for Streaming Signals Eliminating Blocking Effects

Daoguang Dong, Guosheng Rui, Wenbiao Tian, Ge Liu, Haibo Zhang, and Zhijun Yu

Navy Aviation University, Yantai, China
[email protected]

Abstract. The performance of multi-task compressed sensing for streaming signals is restricted by blocking effects caused by block-wise sparse transformation. To solve this problem, a multi-task dynamic compressed sensing algorithm based on sparse Bayesian learning is proposed in this paper, which combines multi-task compressed sensing with a sliding window based on the LOT transform. Experiments show that the proposed algorithm has higher reconstruction accuracy and operation efficiency compared with its block-DCT-based counterpart.

Keywords: Streaming · Multi-task · Blocking artifacts · Compressed sensing

1 Introduction

Compressed sensing [1, 2] (CS) is of great significance in reducing the amount of data and relieving the pressure of wireless transmission. Compared with the traditional single-measurement-vector method, the multiple-measurement-vector (MMV) [3-7] method can achieve higher reconstruction accuracy with fewer measurements. However, for streaming signals in the time domain, artificial blocking effects [7] are often unavoidable, and the continuity and smoothness of the signal are then damaged. Sparse Bayesian learning (SBL) [8, 9] was used for MMV in [3] and showed better reconstruction accuracy than greedy and convex-relaxation algorithms. Although efforts have been made to explore methods of dynamic CS [4, 10-14], the above methods can hardly eliminate the blocking effects when handling streaming signals. Recently, [15] introduced the idea of the lapped orthogonal transform [16] (LOT) into CS and proposed a dynamic sparse reconstruction algorithm based on L1-homotopy. Reference [17] combines LOT with SBL on the basis of [15] and proposes an SBL-based dynamic sparse reconstruction algorithm for streaming signal processing, demonstrating the superiority of the SBL algorithm over other algorithms. However, that algorithm applies only to the single-task case; for multi-task observation of streaming signals, no effective CS solution has been published. In this paper, we present an SBL-based multi-task dynamic CS algorithm for streaming signals, abbreviated as SMT-SBL. We establish a multi-task sliding-window observation system based on the LOT transform, and complete the reconstruction via SBL. Experiments are conducted using historical prediction data of evaporation


ducts, and the results show that, compared with the traditional multi-task SBL algorithm based on block DCT, the proposed algorithm significantly eliminates the blocking effects and achieves a higher reconstruction signal-to-error ratio (SER).

2 SMT-SBL Algorithm

Assume that the streaming signals x_l (l = 1, ..., L) are observed by L sensors:

y_{t,l} = Φ_l x_{t,l} + e_{t,l}    (1)

where Φ_l ∈ R^{M×N} (M < N) is the measurement matrix, x_{t,l} ∈ R^{N×1} is one block of the signal x_l at time t, y_{t,l} ∈ R^{M×1} is the measurement vector, and e_{t,l} ∈ R^{M×1} is the noise. Assume that the LOT basis matrix is P = [P₁ᵀ, P₂ᵀ]ᵀ ∈ R^{2N×N}. Then the corresponding LOT transformation and inverse transformation are:

w_{t,l} = [P₁ᵀ, P₂ᵀ] [x_{t−1,l}ᵀ, x_{t,l}ᵀ]ᵀ    (2)

x_{t,l} = [P₂, P₁] [w_{t,l}ᵀ, w_{t+1,l}ᵀ]ᵀ    (3)

Tipping and Faul [8] pointed out that the information of w_{t−d,l} is contained only in y_{t−2d−1,l} to y_{t,l}, and that d = 1 is sufficient to satisfy the requirement of precise reconstruction. Hence, a multi-task LOT sliding system is established as shown in Fig. 1, where B_l = Φ_l [P₂, P₁].

Fig. 1. Multi-task LOT sliding system for streaming signals.
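The per-task observation model in Eq. (1) can be sketched as follows, assuming Gaussian random measurement matrices; the LOT basis construction of [16] and the SBL recovery itself are not shown, and the noise level is an illustrative assumption.

```python
import numpy as np

def multitask_measurements(x_blocks, M, noise_std=0.01, seed=0):
    """Eq. (1): per-task measurements y_{t,l} = Phi_l x_{t,l} + e_{t,l}.
    x_blocks is a list of length-N signal blocks, one per task l."""
    rng = np.random.default_rng(seed)
    ys, Phis = [], []
    for x in x_blocks:
        N = x.shape[0]
        Phi = rng.normal(size=(M, N)) / np.sqrt(M)   # Gaussian measurement matrix
        ys.append(Phi @ x + noise_std * rng.normal(size=M))
        Phis.append(Phi)
    return ys, Phis
```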

Denote W̄_t = [w̄_{t,1}, ..., w̄_{t,L}], Ŵ_t = [ŵ_{t,1}, ..., ŵ_{t,L}], with ŵ_{t,l} = [w_{t−2d−1,l}ᵀ, ..., w_{t−d−1,l}ᵀ]ᵀ and w̄_{t,l} = [w_{t−d,l}ᵀ, ..., w_{t+1,l}ᵀ]ᵀ. The observation corresponding to task l is given by (4), with Φ_l partitioned into Φ̂_l and Φ̄_l. The priors are given in (5)-(7), where ā_t = [a_{t−d}ᵀ, ..., a_{t+1}ᵀ]ᵀ and a_t = [a_{t,1}, ..., a_{t,N}]ᵀ:

ỹ_{t,l} = Φ̂_l ŵ_{t,l} + Φ̄_l w̄_{t,l} + ẽ_{t,l}, t ≥ 2d + 2    (4)

p(ẽ_{t,l}) = N(ẽ_{t,l} | 0, α_{0,l}^{−1} I_{M(2d+2)})    (5)

p(α_{0,l}) = Gamma(α_{0,l} | a_l, b_l)    (6)

p(W̄_t) = ∏_{l=1}^{L} ∏_{s=t−d}^{t+1} ∏_{i=1}^{N} N(w_{s,i,l} | 0, a_{s,i}^{−1})    (7)

μ_{w,t,l} = (Φ̄_lᵀ Φ̄_l + Ā_t)^{−1} Φ̄_lᵀ y_{t,l}    (8)

Σ_{w,t,l} = (Φ̄_lᵀ Φ̄_l + Ā_t)^{−1}    (9)

Denote Ȳ_t = [y_{t,1}, ..., y_{t,L}] with y_{t,l} = ỹ_{t,l} − Φ̂_l ŵ_{t,l}, and integrate α_{0,l} out; the posterior of w̄_{t,l} is then Student's-t with mean μ_{w,t,l} and shape matrix Σ_{w,t,l} as shown in (8) and (9), where Ā_t = diag{A_{t−d}, ..., A_{t+1}} and A_t = diag{a_t}. Denote Φ̄_l = [ψ_{1,l}, ..., ψ_{N(2d+2),l}], ā_t = [a_1, ..., a_{N(2d+2)}], ã_l = 2a_l + M(2d+2), and C_l = I + Φ̄_l Ā_t^{−1} Φ̄_lᵀ. Then ā_t can be estimated iteratively with the auxiliary variables introduced in (10)-(12), and the update formula for any a_j is (13):

S_{j,l} = ψ_{j,l}ᵀ C_l^{−1} ψ_{j,l},  Q_{j,l} = ψ_{j,l}ᵀ C_l^{−1} y_{t,l},  G_l = y_{t,l}ᵀ C_l^{−1} y_{t,l} + 2b_l    (10)

s_{j,l} = a_j S_{j,l} / (a_j − S_{j,l}),  q_{j,l} = a_j Q_{j,l} / (a_j − S_{j,l}),  g_{j,l} = G_l + Q_{j,l}² / (a_j − S_{j,l})    (11)

h_j = Σ_{l=1}^{L} (ã_l q_{j,l}²/g_{j,l} − s_{j,l}) / [ s_{j,l} (s_{j,l} − q_{j,l}²/g_{j,l}) ]    (12)

a_j = M / h_j if the denominator h_j > 0, and a_j = ∞ otherwise    (13)

The procedure of the SMT-SBL algorithm in the sliding window at time t is shown in Algorithm 1, and the fast update formulas are given below.

Adding w_{j,l} to the model:

2ΔL_j = Σ_{l=1}^{L} { log( a_j' / (a_j' + s_{j,l}) ) − ã_l log( 1 − (q_{j,l}²/g_{j,l}) / (a_j' + s_{j,l}) ) }    (14)

Σ'_{w,t,l} = [ Σ₁₁,l  Σ₁₂,l ; Σ₂₁,l  Σ₂₂,l ],  μ'_{w,t,l} = [ μ_{w,t,l} − μ_{j,l} Σ_{w,t,l} Φ̄_lᵀ ψ_{j,l} ; μ_{j,l} ]    (15)

S'_{k,l} = S_{k,l} − Σ_{jj,l} (ψ_{k,l}ᵀ e_{j,l})²,  Q'_{k,l} = Q_{k,l} − μ_{j,l} ψ_{k,l}ᵀ e_{j,l},  G'_l = G_l − Σ_{jj,l} (y_{t,l}ᵀ e_{j,l})²    (16)

where Σ₁₁,l = Σ_{w,t,l} + Σ_{jj,l} Σ_{w,t,l} Φ̄_lᵀ ψ_{j,l} ψ_{j,l}ᵀ Φ̄_l Σ_{w,t,l}, Σ_{jj,l} = (a_j + S_{j,l})^{−1}, Σ₁₂,l = −Σ_{jj,l} Σ_{w,t,l} Φ̄_lᵀ ψ_{j,l}, Σ₂₁,l = Σ₁₂,lᵀ, Σ₂₂,l = Σ_{jj,l}, μ_{j,l} = Σ_{jj,l} Q_{j,l}, and e_{j,l} = ψ_{j,l} − Φ̄_l Σ_{w,t,l} Φ̄_lᵀ ψ_{j,l}.

Deleting w_{j,l} from the model:

2ΔL_j = −Σ_{l=1}^{L} { ã_l log( 1 + Q_{j,l}² / (G_l (a_j − S_{j,l})) ) + log( 1 − S_{j,l}/a_j ) }    (17)

Σ'_{w,t,l} = Σ_{w,t,l} − Σ_{j,l} Σ_{j,l}ᵀ / Σ_{jj,l},  μ'_{w,t,l} = μ_{w,t,l} − μ_{j,l} Σ_{j,l} / Σ_{jj,l}    (18)

S'_{k,l} = S_{k,l} + (Σ_{j,l}ᵀ Φ̄_lᵀ ψ_{k,l})² / Σ_{jj,l},  Q'_{k,l} = Q_{k,l} + μ_{j,l} Σ_{j,l}ᵀ Φ̄_lᵀ ψ_{k,l} / Σ_{jj,l},  G'_l = G_l + (Σ_{j,l}ᵀ Φ̄_lᵀ y_{t,l})² / Σ_{jj,l}    (19)

where Σ_{jj,l} is the jth diagonal element of Σ_{w,t,l}, Σ_{j,l} is the jth column of Σ_{w,t,l}, and μ_{j,l} is the jth element of μ_{w,t,l}.

Algorithm 1: SMT-SBL algorithm in the sliding window at time t
1: Inputs: Ỹ_t; Φ_l (∀l); Ŵ_t; ā_{t−1} and termination threshold ε;
2: Initialization: (a) initialize a_{t−d}, ..., a_t of ā_t with the corresponding values estimated at time t − 1; (b) compute y_{t,l} = ỹ_{t,l} − Φ̂_l ŵ_{t,l} and C_l = I + Φ̄_l Ā_t^{−1} Φ̄_lᵀ for all l; (c) compute W̄_t = [μ_{w,t,1}, ..., μ_{w,t,L}] and Σ_{w,t,1}, ..., Σ_{w,t,L} (use (8) and (9)); (d) compute all S_{k,l}, Q_{k,l}, G_l (use (10));
3: Compute all h_k, s_{k,l}, q_{k,l}, g_{k,l} (use (11) and (12));
4: Compute all ΔL_k using (14), (17) and (20); if ΔL_k < ε (∀l), stop iterating and go to Step 10, else go to Step 5, end if;
5: Select the a_k with the largest ΔL_k as the candidate to be optimized;
6: if h_k > 0 AND a_k = ∞, update a_k as a_k = M/h_k and add w_{k,l} (∀l) to the model, and update Σ_{w,t,l}, μ_{w,t,l}, S_{k,l}, Q_{k,l}, G_l (∀l) using (15)-(16);
7: else if h_k < 0 AND a_k < ∞, update a_k as a_k = ∞ and delete w_{k,l} (∀l) from the model, and update Σ_{w,t,l}, μ_{w,t,l}, S_{k,l}, Q_{k,l}, G_l (∀l) using (18)-(19);
8: else if h_k > 0 AND a_k < ∞, update a_k as a_k = M/h_k and keep w_{k,l} (∀l) in the model, and update Σ_{w,t,l}, μ_{w,t,l}, S_{k,l}, Q_{k,l}, G_l (∀l) using (21)-(22);
9: end if, go to Step 3;
10: Outputs: ā_t, μ_{w,t,l} and Σ_{w,t,l} (use (8) and (9)).


Keeping (re-estimating) w_{j,l} in the model:

2ΔL_j = Σ_{l=1}^{L} { log( 1 + S_{j,l} (a_j'^{−1} − a_j^{−1}) ) + (ã_l − 1) log( [(a_j + s_{j,l}) g_{j,l} − q_{j,l}²] a_j' / ( [(a_j' + s_{j,l}) g_{j,l} − q_{j,l}²] a_j ) ) }    (20)

Σ'_{w,t,l} = Σ_{w,t,l} − c_{j,l} Σ_{j,l} Σ_{j,l}ᵀ,  μ'_{w,t,l} = μ_{w,t,l} − c_{j,l} μ_{j,l} Σ_{j,l}    (21)

S'_{k,l} = S_{k,l} + c_{j,l} (Σ_{j,l}ᵀ Φ̄_lᵀ ψ_{k,l})²,  Q'_{k,l} = Q_{k,l} + c_{j,l} μ_{j,l} Σ_{j,l}ᵀ Φ̄_lᵀ ψ_{k,l},  G'_l = G_l + c_{j,l} (Σ_{j,l}ᵀ Φ̄_lᵀ y_{t,l})²    (22)

where c_{j,l} = [Σ_{jj,l} + (a_j' − a_j)^{−1}]^{−1}, Σ_{jj,l} is the jth diagonal element of Σ_{w,t,l}, Σ_{j,l} is the jth column of Σ_{w,t,l}, μ_{j,l} is the jth element of μ_{w,t,l}, and a_j' denotes the updated value of a_j.

3 Experimental Results

Here Φ_l is a Gaussian random matrix and the noise is white Gaussian. For the LOT transform, the continuity of the signal would be slightly damaged when the block length N ≥ 32, so we set N = 16. The version based on block DCT transformation is adopted as the comparison algorithm and is called DCT-SBL; in the figures the two algorithms are abbreviated as LOT and DCT, respectively. Historical diagnostic data of evaporation duct height [18] at sea are used; the data are based on historical meteorological data obtained from meteorological buoy measurements in the Laoshan sea area of Qingdao and are calculated via the Babin prediction model [19]. The experimental variables are the task number L, the number of measurements M, and the signal-to-noise ratio (SNR). The reconstructed signal-to-error ratio (SER) is used to measure the reconstruction accuracy of the algorithm, and the running time is used to measure its efficiency.
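The paper does not spell out the SER formula; the sketch below assumes the usual definition, the ratio of signal energy to reconstruction-error energy in dB.

```python
import numpy as np

def ser_db(x, x_hat):
    """Reconstruction signal-to-error ratio: 10*log10(||x||^2 / ||x - x_hat||^2)."""
    err = np.linalg.norm(x - x_hat) ** 2
    return 10.0 * np.log10(np.linalg.norm(x) ** 2 / err)
```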


Fig. 2. Comparison of SER of SMT-SBL and DCT-SBL with the observation number varying.


The comparison of SER as the number of measurements varies is shown in Fig. 2. From Fig. 2a, b it can be seen that SMT-SBL achieves a significantly higher SER, and a faster SER increase, than DCT-SBL.


Fig. 3. Comparison of efficiency of SMT-SBL and DCT-SBL with observation number varying.

The comparison of running time as the number of measurements varies is shown in Fig. 3. As can be seen from Fig. 3a, b, under the same SNR and number of tasks, the running time of both algorithms increases with the number of measurements, but the running time of SMT-SBL is always significantly shorter than that of DCT-SBL. In addition, under the same number of measurements, the running time of both algorithms increases with the number of tasks and decreases as the SNR increases. The comparison of SER as the SNR varies is shown in Fig. 4. Figure 4a, b shows that the SER of both algorithms increases with SNR under the same numbers of measurements and tasks, but the SER of SMT-SBL is always significantly higher than that of DCT-SBL. The comparison of running time as the SNR varies is shown in Fig. 5. Figure 5a, b shows that, under the same numbers of measurements and tasks, the running time of both algorithms decreases as the SNR increases; the running time of SMT-SBL is significantly shorter than that of DCT-SBL, and it also shortens significantly faster.


Fig. 4. Comparison of SER of SMT-SBL and DCT-SBL with SNR varying.


Fig. 5. Comparison of efficiency of SMT-SBL and DCT-SBL with SNR varying.

The comparison of SER as the number of tasks varies is shown in Fig. 6. As can be seen from Fig. 6a, b, the SER of SMT-SBL is always significantly higher than that of DCT-SBL. The comparison of running time as the number of tasks varies is shown in Fig. 7. Figure 7a, b shows that, under the same number of measurements and the same SNR, the running time of both algorithms increases with the number of tasks; nevertheless, the running time of SMT-SBL is always significantly shorter than that of DCT-SBL, and its growth rate is also significantly smaller.


Fig. 6. Comparison of SER of SMT-SBL and DCT-SBL with task number varying


Fig. 7. Comparison of efficiency of SMT-SBL and DCT-SBL with task number varying

4 Summary

To eliminate the blocking effects in multi-task compressed sensing of streaming signals, a multi-task compressed sensing algorithm based on SBL is proposed, which combines multi-task SBL with a sliding-window system based on the LOT transform. Experiments show that, under the same number of measurements, number of tasks and signal-to-noise ratio, the proposed algorithm obtains a reconstructed SER gain of 3 to 20 dB over its DCT-based version and has obvious advantages in operation efficiency.


References 1. Candes E, Romberg J, Tao T (2006) Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory 52 (2):489–509 2. Donoho D (2006) Compressed sensing. IEEE Trans Inf Theory 52(4):1289–1306 3. Wipf DP, Rao BD (2007) An empirical Bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Press 4. Ji S, Dunson D, Carin L (2009) Multi-task compressive sensing. IEEE Trans Signal Process 57(1):92–106 5. Zhang Z, Rao BD (2011) Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning. IEEE J Sel Topics Sig Process 5(5):912–926 6. Chen W (2016) Simultaneous Bayesian sparse approximation with structured sparse models. IEEE Trans Signal Process 64(23):6145–6159 7. Wu Q (2015) Multi-task Bayesian compressive sensing exploiting intra-task dependency. IEEE Signal Process Lett 22(4):430–434 8. Tipping ME, A Faul (2003) Fast marginal likelihood maximization for sparse Bayesian models. In: Proceedings of the international workshop on artificial intelligence and statistics, pp 3–6 9. Ji S, Xue Y, Carin L (2008) Bayesian compressive sensing. IEEE Trans Signal Process 56 (6):2346–2356 10. Vaswani N (2008) Kalman filtered compressed sensing. In: 15th IEEE international conference on image processing. IEEE 11. Wang Y, Wipf DP, Chen W (2014) Exploiting the convex-concave penalty for tracking: a novel dynamic reweighted sparse Bayesian learning algorithm. IEEE international conference on acoustics. IEEE 12. Ziniel J, Potter LC, Schniter P (2010) Tracking and smoothing of time-varying sparse signals via approximate belief propagation. In: 11th Asilomar conference 13. Ziniel J, Schniter P (2013) Dynamic compressive sensing of time-varying signals via approximate message passing. IEEE Trans Signal Process 61(21):5270–5284 14. Goertz N, Hannak G (2017) Fast Bayesian signal recovery in compressed sensing with partially unknown discrete prior. In: WSA international ITG workshop on smart antennas. VDE 15. Asif MS, Romberg J (2014) Sparse recovery of streaming signals using l1-homotopy. IEEE Trans Signal Process 62(16):4209–4223 16. Malvar HS (1989) The LOT: transform coding without blocking effects. IEEE Trans Acoust Speech Signal Process 37(4):553–559 17. Wijewardhana UL, Codreanu M (2016) A Bayesian approach for online recovery of streaming signals from compressive measurements. IEEE Trans Signal Process 65(1):184– 199 18. Tian W, Rui G, Dong D (2018) Compressed sensing of evaporation duct based on blind adaptive KLT estimation. Acta Electronica Sinica 46(09):22–28 19. Babin SM, Young GS, Carton JA (1997) A new model of the oceanic evaporation duct. J Appl Meteorol 36(3):193–204

Thunderstorm Recognition Algorithm Research Based on Simulated Airborne Weather Radar Reflectivity Volume Scan Data

Rui Liao¹, Xu Wang¹,², and Jianxin He¹,²

¹ School of Electronic Engineering, Chengdu University of Information Technology, Sichuan, China
[email protected]
² Key Laboratory of Atmospheric Sounding, China Meteorological Administration, Chengdu 610225, China

Abstract. At present, most airborne radars have no volume scan capability, so the echo information they detect is limited and it can be difficult to detect the thunderstorms in front of the aircraft completely. First of all, this paper proposes an airborne weather radar that adopts a volume scan mode and takes X-band ground-based weather radar data as the simulation source to obtain airborne radar reflectivity volume scan data according to a simulation model. Then, based on the Storm Cell Identification (SCI) algorithm, this paper studies and proposes a thunderstorm identification algorithm for this airborne radar by modifying some threshold parameters, which improves the identification of thunderstorm cells. Finally, an example of thunderstorm identification based on the simulated airborne weather radar reflectivity volume scan data is given, which shows that the algorithm can effectively identify the thunderstorm cells in the scanning sector in front of the radar and obtain their attributes. It is helpful for monitoring thunderstorms and meaningful for flight safety.

Keywords: Airborne weather radar · Volume scan · Thunderstorm identification · SCI algorithm

1 Introduction

As an extension of weather radar, airborne weather radar can detect a weather system at close range, making up for the inflexibility of ground-based radar and for the scant, low-resolution meteorological information of space-based radar caused by its long observation distance [1]. Several countries started research on airborne detection systems early [2]. The United States, Britain, France and other developed countries have dedicated meteorological detection aircraft engaged in operations and research. Some of the most representative airborne weather radars used on these aircraft, together with their detection modes, are shown in Table 1. Except for the Spider, which has a limited scanning ability, most



of these radars have no scanning capability. This paper proposes an airborne weather radar with a new scan mode, volume scan, similar to that of the WSR-88D radar. This scan mode helps the radar obtain high-resolution echo information in the 3D fan-shaped space ahead of the flight. It may play an important role in the research and development of airborne weather radar technology, and it contributes to improving weather monitoring and ensuring flight safety as well.

Table 1. Detection mode of existing airborne weather radars

Radar | Country/Company | Wavelength | Detection mode | Application
ELDORA (ASTRATA) | US and France | 3 cm (X-band) | Rotary scanning of two antennas (forward and backward) with an angle of 18.5° | Convective storm detection with high resolution
WXR-2100 | Rockwell-Collins | 3.1 cm (X-band) | Emits two beams with a slight offset in the vertical (pitch) direction | Detecting convective clouds that threaten aircraft safety
CRS | Goddard Space Flight Center (NASA) | 3.2 mm (W-band) | Scanning with a two-axis gimbal hanger (airborne); RHI scan (ground-based) | Cloud measurement, especially cirrus
ACR | UMass and JPL (NASA) | 3 mm (W-band) | Detects vertically downwards or upwards | Weak precipitation and snowfall
Spider | Japan | 3 mm (W-band) | Scanning in the direction of the track from −40° to +95° | Cloud measurement

A thunderstorm, characterized by a short life cycle, small extent and strong destructive power, is a deep moist convection phenomenon [3]. Thunderstorm weather is usually accompanied by lightning, thunder, rainstorms, gales, turbulence, hail, etc., which affect aviation operations and threaten flight safety. According to the statistics of civil aviation organizations at home and abroad, accidents directly or indirectly caused by thunderstorms account for more than 50% of all flight accidents caused by meteorological factors [4, 5]. At present, several methods are mainly used for convective target identification: threshold segmentation [6-10], image-processing-based algorithms [11], Gaussian mixture models [12], cluster analysis [13], etc. Among them, threshold segmentation is used most widely. It has been found that the SCI algorithm has a higher accuracy in recognizing convective cells with higher reflectivity: it correctly identifies 68% of storms with a maximum reflectivity factor over 40 dBZ and 96% of storms with a maximum reflectivity factor over 50 dBZ [14]. Because convective cells grow and dissipate, the multi-threshold method has a better identification effect than a single threshold. Consequently, this paper studies a thunderstorm identification algorithm for the simulated airborne weather radar based on the SCI algorithm.


2 Airborne Weather Radar Scan Mode

The scan mode of the simulated airborne weather radar during flight is shown in Fig. 1, and its parameters are given in Table 2. The simulated airborne weather radar in this paper begins scanning from the lowest elevation angle, −15°. The scanning area is a sector with an azimuth span of 120° and a radius of 60 km (Fig. 1a). Each time the radar finishes the current elevation scan, it raises its antenna angle by 1° and scans the next elevation; when the scan at the last elevation angle of 15° is finished, one volume scan is formed (Fig. 1b). As a result, the airborne radar produces a volume scan with 31 elevation scans, 120 radials per elevation scan and 600 sample volumes per radial. It covers a considerable detection space, with a maximum vertical depth of approximately 31 km and a maximum range of 60 km ahead of the aircraft. Therefore, the simulated airborne weather radar can obtain sufficiently complete echo information. Even for a thunderstorm cell with a large horizontal scale (10-30 km) and an echo top higher than 10 km, a large fraction of its echo information can be collected by this radar, and a very small thunderstorm cell that might not be found by a ground-based weather radar can be detected thanks to the high-resolution echo information.

Fig. 1. Simulated airborne weather radar scan mode (a) horizontal direction; (b) vertical direction

Table 2. Volume scan mode parameters of the simulated airborne weather radar

Parameter | Value
Altitude of aircraft | 10 km
Range resolution | 100 m
Azimuth scan range | −60° to +60°
Azimuth resolution | 1°
Maximum detection range | 60 km
Elevation number | 31 layers
Elevation | −15° to +15°
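The geometry of this scan mode is easy to check numerically. The sketch below uses a flat-earth approximation with no beam bending, an assumption made only for illustration; it reproduces the sample-volume count and the roughly 31 km vertical extent quoted above.

```python
import numpy as np

# Volume-scan geometry from Table 2 (flat-earth approximation, illustrative only).
elevations = np.arange(-15, 16, 1.0)           # 31 elevation angles, deg
azimuths = np.arange(-60, 60, 1.0)             # 120 radials of 1 deg
ranges = np.arange(100, 60001, 100) / 1000.0   # 600 gates of 100 m, in km

# Height of each sample volume relative to the 10 km flight level.
el_grid, r_grid = np.meshgrid(np.deg2rad(elevations), ranges, indexing="ij")
height_rel = r_grid * np.sin(el_grid)          # km above/below the aircraft

print("sample volumes per scan:", elevations.size * azimuths.size * ranges.size)
print("vertical extent at 60 km range: %.1f km" % (height_rel.max() - height_rel.min()))
```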


3 Thunderstorm Recognition Algorithm

In this paper, a thunderstorm identification algorithm based on SCI is studied and verified with radar volume scan data simulated from one X-band radar dataset of the 8th International Sounding Test conducted by the WMO in July 2010 in Yangjiang City, Guangdong Province, China. The SCI algorithm was proposed by the National Severe Storms Laboratory (NSSL) in Norman, Oklahoma, USA, and the specific identification process and parameters can be found in Ref. [10]. Considering that the SCI thresholds were designed for S-band radar, whereas X-band radar suffers marked attenuation, especially when observing convective clouds [15], and that the correction may remain insufficient even after attenuation correction is applied, we reduce the seven default reflectivity thresholds by 5 dBZ. In addition, to prevent small thunderstorm cells from being missed, the component-area threshold is changed from 10 km² to 1 km², and the segment-length threshold is modified from 1.9 km to 1 km so that thunderstorm cells that are short in the radial direction but long in the azimuth direction are not omitted.
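As an illustration of the segment-search step of an SCI-style identification, the sketch below finds contiguous runs along one radial that exceed a reflectivity threshold and are at least 1 km long (the modified segment-length threshold above, with the 100 m gate spacing from Table 2). The 25 dBZ threshold and the function name are illustrative choices, not values fixed by the paper.

```python
import numpy as np

def find_segments(refl_dbz, gate_len_km=0.1, min_len_km=1.0, threshold_dbz=25.0):
    """Return (start, end) gate indices of contiguous runs along one radial
    where reflectivity >= threshold_dbz and the run is at least min_len_km long."""
    refl_dbz = np.asarray(refl_dbz)
    above = refl_dbz >= threshold_dbz
    segments, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) * gate_len_km >= min_len_km:
                segments.append((start, i - 1))
            start = None
    if start is not None and (len(above) - start) * gate_len_km >= min_len_km:
        segments.append((start, len(above) - 1))
    return segments
```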

4 Thunderstorm Identification Case Analysis

Attenuation corrections were applied to the simulated airborne weather radar data with a range-bin-by-range-bin correction algorithm. The thunderstorm identification results for the reflectivity volume scan data simulated from X-band radar volume scan 20100720125839 are taken as an example for analysis. Two thunderstorm cells are identified by the proposed algorithm. The thunderstorm cells' attributes are shown in Table 3 and their explanations in Table 4. The attributes of the cell components contained in each storm cell are shown in Tables 5 and 6, respectively, and the meanings of these attributes can be found in Table 7. The thunderstorm cell with serial number 1 in Table 3 (thunderstorm cell 1 for short) contains seven cell components, whose attributes are shown in Table 5. Since the strong echo region of thunderstorm cell 1 is already very small when the elevation angle reaches −8°, no component exists above −8°. The thunderstorm cell with serial number 2 in Table 3 (thunderstorm cell 2 for short) also contains seven cell components, whose attributes are shown in Table 6. Because the strong echo region of thunderstorm cell 2 almost disappears at the elevation angle of −15°, its components exist only at elevations from −14° to −8°.

Table 3. Thunderstorm cells' attributes of 20100720125839 X-band simulated airborne weather radar data

Serial number | NC (unit) | AS (°) | RS (km) | XSC (km) | YSC (km) | HSC (km) | ZMAX (dBZ) | HZMAX (km) | VIL (kg/km²)
1 | 7 | −29.80 | 23.69 | −11.78 | 20.56 | 2.23 | 54.5 | 4.75 | 33.36
2 | 7 | −9.84 | 31.85 | −5.44 | 31.38 | 4.46 | 51 | 3.32 | 7.14

Serial number | MSV (km³) | TOP (km) | BASE (km) | LOW EL (°) | HIGH EL (°) | BEGAZI (°) | ENDAZI (°) | BEGRAN (km) | ENDRAN (km)
1 | 22.51 | 6.34 | 3.87 | −15 | −9 | −32 | −25 | 22.45 | 22.55
2 | 3.55 | 5.60 | 3.65 | −14 | −8 | −13 | −7 | 25.85 | 34.25



Table 4. Explanations of attributes in Table 3

NC | The number of components in a storm cell
AS | The azimuth of the centroid (or mass-weighted center) of a storm cell
RS | The (flat-earth projected) range of the centroid of a storm cell
XSC | The (flat-earth projected) x-coordinate of the centroid of a storm cell
YSC | The (flat-earth projected) y-coordinate of the centroid of a storm cell
HSC | The height of the centroid of a storm cell
ZMAX | The maximum reflectivity factor of the components in a storm cell
HZMAX | The height above ground of the sample volume corresponding to ZMAX
VIL | The vertically integrated liquid
MSV | The mass-weighted volume of a storm cell
TOP | The height of the highest component in a storm cell
BASE | The height of the lowest component in a storm cell
LOWEL | The elevation angle of the lowest component in a storm cell
HIGHEL | The elevation angle of the highest component in a storm cell
BEGAZI | The azimuth of the first cell segment of a storm cell
ENDAZI | The azimuth of the last cell segment of a storm cell
BEGRAN | The slant range to the front (closest to the radar) of the first sample volume of a cell segment
ENDRAN | The ending range: the slant range of the farthest part of a component (from the radar)

Table 5. Cell component information included in thunderstorm cell 1

Comp | EL (°) | AC (°) | RC (km) | XC (km) | YC (km) | HC (km) | DBZECmax (dBZ) | MC (km²) | ACbeg (°) | ACend (°) | RCbeg (km) | RCend (km)
1 | −15 | −29.74 | 23.83 | −11.82 | 20.69 | 3.87 | 51.5 | 3.21 | −31 | −27 | 23.85 | 25.55
2 | −14 | −29.23 | 23.62 | −11.53 | 20.61 | 4.32 | 52.5 | 3.01 | −30 | −27 | 23.35 | 25.35
3 | −13 | −29.88 | 23.51 | −11.71 | 20.39 | 4.75 | 54.5 | 4.11 | −31 | −27 | 22.65 | 25.35
4 | −12 | −29.86 | 23.42 | −11.66 | 20.31 | 5.17 | 54.5 | 3.84 | −31 | −27 | 22.55 | 24.95
5 | −11 | −29.88 | 23.42 | −11.67 | 20.31 | 5.57 | 54.5 | 3.88 | −31 | −27 | 22.45 | 24.85
6 | −10 | −30.03 | 23.47 | −11.75 | 20.32 | 5.96 | 54 | 4.77 | −32 | −25 | 22.65 | 24.85
7 | −9 | −30.32 | 23.61 | −11.92 | 20.38 | 6.34 | 54.5 | 3.55 | −32 | −27 | 23.55 | 24.75


Table 6. Cell component information included in thunderstorm cell 2

Comp | EL (°) | AC (°) | RC (km) | XC (km) | YC (km) | HC (km) | DBZECmax (dBZ) | MC (km²) | ACbeg (°) | ACend (°) | RCbeg (km) | RCend (km)
1 | −14 | −12.74 | 26.44 | −5.83 | 25.79 | 3.65 | 34.5 | 0.26 | −13 | −11 | 25.85 | 28.25
2 | −13 | −9.82 | 32.27 | −5.50 | 31.80 | 2.81 | 48 | 0.49 | −10 | −8 | 32.65 | 34.25
3 | −12 | −9.78 | 32.44 | −5.51 | 31.97 | 3.32 | 51 | 1.15 | −10 | −8 | 32.55 | 34.25
4 | −11 | −9.89 | 32.36 | −5.56 | 31.88 | 3.89 | 49.5 | 1.01 | −10 | −8 | 32.15 | 33.95
5 | −10 | −9.55 | 32.06 | −5.32 | 31.61 | 4.50 | 49.5 | 0.97 | −10 | −8 | 31.95 | 33.35
6 | −9 | −9.61 | 32.10 | −5.36 | 31.66 | 5.05 | 49.5 | 1.23 | −10 | −8 | 31.85 | 33.15
7 | −8 | −9.61 | 32.07 | −5.36 | 31.62 | 5.60 | 49 | 1.44 | −10 | −7 | 31.65 | 33.25


Table 7. Explanations of attributes in Tables 5 and 6

EL | The elevation angle of an elevation scan
AC | The azimuth of the mass-weighted center of a component
RC | The slant range to the mass-weighted center of a component
XC | The (flat-earth projected) x-coordinate of the centroid of a component
YC | The (flat-earth projected) y-coordinate of the centroid of a component
HC | The height above ground (of the mass-weighted center) of a component
DBZECmax | The maximum reflectivity factor in a component
MC | The mass-weighted area of a component
ACbeg | The most counterclockwise extent of a component
ACend | The most clockwise extent of a component
RCbeg | The (flat-earth projected) range of the closest part of a component (to the radar)
RCend | The slant range of the farthest part of a component (from the radar)



Fig. 2. Schematic diagram of cell components on echo map with reflectivity above 25 dBZ

cell 2 is in the development stage, and the aircraft should pay attention to its development and take precautions. Because cell components identified with the smaller default reflectivity thresholds are discarded and only components identified with the larger thresholds are retained in the identification process, the plotted cell-component area in Fig. 2 is not the whole area above 25 dBZ but the strongest echo area. Still, this thunderstorm recognition algorithm can identify thunderstorm cells well in general.

5 Conclusion

This paper studies a thunderstorm identification algorithm for the simulated airborne weather radar based on the SCI algorithm and gives an example of thunderstorm identification. The results show that the algorithm can effectively identify the thunderstorm cells in the sector scan area in front of the aircraft, not only the mature thunderstorm cells but also those in the development stage with a certain scale. The algorithm also outputs the attributes of the identified thunderstorm cells, including the centroid position, the top height, the bottom height, VIL, etc. This study is of guiding significance for aircraft flight avoidance and at the same time verifies the feasibility of the simulated airborne weather radar volume scan data.

Acknowledgements. Thanks to the National Key R&D Program of China (2018YFC1506104) and the Application and Basic Research program of the Sichuan Department of Science and Technology (2019YJ0316) for the research direction and for providing the research foundation for this topic.

References 1. He L (2014) Research on signal processing technology of beam multi-scan airborne weather radar. Nanjing University of Aeronautics and Astronautics 2. Gao Y (2009) Research on key technologies of airborne weather radar detection system Beijing University of Posts and Telecommunications 3. Yu X, Zhou X, Yu X (2012) Progress of thunderstorm and severe convection near weather forecast technology. Acta Meteorologica Sinica 70(03):311–337 4. Wei X, Jiang H, Wang G et al (2013) Disaster analysis of thunderstorm to aviation flight. Meteorol J Inner Mongolia 4:42–44 5. Zhang X (2011) Analysis and identification of thunderstorm weather and its impact on flight. J Changsha Aeronaut Vocat Tech Coll 11(2):49–54 6. Dixon M, Wiener G (1993) TITAN: thunderstorm identification, tracking, analysis, and nowcasting—a radar-based methodology. J Atmos Oceanic Technol 10(6):785–797 7. Han L, Fu S, Zhao L et al (2009) 3D convective storm identification, tracking, and forecasting—an enhanced TITAN algorithm. J Atmos Oceanic Technol 26(4):719–732 8. Wang L, Liu X, Wei M (2017) Simulation of adaptive hazard the weather warning method for airborne weather radar. J Syst Simul 29(07):1572–1581 9. Kyznarová H, Novák P (2009) CELLTRACK—convective cell tracking algorithm and its use for deriving life cycle characteristics. Atmos Res 93(1):317–327 10. Johnson JT, Mac Keen PL, Witt A et al (1998) The storm cell identification and tracking algorithm: an enhanced WSR-88D algorithm. Weather Forecast 13(2):263–276 11. Lakshmanan V, Hondl K, Rabin R (2009) An efficient, general-purpose technique for identifying storm cells in geospatial images. J Atmos Oceanic Technol 26(3):523–537 12. Choi J, Olivera F, Socolofsky SA (2009) Storm identification and tracking algorithm for modeling of rainfall fields using 1-h NEXRAD rainfall data in Texas. J Hydrol Eng 14(7): 721–730 13. Lakshmanan V, Rabin R, De Brunner V (2003) Multiscale storm identification and forecast. Atmos Res 67:367–380 14. Han L, Wang H, Tan X et al (2007) Research progress of storm identification, tracking and early warning based on radar data. Meteorol Monthly 01:3–10 15. Lv B, Yang S, Wang J et al (2016) Data quality evaluation of X-band dual-line polarization doppler radar. J Arid Meteorol 34(6):1054–1063

FPGA-Based Fall Detection System

Peng Wang¹,², Fanning Kong¹, and Hui Wang¹

¹ College of Electrical and Electronic Engineering, Harbin University of Science and Technology, Harbin, China
[email protected]
² Key Laboratory of Engineering Dielectrics and Its Application, Ministry of Education of China, 150080 Harbin, China

Abstract. There is a high tendency of falling in the independent living of the elderly, and post-fall injuries can be very serious, so it is necessary for them to get timely assistance when they fall. The main objective of this work is to build an FPGA-based hardware implementation of a video-based fall detection system. First, the moving object is extracted through background subtraction based on Gaussian Mixture Models (GMM). Second, we judge whether there is a fall through the aspect ratio, the effective area ratio, and the change in the center of the human body. Finally, if the old person falls, the detection system gives a sound-light alarm and sends a message to the elderly person's family and community through GSM. The experimental results demonstrate that the accuracy of this fall detection system is up to 95% indoors and that the system satisfies the real-time requirement.

Keywords: Fall detection · FPGA · Background subtraction · GSM

1 Introduction

As the global aging population grows tremendously, a large part of elderly people have to live alone while their children work away from home. It is reported that 28-35% of the 65-75 age group fall at least once a year [1]. Falling exposes the elderly to greater chances of suffering fall-related injuries [2]. Therefore, it is essential to put forward an automatic fall detection system so that a falling elderly person can get immediate help and avoid post-fall injuries or deadly cases due to delayed assistance. While various kinds of fall detection systems have been researched in recent years, research in video-based fall detection has gained much attention [3]. Most video-based fall detection systems are implemented on a CPU or PC with software processing and do not target real-time operation [4, 5]. An FPGA has advantages in processing speed because of the large number of processing elements that work in parallel [6-8]. In this work, an FPGA is selected as an accelerator to improve the performance of the system.


2 Overview of the System

The presented system consists of an OV7725 digital camera, an automatic detection platform with a Cyclone IV FPGA device (EP4CE15F17C8N), LEDs, a buzzer and a GSM module. All computation of the system is done inside the FPGA, which also drives the LEDs, the buzzer and the GSM module. The fall detection system is described in Fig. 1.


Fig. 1. Overview of the fall detection system

3 Fall Detection Algorithm

A number of algorithms for object detection have been presented. Object detection algorithms can be classified into frame subtraction schemes, optical flow methods and background subtraction.


Fig. 2. Flow chart of the fall detection algorithm


The frame subtraction scheme cannot extract the full object image, and the optical flow method is complicated to compute and has poor real-time performance. In this work, we choose the background subtraction method, which requires a small amount of calculation and has good real-time performance. Figure 2 shows the flow of the fall detection algorithm.

3.1 Background Generation

In this work, Gaussian mixture background modeling (GMM) is adopted for background generation. Each pixel is modeled with Gaussian components according to Eq. (1):

F_i(x_t, u_{i,t}, Σ_{i,t}) = 1 / ( (2π)^{n/2} |Σ_{i,t}|^{1/2} ) · exp( −(1/2) (x_t − u_{i,t})ᵀ Σ_{i,t}^{−1} (x_t − u_{i,t}) )    (1)

The background model is then built as the weighted sum of the K Gaussian components, as in Eq. (2):

P(x_t) = Σ_{i=1}^{K} w_{i,t} · F_i(x_t, u_{i,t}, Σ_{i,t})    (2)

3.2 Moving Object Segmentation

ð3Þ

While at coordinate (x, y), IN (x, y) is the intensity value for the new pixel; IB (x, y) is the intensity value for the background pixel. T is a difference threshold which is predetermined. Meanwhile, the pixel will be classified as the background object if the condition in Eq. (3) is not fulfilled. 3.3

Fall Detection

The minimum bounding box and its features (height, width, and size) were generated for the foreground object following image binarization. A standing person has a height (H) greater than the width (W), which gives a height-to-width (aspect) ratio greater than a threshold T1, as shown in Fig. 3a. On the contrary, a person who has fallen has an aspect ratio below T1, as illustrated in Fig. 3b.


In spite of that, this work proposes a supplementary condition to distinguish a real fall from daily exercises using the effective area ratio defined in Eq. (4):

R_effective = S_O / S_B   (4)

where S_O is the area of the foreground object and S_B is the area of the bounding box. A fall is indicated when R_effective is greater than a predetermined threshold T2. Normal movements such as squatting, doing push-ups, or walking are slow and change the body center little, whereas a fall is rapid and violent, so the body center changes suddenly during a fall. Therefore, to further improve the accuracy of the system, the judgment is corrected according to the change of the body center. After calibrating the minimum external bounding box of the human body, the center position O(x, y) of the body is found and the fall judgment is corrected using Eq. (5):

O_{k−1,y} > O_{k,y}  and  √((O_{k−1,y} − O_{k,y})² + (O_{k−1,x} − O_{k,x})²) > T3   (5)

The body centers O_k(x, y) and O_{k−1}(x, y) of two adjacent frames are compared. When the body center in the kth frame is lower than that in the (k−1)th frame and the distance between the two centers is greater than the threshold T3, the result is judged to be a fall [9]; otherwise, it is not a fall.
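The decision rule combining Eqs. (4)–(5) can be sketched as follows. This is only an illustrative software rendering, not the authors' Verilog design; the thresholds use the values reported in Sect. 5, and the function name and bounding-box fields are hypothetical.

# Illustrative sketch of the fall decision rule of Eqs. (4)-(5); not the authors'
# Verilog implementation. T1, T2, T3 use the values reported in Sect. 5; the function
# name and bounding-box fields are hypothetical.
from math import hypot

T1, T2, T3 = 1.2, 0.45, 6.5   # aspect ratio, effective area ratio, center-change thresholds

def is_fall(box_h, box_w, object_area, cx, cy, prev_cx, prev_cy):
    aspect_ratio = box_h / box_w                     # < T1 suggests a lying posture
    effective_ratio = object_area / (box_h * box_w)  # R_effective = S_O / S_B, Eq. (4)
    center_dropped = prev_cy > cy                    # O_{k-1,y} > O_{k,y}, as in Eq. (5)
    center_jump = hypot(cx - prev_cx, cy - prev_cy)  # distance between adjacent centers
    return (aspect_ratio < T1 and
            effective_ratio > T2 and
            center_dropped and center_jump > T3)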

Fig. 3. Aspect ratio, effective area ratio and body center of bounding box: (a) person standing; (b) person falling down

4 Implementation on FPGA

This work proposes a hardware implementation of the fall detection system on an FPGA. Figure 4 shows the functional diagram of the system implemented in the FPGA.


Fig. 4. Overview of the fall detection system (functional blocks implemented in the FPGA: CMOS sensor configuration and image capture, RAW-to-RGB conversion, SRAM and SDRAM controllers with read/write FIFOs, background generation and subtraction, minimum bounding box, fall detection, and LED/GSM/buzzer controllers)

Firstly, the CMOS sensor is configured through a configuration block, and the FPGA acquires the video stream from the CMOS sensor through a capture module. The raw data are then converted from Bayer format to RGB format by a conversion module. The video frames and the generated background models are stored in the SDRAM through FIFOs. To detect the moving object, the new image frame and the background model are read pixel by pixel from the SDRAM, and the foreground is obtained by thresholding their absolute difference. The SRAM controller manages the registers holding parameters for the fall detection algorithms. Finally, the FPGA drives the alarm controllers after a fall event has been determined from the bounding box.
The project was synthesized for an Altera Cyclone IV (EP4CE15F17C8N) FPGA device using the Quartus II Design Suite. Behavioral simulations performed in ModelSim 10.1d verified that the hardware modules implement the same functionality as the MATLAB R2012a model. Table 1 presents the resource usage of the FPGA. As the data in Table 1 show, even a small FPGA device from the Cyclone IV series can run this fairly complex video-based fall detection system, with resource utilization at about 35% of the available resources.

Table 1. Project resource utilization

Resource  Used   Available  Percentage (%)
FF        5457   54,576     10
LUT6      5730   27,288     21
LE        4922   15,408     32
DSP48     54     112        48
BRAM      22     116        19

5 Results and Discussions

An Altera Cyclone IV (EP4CE15F17C8N) FPGA device is used to implement the presented fall detection system, described in Verilog HDL. The same fall detection algorithm was implemented in MATLAB on a PC with an Intel Core i3-4170 3.70 GHz CPU and 4 GB of RAM.

Fig. 5. Fall detection with aspect ratio and effective area ratio: (a) standing; (b) squatting; (c) lying down; (d) leg pressing; (e) falling down; (f) doing push-ups

Before the experiment, the subject agreed to the publication of the experimental images and data and signed a consent form. After repeated experiments, we took 1.2 as the aspect ratio threshold, 0.45 as the effective area ratio threshold, and 6.5 as the body center threshold, and used the first 1000 image frames of the video for the tests. Figure 5a–f shows cases detected by the aspect ratio alone; of these, Fig. 5c, e and f would all be judged as falls. Therefore, this work introduces the effective area ratio and the body center change as corrections to improve the accuracy of fall detection.

5.1 Accuracy of System

A large number of tests were conducted to assess the accuracy of the fall detection system, and Table 2 shows the results. The true positive rate of the FPGA-based solution is slightly lower than that of the MATLAB-based solution, which may simply be because the FPGA processes frames faster than MATLAB.

Table 2. Fall detection rate for both FPGA and MATLAB implementations

Platform  True positive (%)  True negative (%)  False positive (%)  False negative (%)
FPGA      96.00              94.46              5.54                4.00
MATLAB    97.00              93.56              6.44                3.00

1 True positive = number of correctly detected falls / actual total number of falls
2 True negative = number of correctly detected normal activities / total number of normal activities
3 False positive = number of normal activities judged as falls / total number of normal activities
4 False negative = number of undetected falls / total number of actual falls

Table 2 shows that the fall detection accuracy of the FPGA platform is 1% lower than that of the MATLAB platform. This may be because the FPGA processing speed is so fast that some image frames are lost.

5.2 Processing Frame Rate

The processing frame rate of the implemented system was evaluated and is shown in Table 3. The system achieves 58.82 fps at a resolution of 640 × 480; the frame rate was calculated from the number of clock cycles required by the system to process one image frame. The time required for MATLAB to process the same image frames was also measured. The results show that the video-based fall detection on the FPGA is nearly 6× faster than the MATLAB implementation.

Table 3. Detection time for a single frame on the FPGA and MATLAB

Platform  Max. (s)  Min. (s)  Avg. (s)  Frames per second (fps)
FPGA      0.017     0.017     0.017     58.82
MATLAB    0.128     0.076     0.099     10.10
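The figures in Table 3 can be checked with a few lines of arithmetic; the numbers below are taken directly from the table, and only the derived frame rates and speedup are computed.

# Quick check of the Table 3 figures: frame rate from average per-frame time,
# and the FPGA-vs-MATLAB speedup quoted in the text.
fpga_avg_s, matlab_avg_s = 0.017, 0.099

fpga_fps = 1.0 / fpga_avg_s       # ≈ 58.8 fps, matching the reported 58.82 fps
matlab_fps = 1.0 / matlab_avg_s   # ≈ 10.1 fps
speedup = matlab_avg_s / fpga_avg_s
print(f"FPGA ≈ {fpga_fps:.1f} fps, MATLAB ≈ {matlab_fps:.1f} fps, speedup ≈ {speedup:.1f}x")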

5.3 System Alarm Time

The average response time of the sound-light alarm in Table 4 is 0.51 s, and the average GSM message sending time is 4.97 s. The experimental results show that the real-time performance of the FPGA-based fall detection alarm meets the system requirements.

Table 4. The response time for falling alarm

Alarm method       Max. (s)  Min. (s)  Avg. (s)
Sound-light alarm  0.73      0.29      0.51
GSM                6.85      3.36      4.97


Hence, the FPGA is an effective instrument in this work for improving the performance of the video-based fall detection system.

6 Conclusions

In this work, we have presented a video-based fall detection system on an FPGA. The experimental results demonstrate that the system processes up to 58.82 fps at a resolution of 640 × 480 and, through the Verilog HDL design, raises an alarm automatically when a fall happens. The system shows good stability, low false positives, and good robustness of the FPGA for image-processing-based fall detection.

References 1. Tarabini M, Saggin B, Bocciolone M, et al (2016) Falls in older adults: kinematic analyses with a crash test dummy. In IEEE international symposium on medical measurements and applications, pp 1–6 2. Zhang D, He Y, Liu M, Yang HB et al (2016) Study on incidence and risk factors of fall in the elderly in a rural community in Beijing. Zhonghua liuxingbingxue zazhi 37(5):624–628 3. Rougier C, Meunier J, St-Arnaud A et al (2011) Robust video surveillance for fall detection based on human shape deformation. IEEE Trans Circuits Syst Video Technol 21(5):611 4. Asano S, Maruyama T, Yamaguchi Y (2009) Performance comparison of FPGA, GPU and CPU in image processing. In: International conference on field programmable logic and applications, pp 126–131 5. Che S, Li J, Sheaffer JW et al (2008) Accelerating compute-intensive applications with GPUs and FPGAs. In: Symposium on application specific processors, pp 101–107 6. Ong PS, Ooi CP, Chang YC et al (2014) An FPGA-based hardware implementation of visual based fall detection. In: IEEE region 10 symposium, pp 397–402 7. Kryjak T, Komorkiewicz M, Gorgon M (2011) Real-time moving object detection for video surveillance system in FPGA. In: Conference on design and architectures for signal and image processing, pp 1–8 8. Wang Z (2015) Hardware implementation for a hand recognition system on FPGA. In: IEEE international conference on electronics information and emergency communication, pp 34–38 9. Liu H, Zuo C (2012) An improved algorithm of automatic fall detection. In: 2012 AASRI conference on computational intelligence and bioinformatics, p 6

Artificial Intelligence and Game Theory Based Security Strategies and Application Cases for Internet of Vehicles Zhiyong Wang1, Miao Zhang1, He Xu3, Guoai Xu1(&), Chengze Li2(&), and Zhimin Wu2(&) 1

School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing, China [email protected] 2 National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT), Beijing, China {lichengze,wuzhimin}@cert.org.cn 3 University College London, London, UK

Abstract. Information security of Internet of Vehicles (IoV) has attracted much attention in recent years. In view of security vulnerabilities existed in automobiles, many countries launch guidelines and cybersecurity standards concerning IoV security and plenty of new techniques have been applied to combat threats. In this paper, a variety of attacks on IoV are summarized and classified, then artificial intelligence and game theory based security countermeasures for IoV are highlighted, and their protection mechanisms are illustrated. Finally, a few application cases of artificial intelligence and game theory based security strategies for IoV is analyzed, aiming to provide helpful reference for the development of IoV security techniques. Keywords: Internet of vehicles (IoT) theory  Security  Application

 Artificial intelligence(AI)  Game

1 Introduction

There are now more than 1.2 billion motor vehicles across the globe, a number expected to hit two billion by 2035, and over 125 million network-connected automobiles are estimated to be manufactured between 2018 and 2022 [1]. In China, as of 2017, there were more than 17.8 million IoV users [2]. IoV is regarded as a typical kind of Internet of Things (IoT): it can improve driving safety, provide convenient information, and facilitate traffic management. IoV implements the communications between vehicles and public networks via vehicle-to-road (V2R), vehicle-to-human (V2H), vehicle-to-vehicle (V2V), and vehicle-to-sensor (V2S) interactions. However, the rapid development of IoV raises security and privacy concerns that can threaten driving safety and drivers' lives and invade people's privacy. In 2015, security flaws in BMW vehicles equipped with ConnectedDrive were found to enable thieves to unlock doors and steal car data. Following this, more than 1.4 million Chrysler cars in the US were recalled due to network security problems [3].


In 2016, the US National Highway Traffic Safety Administration (NHTSA) launched "Cybersecurity Best Practices for Modern Vehicles", which describes cybersecurity standards, principles, and best practices for the car industry to improve the information security of vehicles. In 2017, the "White Papers of Network Security of Internet of Vehicles" was published by the China Academy of Information and Communications Technology (CAICT), aiming to promote the safe development of IoV. Until now, plenty of security strategies have been put forward to ensure the security of IoV, such as encryption, intrusion detection systems, secure routing protocols, and key management. However, more effective and flexible methods need to be developed to match the special features of IoV, including the dynamic change of network topology and the limited storage and processing capacity of the automobile terminal. Artificial intelligence can not only enhance detection accuracy for threats but also find hidden risks by learning from data without explicit programming, and game theory is an intelligent tool for analyzing the interaction between attackers and defenders. In this article, we discuss the structure of IoV, its security threats, and countermeasures for network security. In particular, artificial intelligence (AI) and game theory based security schemes for IoV are emphasized, and the application cases of IoV with AI and game theory are analyzed accordingly.

2 Literature Survey

2.1 Structure of IoV

Vehicle system architecture can be hierarchically grouped into four layers in terms of security, namely, external communication layer (Level 1), vehicle gateway layer (Level 2), in-vehicle network layer (Level 3), and hardware layer (Level 4) (shown in Fig. 1) [4].

Fig. 1. Structure of IoV


The external communication layer achieves communication between vehicles and the outside world by linking on-board communication equipment to V2X systems, Wi-Fi, and mobile networks. The vehicle gateway layer regulates the automotive systems in a vehicle like a headquarters: the vehicle gateway connects the internal ECUs (Electronic Control Units) to the external communication equipment at Level 1 and controls message transfer in the in-vehicle networks. The in-vehicle network layer is responsible for transmitting messages among ECUs and can be divided into multiple network sub-units according to ECU functions, such as the body domain, control domain, and telematics domain; CAN (Controller Area Network) or LIN (Local Interconnect Network) is commonly used as the communication protocol at this layer. Finally, the hardware layer comprises the ECUs and the various components that perform specific vehicle functions [4].

2.2 Attack Classification in IoV

Attacks on IoV can be mainly classified into five types: authentication attacks, availability attacks, privacy attacks, routing attacks, and data authenticity attacks. Authentication attacks include the Sybil attack, GPS camouflage, the camouflage attack, and the wormhole attack. With regard to availability, channel interference and denial of service are two common attacks. Privacy attacks steal customer data by interception or eavesdropping. With respect to routing, there are four related attack types: interception, camouflage, denial of service, and route modification. Data authenticity attacks can be categorized into the masquerading attack, replay attack, illusion attack, and information fabrication and falsification (shown in Fig. 2) [5].

Fig. 2. Attack model

2.3 Countermeasures for IoV Security

A wide range of countermeasures have been proposed to prevent threats on IoV according to the special characteristics of IoV attacks, including establishing suitable threat models, adopting honeypot systems, constructing intrusion detection systems, employing routing privacy protection mechanisms, and using reliable routing protocols and key management. For threat modeling, constructing mathematical models and adopting graph-based methods are the two main approaches for simulating threats. An intrusion detection system (IDS) can employ anomaly detection and signature detection to hinder attacks by collecting and analyzing the internal system's information; SVM-based security frameworks and protocol analysis can offer protection for IoV security as well. Honeypots realize protection by tempting and deceiving attackers so that vital system data are not invaded. Several secure routing protocols, such as SAODV, Ariadne, and SRP, not only perform normal routing functions but also restrain attacks on routing. Routing privacy protection mechanisms contain a few algorithms that guard against data leakage at routing nodes, including SLPD, ALAR, and STAP. Key management is a crucial strategy for IoV security because encryption is a significant method for information security, and successful encryption relies on suitable key management. Additionally, pseudonym signatures and certificateless signatures can also provide effective protection (shown in Fig. 3) [5].

Fig. 3. Countermeasures for IoV security

2.4 Artificial Intelligence and Game Theory Based Security Strategies for IoV

Artificial intelligence (AI) is the ability of a machine to simulate human behavior intelligently and carry out specific tasks assigned by humans. Machine learning and deep learning are the two main techniques used to implement artificial intelligence. Some scholars have argued that machine learning, deep learning, and reinforcement learning can be adopted to safeguard information security in the Internet of Things (IoT).


For example, AI can build real-time behavioral models of network nodes, servers and equipment, and reduce new malware attacks and APT malware (phishing, adware, Trojans, etc.) [6]. Machine learning can defend against threats on IoV by gathering and storing the right data: the vehicle's internal network can be monitored by storing and analyzing logs, thereby detecting malicious threats and combating attacks. Once user logs are acquired, machine learning can check for anomalies in the overall picture, and it can thus analyze external service data and information to detect unusual activities and malware attacks [7]. Loukas put forward a deep learning based intrusion detection approach to prevent cyber-physical attacks in IoV, which enhances intrusion detection accuracy for vehicles compared with other deep learning and machine learning approaches [8]. A Support Vector Machine (SVM) based detection system differentiates normal content from anomalies by analyzing training data consisting of normal samples; the input space is thereby classified into normal and abnormal parts [9]. Kang developed a Deep Belief Network (DBN) to detect intrusions in the in-vehicle network, achieving 97.8% accuracy with a 1.6% false positive rate [10]. Vuong et al. adopted decision trees to detect command injection and denial-of-service threats on robotic vehicles, indicating that introducing physical input characteristics can increase detection accuracy and lower the false positive rate [11]. Game theory is exploited to find optimal choices when facing conflicts: it refers to the process by which individuals or organizations select strategies from action sets to make the best decisions in a specific context [12]. Among these, security games focus on the interaction between malicious attackers and defenders and are applied to detect intrusions in IoV networks. For instance, Buchegger and Alpcan proposed a two-player zero-sum game to generate solutions to IoV security, in which one player is defined as the attacker and a group of mobile nodes as the defender, to imitate jamming and Sybil attacks in a vehicular network; the result showed that the mobile nodes can improve their defense strategy by adopting the zero-sum game approach [13]. For understanding attack and defense comprehensively, game theory has proven to be an effective analysis tool. Alpcan utilized a non-cooperative game theory model and provided Nash equilibrium analysis for many common network attack detection problems [14]. Chen addressed abnormal network attack detection and provided a mixed Nash equilibrium analysis [15]. Afterwards, Ismail [16] applied Chen's conclusions to privacy attack detection on smart-grid metering infrastructure data. However, most of these conclusions are based on the assumption that the attacker's identity is already known, which does not hold because attackers are highly disguised [14–16]. To resolve the unknown-identity problem, some scholars employed Bayesian game theory for attack detection, whose rationale is the credibility evaluation of the attacker's previous behaviors and real-time updates via Bayes' theorem [17–25]. In view of the features of the mobile wireless ad hoc networks of IoV, researchers have also performed a number of studies on intrusion node detection and node communication incentives [26, 27].
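To make the "train on normal data, flag outliers" idea behind the SVM-based detection concrete, the following is an illustrative sketch using scikit-learn's one-class SVM. It is not any cited system; the feature construction and data are hypothetical placeholders.

# Illustrative sketch (not any cited system) of SVM-style anomaly detection:
# train only on feature vectors from normal in-vehicle traffic, then flag outliers.
# The feature construction here is a hypothetical placeholder.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_features = rng.normal(0.0, 1.0, size=(500, 4))   # e.g. message rate, ID entropy, payload stats
suspect_features = rng.normal(3.0, 1.0, size=(10, 4))    # injected/abnormal traffic

detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_features)
labels = detector.predict(suspect_features)              # +1 = classified normal, -1 = anomaly
print("flagged as anomalous:", int((labels == -1).sum()), "of", len(labels))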
Additionally, game theory is used for intrusion detection in industrial control, including networked control systems [28] and energy systems [29]. Apart from this, game theory is widely applied in numerous other fields [30–32].
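As a toy illustration of the attacker-defender zero-sum games referenced above [13, 14], the sketch below solves a small two-action game for the defender's optimal mixed strategy via linear programming. The payoff numbers and action labels are invented for illustration only.

# Toy attacker-defender zero-sum game in the spirit of [13, 14]; payoffs are invented.
# Rows = defender actions (monitor CAN bus / monitor V2X link), columns = attacker
# actions (message injection / jamming). Entries are the defender's payoff; we solve
# for the defender's optimal mixed strategy with a linear program.
import numpy as np
from scipy.optimize import linprog

A = np.array([[ 2.0, -1.0],
              [-1.0,  1.0]])
n_rows = A.shape[0]

# Variables: x = defender mixed strategy (n_rows entries), v = game value.
# Maximize v subject to x^T A[:, j] >= v for every attacker action j, sum(x) = 1, x >= 0.
c = np.concatenate([np.zeros(n_rows), [-1.0]])             # minimize -v
A_ub = np.hstack([-A.T, np.ones((A.shape[1], 1))])         # v - x^T A[:, j] <= 0
b_ub = np.zeros(A.shape[1])
A_eq = np.array([[1.0] * n_rows + [0.0]])
b_eq = np.array([1.0])
bounds = [(0, 1)] * n_rows + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:n_rows], res.x[-1]
print("defender mixed strategy:", np.round(x, 3), " game value:", round(v, 3))
# For this payoff matrix the optimum is x = [0.4, 0.6] with value 0.2.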

2.5 Case Study of Artificial Intelligence and Game Theory Based Security Strategies for IoV

In the aspect of artificial intelligence, Kang et al. adopted a deep neural network to construct an intrusion detection system for IoV security. In this system, the parameters are initialized using deep belief networks as a pre-processing step; high-dimensional packet data are then used to train the neural network to discriminate the statistical properties of hacking and normal data and to find the relevant characteristics [33]. Regarding a practical application of artificial intelligence, a "learn and prevent" device was developed through machine learning by Miller and Valasek to detect intrusions in the vehicle. The device is essentially an NXP micro-controller on a simple board that can be plugged into the OBD-II port. It collects the typical data patterns of the vehicle at the beginning of driving (observation mode) and then changes to detection mode to monitor unusual information. Once an attack is found, the automobile is switched into "limp mode" to interrupt the networks and suspend vital functions such as steering, and prevention and alert modes are then activated. The prevention mode enables the vehicle to ignore the malicious attack so that attackers are inhibited, while the alert mode empowers the driver to take action by sending messages. With respect to game theory, Raya et al. designed a revocation protocol for network security based on a game theory method and presented three choices that each player can adopt according to the available protocols. First, a player can abstain (choose A) and skip the local revocation step, since a mobile node may be unwilling to participate. Second, a player can cast a vote V against a detected attacker by participating in the local voting step. Finally, a player can invalidate both the attacker's identity and its own identity, i.e., commit suicide. By introducing a dynamic game in which mobile nodes are defined as players, the researchers solve the revocation problem. Eventually, Raya et al. applied the game-theory-based revocation procedure to practical problems. The protocol realizes a quick and optimal revocation process by motivating mobile nodes to participate actively, and realistic IoV simulations of this game theory method indicated a better tradeoff among the various approaches [34].

3 Conclusions

A wealth of countermeasures based on different theories and new techniques have been employed to defend the security of IoV against threats. Among them, artificial intelligence and game theory based security strategies can protect IoV from malicious attacks effectively. As more application cases based on these two methods appear, artificial intelligence and game theory based security schemes can play a stronger and broader role in the cybersecurity of IoV in the future.


With the rapid development of IoV and related techniques, more malicious attacks will emerge, and more effective techniques can be developed to fight these threats in the future. Given this, no single technique can guarantee the absolute security of IoV, so the combined application of multiple effective techniques may be a better way to prevent threats to IoV networks. Additionally, the advent of the 5G era and the emergence of a great number of innovative and effective techniques will bring new methods for IoV security protection and for the automobile manufacturing industry, ensuring safe driving and facilitating the establishment of smart cities.
Acknowledgements. This work is supported by the National Key Research and Development Program of China (Grant No.: 2018YFB0803605), the National Natural Science Foundation of China (Grant No.: 61897069), and the Foundation Strengthening Program for Key Basic Research of China (Grant No.: 2017-JCJQ-ZD-043). Guoai Xu, Chengze Li and Zhimin Wu are the corresponding authors.

References 1. Millman R (2018) Connected cars report: 125 million vehicles by 2022, 5G coming. In: Internet of business. https://internetofbusiness.com/worldwide-connected-car-market-to-top125-million-by-2022/ 2. Analysis of status development of internet of vehicles in China in 2018 (2018) In: RFID world. http://news.rfidworld.com.cn/2018_09/6746f0f84b2cd8cd.html 3. Takefuji Y (2018) Connected vehicle security vulnerabilities. IEEE Technol Soc Mag 37 (1):15–18 4. Tanaka M, Takahashi J, Oshima Y (2017) Cyber-attack countermeasures for cars. NTT Technical Rev 15(5):1–5 5. Sun YC, Wu L, Wu SZ, Li SP, Zhang T, Zhang L, Xu JF, Xiong YP, Cui XG (2017) Attacks and countermeasures in the internet of vehicles. Ann Telecommun 72:283–295 6. Lee GM Artificial intelligence (AI) for development series: report on AI and IoT in security aspects. ITU. 10 7. Causevic D How machine learning can enhance cybersecurity for autonomous cars. Total. https://www.toptal.com/insights/innovation/how-machine-learning-can-enhancecybersecurity-for-autonomous-cars 8. Loukas G, Vuong T, Heartfield R, Sakellari G, Yoon Y, Gan D (2018) Cloud-based cyberphysical intrusion detection for vehicles using deep learning. IEEE Spec Sect Secur Anal Intell Cyber Phys Syst 6:3491–3508 9. Carlos A, Catania FB (2012) An autonomous labeling approach to support vector machines algorithms for network traffic anomaly detection. Expert Syst Appl 39(2):1822–1829 10. Kang MJ, Kang JW (2016) Intrusion detection system using deep neural network for invehicle network security. PLoS ONE 11(6):1–17 11. Vuong TP, Loukas G, Gan D (2015) Decision tree-based detection of denial of service and command injection attacks on robotic vehicles. IEEE Int Work Inf Forensics Secur 1–6 12. Liang XQ, Yan Z (2019) A survey on game theoretical methods in Human-Machine networks. Futur Gener Comput Syst 92:674–693 13. Buchegger S, Alpcan T (2008) Security games for vehicular networks. In: Proceedings of the 46th annual allerton conference on communication, control and computing, pp 244–251


14. Alpcan T, Basar T (2011).Network security: a decision and game-theoretic approach. Cambridge University Press 15. Chen L, Leneutre J (2009) A game theoretical framework on intrusion detection in heterogeneous networks. IEEE Trans Inf Forensics Secur 4(2):165–178 16. Ismail Z, Leneutre J, Bateman D, Chen L (2014) A game theoretical analysis of data confidentiality attacks on smart-grid AMI. IEEE J Sel Areas Commun 32(7):1486–1499 17. Liu Y, Comaniciu C, Man H (2006) A Bayesian game approach for intrusion detection in wireless ad hoc networks. In: ACM international conference proceeding series 18. Nguyen KC, Alpcan T, Basar T (2009) Security games with incomplete information. In: IEEE international conference on communications 19. Zhang Y, Tan XB (2011) Perception method of internet security based on Markov game theory model. J Softw 22(3):495–508 20. Hu H (2011) Strategy model of internet security based on Markov game theory. J Xi’an Jiaotong University 45(4):18–24 21. Fu Y (2009) Study on strategy selection of attacks and defenses of internet. J Beijing Univ Posts Telecommun 32(1):35–39 22. Zhu Q, Tembine H, Basar T (2010) Network security configurations: a nonzero-sum stochastic game approach. In: American control conference 23. Nguyen KC, Alpcan T, Basar T (2009) Stochastic games for security in networks with interdependent nodes. In: International conference on game theory for networks 24. Nguyen KC, Alpcan T, Basar T (2010) Security games with decision and observation errors. In: American control conference 25. Jiang W (2009) Security evaluation and optimal active defense based on game theory model. Chin J Comput 32(4):817–827 26. Sagduyu YE, Berry R, Ephremides A (2009) MAC games for distributed wireless network security with incomplete information of selfish and malicious user types. In: International conference on game theory for networks 27. Zhu Q, Fung C, Boutaba R, Basar T (2012) GUIDEX: a game-theoretic incentive-based mechanism for intrusion detection network. IEEE J Sel Areas Commun 30(11):2220–2230 28. Amin S, Schwartz GA, Sastry SS (2013) Security of interdependent and identical networked control system’s. Automatica 49(1):186–192 29. Maharjan S, Zhu Q, Zhag Y, Gjessing S, Basar T (2012) Dependable demand response management in the smart grid: a Stackelberg game approach. IEEE Trans Smart Grid 61 (8):3693–3704 30. Manshaei M, Zhu Q, Alpcan T, Basar T, Hubaux JP (2013) Game theory meets network security and privacy. ACM Comput Surv 45(3):1–39 31. Roy S, Ellis C, Shiva S, Dasgupta D, Shandilya V, Wu Q (2010) A survey of game theory as applied to network security. In: Proceedings of the 43rd Hawaii international conference on system sciences 32. Liang X, Xiao Y (2013) Game theory for network security. IEEE Commun Surv Tutor 5 (1):472–486 33. Kang MJ, Kang JW (2016) A novel intrusion detection method using deep neural network for in-vehicle network security. Proc IEEE VTC Fall 1–5 34. Raya M, Manshaei MH, Felegyhazi M, Hubaux JP (2008) Revocation games in ephemeral networks. In: Proceedings of ACM conference on computer and communications security (CCS)

The Effect of Integration Stage on Multimodal Deep Learning in Genomic Studies Fariba Khoshghalbvash(B) and Jean X. Gao Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, USA [email protected]

Abstract. With recent advances in high-throughput sequencing, reading the human genome is not an arduous task anymore. The extensive collection of different types of omics data and possible causal relations between them have led the scientists to exploit specialized machine learning methods such as deep learning and perform integrative analysis of multi-source datasets. In this paper, we compare the performance of both generative and discriminative deep models based on their integration stage. First, we explain the architecture and mathematical point of view of these methods. Then we evaluate the performances of different models by applying them on two sets of cancer-related data to discover the effect of the integration stage on classification accuracy.

Keywords: Data integration · Omics · Deep learning

1 Introduction

With recent developments in next-generation sequencing, a vast amount of heterogeneous omics datasets are available for analysis. It has been demonstrated that, due to possible relations between input modalities, an integrative analysis can be more beneficial than single-input studies. For example, in the earlier era of integrative studies, Srivastava and Salakhutdinov [1] suggested that combining text data and image data yields better results. Shortly after, integrative analysis stepped into other research areas, and due to possible causal and regulatory relations between different genomic data sources, scientists proposed multimodal models that are superior to single-source, classical machine learning methods. Previously, methods like Bayesian networks [2] and kernel-based methods [3] have been applied to multi-platform genomic datasets. However, using classical machine learning algorithms for this purpose is limited to two different approaches. One is to join input modalities by simple concatenation prior to any model training; although this approach can reduce complexity, it is not applicable when two data modalities carry different characteristics and have different natures, such as text and image.


The second approach is to train individual models for each modality and average the results as an ensemble learning technique. However, ensemble learning fails to capture between-modality relations and is only beneficial for obtaining a more robust result. On the other hand, specialized machine learning methods such as deep learning are able to transform a high-dimensional feature set and reach an abstract representation of any kind of input, which can be integrated during the model training process. Deep Neural Networks (DNNs) are feed-forward artificial neural networks consisting of an input layer, multiple hidden layers, and an output layer [4, 5]. A DNN-based structure can reveal possible dependencies between distinct modalities: by selecting a specific sub-network for each modality in the lower layers, one can obtain the abstract representations and then integrate them at the higher levels. This provides the flexibility of applying, in each sub-network, the deep learning method that suits the corresponding modality. Although many studies have shown the superiority of deep integrative techniques [6–8], to our best knowledge no study examines the effect of the integration stage in multimodal deep approaches. In this paper, we perform a classification task on cancer-related datasets by constructing both discriminative and generative deep models. Based on the integration stage and on being supervised or semi-supervised, we build six deep networks and compare their performances to examine the effect of the integration stage on classification.

2 Data

The dataset used in this study contains normal and tumor samples of BReast CArcinoma (BRCA) and LUng ADenocarcinoma (LUAD) from TCGA. We downloaded miRNA (1870 genes) and gene expression (20,530 genes) data using the TCGA-Assembler tool [9, 10] and lncRNA (12,727 genes) data from TANRIC [11]. For the BRCA dataset, we chose the three most populous subtypes, Infiltrating Ductal Carcinoma (335 patients), Infiltrating Lobular Carcinoma (66 patients), and Normal cases (73 patients), to construct a multi-class dataset. The LUAD dataset is also categorized into three groups: Normal (18 patients), Stage I (226 patients), and a mixture of Stage II and III (168 patients). There may be a number of zero-variance features in each modality; to reduce computation and perform a more reliable analysis, all zero-variance features are removed and all features are scaled to (0, 1). The remaining genes (with non-zero variance) of miRNA, gene expression (geneExp), and lncRNA are 1562, 20,212, and 12,682, respectively, for BRCA, and 1596, 20,161, and 12,610 for LUAD (Table 1).
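The preprocessing described above (zero-variance filtering followed by (0, 1) scaling) can be sketched as follows for a single modality; the array shown is a random placeholder, not the TCGA data, and the variable names are illustrative.

# Minimal sketch of the preprocessing described above: drop zero-variance genes and
# scale each remaining feature to (0, 1). The array is a random placeholder for one modality.
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.preprocessing import MinMaxScaler

X_geneexp = np.random.rand(474, 20530)             # placeholder: 474 BRCA samples x geneExp features

X_nonzero = VarianceThreshold(threshold=0.0).fit_transform(X_geneexp)   # remove zero-variance columns
X_scaled = MinMaxScaler(feature_range=(0, 1)).fit_transform(X_nonzero)  # scale to (0, 1)
print(X_scaled.shape)   # column count drops only if a feature has zero variance
                        # (for the real BRCA geneExp data: 20,530 -> 20,212)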

Table 1. Summary of the properties of BRCA and LUAD datasets.

Data   Class groups (# samples)                                      Modalities               Original size          Reduced size
LUAD   Normal (18), Stage I (226), Stage II, III (168)               miRNA, GeneExp, lncRNA   1870, 20,530, 12,727   1596, 20,161, 12,610
BRCA   Normal (73), Ductal Carcinoma (335), Lobular Carcinoma (66)   miRNA, GeneExp, lncRNA   1870, 20,530, 12,727   1562, 20,212, 12,682

3 Methods

In this study, three discriminative networks are used to directly take the original input sets and perform a completely supervised classification task. The difference between these networks is the layer in which integration takes place. Let us name the layers of a general deep network with L layers h^(0), …, h^(L), where h^(0) is the input layer and h^(L) is the output layer. Then the kth layer, for k = 1, …, L, is calculated as

h^(k) = σ(W^(k−1) h^(k−1) + b^(k−1))   (1)

where σ is the activation function, W^(k−1) is the weight matrix between the kth and (k−1)th layers, and b^(k−1) is the bias term. In a multimodal deep network with L_m hidden layers in the modality-specific sub-network for modality m, the hidden layers for k = 1, …, L_m are computed as

h_m^(k) = σ(W_m^(k−1) h_m^(k−1) + b_m^(k−1))   (2)

In order to integrate the three modalities M ∈ {miRNA, geneExp, lncRNA}, the first shared hidden layer h^(1) is calculated as

h^(1) = σ( Σ_{m∈M} ( W_m^(L_m) h_m^(L_m) + b_m^(L_m) ) )   (3)

For late integration, h^(1) is also the final output layer. In this paper, concatenating the input modalities in the first layer of the network (Fig. 1a) is called Early Integration (EI-DNN), building individual sub-networks followed by shared hidden layers (Fig. 1b) is called Middle Integration (MI-DNN), and integrating the individual outputs (Fig. 1c) is called Late Integration (LI-DNN). It is worth mentioning that in LI-DNN all four sets of outputs are considered in minimizing the multi-objective loss function. The loss function for the supervised networks is the categorical cross-entropy (CCE):

CCE = −(1/N) Σ_{i=1}^{N} Σ_{c=1}^{C} 1_{y_i ∈ C_c} log p_model[y_i ∈ C_c]   (4)
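A minimal sketch of a middle-integration network in the spirit of Fig. 1b and Eqs. (2)–(4), written with tf.keras, is given below. The layer widths, number of layers, and training settings are illustrative placeholders, not the authors' exact configuration.

# Minimal sketch (not the authors' exact configuration) of an MI-DNN: one sub-network
# per omics modality whose representations are merged by the summed affine maps of
# Eq. (3) before shared layers. Layer widths are illustrative placeholders.
import tensorflow as tf

def modality_branch(dim, name):
    inp = tf.keras.Input(shape=(dim,), name=name)
    h = tf.keras.layers.Dense(256, activation="relu")(inp)   # modality-specific layers, Eq. (2)
    h = tf.keras.layers.Dense(64, activation="relu")(h)
    return inp, h

mirna_in, mirna_h = modality_branch(1562, "miRNA")
gene_in, gene_h = modality_branch(20212, "geneExp")
lnc_in, lnc_h = modality_branch(12682, "lncRNA")

proj = [tf.keras.layers.Dense(64)(h) for h in (mirna_h, gene_h, lnc_h)]   # W_m h_m + b_m
merged = tf.keras.layers.Activation("relu")(tf.keras.layers.Add()(proj))  # Eq. (3)
shared = tf.keras.layers.Dense(64, activation="relu")(merged)
out = tf.keras.layers.Dense(3, activation="softmax")(shared)              # three BRCA classes

model = tf.keras.Model(inputs=[mirna_in, gene_in, lnc_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy",          # CCE of Eq. (4)
              metrics=["accuracy"])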


where N is the number of samples, C is the number of classes, 1_{y_i ∈ C_c} is an indicator function of the ith sample belonging to class c, and p_model[y_i ∈ C_c] is the predicted probability that the ith sample belongs to class c. In semi-supervised learning, prior to discriminative classification, a generative network such as an AutoEncoder (AE), constructed of two parts (encoder and decoder), is used to reduce the input size. The encoder transforms the original input space and represents it in a new space of lower dimension; the decoder then uses the encoded space to reconstruct the original input. When the reconstruction error is minimized, the encoded data can be regarded as a good representation of the initial input. The objective function of the AE is the Mean Squared Error (MSE):

MSE = Σ_{i=1}^{N} (x_i − R(x_i))²   (5)

where R(x_i) is the reconstruction of sample x_i, in other words the output of the AE.
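For intuition, a per-modality autoencoder of this kind can be sketched in tf.keras as follows; in the semi-supervised variants its encoder output would replace the raw modality as input to the discriminative network. The code and layer sizes below are illustrative placeholders, not the authors' configuration.

# Minimal sketch of a per-modality autoencoder (encoder + decoder) trained with the MSE
# of Eq. (5). Sizes and the commented training call are illustrative placeholders.
import tensorflow as tf

input_dim, code_dim = 1562, 64                       # e.g. the miRNA modality
x_in = tf.keras.Input(shape=(input_dim,))
code = tf.keras.layers.Dense(code_dim, activation="relu")(x_in)        # encoder
x_rec = tf.keras.layers.Dense(input_dim, activation="sigmoid")(code)   # decoder (inputs scaled to (0, 1))

autoencoder = tf.keras.Model(x_in, x_rec)
encoder = tf.keras.Model(x_in, code)                 # reused as the classifier's input
autoencoder.compile(optimizer="adam", loss="mse")    # Eq. (5)
# autoencoder.fit(X_mirna, X_mirna, epochs=50, batch_size=32)  # X_mirna: samples x 1562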

Fig. 1. Deep architectures used for classification: (a) EI-DNN; (b) MI-DNN; (c) LI-DNN; (d) EI-AE; (e) MI-AE; (f) LI-AE.


Later, the encoded input is fed to one of the three aforementioned discriminative networks. Based on the integration stage, three semi-supervised models are designed. EI-AE-DNN and MI-AE-DNN are used when the shared encoded input from EI-AE or MI-AE (Fig. 1d, e) is used to train a simple single-input DNN. If we apply separate AEs to extract individual encoded inputs (Fig. 1f) and then use them as three encoded input modalities for an LI-DNN (Fig. 1c), the model is called LI-AE-DNN.

4 Experimental Results

We applied the six aforementioned multimodal deep networks to the breast cancer data described in Sect. 2, which is categorized into three groups, to perform a multi-class classification task. We used stratified 10-fold cross-validation, training and testing all the models for 10 iterations. During each iteration, 90% of the data was used for training and 10% for testing; moreover, 20% of the training set was used for validation of the network. In total, during each iteration, 72% of the data was used for training, 18% for validation, and 10% for testing. We collected all the test results to compute Accuracy (Acc), Precision (Pre), F1-score, and the Matthews correlation coefficient (Mcc), to construct Precision-Recall Curves (PRC), and to build confusion matrices. Note that the confusion matrices carry normalized values, so the total number of samples in each class is converted to 100. The results in Table 2 and Fig. 2 suggest that middle-stage integration outperforms the other integration stages in both supervised and semi-supervised learning, with the supervised task being more promising. Moreover, Fig. 3 suggests that the discriminative analysis is more capable of handling an imbalanced data distribution.

Table 2. Result of deep integrative classification of BRCA data.

Method       Accuracy (%)  Precision (%)  F1-score (%)  AUC (max = 1)
EI-DNN       84.18         83.85          82.04         0.79
MI-DNN       89.03         87.96          87.43         0.93
LI-DNN       86.08         85.53          81.93         0.88
EI-AE-DNN    70.68         49.95          58.53         0.58
MI-AE-DNN    87.97         86.34          85.71         0.93
LI-AE-DNN    70.68         49.95          58.53         0.61

Fig. 2. Average precision recall curves and metrics comparison for BRCA dataset.

Fig. 3. Confusion matrices for BRCA dataset: (a) EI-DNN; (b) MI-DNN; (c) LI-DNN; (d) EI-AE-DNN; (e) MI-AE-DNN; (f) LI-AE-DNN.

5 Validation

We validated our observation using another dataset, associated with lung cancer, explained in Sect. 2. Although the general performance is not as good as with the BRCA dataset due to the lower number of samples, it is still demonstrated that middle-stage integration leads to better results than the other methods (Fig. 4 and Table 3). By examining the confusion matrices in Fig. 5, it can be inferred that Normal samples are distinguished from Tumor samples more easily than samples of different stages are distinguished from each other. This may be due to similarities between cancer stages, or caused by possible noise in the clinical categorization of patients and wrong label assignments resulting from a lack of in-depth knowledge.

Table 3. Result of deep integrative classification of LUAD data.

Method       Accuracy (%)  Precision (%)  F1-score (%)  AUC (max = 1)
EI-DNN       51.21         54.81          51.41         0.49
MI-DNN       60.68         60.87          60.72         0.57
LI-DNN       56.55         55.06          49.10         0.59
EI-AE-DNN    49.27         47.79          48.41         0.48
MI-AE-DNN    60.44         59.73          56.65         0.62
LI-AE-DNN    52.43         47.72          48.26         0.52

Fig. 4. Average precision recall curves and metrics comparison for LUAD dataset.

Fig. 5. Confusion matrices for LUAD dataset: (a) EI-DNN; (b) MI-DNN; (c) LI-DNN; (d) EI-AE-DNN; (e) MI-AE-DNN; (f) LI-AE-DNN.

6 Discussions and Conclusions

Genomic data integration using deep networks has been one of the most popular research areas during the past years. Different multimodal deep models can be formed by applying the integration task at different stages, in both supervised and semi-supervised classification. Although it has been demonstrated that deep models achieve competitive or superior results compared to classical classification algorithms, there remains a question about when to apply integration and whether the integration stage has any significant impact on the classification result. In this work, we conducted a comprehensive study to examine the performance of models with different architectures to find a possible answer to this question. We constructed six different deep models according to their integration level and whether they are supervised or semi-supervised, and applied these models to two sets of real cancer-related data. According to the results of our experiments, middle-stage integration is superior in both the supervised and semi-supervised cases. The promising results, and the ability to integrate modalities with different characteristics in the middle stage of model training, show the value of deep models in multimodal studies. In addition, supervised models seem to be more useful in dealing with imbalanced class distributions. We believe this paper will be insightful for future studies on integrative models.


A possible interesting future research direction is to extract between-modality relations for further analyses beyond classification, such as biomarker discovery.

References 1. Srivastava N, Salakhutdinov RR (2012) Multimodal learning with deep Boltzmann machines. In: Advances in neural information processing systems, pp 2222–2230 2. Jansen R, Yu H, Greenbaum D, Kluger Y, Krogan NJ, Chung S, Emili A, Snyder M, Greenblatt JF, Gerstein M (2003) A Bayesian networks approach for predicting protein-protein interactions from genomic data. Science 302(5644): 449–453 3. Lanckriet GR, De Bie T, Cristianini N, Jordan MI, Noble WS (2004) A statistical framework for genomic data fusion. Bioinformatics 20(16):2626–2635 R 4. Bengio Y et al (2009) Learning deep architectures for AI, foundations and trends. Mach Learn 2(1):1–127 5. Hinton G, Deng L, Yu D, Dahl GE, Mohamed A-R, Jaitly N, Senior A, Vanhoucke V, Nguyen P, Sainath TN et al (2012) Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Process Mag 29(6):82–97 6. Liang M, Li Z, Chen T, Zeng J (2015) Integrative data analysis of multi-platform cancer data with a multimodal deep learning approach. IEEE/ACM Trans Comput Biol Bioinform (TCBB) 12(4):928–937 7. Sun D, Wang M, Li A A multimodal deep neural network for human breast cancer prognosis prediction by integrating multi-dimensional data. IEEE/ACM Trans Comput Biol Bioinform 8. Chaudhary K, Poirion OB, Lu L, Garmire LX (2018) Deep learning-based multiomics integration robustly predicts survival in liver cancer. Clin Cancer Res 24(6):1248–1259 9. Zhu Y, Qiu P, Ji Y (2014) Tcga-assembler: open-source software for retrieving and processing tcga data. Nat Methods 11(6):599 10. Wei L, Jin Z, Yang S, Xu Y, Zhu Y, Ji Y (2017) Tcga-assembler 2: software pipeline for retrieval and processing of tcga/cptac data. Bioinformatics 34(9):1615–1617 11. Li J, Han L, Roebuck P, Diao L, Liu L, Yuan Y, Weinstein JN, Liang H (2015) Tanric: an interactive open platform to explore the function of LNCRNAS in cancer. Cancer Res CANRES—0273

An Advanced Aerospace High Precision Spread Spectrum Ranging System Technology

Ning Liu1(&), Pingyuan Lu2, and Xiaohang Ren3

1 Beijing Institute of Spacecraft System Engineering, Beijing, China [email protected]
2 Shanghai Aerospace Electronic Co., Ltd., Shanghai, China [email protected]
3 Changping NCO School of the Equipment Institute, Beijing, China [email protected]

Abstract. This paper first introduces the working principle of pseudo-code ranging in an aerospace spread spectrum ranging system. An advanced spread spectrum ranging method based on on-orbit automatic correction is then proposed. The ranging error is analyzed, and measured data are used to verify the effectiveness of the method.

Keywords: Spread spectrum ranging · Automatic correction · Ranging error analysis and verification

1 Introduction

The space measurement and control (TT&C) communication system adopts a spread spectrum architecture. Its core is to introduce digital communication technologies such as pseudo-code ranging, pseudo-code spread spectrum, code division multiple access, and time-division multiplexing into the system, so as to realize telemetry, remote control, and measurement functions for satellites [1], including ranging, velocity measurement, tracking, angle measurement, and data transmission, and to achieve multi-target measurement and control communication through code division multiple access [2]. Spread spectrum ranging has excellent characteristics such as high ranging accuracy, no range ambiguity, and strong anti-interference performance. It is increasingly used in satellite navigation and timing, satellite measurement and control [3], orbit determination, inter-satellite ranging, and time synchronization, and has become the first choice for precision ranging in today's complex electromagnetic environment.



2 Basic Principle of Pseudo-Code Ranging

The principle of radio ranging is to measure the transmission delay of radio waves: a radio wave is transmitted, and the delay of the signal forwarded by the target relative to the transmitted signal is measured to calculate the distance [4]. The relationship between the target distance R and the transmission time τ is

R = τ · c / 2   (1)

where c is the radio propagation speed (the speed of light). Ranging is therefore a delay measurement. A direct-sequence spread spectrum system measures the distance between the spacecraft and the ground monitoring station by utilizing the phase difference between the local signal at the receiving end and the received signal [5]. The principle of ranging based on direct spread spectrum technology is that the ground station transmits a spread-spectrum, carrier-modulated signal, the spacecraft receives the signal and forwards it back to the ground, and the ground resolves the phase difference between the local signal and the received signal, thus giving the distance between the satellite and the station. During the ranging process, the two-way spread spectrum communication task between the satellite and the ground is completed simultaneously. After the receiver of the ground monitoring station performs frequency conversion and despreading, the phase difference τ0 between the received signal and the transmitted signal is obtained by TDOA detection, and the time difference of arrival (TDOA) can be calculated according to Eq. (2). This gives the distance L between the satellite and the ground, whose relationship with the phase difference τ0 is expressed by Eq. (3):

TDOA = τ0 · Tc − Tz − Tj   (2)

L = TDOA · C / 2 = (τ0 · Tc − Tz − Tj) · C / 2   (3)

where L is the distance between the satellite and the ground, TDOA is the time difference of arrival, τ0 is the number of symbols by which the local sequence of the base station differs from the received sequence, Tc is the spreading chip width, C is the speed of light, Tz is the spacecraft frequency-conversion processing time (generally a constant, assumed known), and Tj is the processing time for the base station to resolve the TDOA, i.e., the phase acquisition time (generally a function of time, assumed known).
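A small numeric illustration of Eqs. (2)–(3) is given below. The chip rate, measured code-phase offset, and processing delays are invented example values, not measured data from the paper.

# Numeric illustration of Eqs. (2)-(3); the chip rate, measured code-phase offset,
# and processing delays below are invented example values, not measured data.
C = 299_792_458.0            # speed of light, m/s

chip_rate = 10e6             # 10 Mchip/s pseudo-code (the rate assumed later in Sect. 3.2.3)
Tc = 1.0 / chip_rate         # chip width: 100 ns
tau0 = 68_000.25             # measured offset between local and received code, in chips
Tz = 2.0e-6                  # spacecraft frequency-conversion processing time, s
Tj = 0.5e-6                  # ground-station TDOA resolution time, s

TDOA = tau0 * Tc - Tz - Tj   # Eq. (2)
L = TDOA * C / 2.0           # Eq. (3): one-way distance, metres
print(f"TDOA = {TDOA*1e3:.6f} ms, range L = {L/1e3:.3f} km")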

3 Optimization Design of the Pseudo-Code Ranging System

3.1 Conventional Pseudo-Code Ranging System

In the aerospace measurement and control system, various pseudo-code ranging implementations are based on the above principles, and the basic methods are the same.


The working process of the measurement and control system is as follows. The receiving antenna receives the uplink RF signal from the ground spread spectrum monitoring and control station; the signal consists of two BPSK-modulated spread spectrum signals, characterized by a suppressed carrier [6]. The uplink RF signal enters the receiving channel of the spread spectrum transponder through the receiving branch, where low-noise amplification, down-conversion, intermediate-frequency filtering, IF signal amplification, and AGC control are completed. The uplink IF signal then enters the digital baseband for A/D sampling, and the sampled data undergo a series of processing operations in the digital baseband [7]. After code acquisition and tracking, carrier acquisition and tracking, and remote-control information bit synchronization are completed, the uplink remote-control channel sends the recovered remote-control PCM code, together with the synchronous clock and the strobe pulse, to the remote-control subsystem for subsequent processing. After receiving the telemetry PCM code and the synchronization clock from the telemetry unit, the downlink telemetry channel spreads and BPSK-modulates the telemetry PCM code. The uplink and downlink measurement signals adopt a prescribed measurement frame structure, and the downlink measurement frame is filled with transponder state information, such as pseudo-code information and pseudo-Doppler measurement information. The typical aerospace measurement and control terminal structure is shown in Fig. 1.

Fig. 1. Conventional spread spectrum ranging terminal principle (receiving channel, digital baseband with 1553B bus and lower computer module, transmit channel, and power amplifier)

The uplink signal and the downlink signal are designed as follows: the uplink remote control and the ranging are both PCM/CDMA/BPSK, sharing the same carrier frequency, and the uplink remote control signal and each ranging signal are independent of each other, and the signals are distinguished by code division multiple access;


Both downlink telemetry and ranging use PCM/CDMA/BPSK and share the same carrier frequency; the downlink telemetry signal and the ranging signal are independent of each other and are distinguished by code division multiple access. The modulated spread spectrum signal is up-converted, RF filtered, and power amplified in the transmit channel, and the RF signal is then transmitted from the antenna to the ground spread spectrum measurement and control station to form the downlink.

3.2 Advanced High-Precision Pseudo-Code Ranging System

In the conventional spread spectrum ranging system, the key component is the measurement and control terminal. The distance value corresponding to the delay of the terminal itself is called the distance zero value, and it fluctuates with various factors, for example the consistency of switches, changes in the level of the ranging signal, the number of uplink ranging signals, and temperature changes of the terminal itself. To address the zero-value fluctuation caused by these factors, this paper proposes an improved high-precision ranging system in which a self-calibration module is added to detect how the distance zero value of the terminal changes with these factors. By effectively measuring the self-calibration value and cancelling it from the uplink ranging measurement, the distance zero-value fluctuation can be eliminated, thereby obtaining a ranging value of improved accuracy. The system is introduced as follows.

3.2.1 System Structure
The self-calibration module is added to realize a closed self-test loop between the baseband self-calibration signal and the receiving and transmitting channels. The self-calibration function can offset the influence of device aging and temperature on ranging. The main principle is that the downlink signal is combined with the uplink signal into the receiving channel, and the baseband module demodulates the uplink and downlink signals to calculate the distance zero value of the transponder itself, which is sent to the ground through the downlink measurement frame to achieve high-precision ranging. According to the actual use of the transponder, in order to reduce the influence of signal level on the ranging, the self-calibration signal power can be adjusted by software to any value within the working level range of the transponder. The design of the self-calibration module mainly considers group-delay fluctuation and level-control accuracy in the calibration channel. The block diagram of the improved high-precision pseudo-code ranging terminal is shown below (Fig. 2).


Fig. 2. Improved high-precision pseudo-code ranging terminal block diagram

The principle block diagram of its core part, the self-calibration module, is shown in Fig. 3.


Fig. 3. Self-calibration module block diagram

The uplink signal is sent through the combiner and the uplink filter to the receiving front end of the on-board measurement and control terminal as its input signal. Part of the downlink output signal is output to the back-end power amplifier through the down-converter, and the other part is tapped off by the coupler, whose coupling degree is chosen to control the amplitude delivered to the self-calibration frequency-conversion control module. The self-calibration frequency-conversion control module converts the downlink frequency to the uplink frequency and adjusts, via the baseband control signal, the power level of the self-calibration signal sent to the receiving channel.


The differences between a self-calibration channel and a normal ranging channel are: (1) the self-calibration channel knows exactly the code phase and carrier frequency that it transmits; (2) the self-calibration parameter is a slowly varying parameter, so the self-calibration channel can be accumulated over a long time to improve the measurement accuracy.

3.2.2 Frequency Flow

The frequency scheme of the improved high-precision pseudo-code ranging terminal is as follows. The receiving channel performs two down-conversions on the uplink RF signal; the resulting intermediate frequency signal contains the uplink remote control information and measurement information. After the IF signal is sampled by the A/D converter, the FPGA despreads and demodulates it. The FPGA also constructs the downlink measurement frame information and collects the telemetry digital signal for spread spectrum modulation, which is then sent to the transmission channel, where the RF up-conversion is completed. The self-calibration module converts the downlink RF signal to the uplink frequency point and combines it with the uplink signal into the receiving channel to realize the self-calibration function. In the frequency flow, the parameters M1, M2, N1, N2, and L1 can be set, and their coverage achieves full coverage of the S-band.

3.2.3 Error Analysis

Compared with conventional spread spectrum ranging technology, the high-precision spread spectrum technology differs mainly in the following two points. First, the software achieves millimeter-level ranging error using carrier-smoothed pseudorange technology. Theoretical analysis and simulation verification are as follows: (1) Assume that the pseudo-code rate R_PN is 10 MHz; then the chip interval T_C is 100 ns. According to engineering experience, the accuracy of the code tracking loop T_C' can generally reach T_C/100, that is, 1 ns. Therefore, the magnitude of the pseudorange measurement error is c · T_C' = 0.3 m. (2) Assume that the signal carrier frequency f_c is 2 GHz, so the carrier period T is 0.5 ns; the accuracy of the carrier recovery loop can generally reach 10°, so the time error T' based on the carrier measurement can reach T' = T/36 = 0.014 ns, and the corresponding pseudorange measurement error is c · T' = 0.0042 m. The comparison shows that the ranging accuracy obtained after carrier smoothing of the pseudorange is greatly improved.
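As a quick numerical check of the two error budgets above, the following Python sketch reproduces the stated figures (the 1/100-chip code-loop accuracy and the 10° carrier-loop accuracy are the engineering rules of thumb quoted in the text):

C = 3e8                     # speed of light, m/s
R_PN = 10e6                 # pseudo-code rate, chip/s
T_C = 1.0 / R_PN            # chip interval: 100 ns
print(C * T_C / 100)        # code-loop pseudorange error, about 0.3 m
f_c = 2e9                   # carrier frequency, Hz
T = 1.0 / f_c               # carrier period: 0.5 ns
print(C * T / 36)           # carrier-based error for 10 degrees, about 0.0042 m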

Second, the self-calibration channel is used to eliminate the change in the zero value of the transponder. The analog filters mainly include the out-of-band rejection filter at the RF port, the intermediate-frequency image-rejection filter, and the channel-selection filter. The higher the center frequency of a filter and the smaller its number of stages, the smaller its delay; in general, the delay of the RF filter is small and varies little with temperature. The channel delay error is mainly caused by changes in material transmission characteristics and in filter component parameters under environmental changes, which produce variations in transmission-line delay and filter group delay. To reduce the variation of the channel delay, three approaches can be taken: first, select a filter form that is insensitive to changes in component parameters; second, control the variation range of the component parameters and minimize their change; third, design a closed-loop self-correction system that detects changes in the filter delay and compensates for them during digital signal processing. The effect of delay fluctuations from 0 to 40 ns on the delay estimation variance is shown in Fig. 4: an in-band variation of 10 ns increases the pseudorange measurement variance by 6%, and an in-band variation of 40 ns increases it by 16%.

Fig. 4. Effect of RF group delay on ranging (pseudo-code rate: 10.23 Mcps)

4 Test Verification

The high-precision pseudo-code ranging terminal of a satellite was tested with the ground TT&C station to verify this technology. For temperature control during the test, active cooling was adopted to reduce the ambient temperature of the high-precision pseudo-code ranging terminal by 15 °C, and the temperature change was monitored using on-board temperature telemetry (Table 1).


Table 1. The effect of temperature on ranging

Temperature (°C) | Test items                | Ranging (m) | Correction value (m)
24.4             | Original measured value   | 1923.497    | 34.869
                 | On-board self-calibration | 1763.182    |
                 | Ground self-calibration   | 125.446     |
17.4             | Original measured value   | 1923.477    | 34.871
                 | On-board self-calibration | 1763.160    |
                 | Ground self-calibration   | 125.445     |
16.5             | Original measured value   | 1923.467    | 34.869
                 | On-board self-calibration | 1763.154    |
                 | Ground self-calibration   | 125.443     |
15.4             | Original measured value   | 1923.468    | 34.866
                 | On-board self-calibration | 1763.159    |
                 | Ground self-calibration   | 125.442     |
15.4             | Original measured value   | 1923.465    | 34.853
                 | On-board self-calibration | 1763.168    |
                 | Ground self-calibration   | 125.443     |
15.4             | Original measured value   | 1923.467    | 34.852
                 | On-board self-calibration | 1763.168    |
                 | Ground self-calibration   | 125.446     |
13               | Original measured value   | 1923.460    | 34.852
                 | On-board self-calibration | 1763.163    |
                 | Ground self-calibration   | 125.445     |
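The correction values in Table 1 are consistent with subtracting both self-calibration terms from the original measured value; the following Python sketch checks the first row under that reading of the table (an interpretation on our part, not a formula stated in the text):

original, onboard, ground = 1923.497, 1763.182, 125.446   # first row of Table 1
print(round(original - onboard - ground, 3))               # 34.869, the listed correction value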

The average ranging values obtained by the high-precision pseudo-code ranging terminal and the ground measurement and control equipment under the various working conditions, together with the on-board self-calibration values, the ground self-calibration values, and the correction values, are shown in Fig. 5. By comparing the error of the correction value with that of the original measured value, it can be seen that the on-board and ground self-calibration techniques effectively reduce the ranging fluctuation from 4 cm to less than 2 cm.


Fig. 5. Temperature change conformance test

5 Conclusion

By introducing the on-board self-calibration technique into the traditional spread spectrum ranging system, the variation of the ranging value with temperature can be effectively reduced, and the improvement of the ranging accuracy is verified with actual measured on-board data. This provides an effective guarantee for high-precision satellite ranging and orbit determination.

References
1. Cerri L, Berthias JP, Bertiger WI et al (2010) Precision orbit determination standards for the Jason series of altimeter missions. Mar Geodesy 33(S1):379–418
2. Bertiger W, Desai SD, Dorsey A et al (2010) Sub-centimeter precision orbit determination with GPS for ocean altimetry. Mar Geodesy 33(4):363–378
3. Ablain M, Philipps S, Picot N et al (2010) Jason-2 global statistical assessment and cross-calibration with Jason-1. Mar Geodesy 33(S1):162–185
4. Misra P, Enge P (2015) Global positioning system: signals, measurements, and performance (revised second edition). Ganga-Jamuna Press, Lincoln, MA
5. Urlichich Y, Subbotin V, Stupak G et al (2011) GLONASS: developing strategies for the future. GPS World 22(4):42
6. Fernández FA (2011) Inter-satellite ranging and inter-satellite communication links for enhancing GNSS satellite broadcast navigation data. Adv Space Res 47(5):786–801
7. Motella B, Savasta S, Margaria D et al (2011) Method for assessing the interference impact on GNSS receivers. IEEE Trans Aerosp Electron Syst 47(2):1416–1432

Weight-Assignment Last-Position Elimination-Based Learning Automata
Haiwei An, Chong Di, and Shenghong Li
School of Cyber Space Security, Shanghai Jiao Tong University, 800 Dong Chuan Road, 200240 Shanghai, China
[email protected]

Abstract. Learning Automata (LA) is an adaptive decision-making unit in the reinforcement learning category. It can learn the randomness of the environment by interacting with it and adaptively adjust its behavior to maximize its long-term benefit from the environment. This learning behavior reflects the strong optimization ability of the learning automaton, and therefore LA has been applied in many fields. However, the estimators commonly used in previous LA algorithms have problems such as cold start, and the initialization process can also affect the performance of the estimator. In this paper, we address these two weaknesses by changing the maximum likelihood estimator to a confidence interval estimator, using Bayesian initialization of the parameters, and proposing a new update strategy. The algorithm is named weight-assignment last-position elimination-based learning automata (WLELA). Simulation experiments show that the algorithm has higher accuracy and faster convergence speed than various classical algorithms.

Keywords: Learning automaton · Weight-assignment · Bayesian initialization · Confidence interval estimator

1 Introduction

A learning automaton can be seen as an adaptive decision-making unit. It constantly interacts with the random environment and adjusts its choices to maximize the probability of being rewarded. The process of the LA interacting with the environment is shown in Fig. 1 [1]. At each moment t, an action a(t) is chosen by the LA to interact with the random environment, and the LA receives the environment's feedback β(t), which can be either a reward or a penalty. Then the automaton updates the state probability vector according to the received feedback. Because of its simple algorithm, strong anti-noise ability and strong optimization ability, the LA has received extensive attention and has been applied in many fields, such as random function optimization, QoS optimization and certificate authentication.



Fig. 1. LA interacts with the random environment

In the LA field, the most classical discretized pursuit algorithm with a deterministic estimator is the DPRI algorithm given by Oommen in [2]. The main idea of DPRI is to increase the probability of the action with the maximum value in the running estimates when the environment rewards the current action, and to decrease the others; otherwise, the automaton changes nothing. Furthermore, many classical pursuit algorithms, such as DGPA [3] and SERI [4], also use estimators and discretization to improve the convergence speed of the automaton. In [5], an algorithm named last-position elimination-based LA (LELA), which is contrary to the classical pursuit algorithms, is proposed. Instead of greedily increasing the probability of the optimal estimated action, this algorithm reduces the probability of choosing the currently worst estimated action. Experiments show that LELA achieves faster convergence than DGPA. However, the estimator used by LELA has some innate defects. One typical flaw is the cold start and initialization problem: since the maximum likelihood estimator has no information at the beginning, each action has to interact with the environment a certain number of times, which increases the cost of getting information in some complicated situations. In addition, the update strategy of the LELA algorithm simply makes all active actions share the penalized state probability from the last-position action equally, without considering the difference between the optimal action and the other actions at all. Thus, in this paper, we propose a weight-assignment LELA (WLELA) algorithm with the following three changes: (1) improvement of the initialization parameters; (2) improvement of the estimator; (3) a change in the probability vector update strategy. In Sect. 2, a brief introduction to the LELA algorithm is given. In Sect. 3 we present our algorithm WLELA in detail. In Sect. 4, we give the simulation results of the WLELA algorithm, compared with LELA and other classic algorithms such as DPRI and DGPA. Finally, Sect. 5 summarizes the paper.

2 Related Work

LA can be represented by a four-tuple ⟨A, B, Q, T⟩ model, explained as follows.


A is the set of actions. B is the set of feedbacks from the random environment; when B = {0, 1}, β = 0 represents that the LA has been penalized and β = 1 means the LA has been rewarded. Q = ⟨P, E⟩, where E represents the estimator, which contains all the historical information of each action's interactions with the environment; the most commonly used estimator is the maximum likelihood estimator. P is the state probability vector of choosing an action a(t) at any instant t, and it satisfies Σi pi(t) = 1. T is the state transition function of the LA, which determines how the LA migrates to the state at t + 1 according to the output, input, and state at time t. The random environment can also be described by a triple ⟨A, B, C⟩, where A and B are defined in the same way as above and C is defined as C = {cij = Pr{β(t) = βj | a(t) = ai}}.

In the original LELA algorithm [5], the maximum likelihood estimator is used to record the historical information of all actions according to the following formula:

di(t) = Wi(t) / Zi(t)    (1)

where Zi(t) is the number of times the action ai was selected up to time instant t and Wi(t) is the sum of the environmental feedbacks received by ai up to time t. When an action is rewarded, the automaton selects the worst-performing action from the estimator vector set and decreases the corresponding state probability by a step Δ = 1/(rn), where r is the number of allowable actions and n is a resolution parameter. If some action's state probability is reduced to zero during the process, this action is removed from the optional set of actions, while the remaining active actions evenly share the decreased probability mass. The update scheme is described as follows:

If β(t) = 1 then
    Find m ∈ Nr such that dm(t) = min{di(t) | pi(t) ≠ 0}, i ∈ Nr
    pm(t + 1) = max{pm(t) − Δ, 0}
    If pm(t + 1) = 0 then k(t) = k(t) − 1 Endif
    pj(t + 1) = min{pj(t) + [pm(t) − pm(t + 1)]/k(t), 1}, ∀ j ∈ Nr such that pj(t) > 0
Else
    pi(t + 1) = pi(t), ∀ i ∈ Nr
Endif

k(t) denotes the number of active actions and is initialized to r. LELA has been proved to be ε-optimal in every stationary random environment.
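For concreteness, the following short Python sketch mirrors the equal-share LELA update above; it is only an illustration of Eq. (1) and the update rule (the function name and the "remaining active actions share the freed mass" reading of the rule are ours, not from the original paper):

import numpy as np

def lela_step(p, d, delta, beta):
    # One LELA update (sketch). p: state probability vector, d: running
    # estimates Wi/Zi, delta: resolution step, beta: feedback (1 = reward).
    if beta != 1:
        return p                                  # on a penalty nothing changes
    active = np.flatnonzero(p > 0)
    m = active[np.argmin(d[active])]              # worst-estimated active action
    freed = p[m] - max(p[m] - delta, 0.0)         # probability mass taken from it
    p[m] -= freed
    receivers = np.flatnonzero(p > 0)
    receivers = receivers[receivers != m]         # remaining active actions
    p[receivers] = np.minimum(p[receivers] + freed / len(receivers), 1.0)
    return p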


3 Proposed Learning Automata

In order to overcome the shortcomings of the LELA algorithm, we propose the following improvements. Firstly, we use the Bayesian estimator introduced in [6] to solve the cold start and initialization problems. However, if the Bayesian estimator is used directly in the LA algorithm, the convergence speed is additionally affected, so in WLELA we directly modify it to the mean of the posterior distribution, i.e., all actions are initialized with di(0) = 0.5, thereby improving convergence efficiency while still overcoming the cold start and initialization problems. Secondly, in order to obtain more information, the WLELA algorithm uses the confidence interval estimator proposed in [7]:

di(t) = [1 + (Zi(t) − Wi(t)) / ((Wi(t) + 1) · F(2(Wi(t)+1), 2(Zi(t)−Wi(t)), 0.005))]^(−1), ∀ i ∈ Nr    (2)

where F(2(Wi(t)+1), 2(Zi(t)−Wi(t)), 0.005) is the 0.005 right-tail quantile of the F distribution with 2(Wi(t) + 1) and 2(Zi(t) − Wi(t)) degrees of freedom. Last, since all state probabilities add up to 1, the value of each state probability can be regarded as its weight in the vector set, so WLELA increases the probability vector according to these weights. In this way, similar to the idea of pursuing the optimal action in the classic pursuit algorithms, the probability of the optimal action receives more attention during updating, so that more probability mass is added to the optimal action's state probability each time. Assigning the added value by weight is more in line with the learning automaton's purpose of selecting the optimal action. A detailed description of WLELA is as follows.

Algorithm WLELA
Initialize pi(0) = 1/r, Wi(0) = 1, Zi(0) = 2, ∀ i ∈ Nr
Initialize di(0) = [1 + (Zi(0) − Wi(0)) / ((Wi(0) + 1) · F(2(Wi(0)+1), 2(Zi(0)−Wi(0)), 0.005))]^(−1), ∀ i ∈ Nr
Step 1: At time t, pick a(t) = ai according to the state probability vector P(t).
Step 2: Receive feedback βi(t) ∈ {0, 1}. Update the estimates:
    Wi(t) = Wi(t − 1) + βi(t), Zi(t) = Zi(t − 1) + 1
    di(t) = [1 + (Zi(t) − Wi(t)) / ((Wi(t) + 1) · F(2(Wi(t)+1), 2(Zi(t)−Wi(t)), 0.005))]^(−1)
Step 3: If βi(t) = 1 then
    Find m ∈ Nr such that dm(t) = min{di(t) | pi(t) ≠ 0}, i ∈ Nr
    pm(t + 1) = max{pm(t) − Δ, 0}
    If pm(t + 1) = 0 then k(t) = k(t) − 1 Endif
    pj(t + 1) = min{pj(t) + [pm(t) − pm(t + 1)] · [pj(t) + (pm(t) − pm(t + 1))/k(t)], 1}, ∀ j ∈ Nr with pj(t) > 0
Endif
Step 4: If βi(t) = 0 then pi(t + 1) = pi(t), ∀ i ∈ Nr; go to Step 1. Endif
Step 5: If max{P(t)} = 1, then CONVERGE to the action whose probability equals max{P(t)}; else go to Step 1.
END
The parameter k(t) has the same meaning as in the LELA algorithm.
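As an illustration of Eq. (2), the following Python sketch evaluates the confidence interval estimate with SciPy's F distribution (the function name is ours; the 0.005 right-tail level and the initialization Wi(0) = 1, Zi(0) = 2 are taken from the text):

from scipy.stats import f

def confidence_estimate(W, Z, alpha=0.005):
    # Upper-confidence estimate di of Eq. (2) from W rewards in Z selections.
    F = f.isf(alpha, 2 * (W + 1), 2 * (Z - W))   # 0.005 right-tail F quantile
    return 1.0 / (1.0 + (Z - W) / ((W + 1) * F))

# WLELA initialization: Wi(0) = 1, Zi(0) = 2
print(confidence_estimate(1, 2))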

4 Simulation Results

In this section we compare the relative performance of the proposed WLELA with LELA and the classical pursuit algorithms DPRI and DGPA by presenting their accuracy and convergence speed. The random environments used are the most common benchmark environments E1–E4 with 10 allowable actions, as shown in Table 1.

Table 1. Benchmark environments
Env | C1   | C2   | C3   | C4   | C5   | C6   | C7   | C8   | C9   | C10
E1  | 0.60 | 0.50 | 0.45 | 0.40 | 0.35 | 0.30 | 0.25 | 0.20 | 0.15 | 0.10
E2  | 0.55 | 0.50 | 0.45 | 0.40 | 0.35 | 0.30 | 0.25 | 0.20 | 0.15 | 0.10
E3  | 0.70 | 0.50 | 0.30 | 0.20 | 0.40 | 0.50 | 0.40 | 0.30 | 0.50 | 0.20
E4  | 0.10 | 0.45 | 0.84 | 0.76 | 0.20 | 0.40 | 0.60 | 0.60 | 0.50 | 0.30

In the LA simulation experiments, if the state probability of a certain action exceeds the set threshold T (0 < T ≤ 1), the algorithm is considered to have converged; if the converged action has the highest reward probability in the environment, the learning automaton is considered to have converged correctly.


For all the algorithms in the experiment, the simulations use their best parameters, defined as the values that yield the fastest convergence speed while guaranteeing that the automaton converges to the optimal action in a sequence of NE experiments. Specifically, we set the same threshold T and NE as in [2, 3, 5], that is, T = 0.999 and NE = 750. After tuning to the best parameters, we carried out 250,000 experiments to evaluate the average convergence rate and accuracy. Accuracy is an indicator of the performance of an automaton and is defined as the probability that a learning automaton converges to the optimal action in an environment. In Table 2, "Res" denotes the best resolution parameter. All algorithms converge with high accuracy, while WLELA has higher accuracy than the other algorithms, although the differences are not large.

Table 2. Accuracy (number of correct convergences/number of experiments)
ENV | WLELA Res | WLELA Acc | LELA Res | LELA Acc | DGPA Res | DGPA Acc | DPRI Res  | DPRI Acc
E1  | n = 24    | 0.997     | n = 20   | 0.996    | n = 65   | 0.996    | n = 653   | 0.994
E2  | n = 98    | 0.997     | n = 68   | 0.995    | n = 204  | 0.995    | n = 3221  | 0.993
E3  | n = 12    | 0.998     | n = 10   | 0.997    | n = 28   | 0.99     | n = 216   | 0.996
E4  | n = 31    | 0.998     | n = 27   | 0.997    | n = 55   | 0.997    | n = 881   | 0.994

A. Average convergence times
Convergence speed is one of the most critical performance indicators for learning automata. The convergence speed comparison is shown in Table 3, where "Ite" denotes the number of iterations to converge. From Table 3 we can see that the WLELA algorithm outperforms the other algorithms in terms of convergence speed. Compared with LELA, the rate of convergence improvement in environments E1–E4 is {6.93, 19.76, 3.13, 12.49}%. Compared with the traditional DGPA and DPRI algorithms, the improvement rates are {27.91, 40.43, 18.01, 28.28}% and {51.45, 72.24, 21.65, 56.02}%, respectively. WLELA converges faster than the other three algorithms, and in E2, the most complex of the environments, WLELA still performs best.

Table 3. Convergence speed
ENV | WLELA Res | WLELA Ite | LELA Res | LELA Ite | DGPA Res | DGPA Ite | DPRI Res  | DPRI Ite
E1  | n = 24    | 1209      | n = 20   | 1299     | n = 65   | 1677     | n = 653   | 2490
E2  | n = 98    | 3090      | n = 68   | 3851     | n = 204  | 5187     | n = 3221  | 11,132
E3  | n = 12    | 619       | n = 10   | 639      | n = 28   | 755      | n = 216   | 790
E4  | n = 31    | 1037      | n = 27   | 1185     | n = 55   | 1446     | n = 881   | 2358


5 Conclusion

This paper proposes an improved algorithm, WLELA. Bayesian initialization eliminates the cold start problem, the confidence interval estimator extracts more information from the interactions, and the weight-assignment strategy realizes the classical LA idea of pursuing the best action. These three improvements allow the WLELA algorithm to achieve high accuracy and fast convergence in the simulation experiments; the results show that WLELA not only has the highest accuracy but also the fastest convergence speed, and it still performs very well in the most complex environment. In future work, a stochastic estimator could be used instead of the deterministic estimator in WLELA, and the WLELA algorithm can be applied in the many applications that need learning automata.

Acknowledgements. This work was supported by the National Key Research and Development Project of China under Grant 2016YFB0801003.

References
1. Thathachar M, Sastry PS (2004) Networks of learning automata: techniques for online stochastic optimization. Kluwer, Dordrecht
2. Oommen BJ, Lanctôt JK (1990) Discretized pursuit learning automata. IEEE Trans Syst Man Cybern 20(4):931–938
3. Agache M, Oommen BJ (2002) Generalized pursuit learning schemes: new families of continuous and discretized learning automata. IEEE Trans Syst Man Cybern Part B Cybern 32(6):738–749
4. Papadimitriou GI, Sklira M, Pomportsis AS (2004) A new class of e-optimal learning automata. IEEE Trans Syst Man Cybern Part B 34(1):246–254
5. Zhang J, Wang C, Zhou MC (2014) Last-position elimination-based learning automata. IEEE Trans Cybern 44(12):2484–2492
6. Xuan Z, Granmo OC, Oommen BJ (2013) On incorporating the paradigms of discretization and Bayesian estimation to create a new family of pursuit learning automata. Appl Intell 39(4):782–792
7. Hao G, Jiang W, Li S et al (2015) A novel estimator based learning automata algorithm. Appl Intell 42(2):262–275

Nonlinear Multi-system Interactive Positioning Algorithms
Xin-xin Ma1,2, Ping-ke Deng1,2, and Xiao-guang Zhang2
1 University of Chinese Academy of Sciences, 100049 Beijing, China
[email protected]
2 Department of Navigation System, Aerospace Information Research Institute of Chinese Academy of Sciences, Beijing 100094, China

Abstract. The Bayesian probabilistic observation model is established by using the interactive input of multi-system observation data, and the positioning information of the multiple systems is interacted directly. The nonlinear problem of the observation system is solved by extended Kalman filter theory. Moreover, the system probabilities are updated in real time using the filtering innovation and variance of each system, and the estimated results are fused according to these weights at the output. The simulation results show that the proposed algorithm has better stability and adaptability than the traditional location algorithm under the same observation conditions.

Keywords: Nonlinear · Extended Kalman filter · Multi-system · Interactive algorithms

1 Introduction

The idea of multi-system interactive positioning algorithms originates from the Interacting Multiple Model (IMM) tracking algorithm [1]. Blom and Bar-Shalom first proposed the Bayesian interactive structure between motion models in Ref. [2]. In target tracking, multiple models are established to describe the motion state of objects, which reduces the limitation of a single model in describing the motion. Since then, the research direction of intelligent parallel fusion tracking technology has been initiated, and many improved algorithms for Bayesian multi-model interaction have been studied. References [3] and [4] propose a model-set adaptive IMM algorithm based on the K-L (Kullback–Leibler) criterion and a multi-model-set switching algorithm to solve the problem of incomplete description of motion by a single model set. In Ref. [5], an asymmetric interactive filtering algorithm parallel to the extended Kalman filter (EKF) is proposed, which effectively solves the problem of coexistence of nonlinear and linear systems. References [6] and [7] propose IMM algorithms with adaptive adjustment of the model transition probability, which effectively remove the constraints of a fixed transition probability on target tracking and positioning performance. In Ref. [8], an IMM algorithm based on the Unscented Kalman filter (UKF) is proposed to solve the problem of data loss. Reference [9] proposes an interactive multi-model algorithm with scalar weights, which can deal with complex environments. Besides the limitation of the motion model, the performance of the positioning system also plays a decisive role in target tracking and positioning performance. References [10] and [11] proposed an interactive multi-sensor algorithm, which successfully introduced the interactive theory into the research of intelligent collaboration between systems, but it did not directly interact with real-time positioning information, resulting in lag and error accumulation in the collaboration of the systems. Referring to the idea of interacting multiple models, Ref. [12] proposes the Interacting Multiple System (IMS) algorithm; in practice, however, many positioning systems are nonlinear, so the applicability of that algorithm is limited. With the development of positioning systems, multiple positioning systems are increasingly used together so that a target can be tracked continuously and stably in changeable environments and scenes; this paper therefore improves the Interacting Multiple System (IMS) algorithm. The nonlinear problem of the observation system is solved by extended Kalman filter theory, the system probabilities are updated in real time using the filtering innovation and variance of each system, and the estimated results are fused according to these weights at the output.

2 System Modeling

Target tracking is carried out simultaneously by multiple systems. Considering the nonlinearity of the target motion, the state equation and observation equation of the target motion can be written in the following forms:

x(k + 1) = f[k, x(k)] + G(k)ω(k)    (1)

Zi(k) = h(k, x(k)) + v(k)    (2)

where x(k) is the state vector at time k, f(·) is the state transition function, and G(k) is the noise driving matrix. ω(k) and v(k) are the process noise and observation noise, respectively; both are zero-mean Gaussian white noise, independent of each other, with covariances Q(k) and Ri(k), respectively. The state equation and observation equation of the nonlinear model are expanded around the filtered value x̂(k) of the previous moment, so that the nonlinear system is locally linearized. The state equation becomes

x(k + 1) = Φ(k + 1|k)x(k) + G(k)ω(k) + u(k)    (3)

where

Φ(k + 1|k) = ∂f[x(k), k]/∂x(k), evaluated at x(k) = x̂(k)    (4)

u(k) = f[x̂(k), k] − (∂f[x(k), k]/∂x(k))|x(k)=x̂(k) · x̂(k)    (5)

The observation equation becomes

Z(k) = H(k)x(k) + y(k) + v(k)    (6)

where

H(k) = ∂h/∂x(k), evaluated at x(k) = x̂(k|k − 1)    (7)

y(k) = h(x̂(k|k − 1), k) − (∂h/∂x(k))|x(k)=x̂(k|k−1) · x̂(k|k − 1)    (8)

In the actual observation, multiple systems observe at the same time. It is assumed that the transitions between systems obey a Markov chain; the probability of switching from system i to system j from time k to time k + 1 is

pij = P{m(k+1) = j | m(k) = i}    (9)
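As a minimal sketch of the local linearization in Eqs. (6)–(8), the following Python snippet builds H(k) and y(k) for a two-dimensional range observation h(x) = sqrt((X − X0)^2 + (Y − Y0)^2) of the kind used later in the simulation; only the position components of the state are shown, and the station coordinates are illustrative.

import numpy as np

X0, Y0 = 200.0, 200.0                      # illustrative reference-station position

def h(x):
    return np.hypot(x[0] - X0, x[1] - Y0)  # range observation

def linearize(x_pred):
    # Return H(k) and y(k) so that h(x) is approximately H·x + y near x_pred.
    r = h(x_pred)
    H = np.array([[(x_pred[0] - X0) / r, (x_pred[1] - Y0) / r]])   # Jacobian, Eq. (7)
    y = np.array([r]) - H @ x_pred                                 # remainder, Eq. (8)
    return H, y

H, y = linearize(np.array([10.0, 50.0]))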

3 Nonlinear Multi-system Interactive Positioning Algorithm

In principle, centralized fusion in data processing may lead to a large amount of data passing through the network, so Bayesian theory is used to preprocess the data from the multiple systems and reduce the data traffic. The basic idea of the Bayesian multi-system interactive positioning algorithm is to set a basic system set M, add the mixed observation information obtained with the mixing probabilities to the set M, feed the mixed observation information in M into the corresponding filters, and output the result as the data fusion of the mixed information. The algorithm consists of the following four core steps: interaction, filtering, updating, and fusion output.

3.1 Multiple System Interaction

The new initial values are obtained according to the Markov transition matrix between the different systems: the mixed initial probability of each corresponding system j ∈ S = {1, 2, …, N} and the corresponding initial observation information and noise variance are calculated. Assuming that system j is effective at time k + 1, the mixing probability is calculated as

μi|j(k|k + 1) = pij μi(k) / c̄j    (10)

where μi(k) is the posterior probability of system i at time k and c̄j is the normalization constant

c̄j = Σ(i=1..n) pij μi(k)    (11)

Zi ðk þ 1Þlijj ðkjk þ 1Þ

ð12Þ

Mixed observation information: _0

Z j ðk þ 1jk þ 1Þ ¼

n X i¼1

Mixed variance of noise: R0j ðk þ 1jk þ 1Þ ¼

n lijj ðkjk þ 1ÞfRi ðk þ 1Þ þ ½Zi ðk þ 1Þ X i¼1

_0

_0

 Z j ðk þ 1jk þ 1Þ  ½Zi ðk þ 1Þ  Z j ðk þ 1jk þ 1ÞT g

ð13Þ

Using the above interactive information as the input value of filtering, the second step filtering is carried out. 3.2

Multiple System Parallel Filtering

Similar to the multi-model tracking algorithm, the mixed observed values and mixed variances obtained in the first step are filtered and calculated as the input of the filter. In order to improve the performance of the localization, the extended kalman filter is used to solve the nonlinear problem. After linearization of the Eqs. (3) and (6) filtering recursive: _

_

x ðk þ 1jkÞ ¼ f ðxðkjkÞÞ

ð14Þ

pðk þ 1jkÞ ¼ Uðk þ 1jkÞpðkjkÞUT ðk þ 1jkÞ þ Qðk þ 1Þ

ð15Þ

Gain factor: Kðk þ 1Þ ¼ pðk þ 1jkÞH T ðk þ 1Þ½Hðk þ 1Þpðk þ 1jkÞH T ðk þ 1Þ þ R0j ðk þ 1Þ1 ð16Þ The variance is: Sj ðk þ 1Þ ¼ Hðk þ 1Þpðk þ 1jkÞH T ðk þ 1Þ þ R0j ðk þ 1Þ _

_0

_

_

ð17Þ

xðk þ 1jk þ 1Þ ¼ xðk þ 1jkÞ þ Kðk þ 1Þ½Z j ðk þ 1Þ  hðx ðk þ 1jkÞ

ð18Þ

pðk þ 1Þ ¼ ½I  Kðk þ 1ÞHðk þ 1Þpðk þ 1jkÞ

ð19Þ

The innovation is: _0

_

vj ðk þ 1Þ ¼ Z j ðk þ 1Þ  hðx ðk þ 1jkÞ

ð20Þ

Nonlinear Multi-system Interactive Positioning Algorithms

3.3

359

System Probability Update

System probability updating is a crucial step in the algorithm. And the updating use the posterior probability of system j in time k + 1. the likelihood function of each system and the weight of the system are calculated. The form mk þ 1 ¼ j means in time k + 1, system j is effective. It’s probability is expressed as qj ðk þ 1Þ. Zj ðk þ 1Þgnj¼1 denotes the set of observation vectors of multi-system at time k + 1. lj ðk þ 1Þ ¼ Pfqj ðk þ 1ÞjX k þ 1 g ¼ Pfqj ðk þ 1ÞjX k ; fZj ðk þ 1Þgnj¼1 g 1 ¼  PfZj ðk þ 1Þjqj ðk þ 1Þ; X k g  cj c

ð21Þ

The likelihood function is: Kj ðk þ 1Þ ¼ PfZj ðk þ 1Þjqj ðk þ 1Þ; X k g

ð22Þ

¼ N½vj ðk þ 1Þ : 0; Sj ðk þ 1Þ The normalization constant is: c¼

n X

Ki ðk þ 1Þcj

ð23Þ

j¼1

3.4

System Fusion Output

At the output end, we use Bayesian theory to fuse the results according to the weight of each system. Get a combined estimation and estimation error covariance of each system: _

x ðk þ 1jk þ 1Þ ¼

n X _ x j ðk þ 1jk þ 1Þlj ðk þ 1Þ

ð24Þ

j¼1

pðk þ 1jk þ 1Þ ¼

n X j¼1 _

_

lj ðk þ 1Þfpj ðk þ 1jk þ 1Þ þ ½x j ðk þ 1jk þ 1Þ _

_

ð25Þ T

 xðk þ 1jk þ 1Þ. . .. . .  ½x j ðk þ 1jk þ 1Þ  x ðk þ 1jk þ 1Þ g

4 Analysis of Simulation Experiment In most practical positioning applications, the observation of moving objects is based on the measurement of distance and azimuth. When sensors locate objects, such as GNSS satellite positioning, radar positioning, etc. Target tracking based on distance

360

X. Ma et al.

information is widely used, but the observation equation of target positioning is a nonlinear problem. In order to solve this problem, and effectively improve the robustness of multi-system positioning and other positioning performance, the traditional interacting multi-system positioning algorithm is improved. In this paper, a non-linear multi-system interactive positioning algorithm is proposed, simply called NL-IMS. At present, simulation experiments are carried out to verify the effectiveness of the algorithm. In the simulation, multi-system based on distance information is used to carry out experiments. For illustration, taking plane positioning as an example, the observation equation is established according to distance observation. ZðkÞ ¼

qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ðXðkÞ  X0 Þ2 þ ðYðkÞ  Y0 Þ2 þ VðkÞ

ð26Þ

In the simulation, the initial position of the target is set to (0 m, 0 m), the initial speed is set to (2 m/s, 10 m/s), and the positioning period is set to 0.2 s, and set 90 s. The coordinates of the reference station are set to (200 m, 200 m) and three systems are set to locate and track the target. The observation errors of the three systems are set as follows: Sampling time (s) 1–40 41–80 80–120

System1 (m) 3 10 10

System2 (m) 10 3 10

System3 (m) 10 10 3

The covariance matrix of observed noise is: 

diagð½9 m2 ; 9 m2 Þ; diagð½100 m2 ; 100 m2 Þ;

k ¼ 1  30 s k ¼ 31  90 s

8 < diagð½100 m2 ; 100 m2 Þ; R2 ðkÞ ¼ diagð½9 m2 ; 9 m2 Þ; : diagð½100 m2 ; 100 m2 Þ;

k ¼ 1  30 s k ¼ 31  60 s k ¼ 61  90 s

R1 ðkÞ ¼

 R3 ðkÞ ¼

diagð½100 m2 ; 100 m2 Þ; diagð½9 m2 ; 9 m2 Þ;

k ¼ 1  60 s k ¼ 61  90 s

The system probability at the initial time of the three systems is set to qi ¼ 1=3. The transformation between the three systems obeys the Markov chain, and the Markov transition probability matrix is set to: 2

0:98 pij ¼ 4 0:01 0:01

0:01 0:98 0:01

3 0:01 0:01 5 0:98

Nonlinear Multi-system Interactive Positioning Algorithms

361

The process noise driving matrix in the simulation is set as follows: G¼

1

2T

2

0

T 0

0 1 2 T 2

0 T

T

State transition matrix:  F ¼ diag

1 0

T 1



The variance matrix of process noise is set as Q ¼ 1e  4  diagð½0:5; 1Þ. Monte Carlo simulation is carried out. The tracking and positioning performance results of the proposed algorithm are displayed and analyzed, and compared with the IMS algorithm proposed in [12]. The performance advantages of the proposed algorithm are summarized. Figures 1 and 3 are the results of RMSE of location and velocity, Figs. 2 and 4 are the results of RMSE probability statistics based on location and velocity, Fig. 5 is the distribution of system probability of three systems in the whole experiment. Figure 6 is the results of comparison between real trajectory and estimated trajectory.

Fig. 1. Position deviation

362

X. Ma et al.

Fig. 2. Position deviation probability statistics

Fig. 3. Position deviation

Nonlinear Multi-system Interactive Positioning Algorithms

Fig. 4. Position deviation probability statistic

Fig. 5. System probability distribution

363

364

X. Ma et al.

Fig. 6. Target tracking line

5 Conclusion Analysis

It can be seen from Figs. 1 and 3 that, in the nonlinear case, the algorithm proposed in this paper has higher accuracy and stability than the IMS algorithm; the probability statistics in Figs. 2 and 4 confirm this. The system probabilities of the three systems over the whole fusion process are illustrated in Fig. 5. In the simulation, the systematic errors of the three systems at different times are tested; it can be clearly seen that when a system's error is small its weight is larger, and the systems alternate in a timely manner, indicating the effectiveness of the multi-system fusion positioning algorithm. In Fig. 6, the estimated trajectory of the multi-system fusion output is basically consistent with the real trajectory, with small differences and relatively stable behavior. The experimental analysis leads to the following conclusions: (1) the NL-IMS algorithm proposed in this paper has good accuracy and stability in multi-system fusion localization; (2) compared with the traditional IMS algorithm, it is better suited to the nonlinear situations encountered in real positioning and is practically feasible; (3) the algorithm adjusts the system probabilities in a timely manner according to system performance and improves multi-system positioning performance; (4) the Markov transition probabilities of the algorithm are determined by an artificial prior, so its adaptation has a certain hysteresis; there is still some delay when the systems switch, and the adaptivity of the system transition probabilities needs to be improved.

Nonlinear Multi-system Interactive Positioning Algorithms

365

References
1. Hui L (2006) The status quo and trend of target tracking based on interactive multiple model. Fire Control Command Control 31(11):865–868
2. Blom HAP, Bar-Shalom Y (1988) The interacting multiple model algorithm for systems with Markovian switching coefficients. IEEE Trans Autom Control 33(8):780–783
3. Can S, Jianping X, Haozhe L et al (2012) S-IMM: switched IMM algorithm for maneuvering target tracking. J Convergence Inf Technol 14(7):461–468
4. Liang C, Junwei Y, Xiaodi S (2013) Model-set adaptive algorithm of variable structure multiple-model based on K-L criterion. Syst Eng Electron 35(12):2459–2466
5. Guixi L, Enke G, Chunyu F (2007) Tracking algorithms based on improved interacting multiple model particle filter. J Electron Inf Technol 29(12):2810–2813
6. Xiaobing L, Hongqiang W, Xiang L (2005) Interacting multiple model algorithm with adaptive Markov transition probabilities. J Electron Inf Technol 27(10):1539–1541
7. Weidong Z, Jianan C, Long S (2014) Interacting multiple model with optimal mode transition matrix. J Harbin Inst Technol 46(11):101–106
8. Zhigang L, Jinkuan W (2012) Interacting multiple sensor filter for sensor networks. Acta Electron Sin 40(4):724–728
9. Zhigang L, Jinkuan W, Yanbo X (2012) Interacting multiple sensor filter. Signal Process 92:2180–2186
10. Liu M, Tang X, Zheng S et al (2013) Filtering of nonlinear systems with measurement loss by RUKF-IMM. J Huazhong Univ Sci Technol (Nat Sci Edition) 41(5):57–63
11. Weidong Z, Mengmeng L, Yongjiang Y (2014) An improved interacting multiple model algorithm based on multi-sensor information fusion theory. J South China Univ Tech (Nat Sci Edition) 42(9):82–89
12. Xiaoguang Z (2016) Interacting multiple system tracking algorithm. J Electron Inf Technol 38(2):389–393

Bandwidth Enhancement of Waveguide Slot Antenna Array for Satellite Communication
Pengfei Zhao, Shujie Ma, Peiyao Yang, Fan Lu, and Shasha Zhang
Beijing Institute of Spacecraft System Engineering, Beijing, China
[email protected]

Abstract. Owing to many advantages such as low losses in the feeder, high power handling capability, and high efficiency, waveguide slot antenna array has been widely used. However, the bandwidth of this kind of antenna is very limited. In this paper, 3 dB couplers are inserted to enhance the bandwidth of the waveguide slot antenna. To verify the validity of the bandwidth enhancement technique, A waveguide slot antenna array working at Ka band is designed, fabricated, and measured. Good agreement is found between the simulated and measured results, and the results show that the bandwidth is enhanced to 2.3 GHz (6.7%), which makes it suitable for satellite communication systems. Keywords: Waveguide slot antenna communication

 Bandwidth enhancement  Satellite

1 Introduction With the development of wireless communication technologies, broadband, high-gain and high efficiency antennas are in demand in satellite communication systems. Waveguide slot antenna array is a good candidate for its advantages such as low losses in the feeder, high power handling capability, and high efficiency [1, 2]. However, this kind of antenna suffers from narrow working bandwidth. For an array of four slots, the bandwidth is about 2% [3]. In [4] and [5], by bringing in cavity portion between the radiating slots and the coupling aperture, an 11% bandwidth is achieved. However, the complex multi-layer structure is depending on the new processing technic called diffusion bonding, which will increase the cost. Some substrate integrated waveguide [6] slot antennas were proposed; a 5% bandwidth is achieved. By using center-feeding technology [7], a 9.8% bandwidth is achieved. But the loss on the substrate is unacceptable, which leads to about 50% decrease of radiation efficiency. In this paper, a waveguide slot antenna array with inserted 3 dB couplers is designed and fabricated by soldering and brazing technology. The proposed antenna contains feeding network, 180° waveguide bend, 14-element sub-array, and 3 dB couplers between the waveguide bend and the feeding network. As we know, the real part of characteristic impedance of the waveguide slot antenna depends on the ratio of the modulus value of the transverse component of the electric field Et, to the magnetic field Ht. Sub-array with the 3 dB couplers can match the impedance of the feeding network in a wider band, thus the working bandwidth is improved from 2 to 6.7%. We use High Frequency Structure Simulator (HFSS) to model the antenna, and good agreement is found between the simulated and measured results. © Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 366–372, 2020 https://doi.org/10.1007/978-981-13-9409-6_43

Bandwidth Enhancement of Waveguide Slot Antenna

367

2 Antenna Design and Fabrication Figure 1 shows the perspective top view of the proposed waveguide slot antenna array. It is made up of two layers. Feeding network and 3 dB couplers are on the bottom layer, 14-element sub-arrays are on the top layer, and 180° waveguide bend connect the two layers.

Fig. 1. Perspective top view of the proposed antenna

The feeding network is made up of five H-plane T junctions to realize central symmetric and specific weights of each excitation, which is calculated based on the Taylor algorithm. The real part and imaginary part of the designed feeding network are 50 Ω and 0j Ω, respectively. The 14-element sub-arrays are made up of radiating waveguide, radiating slots, and short end. The slots are of same dimensions, offset and spaces between slots. For the sub-array end with a short terminal, the wave in it is traveling-standing. And the characteristic impedance is affected by parameters of the slots. By adjusting the slots, the imaginary part of characteristic impedance can be close to zero in a wide band. However, the real part of characteristic impedance is instable in a wide band. As shown in Fig. 2, from 33.3 to 35.6 GHz, the real part varies from 17 to 110 Ω. This is because the slots cut off the current on the waveguide wall, and electromagnetic fields are disturbed, so the ratio of Et to Ht has been changed. However, the characteristic impedance of the connected feed network is purely real of 50 Ω, and can’t match the sub-array in a wide bandwidth. We can note that with the 3 dB coupler, in a wider band the real and imaginary part are relatively stable and close to 50 Ω and 0j Ω, respectively. Thus, the matched bandwidth is broadened. The 3 dB coupler’s

368

P. Zhao et al.

structure is shown in Fig. 3. Port1 connects the feeding network; port2 and port3 connect two sub-arrays. Detailed values of the parameters are listed in Table 1. Also, the output phase at port2 differs 90° from port3, so we add the 180° waveguide bend to eliminate the phase difference.

Fig. 2. Characteristic impedance varies with frequency

Fig. 3. Schematic diagram of 3 dB coupler

Table 1. Dimensions of the 3 dB coupler (mm) w1 w2 w3 l1 l2 l3 Sw Sd 2 4.7 6.6 0.7 1.3 1.9 1 2

Bandwidth Enhancement of Waveguide Slot Antenna

369

In order to suppress the reflection, the bandwidth of the feeding network must be extended compared with the sub-array. Simulated results of each component are shown in Fig. 4.

Fig. 4. VSWR of each component

3 Results and Discussion Using soldering and brazing technology, the proposed waveguide slot antenna array is fabricated with aluminum to achieve light weight and high metal conductivity, as shown in Fig. 5. The dimension of it is 34 mm  100 mm  10 mm (exclusive the mechanical fixture for testing). In Fig. 6, the simulated and measured results are compared. It can be seen from Fig. 6 that the measured VSWR is below 2 from 33.3 to 35.6 GHz, which is 6.7% of relative bandwidth. Figure 7 presents the simulated and measured radiation patterns in both E-plane and H-plane. The side-lobe level is lower than −23 dB in the E-plane. The beam widths are 6° and 18.5° in E-plane and H-plane, respectively. Measured results show good agreement with the simulated ones. Figure 8 shows the frequency behavior of the calculated directivity, measured gain, and efficiency characteristics. The gain of the antenna is measured in an anechoic chamber, and the conductor loss and the reflection loss are included. The measured results of gain show that the efficiency is upon 80% in the working band including the losses.

370

P. Zhao et al.

Fig. 5 Photograph of the proposed antenna

Fig. 6. Measured and simulated VSWR

Fig. 7. Measured and simulated normalized gain

Bandwidth Enhancement of Waveguide Slot Antenna

371

Fig. 8. Frequency behaviour of the calculated directivity measured gain, and efficiency characteristics

4 Conclusion A 3 dB couplers inserted waveguide slot antenna with more than 24 dBi gain and more than 80% antenna efficiency in the Ka band is designed, fabricated, and measured. The matched bandwidth of the radiating sub-array is enhanced because the inserted couplers adjust the real part of the characteristic impedance. The feeding network is designed to suppress the reflection over a wide bandwidth. The measured VSWR is below 2 from 33.3 GHz to 35.6 GHz (bandwidth: 6.7%). The aperture field distribution follows the Taylor algorithm to achieve a −23 dB side-lobe in the E-plane. The proposed antenna is suitable for satellite communication systems. Next we plan to develop its sum and difference patterns functions, so that it can be used to transmit, receive information as well as track the source.

References 1. Stevenson AF (1948) Theory of slot in rectangular waveguide. J Appl Phys 19:24–28 2. Kimura Y, Miura Y, Shirosaki T, Taniguchi T, Kazama Y, Hirokawa J, Ando M, Shirozu T (2005) A low-cost and very compact wireless terminal integrated on the back of a waveguide planar array for 26 GHz band fixed wireless access (FWA) systems. IEEE Trans Antenna Propag 53(8):2456–2463 3. Mazen H (1989) Frequency limitations on broad-band performance of shunt slot arrays. IEEE Trans Antenna and Propag 37(7):817–823 4. Huang GL, Zhou SG, Chio TH, Yeo TS (2014) Broadband and high gain waveguide-fed slot antenna array in the Ku-band. IET Microwaves Antenna Propag 8(13):1041–1046 5. Miura Y, Hirokawa J, Ando M, Shibuya Y, Yoshida G (2011) Double-layer full-corporatefeed hollow-waveguide slot array antenna in the 60-GHz band. IEEE Trans Antenna Propag 59(8):2844–2851

372

P. Zhao et al.

6. Xu JF, Hong W, Chen P, Wu K (2009) Design and implementation of low sidelobe substrate integrated waveguide longitudinal slot array antennas. IET Microwaves Antennas Propag 3 (5):790–797 7. Chen M, Che WQ (2011) Bandwidth enhancement of substrate integrated waveguide (SIW) slot antenna with center-fed techniques. In: 2011 international workshop on antenna technology (iWAT), Hong Kong, China, pp 348–351

Design of an Enhanced Turbulence Detection Process Considering Aircraft Response Yuandan Fan(&), Xiaoguang Lu, Hai Li(&), and Renbiao Wu Tianjin Key Lab for Advanced Signal Processing, Civil Aviation University of China, Tianjin, China {ydfan_16,elisha1976}@163.com

Abstract. Turbulence is very hazardous to the flight safety, which generally can be detected by airborne weather radar. In newly specification DO-220A revised by Radio Technical Commission for Aeronautics (RTCA), standards of enhanced turbulence detection with airborne weather radar have been complemented. In the specification, it is stated that the characteristics of aircrafts should be taken into account in the turbulence detecting process. The aircraft response following a turbulence encounter is analysed in this paper, and then the characteristics of aircrafts are quantified by employing the load factor. Based on the quantified analysis, the vertical load factor is predicted based on both radar observation and the characteristics of aircraft. It can provide more accurate turbulence metrics for crews involved with different aircraft types. The simulation results demonstrate that the vertical load factor based turbulence detection process meets requirements of DO-220A. Furthermore, the research is important for the study of enhanced turbulence detection specifications documented in DO-220A. Keywords: Airborne weather radar metrics  Vertical load factor

 Turbulence detection  Turbulence

1 Introduction Atmospheric turbulence is a hazardous weather to the flight safety, and generally is caused by the rapid and irregular motion of air. The turbulence encounters can lead to aircraft bumps and stresses acting on structural elements. Severe turbulence would even cause injury to passengers and damage to aircraft structure [1]. On April 19, 2018, an Air India flight from Amritsar to Delhi ran into such severe turbulence that three passengers suffered injuries, the inside part of a window panel came off and some overhead oxygen masks got deployed [2]. In order to warn pilots in advance of the potential turbulence hazards on the flight path, the airborne weather radar is equipped on the commercial aircrafts. In the early days, for non-coherent airborne weather radars, the turbulence is indicated by the amplitudes of the radar echoes [3], which is not reliable. After the appearance of the full coherent Doppler weather radar, with the Doppler effect, the mean speed of weather target and its speed diversion can be obtained by measuring the phase change of the

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 373–381, 2020 https://doi.org/10.1007/978-981-13-9409-6_44

374

Y. Fan et al.

radar echo. And then turbulence can be detected by estimating the spectrum width of the radar echoes [4], which has been applied in actual systems. Currently, the turbulence detection is processed according to the spectrum width of the radar echoes in the certified airborne weather radar. It is generally considered that weather objects with the velocity spectrum width of 5 m/s or greater would be considered turbulent in aviation [5]. In March 2016, the Radio Technical Commission for Aeronautics (RTCA) revised the minimum operational performance standards for airborne weather radar systems, w.r.t. DO-220A. DO-220A incorporates corrections to the previous version and technological advances in the field of airborne weather radar. In addition to modernizing the requirements and test procedures for the weather, ground mapping, and predictive wind shear functions set out in its predecessors, specifications were added for radar detection of turbulence and atmospheric threat awareness [6]. Specifications require that both the spectrum width and characteristics of aircraft should be considered for turbulence detection. And three aircraft classes based on wing loadings (aircraft weight divided by wing area) is established in DO-220A. It means that the different types of aircrafts would react much differently when they encounter turbulence, due to the differences in aircraft performance. Characteristics of the aircraft are necessary to be considered when detecting the turbulence using radar. As for the previous detection algorithms, the spectrum width is regarded as the only indication (it is generally considered that if the spectrum width of the echo is larger than 5 m/s, there is a turbulence). Actually, the turbulence with a spectrum width of 5 m/s may not impact a large aircraft, because of the well maneuverability of the aircraft. Thus, the traditional turbulence metric to indicate its absence and warn pilots may cause unnecessary re-route and reduce flight efficiency. However, the threshold defined would be too high for a small aircraft, resulting in a missed alarm and accompanied by an irreversible damage to aircraft. Furthermore, researches have suggested that the occurrence rate of mid-high intensity turbulence in winters on NAT will rise to its 40–170% by 2050, compared with that of before the industrialization [7]. Therefore, the more accurate detection of turbulence is important for improving flight safety and flight efficiency. In this paper, the turbulence detection is researched considering the impacts of the characteristics of aircraft. The aircraft’s response to turbulence encounters has been analysed, and the factor of the characteristics of an aircraft are quantified employing theories on load factor. Based on the quantified analysis, the vertical load factor is calculated with radar observations and the characteristics of aircraft, which provides more accurate scale of turbulence hazards for crews. Finally, an enhanced turbulence detection process is designed. This is a more reliable turbulence detection method, which is helpful for pilots to make a more efficient flight route with better safety and shorter diversions. The designed turbulence detection process is based on the requirements of DO-220A, and the research is of great significance for the study of enhanced turbulence detection specified in DO-220A.

Design of an Enhanced Turbulence Detection Process

375

2 The Estimation of the Vertical Load Factor During the flight, the steady airflow provides a smooth and constant lift forces for the aircraft. And so the aircraft could fly smoothly. However, encountered with a turbulence, the disturbed updraft will change the lift forces and hence a dynamical response of the aircraft. Then aircraft loads will also be produced, which are proportional to point variance of the turbulence velocity field. The stronger the turbulence is, the greater the load factor will be. When the maximum load is over, it would have an effect of the safety of the aircraft. For pilots and passengers, the preceding turbulence detection will improve the flight safety. In reference [8], a methodology of turbulence detection was given. Both the spectrum width of the radar echoes of the turbulence and the characteristics of the aircraft should be considered for turbulence detection. And the aircraft loads, denoted by rDn , is to quantifies a turbulence hazard to an aircraft. In detail, the structure of a generic hazard prediction algorithm based on airborne radar observables can be approximated by [8]: ^Dn ¼ r

rDn ½M 2 ð~ xÞ0:5 pffiffiffiffiffiffiffi ffi r2v ðrÞ unitrw

ð1Þ

r

where rDn =unitrw (g/m/s) means aircraft scale (conversion) factor and rw is defined as the standard deviation of the vertical component of the turbulent wind field. M 2 ð~ xÞ (m2/ p ffiffiffiffiffiffiffiffiffiffi ffi s2) is a quantity related to the spectrum width, and r2v ðrÞ=r is the theoretical compensation factor of the radar pulse volume. For clarity of notation, Eq. (1) can be simplified as: z¼xy

ð2Þ

where z is defined as the estimated vertical load factor, which is an airborne radar observable associated with the characteristics of the aircraft. The estimate can be used as a turbulence hazard metric for radar detection. y is the spectrum width of the radar echo. The spectrum width can be estimated employing many methods [9]. x is scaling factor of the aircraft in general, which depends on the characteristics of the aircraft (altitude, airspeed, and wing loadings). The specific estimation process of x is not be provided by literature [8] and the DO-220A, and the calculation details of x will be further discussed. The following mainly focus on the scaling factor of the aircraft. x can be determined by analyzing the aircraft’s response to turbulence, then Eq. (2) can be used to calculate z, and finally an enhanced turbulence detection process based on the vertical load factor can be given.

376

Y. Fan et al.

3 Enhanced Turbulence Detection Process Based on Vertical Load Factor In order to compute the aircraft scaling factor, the aircraft’s response to the turbulence should be analysed in detail. Firstly, it is necessary to construct a turbulence model which is considered as an input for the aircraft system. Secondly, a simplified aircraft model is also needed to be provided. Accordingly, the aircraft’s response to the turbulence can be quantified in virtue of the aerodynamics concepts, theories of flight mechanics model and relevant theory [10, 11]. And then the aircraft scaling factor which quantifies the impact of aircraft characteristics would be resolved according to the load factor and relevant theory [10]. According to Eq. (2), the calculation of the vertical load factor can be performed in further general detail. 3.1

Turbulence Model and Its Power Spectral Density Function

The aircraft’s response to turbulence is very complicated. For computation simplicity, it is very necessary to simplify the turbulence model. For a flying aircraft, turbulence can be regarded as gusts with obvious changes in airflow direction and intensity [12]. It is assumed that the turbulence is isotropic [13]. For simplicity, there is only the wings’ response to the symmetric vertical component of gusts hereon. Furthermore, although turbulence is a complex atmospheric phenomena, the real turbulence is highly unlikely to be either discrete gust or continuous Gaussian turbulence field. For lowing the research difficulty, it is possible to assure robustness of the aircraft structure to gusts and turbulence by covering these extremes [10]. The following only focuses on the analysis of the impact of continuous gusts on flight. It is provided that the continuous gusts can be represented by the random variation of the wind velocity along the flight path of the aircraft. And the random variable has a Gaussian distribution with zero mean, and its power spectral density (PSD) is represented by the Von Karman turbulence PSD function, Ugg ðxÞ, with units of (m/s)2/ (rad/m) [10]. L 1 þ ð8=3Þð1:339Lx=V Þ2 Ugg ðxÞ ¼ r2g h i11=6 p 1 þ ð1:339Lx=V Þ2

ð3Þ

where rg (m/s) denotes the turbulence intensity, which is also the root mean square turbulence velocity, L (m) is the turbulence scale. 3.2

Predicting Vertical Load Factor

Similarly, it is necessary to simplify the aircraft system model to simplify the aircraft’s response to turbulence. The following assumptions are made: the aircraft is a rigid aircraft with a weight of m, and its wings are not swept. The aircraft is in a trimmed level flight condition (with lift = weight) prior to encountering the turbulence [10, 11]. When an aircraft encounters turbulence, the gust velocity is constant across the aircraft span, and the aircraft will heave (move up or down) without pitching.

Design of an Enhanced Turbulence Detection Process

377

It also assumes that the quasi-steady aerodynamic representation will be employed, which means that the lifting surface enters the gust instantaneously, and the effective incidence angles and lift force are developed instantaneously. The above incremental lift forces are due to both the response and gust velocity. Thus, the heave equation of motion of the aircraft can be established by using Newton’s second law and performed in the frequency domain [10]. As a result, the transfer function relating the (downwards) heave acceleration response to the (upwards) gust velocity at frequency x is given by [10] Hzg ðxÞ ¼ x2

 12 qVSW a ~zc ¼ x2 wg0 x2 m þ ix 12 qVSW a

ð4Þ

where ~zc is the displacement due to aircraft’s heave response, wg0 and q are amplitude of gust velocity and air density respectively, V and SW are the flight speed and the wing area, and a is the lift curve slope for the whole aircraft. The center of mass acceleration response PSD is then obtained by connecting the transfer function of the system and the gust velocity PSD  2 Urr ðxÞ ¼ Hrg ðxÞ Ugg ðxÞ

ð5Þ

The RMS normal load per RMS vertical gust intensity, which is the aircraft scaling factor to be determined, is given by ffi qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi R xmax Urr ðxÞdx rr 0 ¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi x¼ R xmax rg U ðxÞdx 0

ð6Þ

gg

Based on the content mentioned above, the vertical load factor can be predicted from Eq. (2), assuming the spectrum width is known. Consequently, calculation process of z based on response is shown in Fig. 1. Considering the different levels of turbulence intensity are defined by the different values of vertical load factor, the categorization adopted within this paper can be found in reference [8]. At this time, the severity of turbulence can be quantified, according to the categorization.

Fig. 1. Calculation flowchart of vertical load factor

378

Y. Fan et al.

4 Numerical Examples 4.1

Analysis of Examples

PSD

Taking an aircraft for example, a numerical experiment is implemented to calculate aircraft scaling factor and predict vertical load factor in a specific flight condition for estimating a specific turbulence hazard for the aircraft. Consider an aircraft with the following performance parameters: V ¼ 180 m/s, H ¼ 4500 m, SW ¼ 30 m2 , m ¼ 10000 kg, a ¼ 4:5=rad, wing loading is 332.0 kg/m2, rg ¼ 1 m/s, L ¼ 762 m. Assuming the known spectrum width is 5 m/s and the rigid aircraft uses the heave only model with quasi-steady aerodynamics, find the aircraft scaling factor and the prediction of vertical load factor due to the turbulence. First, in term of Eq. (3), the Von Karman turbulence PSD is plotted in Fig. 2.

10

1

10

0

10

-1

10

-2

10

-3

10

-4

10

-5

10

|Transfer function|2 Acceleration response PSD Von Karman turbulence

-2

10

-1

10

0

10

1

Frequency(Hz)

Fig. 2. Von Karman turbulence, |Transfer function|2 and the acceleration response PSD

The modulus squared value of the transfer function and the acceleration response PSD are shown in Fig. 2, based on the calculation flowchart of vertical load factor presented in Fig. 1. It can be concluded that the transfer function is an aircraft system property and the transfer function also dictates how the aircraft behaves effect by gusts at any frequency. According to Eq. (6), the aircraft scaling factor can be calculated as 0.0654 g/m/s. Vertical load factor may be calculated by inspection of Eq. (2) as z ¼ 0:0654  5 ¼ 0:327 g. Therefore, comparing this value with the turbulence intensity table shown in Ref. [8], the turbulence with a spectrum width of 5 m/s is a severe turbulence for this aircraft.

Design of an Enhanced Turbulence Detection Process

4.2

379

Application Analysis

Three aircraft classes based on wing loading is established in DO-220A. Class A, B, and C are defined as aircraft with wing loading equal to 390.6–659.1 kg/m2, 292.9– 488.2 kg/m2, and 146.5–341.8 kg/m2, respectively. In order to verify that the aircraft scaling factor is related to the quantization of the impacts of the characteristics of aircraft, the response of different types of aircraft to the same turbulence is compared in this section. Assume that flight conditions are the same as above and the spectrum width is 5 m/s, in order to quantify the impact of turbulence on different aircrafts, the typical types in these classes are selected for simulation. The results are shown in Table 1.

Table 1. The predicted value of vertical load factor for several types of aircraft Aircraft class Class A

Class B Class C

Types B777300ER B747400 B777300 B737700 A380 A330200 A320200 C919 ERJ190 ARJ21 ERJ145 ERJ135 MA-60

Wing loading (kg/m2) 821.7

x (g/m/s)

z (g)

0.0365

0.1825

776.7

0.0380

0.1900

699.8

0.0409

0.2045

664.9

0.0425

0.2125

662.7 636.1

0.0426 0.0438

0.2130 0.2190

628.2

0.0442

0.2210

561.4 539.6 507.1 410.1 371.2 290.7

0.0478 0.0492 0.0513 0.0593 0.0634 0.0745

0.2390 0.2460 0.2565 0.2965 0.3170 0.3725

Levels of turbulence intensity Moderate turbulence

Moderate to severe turbulence

Severe turbulence

As seen from Table 1, when aircrafts encounter the same turbulence, as for the aircraft with the smaller wing loading, the turbulence has a greater impact on the aircraft. It can be concluded that these aircrafts with different wing loading react differently to turbulence with the same spectrum width under the same flight conditions, and vertical load factor is a more accurate turbulence hazard metric. It also indicates that the aircraft scaling factor is a constant for the same aircraft with the same flight condition and is only related to the inherent characteristics of the aircraft under a specific flight condition. It is also the response following a unit vertical gust velocity.

380

Y. Fan et al.

5 Conclusion The aircraft response induced by a turbulence encounter was analysed in this paper, and then the characteristics of aircrafts were quantified. Based on the quantified scaling factor and radar observation, the vertical load factor which is a more accurate turbulence metrics can be estimated, thereby a turbulence detection process was designed. Finally, some examples have been given according to the designed turbulence detection process. The demonstration results show that different aircrafts would react differently to a same turbulence. And the aircraft with the smaller wing loading, the higher the value of vertical load, and the greater the impact on the aircraft. Therefore, the design of an enhanced turbulence detection process is very helpful to reasonably predict areas of turbulence risks to aircraft by using airborne weather radars. Acknowledgements. We thank the Foundation Items: National Nature Science Foundation of China (NSFC) under grant U1633106, U1733116, National University’s Basic Research Foundation of China under grant No. 3122017006, and Foundation for Sky Young Scholars of Civil Aviation University of China.

References 1. Golding WL (2002) Turbulence and its impact on commercial aviation. J Aviat/Aerosp Educ Res 11(2):8 2. Newshub Live At 6 pm, http://www.newshub.co.nz/home/world/2018/04/three-injured-onair-india-flight-as-another-plane-window-detaches.html 3. Lee JT, McPherson A (1971) Comparison of thunderstorms over Oklahoma and Malaysia based on aircraft measurements. In: Proceedings of the international conference on atmospheric turbulence, pp 1–13 4. Lu XG, Xia D (2011) Method for setting threshold of turbulence detection based on statistical confidence level. J Civil Aviat Univ China 29(4):27–30 (in Chinese) 5. Collins R (2003) Collins WXR-2100 MultiScan™ radar fully automatic weather radar. Internet Citation. Jan, 1-1OPP 6. RTCA/DO-220A (2016) Minimum operational performance standards (MOPS) for airborne weather radar system. RTCA Inc, Washington D.C 7. Williams PD (2017) Increased light, moderate, and severe clear-air turbulence in response to climate change. Adv Atmos Sci 34(5):576–586 8. Bowles RL, Buck BK (2009) A methodology for determining statistical performance compliance for airborne Doppler radar with forward-looking turbulence detection capability. NASA CR. 215769 9. Warde DA, Torres SM (2014) Improved spectrum width estimators for Doppler weather radars. In: Proceedings of the 8th European conference on radar in meteorology and hydrology, Garmisch-Partenkirchen, p 8 10. Wright JR, Cooper JE (2008) Introduction to aircraft aeroelasticity and load. Wiley, England 11. Howe D (2004) Aircraft loading and structural layout. Professional Engineering Publishing, London

Design of an Enhanced Turbulence Detection Process

381

12. Zhang JT (2005) Vertical and lateral gust loads analysis of airplane. Master, Northwestern Polytechnical University (in Chinese) 13. Zhao SN, Hu F (2015) Turbulence question: How do view “the homogenous and isotropic turbulence”? Scientia Sinica Physica, Mechanica & Astronomica 45(2):24701 (in Chinese)

Rain-Drop Size Distribution Case Study in Chengdu Based on 2DVD Observations Yan Liu1(&), Debin Su1,2, and Hongyu Lei1 1

Chengdu University of Information Technology, No. 24, Section 1 Xuefu Road, Southwest Airport Economic Development Zone, Chengdu 610225, China [email protected] 2 Key Laboratory of Atmospheric Detection, China Meteorological Administration, Chengdu 610225, China

Abstract. This paper selects the precipitation data of three precipitation processes on July 2, July 8 and July 11 of 2018 obtained from the two-dimensional video disdrometer (2DVD) of Chengdu University of Information Technology (CUIT). By counting the raindrop size distribution, calculating the total particle density and the median volume diameter during the sampling time to analyze the change of the raindrop spectrum during the precipitation process, and then calculating the precipitation intensity and the radar reflectivity factor during the sampling time. Combining the above related parameters for analysis, the following conclusions are obtained: The three precipitation processes are mainly composed of small raindrops with a diameter of 0.1–1 mm; Unstable precipitation will lead to a large change in the total particle density and median volume diameter, and the total particle density will change by 2 orders of magnitude, and the median volume diameter will vary by 1 mm. Keywords: 2DVD  Raindrop size distribution  Total particle density Median volume diameter  Precipitation intensity



1 Introduction Researchers at home and abroad have been studying the observation of raindrop spectrum for a long time. In the early days, the raindrop spectrum was measured by the filter paper stain method and the flour ball method, the impact type raindrop spectrometer was developed in the 1960s, and the laser raindrop spectrometer appeared in the late 1990s. With the advancement of technology, Austrian Joanneum Research developed 2DVD to observe the raindrop spectrum. However, there are few studies on the water drop spectrum in China using 2DVD observation data. Liu et al. [1] analyzed the raindrop spectrum data of the Chengdu area obtained by laser raindrop spectrometer, and concluded that the precipitation intensity mainly depends on the heavy raindrops, and the contribution rate of the small raindrops is negatively correlated with the rain intensity. Zhou et al. [2] analyzed the data of the laser raindrop spectrometer in Shandong province, and concluded that the precipitation intensity mainly depends on the maximum raindrop diameter, which is positively © Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 382–389, 2020 https://doi.org/10.1007/978-981-13-9409-6_45

Rain-Drop Size Distribution Case Study in Chengdu

383

correlated with the raindrop concentration, but has little relationship with the average diameter. Gong et al. [3] analyzed raindrop spectrum data in Liaoning and concluded that the increase of urban aerosol particles would increase raindrop number density. In the summer of 2018, continuous rainstorm weather occurred in Chengdu. The analysis of the droplet spectrum characteristics of convective precipitation by using 2DVD observation data is of great significance for further understanding the convective precipitation process, providing scientific basis for the numerical model, and quantitative estimation of precipitation by radar in summer in Chengdu.

2 Instruments and Data 2.1

Instrument Introduction

The raindrop data of this paper was continuously observed by the 2DVD in the observation field of CUIT (103.98° E, 30.55° N). 2DVD is an advanced precipitation particle measuring device developed by Joanneum Research of Austria. It scans highspeed moving objects linearly by two cameras placed at different heights and at 90° to measure the size, shape, orientation and landing speed of individual precipitation particles in real time. 2DVD’s superior performance is measured in small objects. When the particle falling speed 1 @ sin a0 cos d0 A > ~ r0 ¼ parc > 100 ½N0  > < sin d10 0 cos a cos d0 > 0 > > > @ sin a0 cos d0 A ~ > ¼ ½N  r 0 0 > : sin d0

if

p[0 ð4Þ

if

p¼0

0 1 8 la cos d0 =p > > > ~  3 ð90  a0 ÞR  1 ð90 þ d0 Þ@ ld =p A > V ¼ ½N0 R > > < Vr 1 0 cos d0 l > a > > >  3 ð90  a0 ÞR  1 ð90 þ d0 Þ@ ld A ~ > V ¼ ½N0 R > : 0

if

p[0 ð5Þ

if

p¼0

where, ½N0  is the centroid equatorial coordinate system of the initial ephemeris, generally the [ICRS] coordinate system at time J2000.0, p is the parallax Angle of celestial bodies, la ; ld is the self-parameter of celestial bodies, and Vr is the apparent velocity of celestial bodies. After making this correction, we need to convert the star position r1 . vector ~ r10 into the unit direction vector ~ (2) Correction of annual parallax   ~ r1 þ D~ r2 ¼ ~ r1 þ p~ r1  ~ r1  ~ R r2 ¼ ~

ð6Þ

p is the parallax Angle of celestial bodies, and ~ R is the coordinate vector of the earth’s center of mass relative to the center of the solar system. This position vector can be obtained by reading the solar system numerical ephemeris DE405, or by using the VSOP2000 analytic ephemeris [5]. For this project, the use of DE405, DE421 or DE430 calendar table can ensure that the accuracy requirements of the project. (3) Gravitational deflection correction of light   ~ r2 þ D~ r3 ¼ ~ r2 þ h~ r2  ~ r2  ~ R r3 ¼ ~

ð7Þ

D is the complementary Angle between the direction of the sun at the center of the earth and the direction vector of the measured star, and ~ R is the unit direction vector of the sun relative to the center of the earth. (4) Correction of annual aberration

High Precision Spatiotemporal Datum Design Based

411

  1 ~ r3  ~ R_ r3  ~ c

ð8Þ

~ r3 þ D~ r4 ¼ ~ r3 þ r4 ¼ ~

c is the speed of light, ~ R_ is the instantaneous velocity vector of the earth’s center of mass relative to the center of the solar system, which can be obtained by reading the solar system DE numerical ephemeris. (5) Precession rotation of the earth N   1Þ~ ~ r4 þ D~ r5 ¼ ~ r4 þ ðP r4 r5 ¼ ~

ð9Þ

 and the nutation matrix N  are shown below The precession matrix P  ¼ Rz ðZA ÞRy ðhA ÞRz ðfA Þ P

ð10Þ

 ¼ Rx ðe  DeÞRz ðDwÞRx ðeÞ N

ð11Þ

fA is the moving component of the mean vernal equinox on the equator of the initial ephemeris; hA is the total displacement of the instantaneous mean celestial pole from the initial mean celestial pole, and is also the declination component of the movement of mean vernal equinox. ZA is the moving component of the mean vernal equinox on the instantaneous equator; Dw and De are meridional nutation and angular nutation. Their expressions and specific values are provided by the astronomical constant system, as detailed in the IERS2010 specification [6]. (6) Diurnal parallax correction   ~ r5 þ D~ r6 ¼ ~ r5 þ p~ r5  ~ r5  ~ RN r6 ¼ ~

ð12Þ

~ RN is the geocentric radial vector of the station (i.e. the position of the ground observer). (7) Correction of diurnal solar aberration ~ r6 þ D~ r7 ¼ ~ r6 þ r7 ¼ ~

  1 ~ r6  ~ R_N r6  ~ c

ð13Þ

~ R_N is the velocity vector of the station’s Diurnal motion relative to the center of the earth. (8) Atmospheric refraction correction 

~ r7 þ D~ r8 ¼ ~ r7 þ R secðzobs Þ~ r7  ð~ r7 ~ rz Þ ¼ sin1z7 f~ r7 sinðz7  RÞ þ~ rz sinðRÞg r8 ¼ ~ ~ r8 robs ¼ ~ ð14Þ

412

Y. Huang et al.

~ rz is the direction vector of the local zenith at the time of observation, R is the refraction Angle of the atmosphere, zobs is the observed zenith distance of celestial bodies, and z7 is the true zenith distance of celestial bodies. However, the observation zenith distance zobs ¼ z7  R can only be obtained after the atmospheric refraction correction is completed, so the atmospheric refraction correction needs iteration to solve the exact value. Here, the atmospheric refractive Angle R can be calculated by the following formula when the zenith distance is not greater than 70°. 8 273:15 > < RðT; PÞ ¼ R0  PðmbÞ  1013:25 273:15 þ Tðo CÞ > : R0 ¼ 6000 :29 tanðzÞ  000 :06688 tan3 ðzÞ

ð15Þ

T, P, respectively, for the temperature and pressure of observation time, R0 is a standard atmospheric conditions (north latitude 45° sea level, the temperature 0 °C, pressure 1013.25 mm), the wavelength of 0.57 microns of yellow star approximate refraction. (9) Transformation from the equatorial coordinate system to the horizontal coordinate system All the above steps can be derived and calculated in the equatorial coordinate system, but in the actual observation, we need to get its position parameters in the horizontal coordinate system. The observation position of celestial bodies is converted from the right ascension and declination coordinates ðaobs ; dobs Þ in the geocentric equatorial coordinate system to the azimuth and altitude coordinates ðA; H Þ in the horizontal coordinate system, which can be achieved by the following formula 0

cos aobs cos dobs

1

B C C ~ robs ¼ ½Z½Z0 ½NB @ sin aobs cos dobs A sin dobs 0 1 cos aobs cos dobs  B C p ¼ ½ZRy u  Rz ðSl þ pÞB sin aobs cos dobs C @ A 2 sin dobs 0 1 cos A cos H B C C ¼ ½ZB @ sin A cos H A sin H

ð16Þ

Sl is the local sidereal time at the observation time, u is the astronomical latitude of the observation place, Sl þ p is the azimuth starting point of the horizon is astronomical north, and it is positive to the west, and the right-handed system. If you want to get the

High Precision Spatiotemporal Datum Design Based

413

horizontal coordinates (north–east–south) defined by the traditional left-handed system, just take the negative of the azimuth calculated here.

4 Simulation (1) Calculation of the position of stars Suppose there is a star, and the information of the catalog is as follows: SID: 647,080 Right ascension (hour): 0.001823085 ICRSJ2000 declination (degrees): 25.88645705 Right ascension proper motion (milliarcseconds/year): 20.23 Declination self (fems/year): −7.14 Parallax (femtosecond): 4.34 Apparent velocity (km/s): −31.0 Calculate the star’s position at 10:00:00 UTC on October 15, 2017. (1) Call iau_CAL2JD function (green calendar conversion to Julian day function), iau_DAT function (calculate the TAI UTC of the specified date, i.e. International atomic time coordinated universal time function) and iau_TAITT (international atomic time TAI conversion to earth time function) to calculate the TT corresponding to the observation time, TT = JD 2458041.91746741 (2) Call PLACE, and set the parameter as: The OBJECT = ‘*’ LOCATN = 0 ICOORD = 1 STAR (1) = 0.001823085 do STAR (2) = 25.88645705 do STAR (3) = 20.23 do STAR (4) = 7.14 do STAR (5) = 4.34 do STAR (6) = 31.0 do (3) Relative to the imaginary observer at the center of the earth, the apparent direction of the star is RA ¼ 0:2587296; DE ¼ 25:9869708: (2) Calculation example: calculation of the position of solar system celestial bodies Suppose Neptune is observed at a different position at 10:00:00 UTC on October 15, 2017.

414

Y. Huang et al.

(1) Call iau_CAL2JD, iau_DAT and iau_TAITT to calculate the TT corresponding to the observation time. TT ¼ 2458041:91746741 JD (2) Call PLACE, and set the parameter as: The OBJECT = ‘NEPTUNE’ LOCATN = 0 ICOORD = 3 (3) Relative to the imaginary observer at the center of the earth, the astrometric direction of Neptune is RA ¼ 343:3945581; DE ¼ 8:0771549:

5 Conclusion In this paper, a time-reference and time-system conversion relation and correlation algorithm are designed for a high-precision space time reference system based on ground-based observation positions. The related influencing factors of ground-based observation positions are enumerated, and the common coordinate systems and their mutual conversion relation in high-precision space time reference system are strictly defined. Completed the design involving time reference and time system, common coordinate system and mutual conversion relation, etc., provided the geometric information and spatial and temporal distribution information of space geographic space, and laid the theoretical foundation for subsequent high-precision celestial body observation, satellite precise positioning and other space applications.

References 1. Pan Q, Ye Z, Feng Q (2015) Design of unified space-time reference based on Beidou/RFID system. Electr Measur Technol 10:11–16 2. Chen D, Su Y, Cui H (2019) Temporal and spatial connotation and characteristics of entities in pan-spatial information system. Geomat Spatial Inform Technol 42:52–55 3. Lei W, Zhang H, Li K (2016) Calculation and comparison of two coordinate transformation models between GCRS and ITRS. J Geom Sci Technol 33(3):236–240 4. Zhang H, Zheng Y, Ma G (2011) Research on coordinate transformation between GCRS and ITRS. J Geodesy Geodynam 31(1):63–67 5. Su M, Bao H, Zhao J (2015) A deduction of implement formula on astronomic longitude and latitude reducing to center. Eng Survey Mapp 6:1–3 6. Lei W, Zhang H, Li K (2016) Effects of precession-nutation models update, polar motion, difference between UT1 and TT on coordinate transformation. J Spacecraft TT C Technol 35(1):53–62

Study on Two Types of Sensor Antennas for an Intelligent Health Monitoring System Yang Li, Licheng Yang, Xiaonan Zhao, Bo Zhang, and Cheng Wang(&) Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin 300387, China {liyang_tongxin,xiaonan5875}@163.com, [email protected]

Abstract. In this study, two types of in-body sensor antennas, which were designed for intelligent health monitoring systems, are studied and discussed. The impendence matching of two types of in-body sensor antennas are investigated. The transmission characteristics of in-body sensor antennas are explored. The traits of these two types of in-body sensor antennas are summarized. And the application range of these in-body sensor antennas is also proposed. Keywords: Intelligent health monitoring system  Body-centric wireless communications  In-body sensor antenna  Transmission characteristic

1 Introduction Due to the widespread application on Intelligent Health Monitoring System, sensor network gains more and more attentions from communicators and Antenna researchers [1, 2]. Generally, an intelligent health monitoring system based on body-centric wireless communications (BCWCs) [3] transmits the data collected inside of human body by a wireless in-body sensor. Considering the structure of human body, the physical dimension of an in-body sensor is limited to approximately 26 mm in length and approximately 10 mm in diameter. Restricted to its geometric size, the in-body antenna is supposed to have bad performances in the areas of in-body efficiency, absorption of electromagnetic waves and propagation loss [4]. To cope with these imperfections, there have been several types of in-body antenna design methodologies [5–8] which provide workable solution for Intelligent Health Monitoring System. In these previous researches, the methods of antenna design could be divided into two different design thought. The former was to place the antenna inside the in-body sensor [5–7]. On the contrary, in the later design, the antenna was fabricated on the outer wall of the in-body [8], which was in direct contact with the human body medium. However, a comparison of characteristics of these two types of in-body antenna has not been sufficiently studied and investigated.

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 415–421, 2020 https://doi.org/10.1007/978-981-13-9409-6_49

416

Y. Li et al.

This article is written in the following parts. Two types of in-body sensor antenna are shown in Sect. 2. Then, in Sect. 3, the transmission characteristics of antenna are investigated. Finally, the results and observations are summarized in Sect. 4.

2 Two Types of in-Body Sensor Antenna Two types of in-body sensor antenna, namely, the out-wall type antenna and the in-wall sensor antenna, are investigated and discussed. Refer to the validity of the experimental results, it is critical to adopt the correct simulation method and the correct experimental method. In addition, we found that the results of finite-difference time-domain (FDTD) simulation accorded well with the measurement results of the dipole antenna in deionized water. Thus, a correct simulation method was obtained. And a kind of human body equivalent liquid material, developed by SPEAG Co. Ltd., was used as human body tissue simulating liquid. This tissue simulating liquid was called as “HBTSL” in the following parts. Figure 1 shows the relative permittivity and conductivity of the liquid material, which indicates that there is no particularly difference between the value of the HBTSL and the measured data of human body tissues provided by Gabriel [9] in the frequency range of 200 MHz–2 GHz. As for numerical analysis, a commercial human torso-shaped phantom named “Torso”, whose shell is made of fiberglass (er = 3.5) was in use as a container of HBTSL, as shown in Fig. 1. An in-body sensor antenna is put inside the torso-shaped phantom and an external receiving sensor antenna is fabricated outside the torso-shaped phantom. The distance between two antennas is set to D = 74 mm (Fig. 2). In the former parts, two different types of in-body sensor antennas were proposed. In order to study these two antennas, the traits of the two types of in-body sensor antennas which were proposed in [10] are shown in Fig. 3. In situation (a), the distance of the sensor inside the in-body d = 4 mm to the center axis, the sensor separation of liquid from the human body in situation (b), and the out-wall sensor antenna with a distance d = 5 mm to the center axis, which means that the sensor antenna exposed to the human body medium.

Fig. 1. Relative permittivity and conductivity of the HBTSL.

Study on Two Types of Sensor Antennas for an Intelligent Side-view

417

Front-view

In-body sensor antenna, l1

D Out-body sensor antenna, l2 l1=20 mm, l1=260 mm, D=74 mm

Fig. 2. Analysis model: torso-shaped human body phantom.

r=1

l1

d 10

r=1

d

30 l1 = 20 mm, d = 4 mm

l1 = 20 mm, d = 5 mm

In-wall sensor

Out-wall sensor

Fig. 3. Geometries of the in-body sensor antennas performed in [10]. a In-wall sensor antenna. b Out-wall sensor antenna.

The frequency characteristics of the input impedance of the two types of in-body sensor antenna are shown in Fig. 4. The results indicated that, in the frequency range of 200 MHz–2 GHz, in-wall sensor antenna is an electric small antenna, comparing to the wavelength. Its resistance is less than 20 Ω, which is quite small, and the reactance is negative, appearing capacitance. When it comes to out-wall sensor antenna, the resistance becomes large, while its reactance is X = 0 at 1.2 and 2 GHz. The wavelength of antenna in dielectric liquid becomes small. Presuming that the effective dielectric permittivity is from: eg ¼

er þ 1 2

ð1Þ

The effective wavelength kg at 1.2 GHz becomes 50 mm when er = 49, closed to the twice the dipole length, which is 20 mm in length, which resonance X = 0 at 1.2 GHz. The effective wavelength is given by: k0 kg  pffiffiffiffi eg

ð2Þ

eg is the effective dielectric permittivity approximated in Eq. (1). When er = 55.4 and l1 = 0.43kg, the effective wavelength kg = 47 mm, which manifests that the outwall sensor antenna is a kind of half-wavelength antenna at 1.2 GHz.

418

Y. Li et al.

(a)

(b)

Fig. 4. Impedance of the in-body sensor antennas. a Resistance R. b Reactance X.

Figure 5a shows that the input reflection coefficient |S11| of in-wall sensor dipole antenna is quite big on the frequency range of 200 MHz–2 GHz, while the |S11| of outwall sensor antenna is smaller. As for the |S21|, which is from the in-body sensor antenna through the torso-shaped phantom to the external antenna, is performed in Fig. 5b, it indicates that in-wall sensor antenna has a higher performance of |S21| = −29 dB at 1 GHz, while the value of |S21| of out-wall sensor antenna is −57 dB. These simulation analysis results means that the electric length of antenna which is placed on the out-wall will increase when in contact with liquid and in this case in-body sensor antenna will gain a maximum of |S21|.

3 In-Body Antenna Transmission Characteristic To investigated transmission characteristic of these two sensors of in-body sensor antenna better; the transmission factor s [11] is adopted to weigh up the maximum received power of antenna in the case of complex conjugate matching. A source with an internal impedance of ZS supplies excitation for the antenna, while the external antenna with an internal impedance of ZL. The power PL is transmitted to the ZL. Pin is

Study on Two Types of Sensor Antennas for an Intelligent

(a)

0

419

(a) Isolated In-wall sensortype

|S11 | [dB]

(b) Surface type Out-wall sensor

-10

-20

-30

0

0.5

1

1.5

2

Frequency [GHz]

(b)

0

In-wall sensor

-10

ZS=50

, ZL= 50

Out-wall sensor

|S21 | [dB]

-20 -30 -40 -50 -60 -70 -80

0

0.5

1 1.5 Frequency [GHz]

2

Fig. 5. Scatter parameters of the in-body sensor antennas. a Reflection coefficients. b Transmission coefficients.

the input power, while Pinc is the incident power. The reflection coefficients CS and CL looking toward the source ZS and the load ZL and Cin and Cout are the reflection coefficients from Port 1 and Port 2. The transmission factor is determined as  PL  PL 1 1  jCL j2 ¼ ¼ s¼ jS21 j2  2 Pinc ZS ¼Z  ;ZL ¼Zout Pin 1  jCS j  j1  S22 CL j2 in

ð3Þ

The transmission factors of the two types of in-body sensor antennas are displayed in Fig. 6, which demonstrates that a local maximum exists in the range of frequency. It is clear that the transmission factor s is higher than the value of out-wall sensor antenna, due to the increasing of conductivity loss when the antenna touches with liquid. An inwall sensor in-body antenna, ZS = 3.02 + j2467.22 Ω and ZL = 18.62 + j467.14 Ω, a large value of s = −20.0 dB at 500 MHz is acquired. In a word, for the in-wall sensor antenna, the impedance matching performance is good but has a small value of transmission factor. As for the out-wall sensor antenna, the value of transmission factor is larger, but has a bad performance on impedance matching. And its impedance matching performance is influenced by the relative permittivity of material surrounding it. Besides, the conductivity of the material impacts the transmission factor of antenna.

420

Y. Li et al.

Transmission factor [dB]

0

Out-wall sensor

* ZS=Zin* , ZL=Zout

-10 -20 -30

In-wall sensor

-40 -50 -60

0

0.5

1

1.5

2

Frequency [GHz]

Fig. 6. Transmission factors of the in-body sensor antennas.

4 Conclusion In this study, two types of in-body sensor antenna which are designed for intelligent health monitoring systems are studied and investigated. The traits of antennas, the transmission characteristics of antennas, the influences of relative permittivity and conductivity are compared and discussed. The in-wall sensor in-body sensor antenna, which is generally placed inside of the in-body sensor, thus it has a small conductivity loss, though the impedance matching performance is not good enough in the frequency range of 200 MHz–2 GHz. For the out-wall sensor antenna, which is usually in contact with the human body tissue liquid, therefore, the impedance matching is good in the frequency range of 1–2 GHz, however, the conductivity loss of out-wall sensor is larger. The characteristics of these two type sensor antennas have been summarized. When it comes to the application of these antennas, we can implement the appropriate antenna according to the practical situation. The out-wall sensor antenna could be applied to make use of the out-wall sensor when the inner space of in-body is strictly limited. And matching circuits are not required because of its good performance on impedance matching. Otherwise, when the demand of received power is more critical, the in-wall sensor antenna with matching circuits is more appropriate. Acknowledgements. This work made use of the Funding Program of Tianjin Higher Education Creative Team. The authors acknowledge the Natural Science Foundation of Tianjin City (18JCYBJC86000, 18JCYBJC86400), the Science and Technology Development Fund of Tianjin Education Commission for Higher Education (2018KJ153) and the Doctoral Funding of Tianjin Normal University (52XB1604, 52XB1905) for supporting this work. C.W. acknowledges the Distinguished Young Talent Recruitment Program of Tianjin Normal University (011/5RL153). The authors also would like to thank Professor Qiang Chen at Tohoku University for allowing us to use the computer with the electromagnetic software installed in his lab.

Study on Two Types of Sensor Antennas for an Intelligent

421

References 1. Liang Q, Cheng X, Huang SC, Chen D (2014) Opportunistic sensing in wireless sensor networks: theory and applications. IEEE Trans Comput 63(8):2002–2010 2. Liang Q, Chen X, Samn SW (2010) NEW: network-enabled electronic warfare for target recognition. IEEE Trans Aerosp Electr Syst 46(2):558–568 3. Hall PS, Hao Y (2012) Antennas and propagation for body-centric wireless communications, 2nd edn. Artech House, London, England, UK, pp 586–589 4. Iddan G, Meron G, Glukhovsky A, Swain P (2000) Wireless capsule endoscopy. Nature 405(6785):417 5. Chirwa LC, Hammond PA, Roy S, Cumming DRS (2003) Electromagnetic radiation from ingested sources in the human intestine between 150MHz and 1.2GHz. IEEE Trans Biomed Eng 50(4):484–492 6. Izdebski PM, Rajagopalan H, Rahmat-Samii Y (2009) Conformal ingestible capsule antenna: a novel chandelier meandered design. IEEE Trans Antenn Propag 57(4):900–909 7. Lee SH, Lee J, Yoon YJ, Park S, Cheon C, Kim K, Nam S (2011) A wideband spiral antenna for ingestible capsule endoscope systems: experimental results in a human phantom and a pig. IEEE Trans Biomed Eng 58(6):1734–1741 8. Yun S, Kim K, Nam S (2010) Outer wall loop antenna for ultra wideband capsule endoscope system. IEEE Antenn Wirel Propag Lett 9:1135–1138 9. Gabriel S, Lau RW, Gabriel C (1996) The dielectric properties of biological tissues: II. Measurements in the frequency range 10 Hz–20 GHz. Phys Med Biol 41(11):2251–2269 10. Sato H, Li Y, Xu J, Chen Q (2018) Design of inner-layer capsule dipole antenna for ingestible endoscope. In: Proceedings of 2018 international symposium on antennas and propagation (ISAP 2018). Busan, Oct 2018 11. Chen Q, Ozawa K, Yuan QW, Sawaya K (2012) Antenna characterization for wireless power-transmission system using near-field coupling. IEEE Trans Antenn Propag Mag 54(4):108–116

A Fiber Bragg Grating Acceleration Sensor for Measuring Bow Slamming Load Jingping Yang(&), Wei Wang(&), Yuliang Li, Libo Qiao, and ChuanQi Liu Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin 300387, China [email protected], [email protected]

Abstract. In view of the serious consequences of Slamming Loads on ships at high speed, a fiber Bragg grating acceleration sensor for measuring the Slamming Loads on bows is designed in this paper. It is mainly composed of the sensor shell, the sensor shell cover, the accelerometer sensitive devices and the hinge structure, which realize the automatic monitoring of the Slamming Loads on bows. The experimental results show that the sensitivity of the sensor is 295 pm/g in the frequency range of 0–100 Hz. Keywords: Slamming loads Flexible hinge

 Fiber Bragg grating  Acceleration sensor 

1 Introduction When a ship sails at high speed under harsh conditions, bottom of bow will be severely impacted by waves. At the moment of slamming, the vertical acceleration of the hull will suddenly change, and then the hull will vibrate at high frequency. Strong impact cause a series of serious consequences, therefore slamming load measurement should be one of the safety standards for high-speed ship design. Acceleration sensors can count the magnitude, frequency and location of vibration, they can judge the safety status of hull. In the design of acceleration sensor, the cantilever beam structure is the most used model. Although this structure can satisfy the requirement of frequency, its sensitivity will be limited by frequency, it is also vulnerable to lateral interference. Wang Hongliang designed a double-strength cantilever beam model, the working range of this structure is too small, when the frequency of vibration is about 80 Hz on the ship, the system will resonate and be damaged [1]. Zhang Dongsheng designed a FBG vibration sensor with a tubular model, which placed the mass block in the middle of the steel tube, pasted the mass block inside the steel tube with a double grating fiber, and changed the wavelength of the fiber through the vibration of the mass block [2]. This structure is complex in the manufacture of the sensor, and has weak anti-transverse interference ability. In order to satisfy the stability, sensitivity and measurement range of acceleration sensor system, a new type of sensor is proposed in the paper, it realizes the automatic monitoring of the bow slamming load of the hull structure, provides real and reliable data for the decision-makers on the ship, provides scientific basis. © Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 422–430, 2020 https://doi.org/10.1007/978-981-13-9409-6_50

A Fiber Bragg Grating Acceleration Sensor for Measuring

423

2 Theory 2.1

Basic Fiber Grating Sensor Theory

When a beam of bandwidth light enters the FBG, the diffraction phenomenon will occur. Because of the structure of FBG, the narrowband light of a certain wavelength is reflected, and the light that does not satisfy the condition of FBG is transmitted directly [3]. As shown in Fig. 1, the relationship between the central wavelength of the reflected light kB and the effective refractive index neff and the grating period is as follows:

Reflected light

FBG

Incident light

Incident spectrum

Transmission spectrum

Transmitted light

Reflectance spectrum

Fig. 1. Reflection and transmission spectra of FBG gratings

kB ¼ 2neff K

ð1Þ

It can be seen from the formula that the central wavelength of FBG reflected light mainly depends on the grating period K and the effective refractive index neff. When FBG detects the change of ambient temperature or strain, it will cause the change of grating period and effective refractive index, which will cause the shift of reflected light wavelength. Therefore, the monitoring of external temperature and strain can be indirectly converted to the measurement of FBG central wavelength. The corresponding relationship can also be used to measure other physical quantities, such as displacement, acceleration, pressure, angle, etc. 2.2

Temperature Characteristic

Because temperature and strain can have a direct effect on neff and K, neff and K vary with temperature and strain, so formula (1) can be written as: kB ¼ 2neff K ¼ 2neff ðe; T ÞKðe; T Þ By calculating its total differential, we can obtain:

ð2Þ

424

J. Yang et al.

    DkB 1 @neff 1 @K 1 @neff 1 @K þ þ De þ DT ¼ kB neff @e K @e neff @T K @T

ð3Þ

In order to reflect the influence of temperature and wavelength on the central wavelength of FBG, we write (4): DkB ¼ ½1  Pe De þ ½a þ nDT kB

ð4Þ

where Pe represents the elasto-optic coefficient of FBG, a represents the thermal expansion coefficient of FBG, n represents Thermo-optical coefficients for FBG. Temperature variation can lead to thermal expansion and thermo-optic effect, and then lead to FBG central wavelength shift [4]. 2.3

Axial Strain

When FBG is only subjected to axial stress, K changes, it can be expressed as: DK ¼ ex K

ð5Þ

where ex represents the axial strain at the position of the measuring point, and the refractive index of the optical fiber changes [5], which can be expressed as: Dneff n2eff ½P12  vðP11 þ P12 Þ ex ¼ neff 2

ð6Þ

where P11 and P12 both represent the Pulkes coefficient and the Poisson’s ratio of optical fibers. So formula (6) can be simplified as follows: DkB ¼ ð1  Pe Þex kB

ð7Þ

In quartz optical fibers, Pe = 0.22, So when the FBG is only subjected to axial stress: DkB ¼ 0:78ex kB

ð8Þ

In the formula (9), DL denotes the change of FBG length and L denotes the length of FBG. Generally, we use µe to express the strain, among 1e ¼ 1  106 le. ex ¼

DL L

ð9Þ

A Fiber Bragg Grating Acceleration Sensor for Measuring

425

3 Sensor Structure Analysis 3.1

The Structure of the Sensor

The structure of FBG accelerometer for measuring bow slamming load is shown in Fig. 2. It is mainly composed of sensor shell, sensor shell cover, accelerometer sensing device and hinge structure. The sensing element of acceleration sensing device is prestretched FBG. Accelerometer sensing device Shell

Hinge structure

Cover

Fig. 2. Structure diagram

When the grating is stretched above, the wavelength becomes larger, while the grating shrinks below. Because the structure we designed is symmetrical, the extension of the upper grating is equal to the shrinkage of the lower grating, so the change of the wavelength is twice as much as that of the single grating. We call it the differential structure. The accuracy of measurement is doubled by using this method. For hull structure, considering the harsh navigation environment and unstable temperature, the reference grating method is used to accurately measure ship acceleration signal. The reference fiber is only affected by temperature, so the reference grating is only used to measure temperature change [6]. This is also the most direct method of temperature compensation. In the manufacture of acceleration sensor, the choice of flexible hinge material is particularly important. In order to ensure the good performance of the sensor, we should select materials that can make the sensor have a larger frequency measurement range, higher sensitivity, better stability and better anti-lateral interference ability. Therefore, beryllium bronze as flexible hinge elastic material has the advantages of good elasticity, easy processing and corrosion resistance.

426

3.2

J. Yang et al.

Working Principle of Sensor

Elastic flexure hinge is the core of this design structure and the most important part of the whole structure design. Figure 3 is the schematic diagram of elliptical flexure hinge.

Fig. 3. Elliptical flexure hinges

In Fig. 3, one end of the hinge is fixed, the long half-axis of the ellipse is a, the short half-axis is b, and the thickness is d. After the structure is stressed, we introduce the centrifugal angle [4]. As shown in Fig. 4, if / is the centrifugal angle of the ellipse, then:

Fig. 4. Differential diagram of elliptical flexure hinge

hð/Þ ¼ 2b þ t  2b sin /

ð10Þ

x ¼ a  a cos /

ð11Þ

Since the hinges are symmetrical, then:

A Fiber Bragg Grating Acceleration Sensor for Measuring p

Z2 0

Zp

sin / ð2s þ 1  2s sin /Þ

3

d/ ¼ p 2

sin / ð2s þ 1  2s sin /Þ3

d/

427

ð12Þ

We learn to use Mathematica tool, after the operation of Mathematica software [7]: h¼

24Mak Edt3

ð13Þ

So the rotational stiffness of elliptical flexure hinges: K¼

M Ed 2 t ¼ h 24ak

ð14Þ

Formula (14) shows that the rotational stiffness of the structure is affected by many factors, which is inversely proportional to the long and short half axes of the ellipse. When the structure vibrates with a certain acceleration a along the vertical direction, it can be obtained by the moment balance. MaC  kDlh  Kh ¼ 0

ð15Þ

Formula (15) in which C is the distance from the mass center to the hinge center, h is the vertical distance from the highest point of the mass block to the mass center, k is the elastic coefficient of the optical fiber, K is the rotation stiffness. From this, the resonant frequency f0 of the sensor can be obtained: x0 1 ¼ f0 ¼ 2p 2p

rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi Kf þ Kt M

ð16Þ

Usually, the sensitivity of FBG accelerometer is expressed by the ratio of wavelength change to fiber acceleration. The wavelength changes are as follows: DkB ¼ ð1  Pe ÞkB eB

ð17Þ

The sensitivity is: P¼

ð1  Pe ÞkB eB a

ð18Þ

4 Vibration Testing of Sensors We have carried out many experiments on the sensor. In the experimental tests of different frequencies under the same acceleration, we selected one accelerations (±2.0 g). At each acceleration, we carried out standing frequency vibration at

428

J. Yang et al.

10–100Hz, the vibration time was 50 s, and the sampling frequency of demodulator was 2 kHz. Because of the interference in the vibration process of the shaking table, we choose Butterworth filter to eliminate the clutter when analyzing the data. We mainly analyze the data of 0–100 Hz. When analyzing and calculating with MATLAB, we set the stopband attenuation to be greater than 40 dB and the passband ripple to be less than 3 dB. Because FBG sensor with differential structure is used to analyze the change of wavelength by using the difference between two wavelengths, the change of wavelength is twice as much as that without differential structure. When the acceleration is +2.0 g, the difference between frequency and wavelength changes with time after filtering, as shown in Fig. 5.

Fig. 5. Change of (±2.0 g) wavelength with frequency and time

In Fig. 5, the x-axis represents time, the y-axis represents frequency, and the z-axis represents frequency. The measurement wavelength of the sensor is stable between 7.9 and 9.1 nm. Taking the positive and negative acceleration of the vibration signal in Fig. 5 as an example, 10 time periods are intercepted from 10 Hz to 100 Hz frequency, 1000 points are intercepted in each time period, and the peak value of the waveform is fitted by fitting curve. A total of 100 peaks were obtained. The variance of these 100 values is 0.00175. 100 values are installed in MATLAB software. From Fig. 6, we can conclude that the fitting curve is y = 0.0014x + 1.1038, the root mean square error of the curve is 0.007865, the determination coefficient (R-square) is 0.9551, and the slope of the curve is 0.0014. From this, we can conclude that under the condition of (±2.0 g 10 − 100 Hz), the change of wavelength drift and frequency is not very different, about 1.18 nm, and the sensitivity of the acceleration sensor is 295 pm/g, i.e. 30.10 pm/mg−2. Because of the demodulation of the general demodulator. The accuracy is 5 pm, which is proportional to:

A Fiber Bragg Grating Acceleration Sensor for Measuring

a s ¼ q p

429

ð19Þ

In the formula, a denotes the acceleration measured by the sensor, q denotes the minimum acceleration measurable by the sensor, s denotes the change of the sensor wavelength, and p denotes the minimum demodulation accuracy of the demodulator. So the minimum acceleration value that the sensor can measure is 0.17 mg−2.

Fig. 6. Wavelength peak and frequency fitting curve

5 Conclusion In this paper, a fiber Bragg grating accelerometer for measuring bow slamming load is designed. The use of flexible hinge structure greatly improves the sensitivity of the sensor and realizes the automatic monitoring of bow slamming load on the hull. In view of the harsh working conditions such as ocean humidity and surge, the durability and reliability of the FBG accelerometer for measuring bow slamming load are guaranteed by using the aluminium alloy shell and shell cover. The experimental results show that the sensitivity of the sensor is 295 pm/g in the frequency range of 0–100 Hz. Acknowledgements. This paper is supported by Natural Youth Science Foundation of China (61501326, 61401310). It also supported by Tianjin Research Program of Application Foundation and Advanced Technology (16JCYBJC16500).

References 1. Wang H, Zhou H, Gaohong H (2013) Fiber Bragg grating acceleration vibration sensor based on double intensity cantilever beam. Optoelectr Laser 4:635–641 2. Dongsheng Zhang, Kaifang Yao, Pei Luo, Desheng Jiang (2009) A new type of high frequency acceleration sensor based on fiber Bragg grating. J Instrum Instrum 30(07):1400–1403

430

J. Yang et al.

3. Donoho DL (2006) Compressed sensing. IEEE Trans Inf Theory 52(4):1289–1306 4. Zhang F, Jiang M, Yan Q etc. (2017) High-sensitivity low-frequency fiber grating acceleration sensor based on flexible hinge structure, vol 3. Infrared and laser engineering 5. Liang L, Li D, Qiu L, Xu G (2016) An FBG acceleration sensor based on flexible hinge. J Optoelectr Laser 04:347–352 6. Wei Wang (2007) Study of key technology on ship hull structure health monitoring with fiber Bragg grating. Tianjin University, Tianjin 7. Zhang Y, Zhang W, Zhang Y et al (2017) 2-D medium–high frequency fiber Bragg gratings accelerometer. IEEE Sens J 17(3):614–618

Improving Indoor Random Position DeviceFree People Recognition Resolution Using the Composite Method of WiFi and Chirp Xiaokun Zheng(&), Ting Jiang, and Wenling Xue Key Labs of Universal Wireless Communications, Beijing University of Posts and Telecommunications, 100876 Beijing, China [email protected]

Abstract. To improve device-free people recognition resolution by using WiFi signal, we research the composite method of WiFi preamble and radar chirp signal, and present an indoor designated area device-free people recognition trial using the composite preamble in this paper. We carry out a different heights human recognition with random position in a small designated indoor area, based on finite-difference-time-domain (FDTD) calculation. The simulations for 802.11a show that recognition resolution is improved and is better than original WiFi preamble signal. Meanwhile the attainable accuracy is about 94% in our test. The given method may be applied to device-free people queue and product management in shopping center, such as further distinguish adults and children, as well as toilet fall detection and other health monitoring fields in a private scenario. Keywords: Composite preamble  Integration sensing and communication Device-free people recognition  Random position in designated area



1 Introduction Integrated sensing and communication is a hotspot issue in communication terminal. It is important to detect, track, count and identify human activity from people that do not carry any device. Meanwhile WiFi radio receiver is employed as a main sensor. Image-based recognition cannot work well in a dim dark or private scenario, so using widely distributed WiFi signal for recognition is an attractive technology. In [1, 2], WiFi signals were used for device-free human fall detection and gesture recognition. In [3] respiration monitoring for healthcare is discussed, where high resolution is needed. [4] presents a WiFi based driver state recognition through body movement to prevent loss of lives due to reckless driving. [5] presents a low cost and real-time parking occupancy monitoring system based on WiFi signals. Then how to improve the recognition resolution of WiFi signal is a state of the art issue. One way is to improve the recognition algorithm and select different physical characteristics. Such as using SVM, deep learning or neural network classification algorithm, etc. [6] uses a dynamic time warping to improve SVM recognition rate; [7] presents a wavelet de-noising scheme meanwhile selects best 9 characters for people © Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 431–437, 2020 https://doi.org/10.1007/978-981-13-9409-6_51

432

X. Zheng et al.

counting indoor. In order to extract more precise signal physical features, [8] presents a method of many continuous WiFi data symbols making FFT together; and multiple receiving antenna forming multi-dimensional features also can be used [2]. The other way is modifying the WiFi signal to enhance its recognition resolution. The most important way is to mix radar chirp signal. [9] uses time division mode and sets up a fixed slot to chirp signal for smart transport sensing; [10, 11] use frequency division mode and allocate subcarriers to sensing signal. Moreover, cancellation technology is used to reduce interference between the sensing subcarrier and the communication subcarrier [10]. The conventional multiplexing modes have some effects on the transmission frame and data rate. In this paper, integrated waveform design is studied. Without an extra bandwidth or time slot occupied, a chirp composite scheme is studied based on original WiFi preamble. Data section of WiFi changes constantly, so it’s a good choice to use preamble. Previous study in our laboratory shows that the original preamble itself has limited recognition resolution [12, 13]. And the study focused on time domain overlay composite based on receiver cancellation [12] shows that only a short low power s (t) can be composed so as to reduce the influence of deletion residue on channel estimation (CE). And our study focused on frequency domain amplitude composite superposition [13] shows that it has exactly the same excellent anti-fading performance of CE as original WiFi preamble. Although only the frequency amplitude information of chirp is composite, it has good resolution improvement effect for target recognition in fixed position. And hence in this paper, we will focus on people recognition with random position in a designated indoor area by using the frequency domain amplitude composition (FDAC) method. Meanwhile current researches are mostly focused on adult people counting such as [7, 14], so we will also focus on people recognition with different heights. The main contributions of this paper include: (1) We present a different heights human body recognition trail with random position in a designated indoor area, by using the FDAC WiFi preamble. (2) We compare the recognition result of the composite preamble with the result by using original preamble. The results show that: (1) When the designated area is small, the recognition resolution is obviously higher than original WiFi preamble, and satisfactory recognition results can be obtained by using the composite preamble directly. (2) When the designated area is a bit larger, the recognition rate is not ideal even the composite preamble is used. In this case, combining precise positioning technology such as MIMO antenna [2] or UWB [15, 16] can be considered, and then make device-free people recognition. Or the results can be divided into multiple classifications, such as tall/short man of subarea A, tall/short man of subarea B, etc. Rather than only dividing into tall/short man two classifications, expectly more satisfactory recognition results can be achieved. The rest of this paper is organized as follows. In Sect. 2 we introduce the composite preamble method used in this paper, then present experiment principle of the recognition with random position. In Sect. 3, we introduce the FDTD scenario and calculation setting, then present the recognition result and compare the composite method with the method using original preamble. In Sect. 4 the conclusion is provided.



2 Composite Preamble Scheme

The frequency domain of the 802.11a long preamble [17] has constant amplitude, L = {0, …, 0, 1, 1, −1, −1, 1, 1, −1, 1, −1, 1, …, −1, 1, 1, 1, 1, 0, …, 0}; the original preamble La(t) is the IFFT of L, i.e. La = IFFT(L). The real part in the time domain is Lar = [−0.156, 0.012, 0.092, −0.092, −0.003, …], and the imaginary part is Lai = [0, −0.098, 0.106, −0.115, −0.054, 0.074, …]. The common radar chirp signal s(t) is s(t) = U(t) e^{j 2\pi t (f_0 + \frac{f_{slope}}{2} t)}. To improve the recognition accuracy, in [12] we presented a time domain overlay composite scheme based on receiver cancellation:

P_r(t) = \begin{cases} L_a(t), & 0 \le t \le T_L - T_{chrp} \\ L_a(t) + k \cdot s(t), & T_L - T_{chrp} < t \le T_L \end{cases}   (1)

However, only a short, low-power s(t) can be composed in this way, so as to reduce the influence of the deletion residue on channel estimation (CE). In addition, we have also studied using the convolution of the chirp and the original preamble as a new long preamble; although the resolution improvement is more notable, the anti-fading performance of CE is slightly affected. In [13] we presented a composite scheme based on frequency domain amplitude composition (FDAC):

P'_r(t) = IFFT[L + abs(FFT(k \cdot s(t)))]   (2)

Although only the amplitude information is overlaid, the composite preamble has richer spectral features than the original WiFi preamble, so it also brings a certain resolution improvement in the trial of identifying large/small balls at a fixed position. Meanwhile, the CE accuracy and anti-fading performance of the composite preamble are completely unaffected; they are exactly the same as those of the original preamble. This paper therefore focuses on the FDAC method. All of our previous studies addressed recognition at a fixed position, whereas in real scenes the targets are mostly at random locations within a designated area. To make the research more applicable, in this paper we study recognition in a designated area with random positions and distinguish human bodies of different heights. The given method may be applied to device-free people queue or product management in shopping centers, such as further distinguishing adults and children, as well as to toilet/bathroom fall detection and other indoor health monitoring fields.
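As a concrete illustration of the FDAC construction in Eq. (2), the NumPy sketch below builds a composite preamble from a frequency-domain sequence L and a chirp s(t). The subcarrier layout, the chirp parameters f0 and f_slope, and the scaling factor k are illustrative assumptions, not the values used in this work.

```python
import numpy as np

# Minimal sketch of the FDAC composite preamble of Eq. (2), assuming a
# 64-subcarrier long training sequence L (placeholder values only) and an
# arbitrary chirp scaling factor k.
N = 64                                   # FFT size of the long preamble
fs = 20e6                                # 20 MHz sampling rate
t = np.arange(N) / fs                    # time axis of one preamble symbol

# Placeholder frequency-domain preamble: +/-1 on used subcarriers, 0 on guards.
L = np.zeros(N)
L[1:27] = np.random.choice([-1.0, 1.0], 26)
L[38:64] = np.random.choice([-1.0, 1.0], 26)

# Chirp s(t) = U(t) * exp(j*2*pi*t*(f0 + f_slope/2 * t)), U(t) = 1 here.
f0, f_slope, k = 1e6, 2e12, 0.5
s = np.exp(1j * 2 * np.pi * t * (f0 + 0.5 * f_slope * t))

# Eq. (2): only the chirp's spectral amplitude is added to L.
P_composite = np.fft.ifft(L + np.abs(np.fft.fft(k * s)))
P_original = np.fft.ifft(L)              # original preamble, for comparison
```

Because only abs(FFT(k·s(t))) is added, the phase of L, which carries the channel-estimation reference, is left untouched; this is why the CE performance stays identical to that of the original preamble.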

3 Indoor Device-Free People Recognition

When a wireless signal is used to recognize a target, it is more common that the target is in a specified area rather than at a fixed coordinate point, for example in border intrusion detection [18], where it is necessary to find out in time whether the invading object is a human or a small animal, or in elderly toilet/bathroom fall detection. So in this section, recognition of human bodies of different heights at random positions in a small designated indoor area is studied, and the FDAC preamble is used. Meanwhile, for the



indoor closed scenario, FDTD simulation provides more accuracy than a general multipath model [19, 20], hence we use the FDTD method and add white noise after the FDTD calculation. Currently, full-wave three-dimensional electromagnetic field simulation software based on the FDTD method mainly includes the American XFDTD and the domestic EastWave platform, both of which directly and precisely simulate the field distribution.

3.1 Scenario and Calculation Setting

The size of the indoor recognition lab scene is 2.9 × 1.5 × 2 m, the distance between the RX and TX antennas is 1.6 m, and the length of each antenna is 12 cm. The recognition target is a standing human of height 150 or 170 cm. The position of the target is random within a designated area of 60 × 60 cm, and the orientation of the target's toes is random within 60° when standing. Figure 1 shows the FDTD scenario. See Table 1 for the other calculation settings.

Fig. 1. FDTD scenario and device-free people target: (a) the target in a designated area between the TX and RX sides; (b) 3D human models of different heights

3.2 Signal Setting

Referring to 802.11a and the FDAC composite scheme in Eq. (2), we set the FDTD excitation source as shown in Table 1; the average power of the chirp signal is twice that of the WiFi preamble, i.e. p_chrp = 2 p_L. See Table 1 for the other signal parameters. At the receiver, after normalization of the received composite preamble, time-domain characteristics of the FDAC composite preamble are extracted, including the energy, excess delay, rms delay, maximum value, standard deviation, peak value, etc. The SVM functions svmtrain and svmpredict in MATLAB 2016 were used as the classifier in both experiments.
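A minimal Python sketch of this feature-extraction and classification step is given below. It uses scikit-learn's SVC in place of MATLAB's svmtrain/svmpredict, and the exact feature definitions are reasonable assumptions rather than the code used in this work.

```python
import numpy as np
from sklearn.svm import SVC

def time_domain_features(h):
    """Features named in the text, computed from one received, normalized
    composite-preamble response vector h."""
    p = np.abs(h) ** 2
    n = np.arange(len(p))
    mean_delay = np.sum(n * p) / np.sum(p)
    rms_delay = np.sqrt(np.sum(((n - mean_delay) ** 2) * p) / np.sum(p))
    return np.array([
        np.sum(p),            # energy
        mean_delay,           # excess (mean) delay
        rms_delay,            # rms delay spread
        np.max(np.abs(h)),    # maximum value
        np.std(np.abs(h)),    # standard deviation
        np.max(p),            # peak power
    ])

# X_train / X_test would hold one feature row per FDTD realization (600 / 1400
# samples in Table 1); y holds the class label (150 cm vs. 170 cm human).
clf = SVC(kernel="rbf")        # stands in for MATLAB's svmtrain / svmpredict
# clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```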


3.3 Result of the Experiment

In this subsection we present the results of recognizing humans of different heights at random positions by using the FDAC composite preamble, as well as the original WiFi preamble, in Figs. 2 and 3. In Fig. 2, for fixed-point target recognition, the composite preamble can attain 100% accuracy even at low SNR; previous studies also show that fixed-point recognition is improved obviously [12, 13]. Also in Fig. 2, for recognition at random positions in a small area, although the recognition rate decreases, it still shows an obvious improvement over the original preamble.

Table 1. Calculation and signal settings
3D human model: height 150 cm or 170 cm; material: musculature, permittivity 47.5 [21]
Training samples: 600; test samples: 1400
FDTD time step: 8e-12 s; spatial grid: 50 × 150 × 300; space size: 2.9 × 1.5 × 2 m
Absorbing boundary (x, y, z): PML = 6
Designated area: 60 × 60 cm; target toes orientation: random within 60°
Duration: 6.4 μs = 2T_FFT; bandwidth: 20 MHz; carrier frequency: 5.2 GHz; sampling frequency f_s: 80 MHz
Time-domain features extracted: excess delay, standard deviation, maximum value, peak value, etc.

Fig. 2. Human recognition with random position: recognition rate versus SNR (dB) for the composite preamble with fixed position, the composite preamble, and the original WiFi preamble (150 cm and 170 cm height humans)

In Fig. 3, when the area gets a bit larger, the original preamble is completely unable to perform the recognition, and the recognition rate of the composite signal is not ideal either. Combining positioning technology such as MIMO or UWB can be considered, and more satisfactory results can then be expected. Alternatively, the results can be divided into multiple classes, such as tall/short person of subarea A, tall/short person of subarea B, etc., instead of only the two classes tall/short person.


Fig. 3. Human recognition in a larger area: recognition rate versus SNR (dB) for the composite preamble and the original WiFi preamble (150 cm and 170 cm height humans)

4 Conclusions

To improve the target recognition resolution of the WiFi signal, we researched the composite method of the WiFi preamble and a radar chirp, and carried out recognition of humans of different heights at random positions in a designated area. The results show that, using the FDAC composite signal, which has exactly the same CE capability as the original WiFi preamble, the recognition ability is still improved compared with the original preamble when the target is no longer at a fixed coordinate point. If the designated area is small, satisfactory results can be obtained simply by using the composite signal. However, when the area is a bit larger, the recognition rate of the composite signal is not ideal either; precise positioning technology could then be combined, or the results could be divided into multiple classes, such as tall/short person of subarea A, tall/short person of subarea B, etc., rather than only the two classes tall/short person.

Acknowledgements. This work was supported by the National Natural Science Foundation of China (No. 61671075) and the Major Program of the National Natural Science Foundation of China (No. 61631003). The authors thank the anonymous reviewers for their helpful comments and suggestions to improve the paper quality.

References 1. Adib F, Katabi D (2013) See through walls with WiFi! ACM SIGCOMM 43(4):75–86 2. Hongbo Jiang CC (2018) Smart home based on WiFi sensing: a survey. IEEE Access 6:13317–13325 3. Usman Mahmood ZK (2017) A deep learning framework using passive WiFi sensing for respiration monitoring. IEEE GLOBECOM: 1–6 4. Arshad S, Feng C (2018) “SafeDrive-Fi: a multimodal and device free dangerous driving recognition system using WiFi. In: IEEE international conference on communications (ICC) Kansas City, USA 5. Won M, Zhang Y (2018) WiParkFind: finding empty parking slots using WiFi. In: IEEE international conference on communications (ICC), Kansas City, USA



6. Zhou G, Ting J (2015)A new method of dynamic gesture recognition using Wi-Fi signals based on DWT and SVM improved by DTW. In: 2015 IEEE global conference on signal and information processing (global SIP), 12.14–12.18 (2015) 7. Zou H, Zhou Y (2017) FreeCount: device-free crowd counting with commodity WiFi. In: IEEE GLOBECOM, Singapore 8. Pu Q, Gupta S, Gollakota S et al (2013) Whole-home gesture recognition using wireless signals. In: Proceedings of the 19th annual international conference on mobile computing and networking. ACM, pp 27–38 9. Liang HAN, Ke WU (2012) 24-GHz integrated radio and radar system capable of time-agile Wireless communication and sensing. IEEE Trans Microw Theory Tech 60(3):619–631 10. Sit YL (2014) MIMO OFDM radar with communication and interference cancellation features. In: 2014 IEEE radar conference, Cincinnati, 19–23.11 11. Mishra AK, Inggs M (2014) FOPEN capabilities of commensal radars based on whitespace communication systems. In: IEEE electronics, computing and communication technologies, Bangalore, pp 1–5 12. Zheng X, Jiang T (2018) A composite method for improving the resolution of passive radar target recognition based on WiFi signals. Eurasip J Wireless Commun Netw, Springer. Available https://doi.org/10.1186/s13638-018-1224-0 13. Zheng X, Jiang T (2018) A frequency domain composite preamble to integrate sensing and communication. In: Springer 7th international conference on communications signal processing and systems. China, Dalian 14. Sobron I, Del Ser J (2018) Device-free people counting in IoT environments: new insights, results and open challenges. IEEE J Internet Things Early Access: 1–13 15. De Angelis G, Moschitta A (2016) Positioning techniques in indoor environments based on stochastic modeling of UWB round-trip-time measurements. IEEE Trans Intell Transp Syst 17(8) 16. Mahfouz MR, Zhang C (2008) Investigation of high accuracy indoor 3-D positioning using UWB technology. IEEE Trans Microwave Theory Tech 56(6):1316–1330 17. IEEE Std 802.11a Part 11 (1999) Wireless LAN medium access control MAC and physical layer (PHY) specifications 18. Arjun D, Indukala PK (2017) Border surveillance and intruder detection using wireless sensor networks: a brief survey. In: IEEE international conference on communication and signal processing (ICCSP). Chennai, Indian 19. Maloney G, Smith GS (1989) Accurate computation of the radiation from simple antennas using the finite-difference time-domain method. In: Digest on antennas and propagation society international symposium 20. Ohtani T (2013) A stability improvement technique using PML condition for the threedimensional nonuniform mesh nonstandard FDTD method. IEEE Trans Magn: 1569–1572 21. Li G, Yarning, Lei G (1995) The measurement of the permittivity of a kind of phantom musletissue at microwave frequency. Chin J Med Phys 04:243–245+242



Xiaokun Zheng was born in Baoding, Hebei Province, China, in 1976. He received the B.S. and M.S. degrees in information engineering from Northwestern Polytechnical University, Xi'an, in 2001, and has been pursuing a Ph.D. degree in communication engineering at Beijing University of Posts and Telecommunications since 2012. From 2001 to 2002 he was an assistant engineer with the Telemetry and Telecontrol Institute in Beijing. Since 2003 he has been a lecturer with the College of Electronics, Hebei University. He is the author of more than 10 articles and 1 invention. His research interests include short-range wireless communication and wireless sensor networks.

Ting Jiang was born in Weiyuan, Sichuan Province, in 1962. He received the Ph.D. degree from Yanshan University, Qinhuangdao, Hebei Province, in 2003. Since 2009 he has been a Professor with the Key Laboratory of Universal Wireless Communications, Beijing University of Posts and Telecommunications. He is the author of more than 50 articles and more than 10 inventions. His research interests include wireless broadband interconnection, information theory, short-distance wireless communication and wireless sensor networks. He has hosted 2 national science foundation projects, 1 national major technical project and many enterprise projects.

Optimal Design of an S-Band Low Noise Amplifier

Hai Wang(&), Zhihong Wang(&), Guiling Sun, Ming He, Ying Zhang, Ke Liang, and Rong Guo

Electronic Information Laboratorial Teaching Center, College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, China
{wanghai,wanghao801226}@nankai.edu.cn

Abstract. A low noise amplifier (LNA) is designed, which can work stably at 2.45 GHz. The noise figure (NF) is less than 1 dB and the transmission gain is greater than 14 dB. The ATF54143 chip from Agilent is the core part of this LNA. ADS simulation software is utilized to analyze the noise figure and scattering parameters (S-parameters) and to design the bias circuit together with the stabilization of the amplifier during the whole process. The inductance of the transistor source in the schematic diagram is replaced by a short-circuit microstrip line. With the addition of negative feedback, the optimal design of stability and parameters in the circuit is completed. The circuit module is then manufactured according to the PCB layout. The test data illustrate that the actual parameters of the LNA satisfy the design requirements.

Keywords: Low noise amplifier · Noise figure · Gain · Scattering parameter · Stabilization

1 Introduction

A low noise amplifier (LNA) is an amplifier with a very low noise figure, which is an important and indispensable circuit module in wireless communication [1–3]. The main operating frequency range considered here is 2–4 GHz (S-band) [4]. An LNA is generally utilized as a preamplifier for various radio receivers at high and intermediate frequencies, as well as an amplifier circuit for high-sensitivity electronic detection equipment. It is also applied to amplify the weak signals received from the antenna and to suppress the interference caused by noise and clutter. Therefore, the low noise amplifier plays a significant role in improving the overall performance of a circuit system. In recent years, LNAs have been extensively utilized in microwave communication, radar reconnaissance, remote control systems, geodetic mapping, electronic countermeasures, GPS receivers and other high-precision microwave measurement systems [5, 6]. Low noise figure, high power gain, suitable bandwidth, stable working performance and wide dynamic range are the fundamental requirements for designing an LNA. Moreover, parameters such as noise figure, gain and linearity are of vital importance in a low noise amplifier. The noise figure of the preamplifier has the greatest influence on the microwave system [7]. Its gain will restrict the noise suppression



degree of the subsequent circuit. And its linearity has a major impact on the linearity of the whole system and the common-mode noise rejection ratio.

2 Design and Analysis

2.1 Performance Indexes

The basic design requirements of a low noise amplifier circuit are low noise figure, high gain, stable operation and sufficient dynamic range.

1. Noise Figure

The ratio of signal power to noise power in a practical circuit is called the signal-to-noise ratio (SNR). When a weak signal is amplified, the noise of the amplifier itself may critically interfere with the signal; this noise should therefore be suppressed in order to improve the SNR at the output terminal.

F = SNR_input / SNR_output = (Signal_input / Noise_input) / (Signal_output / Noise_output)   (1)

The noise factor F is defined as the ratio of the SNR at the input end to that at the output end of the amplifier, and directly measures the noise performance of the LNA [8]. Its physical significance is the degree to which the SNR decreases, after the signal passes through the low noise amplifier, due to the noise generated by the LNA itself. In engineering design and practical application, the corresponding noise figure is generally expressed in decibels as below.

NF = 10 \log_{10} F   (2)

For a single-stage LNA, the formula for calculating the noise figure is as below [9–11].

NF = NF_{min} + \frac{4 R_n}{Z_o} \cdot \frac{|\Gamma_s - \Gamma_{opt}|^2}{(1 - |\Gamma_s|^2) |1 + \Gamma_{opt}|^2}   (3)

NF_min is the minimum noise figure of the transistor, which is determined by the tube itself. Z_o is the standard reference impedance of 50 Ω. Γ_opt, Γ_s and R_n are respectively the optimum source reflection coefficient at which NF_min is achieved, the source reflection coefficient seen by the transistor input, and the transistor equivalent noise resistance.

NF = NF_1 + \frac{NF_2 - 1}{G_1} + \frac{NF_3 - 1}{G_1 G_2} + \cdots + \frac{NF_n - 1}{G_1 G_2 \cdots G_{n-1}}   (4)



For a multi-stage cascaded network, in contrast, the formula for calculating the noise figure is as above [12], where NF_n and G_n are respectively the noise figure and the gain of the nth-stage low noise amplifier. At present, the usable frequency of GaAs small-signal field effect transistors is higher than that of bipolar low noise transistors, and their noise figure can be less than 1 dB.

2. Gain

The gain of the amplifier refers to the ratio of output power to input power:

G = P_out / P_in   (5)
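To make the cascade relation of formulas (4) and (5) concrete, the short Python sketch below converts per-stage noise figures and gains from dB, applies formula (4), and returns the overall noise figure in dB. The two-stage chain used in the example (an LNA followed by a noisier stage) is purely hypothetical.

```python
import math

def db_to_linear(x_db):
    return 10 ** (x_db / 10.0)

def cascaded_noise_figure_db(nf_db, gain_db):
    """Friis cascade of formula (4); per-stage NF and gain are given in dB."""
    F = [db_to_linear(nf) for nf in nf_db]      # noise factors
    G = [db_to_linear(g) for g in gain_db]      # power gains, formula (5)
    total, g_prod = F[0], 1.0
    for f, g in zip(F[1:], G[:-1]):
        g_prod *= g                             # G1, G1*G2, ...
        total += (f - 1.0) / g_prod
    return 10 * math.log10(total)

# Hypothetical chain: first-stage LNA (0.56 dB NF, 14.45 dB gain) followed
# by a noisier stage (8 dB NF, 10 dB gain).
print(cascaded_noise_figure_db([0.56, 8.0], [14.45, 10.0]))   # about 1.2 dB
```

Because each later-stage contribution is divided by the preceding gains, raising the first-stage gain pushes the overall noise figure toward the first-stage value, which is exactly the trade-off discussed next.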

From formula (4), increasing the gain of the LNA obviously has a positive effect on reducing the noise figure of the system. However, excessive gain in the low noise amplifier produces some negative issues, not only affecting the dynamic range of the whole system but also being harmful to the circuit. Therefore, the gain of the LNA should be considered in combination with the noise figure of the overall system and the dynamic range of the receiver, so as to attain a balanced optimal state.

3. Stabilization

The stabilization of the low noise amplifier is determined by the S-parameters of the transistor and the reflection coefficients of the input and output ports. The reflection coefficient is related to the input and output impedance and varies with the operating frequency; similarly, the scattering parameters of the transistor are also related to frequency and to the bias circuit. The S-parameters, regularly used in analyzing microwave and radio frequency circuits, are network parameters based on the relation between incident and reflected waves [13, 14]. Both the signal reflected at a device port and the signal transferred from that port to another are capable of describing the operation of the circuit network [15, 16]. Suppose port 1 is the input terminal of the signal in a two-port network and port 2 is the output. Then S11 represents the return loss, i.e. how much energy is reflected back to the source (port 1); to guarantee better performance of the network, the return loss should be as small as possible, and S11 is generally recommended to be less than 0.1, i.e. −20 dB (20·lg 0.1). Conversely, the insertion loss (S21) should be as large as possible; it indicates how much energy is transferred to the destination (port 2), so a larger S21 demonstrates higher transmission efficiency. S21 is generally recommended to be greater than 0.7, i.e. −3 dB (20·lg 0.7); this value is 1 in the ideal case, or 0 dB. A low noise amplifier is generally designed to satisfy formulas (6)–(9) simultaneously, in order to ensure that the LNA is absolutely stable. K is called the stability discriminating coefficient [17].

K = \frac{1 - |S_{11}|^2 - |S_{22}|^2 + |\Delta|^2}{2 |S_{12} S_{21}|} > 1   (6)

|S_{11}|^2 < 1 - |S_{12} S_{21}|   (7)

|S_{22}|^2 < 1 - |S_{12} S_{21}|   (8)

|\Delta| = |S_{11} S_{22} - S_{12} S_{21}|   (9)

If the above formulas do not hold, the amplifier is called potentially unstable. A potentially unstable amplifier will self-oscillate only when the source or load impedance is in a certain range; otherwise it can operate normally in most cases. For an amplifier in a potentially unstable state, either resistance or feedback is generally introduced to match the other components in the circuit system, in order to eventually meet the above-mentioned conditions.
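A small Python sketch of the stability test in formulas (6)–(9) is given below; the S-parameter values are hypothetical single-frequency numbers, not the measured data of the ATF54143.

```python
import cmath
from math import radians

def stability_check(s11, s12, s21, s22):
    """Evaluate formulas (6)-(9) at one frequency point."""
    delta = s11 * s22 - s12 * s21                     # formula (9)
    K = (1 - abs(s11) ** 2 - abs(s22) ** 2 + abs(delta) ** 2) / (2 * abs(s12 * s21))  # formula (6)
    cond7 = abs(s11) ** 2 < 1 - abs(s12 * s21)        # formula (7)
    cond8 = abs(s22) ** 2 < 1 - abs(s12 * s21)        # formula (8)
    return K, abs(delta), (K > 1) and cond7 and cond8

# Hypothetical transistor S-parameters at one frequency (magnitude, angle in degrees).
s11 = cmath.rect(0.70, radians(150))
s12 = cmath.rect(0.05, radians(30))
s21 = cmath.rect(5.00, radians(60))
s22 = cmath.rect(0.45, radians(-40))

K, mag_delta, absolutely_stable = stability_check(s11, s12, s21, s22)
print(f"K = {K:.2f}, |Delta| = {mag_delta:.2f}, absolutely stable: {absolutely_stable}")
```

In practice the same check is run over the whole frequency sweep; if it fails at any frequency, the source resistance or negative feedback mentioned above is added until all conditions hold.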

2.2 Design Proposal

In order to better research the design method of a low noise amplifier, the ATF54143 produced by Agilent is selected as the core chip in this paper; it is an enhancement-mode high electron mobility transistor in a plastic package. The chip has the characteristics of high gain, high linearity and low noise, and is suitable for LNA circuits of 450 MHz–6 GHz wireless systems. The ATF54143 has a minimum noise figure of 0.51 dB and a maximum gain of 16 dB at a suitable working frequency with a voltage of 3 V and a current of 40 mA. A low noise amplifier operating stably at 2.45 GHz is designed in this paper, with a noise figure of less than 1 dB and a gain of more than 14 dB. The design procedure is shown in Fig. 1.

Fig. 1. LNA design procedure: transistor selection (NF, S-parameters, gain, etc.) → DC analysis to determine the DC operating point → bias circuit design and stability analysis → noise figure circle design and input matching → output matching design for maximum gain → implementation of the matching network → schematic simulation and optimization → layout design and PCB manufacture



3 Realization and Measurement

The circuit schematic diagram of the low noise amplifier, which satisfies the stability requirement, is designed with ADS simulation software, as demonstrated in Fig. 2. The ATF54143 chip is selected as the core component, as previously mentioned.

Fig. 2. Schematic diagram of LNA

On this basis, the schematic diagram is further modified. The ideal isolation components and radio-frequency choke elements in the circuit diagram are substituted by actual components. In addition, the two inductances of the transistor source in the schematic diagram are replaced by short-circuit microstrip lines, and negative feedback is added to the model. The modified and optimized schematic diagram is illustrated in Fig. 3.

Fig. 3. Optimized schematic diagram

The scattering parameters and noise figure of the modified model, taking into account the introduced noise factor and the input impedance matching, are simulated and



analyzed by ADS software. According to simulation results directly shown in Figs. 4 and 5, the gain of LNA is approximately 14.6 dB and the noise figure is less than 0.5 dB at 2.45 GHz. Besides, both the input and the output reflection coefficients are below −17 dB, which indicates the low noise amplifier model completely meets the design requirements.

Fig. 4. S-parameter simulation results (panels a–d)

The PCB layout is designed (shown in Fig. 6) after the circuit simulation of the LNA schematic diagram is accomplished. The practical circuit module is then manufactured and tested on a platform including an Agilent N9310A radio-frequency signal generator, an N9320B spectrum analyzer, an AV3618 vector network analyzer and other laboratory instruments. The test data are shown in Table 1 and are basically in agreement with the simulation results. For comparison, data from some references are also listed in the table, which indicates that the design of this work has certain advantages.


Fig. 5. Noise figure simulation result

Fig. 6. PCB layout of LNA

Table 1. Comparison with other references
Ref.      | Freq. (GHz) | NF (dB) | Gain (dB) | S11 (dB)
[2]       | 2.45        | 2.3     | 14        | −25
[18]      | 2.45        | 2.8     | 9.4       | −12.6
[19]      | 2.45        | 4.98    | 13.9      | −14
[20]      | 2.4         | 0.01    | 17        | −15
This work | 2.45        | 0.56    | 14.45     | −17.75




4 Conclusion

The main performance indicators that must be considered comprehensively when designing a low noise amplifier are analyzed in this paper, and the corresponding design proposal and circuit schematic diagram are proposed. ADS simulation software is utilized to analyze the noise figure and scattering parameters; moreover, the stabilization and relevant indexes of the circuit are optimized. The actual circuit module is manufactured on the basis of the PCB layout, and the test results prove that the original design requirements are completely satisfied. The entire design and implementation procedure provides a commendable reference for the design and practice of related engineering applications.

Acknowledgements. This work was supported by the self-made experimental teaching instrument and equipment project fund of Nankai University, and the Electronic Information Laboratorial Teaching Center.

References 1. Liscidini A, Martini G, Mastantuono D, Castello R et al (2008) Analysis and design of configurable LNAs in feedback common-gate topologies. IEEE Trans Circuits Syst II Express Briefs 55:733–737 2. Hashemi H, Hajimiri A (2002) Concurrent multiband low-noise amplifiers-theory design and applications. IEEE Trans Microw Theory Tech 50:288–301 3. Sabzi M, Medi A (2019) Analysis and design of multi-stage wideband LNA using simultaneously noise and impedance matching method. Microelectron J 86:97–104 4. Degiovanni A, Bonomi R, Garlasché M et al (2018) High gradient RF test results of S-band and C-band cavities for medical linear accelerators. Nucl Instrum Methods Phys Res Sect A 890:1–7 5. Khabbaz A, Sobhi J, Koozehkanani ZD (2018) A sub-mW 2.9-dB noise figure Inductor-less low noise amplifier for wireless sensor network applications. AEU Int J Electron Commun 93:132–139 6. Girard M, Dubois T, Hoffmann P et al (2018) Effects of HPEM stress on GaAs low-noise amplifier from circuit to component scale. Microelectron Reliab 88–90:914–919 7. Jafarnejad R, Jannesari A, Sobhi J (2017) A sub-2-dB noise figure linear wideband low noise amplifier in 0.18 µm CMOS. Microelectron J 67:135–142 8. Nordmeyer-Massner JA, De Zanche N, Pruessmann KP (2011) Noise figure characterization of preamplifiers at NMR frequencies. J Magn Reson 210:7–15 9. Caddemi A, Cardillo E (2019) On the microwave noise figure measurement: a virtual approach for mismatched devices. Measurement 137:116–121 10. Caddemi A, Cardillo E, Crupi G (2016) Comparative analysis of microwave low-noise amplifiers under laser illumination. Microw Opt Tech Lett 58(10):2437–2443 11. Davidson AC, Leake BW, Strid E (1989) Accuracy improvements in microwave noise parameter measurements. IEEE Trans Microw Theory Tech 37(12):1973–1978 12. Lv J, Bao Y, Huang J (2016) Wideband low noise amplifier using a novel equalization. In: 2016 progress in electromagnetic research symposium (PIERS), pp 609–614 13. Belaïd MA (2018) Performance analysis of S-parameter in N-MOSFET devices after thermal accelerated tests. Microelectron Reliab 91:8–14



14. Akbar F, Atarodi M, Saeedi S (2015) Design method for a reconfigurable CMOS LNA with input tuning and active balun. AEU Int J Electron Commun 69:424–431 15. Arshad S, Zafar F, Ramzan R et al (2013) Wideband and multiband CMOS LNAs: State-ofthe-art and future prospects. Microelectron J 44:774–786 16. Perumana BG, Zhan JHC, Taylor SS et al (2008) Resistive-feedback CMOS low-noise amplifiers for multiband applications. IEEE Trans Microwave Theory Tech 56(5):1218– 1224 17. Caddemi A, Cardillo E, Patanè S et al (2018) An accurate experimental investigation of an optical sensing microwave amplifier. IEEE Sens J 18(22):9214–9221 18. Neihart NM, Brown J, Yu X (2012) A dual-band 2.45/6 GHz CMOS LNA utilizing a dualresonant transformer-based matching network. IEEE Trans Circuits Syst I Regul Pap 59 (8):1743–1751 19. Hyvonen S, Bhatia K, Rosenbaum E (2005) An ESD-protected, 2.45/5.25-GHz dual-band CMOS LNA with series LC loads and a 0.5-V supply. In: IEEE radio frequency integrated circuits symposium, digest of papers, pp 43–46 20. Msolli A, Nasri M, Helali A et al (2012) Ultra low power low noise amplifier design for 2.4 GHz applications. In: 7th international conference on design and technology of integrated systems in Nanoscale Era, pp 1–4

A Triangular Centroid Location Method Based on Kalman Filter

Yunfei Suo, Tao Liu(&), Can Lai, and Zechen Li

Institute of Electronic Engineering, Chengdu University of Information Technology, Chengdu 610225, Sichuan, China
[email protected]

Abstract. It is difficult for GPS to solve the positioning problem in large indoor places such as stores and warehouses, so using WiFi for positioning has become the mainstream indoor positioning method. However, most location-fingerprint methods used in WiFi positioning have shortcomings such as low fault tolerance and weak anti-noise ability. To solve these problems, a WiFi indoor positioning method based on the Kalman filter is proposed. After the RSSI values are obtained, the optimal distance is estimated by a Kalman filter, the position is then calculated by the triangular centroid method, and finally a Kalman filter is again adopted to calculate the optimal position.

Keywords: Indoor location · RSSI · Triangular centroid method · Kalman filter

1 Introduction

In modern life, positioning technology is being applied to more and more occasions, and people's demand for indoor location information is getting higher and higher. GPS is currently the most common way to obtain outdoor location information, but it is difficult for satellite signals to penetrate walls and obstacles, so GPS, AGPS and other satellite positioning technologies are not suitable for indoor positioning. In recent years, wireless-based positioning methods have been widely used indoors: a mobile terminal receives wireless signals from multiple pre-deployed WiFi devices and calculates the desired location information. However, because of the multipath effect and the complexity of the indoor environment, as well as the instability, uncontrollability and unpredictability of AP points, it is difficult to achieve accurate positioning with ordinary WiFi indoor positioning. Due to the instability of the WiFi signal itself, the positioning result contains significant noise, making it extremely unstable, and the displayed position continues to jitter. High-accuracy positioning technologies, such as multi-platform fusion positioning, UWB positioning, UKF filtering and integrated inertial navigation systems, can greatly improve the positioning accuracy, but in general designing more complex systems requires additional hardware, which inevitably increases the implementation cost.



In response to this situation, the Kalman filter algorithm is used to denoise the collected RSSI values. The position to be measured is then calculated by the triangular centroid method. The labor cost is reduced, the positioning accuracy is increased, and the reliability of positioning is improved.

2 RSSI

This section introduces what RSSI is, what it is used for, its relationship with the 802.11 protocol, the measurement process and the ranging principle.

2.1 RSSI and 802.11 Protocol

RSSI is short for Received Signal Strength Indication. The RSSI value can be used to measure the distance between the signal point and the receiving point [1]; therefore, RSSI is widely used in WiFi indoor positioning technology. In order to capture the characteristics of the received signal, the RSSI values used are generally average values rather than instantaneous values: the instantaneous values of 8192 RSSI samples are averaged over a period of 1 s to obtain the average RSSI.

2.2 Measurement Process and Ranging Principle of RSSI Value

The RSSI value is related to the signal propagation distance, as shown in:

d = 10^{\frac{|RSSI| - A}{10 n}}   (1)

where d is the distance between the node to be tested and one anchor node, A is the signal strength when the transmitting end and the receiving end are separated by 1 m, and n is the environmental attenuation factor. It can be seen from Eq. (1) that the RSSI decays exponentially as the distance increases, and the values of A and n differ in different environments and hardware configurations. After the terminal receives the RSSI value, it is carried by the PMD_RSSI.indication primitive (this optional primitive can be generated by the PMD to provide the received signal strength to the PLCP; the primitive has one parameter, RSSI, which is the measured value of the RF energy received by the high-speed PHY and is 0–8 bits in length; RSSI and SQ are used together as part of the CCA mechanism).
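A minimal sketch of the ranging step in Eq. (1) is given below; the calibration constants A and n are illustrative placeholders since, as noted above, they must be measured for each environment and hardware configuration.

```python
def rssi_to_distance(rssi_dbm, A=-40.0, n=2.5):
    """Log-distance model of Eq. (1): d = 10 ** ((|RSSI| - |A|) / (10 n)).

    A (RSSI at 1 m) and n (environmental attenuation factor) are assumed
    calibration values used only for illustration."""
    return 10 ** ((abs(rssi_dbm) - abs(A)) / (10.0 * n))

print(rssi_to_distance(-65))   # about 10 m with the illustrative A and n
```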

3 Location Algorithm

This section compares the advantages and disadvantages of several positioning algorithms and then introduces an improved positioning method based on the triangular centroid, elaborating the advantages, principle and implementation of the triangular centroid algorithm.
In the actual situation, due to the influence of multipath effects and obstacles on signal propagation, accurate measurement of RSSI values is often difficult to achieve,



which influences the accurate calculation of distance. Therefore, an algorithm with a high fault tolerance is needed to calculate the position to be tested. Although the triangulation method [2, 3] and the step-by-step method [4] are relatively simple, they still have some shortcomings in terms of calculation accuracy and scope of application. Although the accuracy of the traditional triangular centroid in [5–7] is relatively high, it does not perform well in complicated indoor environments. Based on [8], this paper proposes an improved triangular centroid method. The triangular centroid algorithm is based on the RSSI principle: by using the three reference nodes nearest to the mobile node to be tested, the coordinates of the mobile node can be quickly calculated, which makes the positioning algorithm easier to implement. In our real experiments, five routers are deployed as AP points, which significantly improves the location accuracy, strengthens the system stability and decreases the measurement error caused by environmental noise. This algorithm can be employed extensively in a variety of location scenarios. Since the hardware model and power consumption of each node are different and are affected by environmental factors, the relationship between the RSSI value and the distance cannot be accurately expressed by formula (1), so the measured distance cannot reach the ideal value, and the three circles centered on the base stations may not intersect at a single point; in actual situations they mostly intersect in a small area, and there may even be no solution. The improved triangle centroid algorithm is more adaptable, can obviously offset the errors caused by environmental noise and effectively improves the location accuracy.

3.1 Principle of Improved Triangular Centroid Algorithm

In wireless sensor network node location technology, sensor nodes are divided into anchor nodes and unknown nodes. Three base stations are set as anchor nodes, and the mobile terminal is set as an unknown node; the ideal case is shown in Fig. 1. It is known that the coordinates of the three anchor nodes are A(x1, y1), B(x2, y2), C(x3, y3), and the distances to the unknown node D are R1, R2, R3 respectively. When the three circles intersect at one point, without considering the influence of any external factors, the trilateration method can obtain the coordinates of the unknown node from the measured distances and the coordinates of the anchor nodes, as shown in (2). However, in actual situations, due to non-line-of-sight propagation, multipath propagation, shadow effects, etc., these three circles, whose radii are the estimated distances, often do not meet at one point. When the calculated distance is greater than the actual distance, the three circles intersect at more than three points, as shown in Fig. 2.

\begin{cases} (x - x_1)^2 + (y - y_1)^2 = R_1^2 \\ (x - x_2)^2 + (y - y_2)^2 = R_2^2 \\ (x - x_3)^2 + (y - y_3)^2 = R_3^2 \end{cases}   (2)



Fig. 1. Schematic diagram of RSSI ranging in an ideal environment

Fig. 2. The calculated distance is greater than the actual distance

At this time, the distance from each anchor node to the unknown node calculated by RSSI is not equal to the actual distance between the two points. It can be seen that the three circles intersect with each other: every two circles intersect at two points, and the point closer to the third anchor node is taken as one vertex of the triangle. The three intersection points are denoted G(x4, y4), H(x5, y5), I(x6, y6), and the estimated distances are d1, d2, d3. Connecting the intersection points into a triangle, the coordinates of the unknown node are set to the centroid of the triangle. The following equations can be used to find the coordinates of point G:



\begin{cases} (x_4 - x_1)^2 + (y_4 - y_1)^2 \le d_1^2 \\ (x_4 - x_2)^2 + (y_4 - y_2)^2 = d_2^2 \\ (x_4 - x_3)^2 + (y_4 - y_3)^2 = d_3^2 \end{cases}   (3)

The H and I coordinates can be calculated in the same way. If the calculated distance is smaller than the actual one, the three circles have fewer than three intersection points, as shown in Fig. 3. In that case the distance between the anchor nodes and the unknown node calculated by RSSI is smaller than the actual distance between the two points, so the centers of the disjoint circles are connected by a line segment, and the midpoint of the two intersections between this line segment and the two circles serves as a vertex of the triangle; the vertex coordinates can also be denoted G, H, I. Connecting the vertices into a triangle, the center of mass gives the coordinates of the unknown node. In practical situations, which method should be adopted is judged according to the scale of the distances, and the two methods can be mixed to deal with complex indoor environmental changes. The coordinates of the unknown node are then ((x4 + x5 + x6)/3, (y4 + y5 + y6)/3).

3.2 Algorithm Implementation Steps

(1) Firstly, the base stations periodically send signals to the mobile terminal, including the coordinates and numbers of the anchor nodes; the RSSI value of the received signal is displayed on the interface by the mobile terminal.
(2) The RSSI values are stored in an array, and the estimated distance d is calculated by putting the Kalman-filtered value into the distance formula.
(3) The distances are sorted, and the three anchor nodes nearest to the node under test are selected as the centers of the three circles.
(4) The positions of the three circles are drawn, and the appropriate method is selected to calculate the centroid position according to the scale of the distances, as sketched in the example below.
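The following Python sketch illustrates steps (3)–(4) for the overlapping-circles case of Fig. 2; the anchor coordinates and the over-estimated distances are toy values, and the non-intersecting case of Fig. 3 would instead need the midpoint construction described above.

```python
import numpy as np

def circle_intersections(c1, r1, c2, r2):
    """Return the two intersection points of two overlapping circles."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d = np.linalg.norm(c2 - c1)
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = np.sqrt(max(r1**2 - a**2, 0.0))
    mid = c1 + a * (c2 - c1) / d
    perp = np.array([-(c2 - c1)[1], (c2 - c1)[0]]) / d
    return mid + h * perp, mid - h * perp

def triangular_centroid(anchors, dists):
    """For each circle pair keep the intersection closer to the third anchor,
    then return the centroid of triangle GHI."""
    triples = [(0, 1, 2), (1, 2, 0), (0, 2, 1)]
    vertices = []
    for i, j, k in triples:
        p, q = circle_intersections(anchors[i], dists[i], anchors[j], dists[j])
        third = np.asarray(anchors[k], float)
        vertices.append(p if np.linalg.norm(p - third) < np.linalg.norm(q - third) else q)
    return np.mean(vertices, axis=0)

# Toy example: three anchors and over-estimated distances to a node near (3, 3).
anchors = [(0.0, 0.0), (6.0, 0.0), (3.0, 6.0)]
print(triangular_centroid(anchors, [4.6, 4.6, 3.4]))
```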

4 Kalman Filter

This section introduces the concept, principle, function and implementation steps of the Kalman filter. The Kalman filter is a linear estimation algorithm that uses the minimum mean square error as the optimal estimation criterion under the premise of linear filtering [9]. In indoor positioning, owing to the influence of the shadow effect, measurement noise and similar factors, the calculated distance always deviates somewhat from the actual distance. When Kalman filtering is performed on the coordinates and the RSSI values, it makes the calculated coordinates constantly approach the real coordinates and decreases the deviation.


4.1 Principle of Kalman Filter

The Kalman filter is a recursive estimator in which an estimate of the current state is calculated from the estimated value of the previous state and the observation of the current state, so it is not necessary to record the observed or estimated history [10]. The difference between the Kalman filter and most other filters is that the former is a pure time-domain filter: it does not need to be designed in the frequency domain, like low-pass and other frequency-domain filters, and then converted to the time domain for implementation. In positioning and tracking, given a time series of values (coordinates, speeds and accelerations of the unknown node), the state of the node to be tested can be described by linear state-transition difference equations. Similarly, in order to obtain a relatively stable RSSI value, Kalman filtering is also used for the purpose of filtering noise.

4.2 Algorithm Implementation Steps

The following five iteration equations are necessary for the Kalman filter:

x_k = A X_{k-1} + B W   (4)

p_k = A P_{k-1} A^T + Q   (5)

K_k = p_k H^T (H p_k H^T + R)^{-1}   (6)

X_k = x_k + K_k (z_k - H x_k)   (7)

P_k = p_k - K_k H p_k   (8)

where

A = \begin{bmatrix} 1 & 0 & T & 0 & 0 & 0 \\ 0 & 1 & 0 & T & 0 & 0 \\ 0 & 0 & 1 & 0 & T & 0 \\ 0 & 0 & 0 & 1 & 0 & T \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad W = \begin{bmatrix} W_x(k-1) \\ W_y(k-1) \end{bmatrix}

When estimating the location, A and B are the state transfer matrices, T is the sampling period, W is the process noise of the node to be tested at time k − 1, x_k is the prediction of the state at time k from the state at time k − 1, X_{k−1} is the optimal estimate at the previous time, p_k is the predicted error covariance at time k, P_{k−1} is the error covariance matrix corresponding to X_{k−1}, Q is the covariance of the system process noise, K_k is the Kalman gain used to continuously correct the prediction, z_k is the measurement value at time k, and H is the six-order unit matrix. The Kalman filter can not only handle the coordinates but also filter the measurement and process noise. Experiments show that the RSSI measurement in an enclosed environment is not stable; the RSSI values obtained while measuring one position for 10 s (100 times) are shown in Fig. 4.



According to Fig. 4, the WiFi signal measured during indoor positioning fluctuates up and down continuously. This paper adopts Kalman filtering to smooth the RSSI values in order to reduce the influence of factors such as obstacles and noise in the room. Because the filtering target is the RSSI value, A and H are 1, B is 0, and z is the measurement value at time k.
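Because A = H = 1 and B = 0 for RSSI filtering, Eqs. (4)–(8) collapse to a scalar recursion. The short Python sketch below implements it; the noise variances Q and R are illustrative assumptions.

```python
def kalman_smooth_rssi(measurements, Q=1e-3, R=4.0, x0=None, p0=1.0):
    """Scalar Kalman filter for RSSI smoothing (A = H = 1, B = 0).
    Q and R are assumed process / measurement noise variances."""
    x = measurements[0] if x0 is None else x0
    p = p0
    filtered = []
    for z in measurements:
        # prediction (A = 1, B = 0): the state carries over, covariance grows by Q
        p = p + Q
        # update: Kalman gain, state correction, covariance correction
        k = p / (p + R)
        x = x + k * (z - x)
        p = (1 - k) * p
        filtered.append(x)
    return filtered

raw = [-61, -64, -59, -66, -62, -63, -60, -65]   # jittery RSSI samples (dBm)
print(kalman_smooth_rssi(raw))
```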

Fig. 3. The calculated distance is smaller than the actual distance

Fig. 4. The RSSI values measured in the actual environment



The operation of the Kalman filter is divided into two stages: prediction and update. In the prediction phase, the position coordinates or RSSI value at time k are predicted from the optimal estimate at time k − 1, and the new error covariance is predicted from the previous error covariance and the process noise in order to calculate the Kalman gain. After the prediction phase, the update phase is entered: the prediction is corrected to obtain the optimal value at time k, which is then used for computing the next optimal value at time k + 1. After completing these operations, the prediction phase is entered again, and the cycle continues.

5 Experimental Results and Analysis

A room with many items, frequent staff movement and strong signal interference is selected as the test site. Five routers are placed in the room as anchor nodes. The room environment is shown in Fig. 5.

Fig. 5. The environment of experiment

The coordinates of the five anchor nodes are (0.7, 0.9), (0, 5.7), (9.5, 6.8), (12.4, 1.2) and (6.5, 0), respectively. The RSSI values of six positions were collected in the room, and the conventional algorithm and the improved algorithm proposed in this paper were used to compare the positioning accuracy. The six positioning errors are shown in Fig. 6. In terms of both stability and positioning accuracy, the improved triangular centroid method outperforms the conventional triangular centroid method; the six positioning errors are about 1.38 m. The RSSI values are smoothed by the Kalman filter, with the results shown in Fig. 7. Compared with the RSSI values before filtering, after several iterations the RSSI values change within a smaller range and become smoother. The influence of the Kalman filter on the positioning accuracy is shown in Fig. 8.



Fig. 6. Comparison of the positioning errors of the two methods

Fig. 7. Comparison of the RSSI values before and after filtering

According to Fig. 8, the coordinates estimated after Kalman filtering approach the actual coordinates and finally become stable as the number of iterations increases.



Fig. 8. The location error of Kalman filter estimation

6 Conclusion

In this paper, a Kalman filter-based triangular centroid localization method is proposed. The Kalman filter can filter out noise and effectively alleviate the jumping of the WiFi signal, and the triangular centroid method further improves the positioning accuracy. The simulation results show that the algorithm has a clear structure, simple code, few iterations, fast positioning speed and high positioning accuracy, and can achieve high-quality indoor positioning at low cost.

Acknowledgements. This work is supported by Subsidies for Scientific Research Projects of Sichuan Education Department (18ZA0111).

References 1. Sun C (2015) Research on mobile node location algorithms for wireless sensor networks based on RSSI/TDOA. Shenyang University of Technology 2. Gao L, Zheng X, Zhang H (2009) A node-localization algorithm for wireless sens-or network based on trilateration and centroid algorithm. J Chongqing Inst Technol Nat Sci Edn 23(07):138–141 3. Liu C, Ni S (2019) Mobile anchor node location scheme based on trilateral measurement. Inf Technol 03: 29–32+36 4. Jiang B, Shen Y, Wang J (2017) An Improved step-by-step ZigBee positioning method. Inf Commun 03:45–48 5. Lin W, Chen C (2009) RSSI-based Triangle and centroid location in wireless sensor network. Mod Electron Technol 2:180–182 6. Yin Y, Xu Q (2018) Optimization of centroid location algorithm for wireless sensor networks based on RSSI ranging. Comput Digital Eng 46(12): 2425–2429+2462



7. Zhang Y (2018) Weighted centroid localization algorithm based on RSSI in wireless sensor networks. In: China Automation Society. Papers of CAC2018. China Automation Society, pp 6 8. Zhao D, Bai F, Dong S, Li H (2015) An improved triangle centroid localization algorithm based on Kalman and linear interpolation filter. J Trans Technol 28(07):1086–1090 9. Zhao Y, Zhou H, Chen Y (2009) Application of Kalman filter in real-time tracking of indoor positioning system. J Wuhan Univ Sci Edn 55(6):696–700 10. Zhu P (2016) Research on RSSI-based indoor location algorithm. Yunnan University

Research on Spatial Network Routing Model Based on Price Game

Ligang Cong, Huamin Yang(&), and Xiaoqiang Di

School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, Jilin, China
[email protected], [email protected]

Abstract. The topological structure of the spatial information network changes drastically, on-board resources are limited, and the communication delay is long, which brings great challenges to the construction of the spatial information network. After analyzing existing satellite network routing models, a low-orbit satellite network routing model based on the price game is proposed on top of the DTN protocol, according to the characteristics of the satellite network. The model uses the price game to allocate the remaining storage space of routing nodes and takes the data forwarding success rate as important evidence of a routing node's status, which enhances the practicability of the routing algorithm. Simulation experiments show that, compared with classical models such as Epidemic and FC, the proposed routing model performs better in terms of delay, success rate, network overhead, etc., and has better comprehensive performance.

Keywords: Price game · Spatial information network · Routing model

1 Introduction

The spatial information network, with low-orbit satellites and satellite networks as its main structure, has received more and more attention due to its unique advantages, but its characteristics, including dramatic topology changes, limited node resources and long communication delays, bring a series of problems to the construction of spatial information networks. How to solve these problems is the key to the smooth development of the spatial information network. The Delay Tolerant Network (DTN) [1] was created for special network environments with large delays and frequent interruptions, and therefore has unique advantages in solving satellite network routing problems. This paper takes the DTN-based spatial information network as the research object and focuses on how to construct an efficient routing mechanism in the special space network environment. Typical DTN routing algorithms include Epidemic routing [2], Prophet routing [3] and the Spray-and-Wait routing algorithm [4]. These routing algorithms are not optimized for the space environment, so they cannot be directly applied to the spatial information network. Therefore, many scholars have proposed satellite network routing schemes for the space environment. The finite state automaton (FSA) routing algorithm of Chang [5] hides the dynamics of the network topology and



simplifies the routing algorithm, but it adapts poorly to network emergencies and demands a large amount of on-board storage. Uzunalioglu [6] proposed the probabilistic routing protocol PRP, which reduces the number of call interruptions and route redefinitions caused by route switching, but increases the call blocking rate and the possibility of network congestion. Del Re et al. [7] proposed the guard channel handover algorithm HQRP, which reduces call interruption by optimizing the handover process and considers the optimization of channel resources, but increases the network overhead and reduces the overall efficiency of the network. The guiding idea of the load balancing routing algorithm ELB proposed by Taleb [8] is that the serving satellite grasps the load of the next-hop link so that an alternative route can be selected; when congestion occurs on a link, the other satellites are notified to reduce their transmission rate, which reduces network congestion, but the algorithm fails when congestion is severe. In addition, typical satellite routing algorithms include ALBR [9], DRA [10], CEAARS [2], etc. It is hard to say which of the satellite routing algorithms mentioned above has more advantages; they differ only in focus, and each has its own characteristics in different application environments. Based on the DTN protocol, this paper proposes a satellite network routing model based on the price game, which makes full use of the periodic characteristics of the satellite network topology, divides the dynamically changing topology into several relatively static time slices, and introduces a price game to solve the routing problem of low-orbit satellite networks. While solving the routing problem, the model avoids the emergence of "selfish" nodes in the network and prevents the network congestion caused by excessive consumption of individual node resources.

2 Definition of System Model

2.1 Spatial Network Topology Structure

Unlike terrestrial computer networks, the nodes in low-orbit satellite networks are mobile, and their topology is constantly changing. However, satellites operate in strict orbits, so the pattern of change has significant periodicity and predictability. Based on this feature, researchers generally use related strategies to shield the dynamics of the network topology and use abstract static models for research. At present, the related strategies mainly include the virtual topology strategy [8], the virtual node strategy [9] and the coverage domain division method [10]. This paper adopts the virtual topology strategy, which discretizes the dynamic topological relationship of satellite network nodes and divides a complete satellite network operation cycle into several time slices [t0, t1], [t1, t2], [t2, t3], [t3, t4], …, [tn−1, tn]. The satellite network topology is relatively fixed within each time slice and changes only at the time nodes t1, t2, t3, …, tn. The spatial network studied in this paper is based on the assumption that the network structure is stable during a certain period of time, and routing can be performed within these structurally stable time slices.




2.2 Routing Node Storage Resource Allocation Scheme

Since the nodes in a DTN network are often in a non-connected state, DTN routing works in the "store-carry-forward" mode: a transmitting node transmits data to a neighboring node, which stores and carries the data; when the neighboring node encounters another suitable node, it forwards the data, and so on until the data reach the target node. In order to improve data forwarding efficiency, a kind of node dedicated to data forwarding is often set up in the DTN network; this type of node does not generate data and is only responsible for forwarding it, and such nodes are called routing nodes [11]. When multiple nodes send data to a routing node at the same time, or a node sends data to multiple routing nodes, the network routing problem can be transformed into a storage resource allocation problem [12]; how to optimize the storage space allocation of the data forwarding nodes so as to improve network routing performance is the key issue of this paper. Referring to the real world, when people compete for resources they often use bidding: each bidder determines a bid according to its own situation, and the bid determines how much of the resource is acquired. In this paper, this model is introduced into the allocation of node storage resources, and the price game model is used to solve the routing problem. The amount of data assigned to a routing node is proportional to the price it is willing to pay: the higher the bid, the more data it obtains. Let p_i be the price paid by terminal i and a_i the amount of forwarding data obtained after routing node i bids. When there are n nodes simultaneously requesting to send data to the data source node, the data distribution rule is as shown in formula (1):

a_i(p_i) = \omega_i Q, \quad \omega_i = \frac{p_i}{\sum_{j=1}^{n} p_j}   (1)

where \omega_i denotes the bid ratio of node i among the n bidding nodes and Q is the total amount of data to be sent.




The data sending satellite node allocates data resources according to the bidding ratio. Each bidding node determines its bid according to its own current state: the higher the bid, the larger the amount of data it is sent; the lower the bid, the smaller the amount.
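A minimal sketch of this proportional allocation rule is shown below; the node names, bids and total data amount Q are purely illustrative.

```python
def allocate(bids, Q):
    """Formula (1): node i receives a_i = (p_i / sum_j p_j) * Q."""
    total = sum(bids.values())
    return {node: Q * p / total for node, p in bids.items()}

# Three routing nodes bid for Q = 120 units of data from one sender.
print(allocate({"sat1": 3.0, "sat2": 1.0, "sat3": 2.0}, 120))
# -> {'sat1': 60.0, 'sat2': 20.0, 'sat3': 40.0}
```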

Price Game Model

In the previous section, we defined the allocation scheme of the storage resources of the spatial routing nodes. This summary studies the resource allocation strategy through the discussion of the game model. We use the extended expression game to model this game process: (1) Game participant set: The player of the game is all routing nodes in the network that receive and forward data, and assuming the number of participants is n; (2) Participant action sequence: participants bid at the same time; (3) The participant’s strategic space: the participant’s bid for the data storage space; (4) Participant’s information set: This information set changes with time mainly including spatial node positional relationship, business priority, remaining power, remaining storage space, channel error rate, etc.


(5) Payment function: each action in the game brings a certain effect to the participant, described by its utility (payment) function. Since the strategies and actions of the participants are interdependent, each participant's utility depends on the strategies of the others. Assume routing node i pays price pi for the right to transmit data and obtains the amount of data ai; the node then incurs corresponding benefits and costs, and the benefit function of data forwarding node i can be expressed as

$$u_i(a_i) = v_i(a_i) - c_i(a_i) \tag{2}$$

where v_i(a_i) is the benefit obtained by node i after it is granted the storage space a_i; we assume

$$v_i(a_i) = \ln(1 + \gamma_i\, a_i) \tag{3}$$

Here γ_i is an important parameter related to the state of node i; we take γ_i to represent the ratio of the storage space to the total storage space after node i receives the data. From function (3), the higher this ratio, the higher the yield. The income function v_i(a_i) is increasing, and as its argument grows the function value levels off. c_i(a_i) is the cost incurred by routing node i for forwarding the data; we assume

$$c_i(a_i) = e\, a_i \tag{4}$$

Here e is the energy consumed by a node to forward one unit of data; for simplicity, all nodes are assumed to consume the same energy e per unit of data forwarded. The more data a node forwards, the more energy it consumes and the higher its cost.

2.4 "Selfish" Node Penalty Mechanism

Satellite routing nodes are selfish when competing for data to forward: to maximize their own interests they will bid without scruple, and if no incentive and punishment mechanism is introduced, malicious competition easily arises. For example, a node that wants to increase its own revenue may bid for data regardless of its actual capability; the data then fails to reach the destination node, the network becomes congested, and individual nodes may even fail prematurely through excessive consumption, all of which degrade the efficiency and service life of the entire network. To avoid this, a penalty mechanism must be introduced to discourage excessive bidding: nodes that engage in malicious competition must be punished, the selfishness of routing nodes suppressed, and waste of resources avoided, so as to ensure fair competition between nodes and maximize revenue.


At the same time, to keep the routing process efficient, an incentive mechanism is created alongside the penalty mechanism so as to encourage routing nodes to contribute resources; the two mechanisms together keep the whole network operating healthily. The data forwarding success rate β_i is used as the penalty criterion: the higher the success rate, the lighter the penalty, and the lower the success rate, the heavier the penalty. The penalty model is

$$l_i(a_i) = (1 - \beta_i)\, v_i(a_i) \tag{5}$$

where β_i takes values in [0, 1]. After the penalty function is determined, the benefit function can be rewritten as

$$u_i(a_i) = v_i(a_i) - c_i(a_i) - l_i(a_i) = v_i(a_i) - e\,a_i - (1-\beta_i)\,v_i(a_i) = \beta_i \ln(1+\gamma_i a_i) - e\,a_i \tag{6}$$

3 Routing Model Design

3.1 Routing Game Model Equilibrium Analysis

Through the above description, the routing problem of the satellite nodes is transformed into the problem of maximizing the revenue of the data sending node. When the storage space of the routing satellite nodes is large enough, all the requirements of the data sending node can be satisfied, so it is the case of insufficient resources that matters. Let a_i be the bidding strategy of the i-th satellite routing node and A_{−i} the strategies of the other satellite routing nodes.

Conclusion 1: For the routing game G defined in this paper, n spatial satellite nodes participate and the number of participants is finite; therefore a pure-strategy Nash equilibrium exists.

Proof: For the spatial dynamic routing game G, since the number of participants n is finite, the set of pure strategies available to each participant is finite, so by the relevant theorem of game theory G is a finite game. Moreover, by the description of the game process, G is a game of perfect information, so game G has at least one pure-strategy Nash equilibrium.

Conclusion 2: If A* is a Nash equilibrium solution of the game, then it is unique.

Proof: Suppose A* is a Nash equilibrium solution of the game and the decision vector A = (a1, a2, …, an) is another solution of Eq. (7), i.e. A ≠ A*; let ai < ai*. On the one hand, since both A and A* are Nash equilibrium points, ui(ai) = ui(ai*); on the other hand, since vi is a monotonically increasing function, ui is also monotonically increasing, so ui(ai) < ui(ai*). This contradicts A being a Nash equilibrium, so the equilibrium point is unique.


According to the above two conclusions, the equilibrium point can be obtained by differentiating the income function and setting the derivative to zero, i.e. letting ∂u_i/∂a_i = 0. Differentiating Eq. (6) and setting it equal to 0 gives the equilibrium point a_i* = (γ_i β_i − e)/(γ_i e):

$$\frac{\partial u_i(a_i)}{\partial a_i} = \frac{\partial\big(\beta_i v_i(a_i) - c_i(a_i)\big)}{\partial a_i} = \frac{\partial\big(\beta_i \ln(1+\gamma_i a_i) - e\,a_i\big)}{\partial a_i} = \frac{\gamma_i \beta_i}{1+\gamma_i a_i} - e = 0 \tag{7}$$

3.2 Routing Algorithm Design

According to the foregoing description of the game process, the routing process can be described as follows (a minimal sketch of one allocation round is given after the steps):

Step 1: The data transmitting satellite node s determines, from the connection diagram, the adjacent satellites, i.e. the routing satellite nodes within communication range, excluding the node that sent the data to s in the previous step; let the number of adjacent routing satellite nodes be n.
Step 2: The transmitting satellite s broadcasts to the n adjacent nodes the amount of data to be sent and the target node.
Step 3: If the target node is among the neighbours, all the data is delivered to it; otherwise go to Step 4.
Step 4: Each neighbouring routing node i computes its optimal bid from its own state according to the routing game equilibrium model and sends the bid to node s.
Step 5: After node s has collected the bids of the adjacent nodes, the data to be sent is allocated according to Formula (1); each node i that obtains data then repeats from Step 2.
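The sketch below, under our own simplifying assumptions (each neighbour bids its equilibrium amount from Eq. (7) and the sender splits the data in proportion to the bids as in Formula (1)), illustrates one allocation round; parameter names and example values are hypothetical.

```python
def equilibrium_bid(beta, gamma, e):
    """Optimal data amount from Eq. (7): gamma*beta/(1+gamma*a) - e = 0,
    i.e. a* = (gamma*beta - e) / (gamma*e), clipped at 0 when cost wins."""
    return max(0.0, (gamma * beta - e) / (gamma * e))

def route_one_round(total_data, neighbours, e=0.1):
    """Steps 2-5 for one sender: collect bids from neighbouring routing nodes
    and split the data in proportion to the bids (Formula (1))."""
    bids = [equilibrium_bid(n["beta"], n["gamma"], e) for n in neighbours]
    total_bid = sum(bids)
    if total_bid == 0:
        return [0.0] * len(neighbours)
    return [b / total_bid * total_data for b in bids]

# Example: two neighbours with different success rates and storage ratios.
neighbours = [{"beta": 0.9, "gamma": 0.8}, {"beta": 0.6, "gamma": 0.5}]
print(route_one_round(100.0, neighbours))
```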

4 Simulation Analysis

The simulation of the routing model is carried out with the ONE 1.5 simulator [13], a tool specialized for DTN network simulation. Since the simulator itself does not support a three-dimensional spatial motion model of a satellite network, the relevant basic data must be processed before the simulation and converted into a planar motion model. Ten LEO communication satellites are used as the simulation scenario. First, STK is used to construct the spatial motion model of the satellite scene; the satellite motion trajectories over a period of time are then projected onto the surface of the earth to form planar motion trajectories, which are converted into WKT data with OpenJump, and the model map is scaled. Finally, the simulation model is built with the MapRouteMovement model. The main simulation parameters are shown in Table 1.

Table 1. Simulation parameter table

Parameter                  | Number             | Description
Communication rate         | 500                | 500 kbps
Node communication radius  | 3000               | 3000 km
Node movement model        | Map route movement | Map route movement
Node movement speed        | 3, 5               | 3–5 km/s
Node storage space         | 1000 MB            | 1000 MB
Data generation interval   | 8, 12              | Generate a data packet every 8–12 s
Data survival time         | 600                | Data packet survival time is 600 min

In this scenario, the price-game-based routing model proposed in this paper is simulated and compared with two typical routing algorithms, Epidemic and the single-copy FC routing model. (1) Transmission success rate: the ratio of the total number of packets that successfully reach the target node to the total number of packets the source node has to transmit in a given period of time; the ability of a routing algorithm to correctly forward packets to the target node is its most important indicator.

4.1 Transmission Success Rate Analysis

As shown in Fig. 1, the transmission success rates of the three routing algorithms are fairly similar. In the initial stage of the simulation the Epidemic routing algorithm maintains a high success rate, but as the number of data copies it injects into the network grows, network performance gradually degrades and its success rate falls behind FC and Price. The Price routing algorithm proposed in this paper lies between FC and Epidemic in transmission success rate: slightly inferior to FC, but clearly better than Epidemic.


Fig. 1. Transmission success rate change diagram

4.2 Transmission Delay Analysis

The transmission delay is the time a message needs to travel from the source node to the destination node, usually evaluated by the average transmission delay. A small transmission delay means the routing algorithm has strong transmission capability and high efficiency, and that less network resource is occupied during transmission. Figure 2 compares the average delay of the three routing models. The average delay of FC is the largest of the three, because the FC routing algorithm propagates data in single-copy mode, which reduces the probability of data arrival and increases the propagation delay. The average delay of the Epidemic routing model is the lowest, because in an ideal network state the infection-style routing generates a large number of data copies, which reduces the average propagation delay but is very likely to cause network congestion. In the initial stage of the simulation, the delay of the price-game-based routing model lies between the two: its network delay is slightly higher than that of the Epidemic model, but as the simulation time is extended the large number of copies produced by the Epidemic model gradually degrades network performance and its average delay grows, while the performance of the price game routing model stabilizes and approaches the FC routing model in terms of delay.


Fig. 2. Network average delay variation diagram

4.3 Network Spending Ratio Analysis

The network spending (overhead) ratio is the ratio, over the whole simulation, of the total amount of data that did not successfully reach the target node to the total amount of data that did reach it; it describes the overall transmission performance of the network. Figure 3 compares the network spending ratio of the three routing models during the simulation. The spending ratio of the routing algorithm based on the price game model is significantly lower than that of Epidemic and essentially at the same level as the FC model. The aim of the model is to save network resources while improving network performance: it does not generate a large number of data copies, does not cause serious network congestion, and achieves an ideal network spending ratio.

Fig. 3. Network overhead ratio change diagram


5 Conclusion

Based on an analysis of the trajectory and communication characteristics of satellite nodes in a spatial information network, a price-based routing game model is established. In this model the data sending node uses service priority and its own state as the basis for its game bid, which not only satisfies the routing requirements but also reduces the probability of congestion; a "selfish" node penalty mechanism is also introduced to improve transmission effectiveness. Compared with routing models such as Epidemic and FC, the proposed model reduces the average network communication delay, saves network resources, and improves network performance; the price game routing model is thus a useful attempt at solving the satellite network routing problem. In future work we will evaluate the practical performance and adaptability of the algorithm by enriching the simulation scenarios and adjusting the environmental parameters, and further refine the model.

References

1. Fall K (2003) Delay-tolerant network architecture for challenged internets. SIGCOMM '03 33(4):27–34
2. Vahdat A, Becker D (2006) Epidemic routing for partially-connected ad hoc networks. Technical report CS-2000-06
3. Lindgren A, Doria A, Schelen O (2003) Probabilistic routing in intermittently connected networks. ACM SIGMOBILE Mobile Comput Commun Rev
4. Spyropoulos T, Raghavendra CS (2009) Spray and wait: an efficient routing scheme for intermittently connected mobile networks. In: Proceedings of the ACM SIGCOMM workshop on delay-tolerant networking
5. Chang HS (1996) Performance comparison of optimal routing and dynamic routing in low earth orbit satellite networks. Atlanta, GA
6. Uzunalioglu H (1998) Probabilistic routing protocol for low earth orbit satellite networks
7. Del Re E, Fantacci R, Giambene G (1995) Efficient dynamic channel allocation techniques with handover queuing for mobile satellite networks. IEEE J Sel Areas Commun 13(2):397–405
8. Taleb T, Mashimo D, Jamalipour A, Hashimoto K, Nemoto Y, Kato N (2006) SAT04-3: ELB: an explicit load balancing routing protocol for multi-hop NGEO satellite constellations. In: GLOBECOM '06
9. Shannon CE, Weaver W (1947) The mathematical theory of communication. The University of Illinois Press, Urbana
10. Wood L (2001) Internetworking with satellite constellations. Ph.D. dissertation, University of Surrey, Guildford, United Kingdom
11. Caini C, Firrincieli R (2012) Application of contact graph routing to LEO satellite DTN communications. In: IEEE international conference on communications, pp 3301–3305
12. Balasubramanian A, Levine B, Venkataramani A (2007) DTN routing as a resource allocation problem. ACM SIGCOMM Comput Commun Rev 37(4):373–384
13. Keränen A, Ott J, Kärkkäinen T (2009) The ONE simulator for DTN protocol evaluation. In: SIMUTools '09: 2nd international conference on simulation tools and techniques, Rome, Mar 2009

The TDOA and FDOA Algorithm of Communication Signal Based on Fine Classification and Combination

Chi Zhang
Beijing Institute of Spacecraft System Engineering, No. 104, You Yi Road, Haidian District, Beijing, China
[email protected]

Abstract. To achieve high-precision and time-efficient TDOA and FDOA estimation of communication signals in a multi-station location system, we propose an estimation algorithm based on fine classification and combination. By designing different numbers of fine classification stages, the algorithm reduces the operation time effectively, and the TDOA and FDOA estimation accuracy is improved by adopting quadratic surface fitting. Simulation results show that when the carrier-to-noise ratio (CNR) is −5 to 5 dB, the TDOA estimation accuracy is 20–100 ns and the FDOA estimation accuracy is 45–100 MHz, an improvement of 3–4 times over the traditional estimation algorithm. Meanwhile, the operation time is reduced by more than one half, depending on the number of fine classification stages.

Keywords: Fine classification · Quadric surface fitting · Estimation accuracy · Communication signal

1 Introduction

Multi-station passive location systems have the advantages of high precision, fast positioning and relatively simple reconnaissance equipment, and have become the primary method for high-precision positioning of a target [1]. In particular, TDOA and FDOA estimation of communication signals has been studied extensively in recent decades because of its wide application in passive location. In 1981, Stein proposed estimating the frequency difference between the received signals using the cross ambiguity function (CAF): the frequency corresponding to the maximum peak of the CAF is the FDOA estimate [2], but Stein did not analyze the lower bound of the algorithm's performance. Johnson, Ulman et al. made various improvements to the CAF to improve the performance of frequency difference estimation and extend its applicability [3, 4]; Robert analyzed the theoretical limit of frequency difference estimation for continuous wave signals [5]; Yeredor and Angel represented the two received signals with the discrete Fourier transform and a time-frequency matrix and obtained the Fisher information matrix (FIM), but did not give a closed-form expression of the CRLB [6]; Weiss discussed the


passive location problem under random and unknown waveform conditions, giving an estimation algorithm for the position vector of the radiation source together with the CRLB [7]. At present there are two main problems in TDOA and FDOA estimation of communication signals. Firstly, although the accuracy and theoretical limits of TDOA and FDOA estimation have been analyzed, research on multi-user signals against a complex noise background is still insufficient. Secondly, the timeliness of a TDOA and FDOA estimation algorithm determines the scope of its application, so besides improving the precision of parameter estimation we also need effective ways to further reduce the operation time of the algorithm [8]. Aiming at high-accuracy and time-efficient estimation of the TDOA and FDOA of communication signals, we propose an estimation algorithm based on fine classification and combination. The joint search range is obtained from prior information [9], enabling the fine classification, and a quadratic surface fitting method is used to achieve TDOA and FDOA estimation with high precision, high efficiency and low operation time, which further extends the application range of TDOA and FDOA estimation.

2 The TDOA and FDOA Estimation Accuracy of Communication Signal

According to S. Stein's theory, the TDOA and FDOA estimation accuracy of a communication signal can be expressed as follows:

$$\sigma_{TDOA} = \frac{1}{0.55\,B\sqrt{B_n T \gamma}}, \qquad \sigma_{FDOA} = \frac{1}{0.55\,T\sqrt{B_n T \gamma}} \tag{1}$$

where B is the signal bandwidth, Bn is the input noise bandwidth, T is the signal accumulation time, and γ is the equivalent input signal-to-noise ratio (SNR) of the measurement module. When the SNR of the received signal is high, the joint estimation of delay and frequency shift based on the CAF is effective and unbiased, and can approach the CRLB. The SNR is defined as

$$\gamma = \frac{P_r}{P_n} \tag{2}$$

where Pr is the signal power and Pn is the noise power. The noise power Pn is

$$P_n = k\,T_t\,B_n\,F \tag{3}$$

where k is the Boltzmann constant, Tt is the temperature of the receiver, and F is the noise factor. From this analysis we can see that the TDOA and FDOA estimation accuracy depends only on the signal power and is independent of the noise bandwidth Bn. At the same time, the larger the signal bandwidth B and the longer the accumulation time T, the higher the estimation accuracy, but the finer the time-frequency resolution required for the joint search. In general, the time-difference search step of the CAF should be less than 1/B and the frequency-difference search step less than 1/T, otherwise the correlation peak may be missed [10].
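As a small numerical illustration of Eq. (1) (the function name and example values are our own assumptions):

```python
import math

def stein_accuracy(bandwidth_hz, integ_time_s, noise_bw_hz, snr_linear):
    """TDOA/FDOA accuracy from Eq. (1):
       sigma_TDOA = 1 / (0.55 * B * sqrt(Bn * T * gamma))
       sigma_FDOA = 1 / (0.55 * T * sqrt(Bn * T * gamma))"""
    root = math.sqrt(noise_bw_hz * integ_time_s * snr_linear)
    sigma_tdoa = 1.0 / (0.55 * bandwidth_hz * root)
    sigma_fdoa = 1.0 / (0.55 * integ_time_s * root)
    return sigma_tdoa, sigma_fdoa

# Example: B = 4 MHz, T = 10 ms, Bn = 5 MHz, SNR = 0 dB (gamma = 1).
print(stein_accuracy(4e6, 10e-3, 5e6, 1.0))
```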

3 The Fine Classification and Combination Estimation Algorithm

3.1 The TDOA and FDOA Estimation Model

Let the sampling frequency be fs; the two received communication signals are sampled discretely and the time-frequency plane (τ, ν) is discretized [11]. According to the prior information of the received communication signals, the true pair (D, F) is known to lie in a certain area of the (τ, ν) plane. This area can be divided into a mesh by two families of parallel lines:

$$\tau = T\,t_s = T/f_s, \qquad \nu = K\,\Delta f \tag{4}$$

Here Δt and Δf are the grid lengths along the time-difference axis and the frequency-difference axis respectively, and T and K are integers. The discrete grid points can be expressed as (T t_s, K Δf), denoted (T, K). Based on this, the CAF of the discrete sampled communication signals can be expressed as [12]

$$A(T t_s, K\Delta f) = A_{dis}(T, K) = \sum_{n=0}^{N-1} y_1(n)\, y_2(n+T)\, \exp\!\Big(-j2\pi \frac{\Delta f}{f_s} K n\Big) = \sum_{n=0}^{N-1} r(n, T)\, \exp\!\Big(-j2\pi \frac{\Delta f}{f_s} K n\Big) \tag{5}$$

where r(n, T) = y_1(n) y_2(n + T); the CAF sequence {A(T, K)}, K = 0, …, N−1, is equivalent to the discrete Fourier transform of the correlation sequence {r(n, T)}, n = 0, …, N−1.

In general, the sampling frequency f_s is much larger than the signal bandwidth B of the correlation sequence {r(n, T)}. However, according to the band-pass sampling theorem, to avoid distortion the sampling frequency only needs to be greater than 2B; it does not need to be much greater than the signal bandwidth B. Meanwhile, to further reduce the operation time of the CAF, we divide the N samples of the correlation sequence {r(n, T)} into M segments of L samples each; the correlation sequence after low-pass filtering and decimation is expressed as


$$R(m, T) = \sum_{p=0}^{L-1} r(mL + p,\, T)\, h_{11}(L - 1 - p) \tag{6}$$

Here h_{11}(n) is the impulse response of the low-pass filter and L its length; the correlation sequence r(n, T) is converted into R(m, T) by decimation with interval L (the decimation ratio). The CAF can then be expressed as

$$A_{dis}(T, K) = \sum_{m=0}^{M-1} R(m, T)\, \exp\!\Big(-j2\pi \frac{\Delta f}{f_s} L K m\Big) \tag{7}$$

3.2 The Fine Classification and Combination Estimation Algorithm

By using the prior information of the communication signals to determine the joint time-frequency-difference search range, high-accuracy and time-efficient TDOA and FDOA estimation can be achieved through a stage-by-stage progressive search followed by quadric surface fitting. As analysed above, estimating the TDOA and FDOA of communication signals with the CAF requires a two-dimensional search, whose range must first be determined. In general, the search range can be obtained from the possible range of the position P and velocity V of the radiation source together with the observation parameters. Let the position and velocity of observation station 1 be P_1 and V_1, those of observation station 2 be P_2 and V_2, and those of the radiation source be P and V; the time and frequency differences are then

$$t_{21} = \frac{1}{c}\lVert P - P_2\rVert - \frac{1}{c}\lVert P - P_1\rVert \tag{8}$$

$$f_{21} = \frac{f_c}{c}\,\frac{(V - V_2)^T (P - P_2)}{\lVert P - P_2\rVert} - \frac{f_c}{c}\,\frac{(V - V_1)^T (P - P_1)}{\lVert P - P_1\rVert} \tag{9}$$

where fc is the carrier frequency of the radiation source signal and c is the speed of light. To carry out the two-dimensional time-frequency-difference search quickly and efficiently, we adopt a fine classification search strategy that proceeds from global to local, with each stage guiding the next. As the number of fine classification stages increases, the time-frequency-difference search range becomes smaller, the CAF surface becomes more precise, and a more accurate estimate of the time-frequency-difference parameters is obtained. The basic steps of the fine classification are as follows:

Step 1: Set the search stage i = 1 and determine the time-frequency search range from the prior information; the first time-frequency estimates are then


$$\hat{D}_0 = \arg\max\big\{\,\lvert A_{dis}(T, K)\rvert\,\big\} \tag{10}$$

$$\hat{F}_0 = \max\big\{A_{dis}(T, K)\big\}\cdot\frac{1}{N} \tag{11}$$

Step 2: Compute the CAF and obtain the estimates of the time difference D̂^(i) and frequency difference F̂^(i) according to the maximum of the squared CAF modulus.

Step 3: Determine the search range of the next stage from the maximum possible estimation errors of D̂^(i) and F̂^(i), and return to Step 2. Loop until the time- and frequency-difference search range is small enough, then output the estimated time and frequency differences.

Because of the limitation of the sampling period Ts, the minimum time-difference search interval can only equal Ts, so a time-difference search within a fraction of a sampling period is not possible, which limits the estimation accuracy. Therefore, we select several points near the obtained time-frequency estimate and use quadratic surface fitting to further improve the TDOA and FDOA estimation accuracy [13]. Let D̂ and F̂ be the TDOA and FDOA estimates; |A(τ, ν)|² can be expanded in a second-order Taylor series at (D̂, F̂):

$$\lvert A(\tau,\nu)\rvert^2 = \lvert A(\hat{D},\hat{F})\rvert^2 + \frac{\partial \lvert A(\tau,\nu)\rvert^2}{\partial \tau}(\tau-\hat{D}) + \frac{\partial \lvert A(\tau,\nu)\rvert^2}{\partial \nu}(\nu-\hat{F}) + \frac{\partial^2 \lvert A(\tau,\nu)\rvert^2}{\partial \tau\,\partial \nu}(\tau-\hat{D})(\nu-\hat{F}) + \frac{1}{2}\frac{\partial^2 \lvert A(\tau,\nu)\rvert^2}{\partial \tau^2}(\tau-\hat{D})^2 + \frac{1}{2}\frac{\partial^2 \lvert A(\tau,\nu)\rvert^2}{\partial \nu^2}(\nu-\hat{F})^2 \tag{12}$$

That is, near (D̂, F̂), |A(τ, ν)|² can be approximated by the quadric surface

$$\lvert A(\tau,\nu)\rvert^2 - \lvert A(\hat{D},\hat{F})\rvert^2 = s_1(\tau-\hat{D})^2 + s_2(\tau-\hat{D}) + s_3(\tau-\hat{D})(\nu-\hat{F}) + s_4(\nu-\hat{F})^2 + s_5(\nu-\hat{F}) \tag{13}$$

Near (D̂, F̂), select five discrete points, construct the corresponding equations, and solve for the coefficient vector s = [s1, s2, s3, s4, s5]^T; the point (τ_opt, ν_opt) corresponding to the maximum of |A(τ, ν)|² is then


$$\tau_{opt} = \frac{s_3 s_5 - 2 s_2 s_4}{4 s_1 s_4 - s_3^2} + \hat{D} \tag{14}$$

$$\nu_{opt} = \frac{s_2 s_3 - 2 s_5 s_1}{4 s_4 s_1 - s_3^2} + \hat{F} \tag{15}$$

τ_opt and ν_opt are the estimated values of the TDOA and FDOA.
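The surface-fitting step of Eqs. (13)-(15) can be sketched as follows. This is only an illustrative implementation under our own assumptions: the five sample points are taken around (not at) the coarse peak so the 5×5 system is non-singular, and the CAF value at the coarse peak is passed in separately.

```python
import numpy as np

def refine_peak(samples, d_hat, f_hat, a_peak):
    """Quadratic-surface refinement of the CAF peak (Eqs. (13)-(15)).
    samples: five (tau, nu, |A|^2) points near the coarse peak (d_hat, f_hat);
    a_peak: |A(d_hat, f_hat)|^2.  Fits
      |A|^2 - a_peak ~ s1*x^2 + s2*x + s3*x*y + s4*y^2 + s5*y,
    with x = tau - d_hat, y = nu - f_hat, and returns the surface maximiser."""
    rows, rhs = [], []
    for tau, nu, val in samples:
        x, y = tau - d_hat, nu - f_hat
        rows.append([x * x, x, x * y, y * y, y])
        rhs.append(val - a_peak)
    s1, s2, s3, s4, s5 = np.linalg.solve(np.array(rows), np.array(rhs))
    den = 4.0 * s1 * s4 - s3 ** 2
    tau_opt = (s3 * s5 - 2.0 * s2 * s4) / den + d_hat   # Eq. (14)
    nu_opt = (s2 * s3 - 2.0 * s5 * s1) / den + f_hat    # Eq. (15)
    return tau_opt, nu_opt
```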

4 Simulation and Analysis Results

The two received communication signals are taken to be direct-sequence spread spectrum signals, with different time delays and Doppler frequency shifts added in the receiving channels. As noted above, TDOA and FDOA estimation is mainly determined by the received signal power and the algorithm operation time. With the operation time limited to 500 ms, five discrete points used for quadratic surface fitting, and the carrier-to-noise ratio (CNR) of the two received signals ranging from −5 to 5 dB, the TDOA and FDOA estimation accuracy (RMS) of the fine classification and combination algorithm and of the traditional CAF algorithm after 200 independent trials is shown in Figs. 1 and 2.

Fig. 1. The TDOA estimation accuracy (RMS) of two algorithms

Simulation results show that the higher the CNR, the higher the TDOA and FDOA estimation accuracy. For the fine classification and combination algorithm, the TDOA estimation accuracy is 20–100 ns and the FDOA estimation accuracy is 45–100 MHz over the different CNR values, much better than the traditional CAF algorithm: the TDOA estimation accuracy is improved by a factor of 3 and the FDOA estimation accuracy by a factor of 4.


The signal accumulation time also affects the TDOA and FDOA estimation accuracy. With the CNR of the two received signals set to 0 dB, the TDOA and FDOA estimation accuracy (RMS) of the fine classification and combination algorithm and of the traditional CAF algorithm is shown in Figs. 3 and 4.

Fig. 2. The FDOA estimation accuracy (RMS) of two algorithms

Fig. 3. The TDOA estimation accuracy (RMS) of two algorithms


Fig. 4. The FDOA estimation accuracy (RMS) of two algorithms

The simulation results show that the longer the communication signals are accumulated, the higher the FDOA estimation accuracy, approaching its lower bound; the accumulation time, however, has no significant effect on the TDOA estimation accuracy. For all accumulation times, the TDOA and FDOA estimation accuracy (RMS) of the fine classification and combination algorithm is better than that of the traditional CAF algorithm. With the number of fine classification stages varied from zero to five, the operation time of the two algorithms versus the signal accumulation time is shown in Fig. 5.

Fig. 5. The operation time of the two algorithms


The simulation results show that when the number of fine classification stages is less than five, the operation time of the fine classification and combination algorithm is shorter than that of the traditional CAF algorithm, giving good real-time performance. The fewer the fine classification stages, the shorter the operation time, but the worse the TDOA and FDOA estimation accuracy. In practical applications, a trade-off must therefore be made between the number of fine classification stages and the algorithm operation time.

5 Summary

To achieve high-accuracy and time-efficient TDOA and FDOA estimation, we have proposed an estimation algorithm based on fine classification and combination. The joint search range is determined from prior information, and high-accuracy, low-operation-time time-frequency estimation is achieved through fine classification and quadric surface fitting. Simulation results show that the TDOA and FDOA estimation accuracy is improved by 3–4 times compared with the traditional CAF algorithm, while, depending on the number of classification stages, the operation time is reduced by more than one half.

References 1. Ho KC, Chan YT (1997) Geolocation of a known altitude object from TDOA and FDOA measurements. IEEE Trans Aerosp Electron Syst 33(3):770–783 2. Stein S (1981) Algorithms for ambiguity function processing. IEEE Trans Acoust Signal Process 29(3):1467–1472 3. Weiss LG (1994) Wavelets and wideband correlation processing. IEEE Signal Process Mag:13–32 4. Robert JU, Evaggelos G (1999) Wideband TDOA/FDOA processing using summation of short-time CAF’s. IEEE Trans Signal Process 47(12):193–200 5. Robert KO (1989) Frequency difference of arrival accuracy. IEEE Trans Acoust Signal Process 37(2):306–308 6. Yeredor A, Angel E (2011) Joint TDOA and FDOA estimation: a conditional bound and its use for optimally weighted localization. IEEE Trans Signal Process 59(4):1612–1623 7. Weiss AJ (2011) Direct geolocation of wideband emitters based on delay and doppler. IEEE Trans Signal Process 59(6):2513–2521 8. Ho KC (2012) Bias reduction for an explicit solution of source localization using TDOA. IEEE Trans Signal Process 60(5):2101–2114 9. Yu H, Huang G, Gao J et al (2012) An efficient constrained weighted least squares algorithm for moving source location using TDOA and FDOA measurements. IEEE Trans Wireless Commun 11(1):44–47 10. Xu B, Sun G, Yu R et al (2013) High-accuracy TDOA-based localization without time synchronization. IEEE Trans Parallel Distrib Syst 24(8):1567–1576 11. Meng W, Xie L, Xiao W (2013) Decentralized TDOA sensor pairing in multihop wireless sensor networks. IEEE Signal Process Lett 20(2):181–184


12. Huang J, Wan Q (2012) Analysis of TDOA and TDOA/SS based geolocation techniques in a non-line-of-sight environment. J Commun Netw 14(5):533–539 13. Hara S, Anzai D, Yabu T et al (2013) A perturbation analysis on the performance of TOA and TDOA localization in mixed LOS/NLOS environments. IEEE Trans Commun 61(2):679–689

An Adaptive DFT-Based Channel Estimation Method for MIMO-OFDM

Xiao Deng and Xiao Ming Wu
The 28th Research Institute of China Electronics Technology Group Corporation, Nanjing, China
[email protected], [email protected]

Abstract. An adaptive DFT-based channel estimation method for MIMO-OFDM systems is proposed. It enhances the precision of channel estimation, improves the demodulation performance of the receiver, and reduces the bit error rate. Experimental results show that the proposed method has better estimation performance and adapts to different channel and SNR conditions.

Keywords: DFT

· Channel estimation · MIMO-OFDM

1 Introduction

Pilot-based channel estimation is a common approach for OFDM systems: known data is inserted into certain subcarriers and symbol blocks, as in LTE, and the channel state information obtained at the pilots is used to estimate the channel state information on all subcarriers. The channel response at the pilots is generally estimated using least squares (LS) or minimum mean square error (MMSE). For a wireless system whose channel varies slowly in time, a training sequence can instead be inserted in the frame header, as in IEEE 802.11ac; the channel state information on all subcarriers is estimated from this training sequence so that the data field can be equalized and recovered. Among the estimation criteria, LS estimation has low computational complexity and is often used in practical systems, but its performance is also poor. A DFT-based correction algorithm with time-domain processing can effectively improve the channel estimation performance at the cost of a slight increase in IFFT and FFT computation [1–4]. However, the DFT correction algorithm also removes some leaked channel energy while suppressing noise [5–8]. To address this problem, this paper proposes an adaptive denoising channel estimation correction algorithm based on the DFT, which analyses the amount of energy leakage and removes noise while preserving the leakage energy, thereby improving the performance of the channel estimation correction.
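A rough, hypothetical sketch of this kind of adaptive DFT-based correction is shown below. The function name, threshold rule and parameters are our own illustrative assumptions, not the paper's exact implementation: the LS estimate is transformed to the time domain, the noise power is estimated from the taps that cannot contain channel energy, and only edge taps above an adaptive threshold are kept.

```python
import numpy as np

def adaptive_dft_denoise(h_ls, n_cp, alpha=6.0):
    """Adaptive DFT-based correction of an LS channel estimate (one antenna pair).
    h_ls: length-N LS frequency response.  Taps between n_cp and N-n_cp are
    treated as pure noise; their power sets the threshold alpha*sigma^2 used
    to keep only significant taps near the edges (where leakage can sit)."""
    n = len(h_ls)
    h_time = np.fft.ifft(h_ls)                      # channel impulse response
    sigma2 = np.mean(np.abs(h_time[n_cp:n - n_cp]) ** 2)   # noise power estimate
    mask = np.zeros(n, dtype=bool)
    edges = np.r_[0:n_cp, n - n_cp:n]
    mask[edges] = np.abs(h_time[edges]) ** 2 > alpha * sigma2
    h_time_clean = np.where(mask, h_time, 0.0)
    return np.fft.fft(h_time_clean)                 # corrected frequency response
```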


2 DFT Channel Estimation Principle

The DFT correction method based on LS channel estimation is widely used in practical communication systems because of its simple structure and low computational complexity compared with MMSE.

2.1 LS Channel Estimation

LS estimation does not consider the effect of noise; the following cost function is minimized:

$$J(H) = \lVert Y - XH\rVert^2 = (Y - XH)^H (Y - XH) = Y^H Y - Y^H X H - H^H X^H Y + H^H X^H X H \tag{1}$$

Setting the derivative of the cost function with respect to H to zero,

$$\frac{\partial J(H)}{\partial H} = 0, \qquad -X^H Y + X^H X H = 0 \tag{2}$$

The solution of the LS channel estimation is

$$H_{LS} = (X^H X)^{-1} X^H Y = X^{-1}(X^H)^{-1} X^H Y = X^{-1} Y \tag{3}$$

2.2 DFT-Based LS Channel Estimation

Since LS channel estimation does not take the noise into account, the estimate is strongly affected by noise; in particular, at low SNR the MSE of the LS estimator is significantly larger than that of the MMSE estimator. The literature [9] proposes a DFT-based method to improve the performance of LS channel estimation; the implementation process is shown in Fig. 1. The main idea is that the multipath delay spread of the channel is generally smaller than the cyclic prefix length Ncp. The LS channel estimate is transformed into the time domain by an IDFT to obtain the channel impulse response; the multipath components within the first Ncp samples are kept, while everything beyond Ncp is regarded as noise and filtered out. This is equivalent to filtering in the time domain, after which a DFT transformation is performed to


restore the frequency-domain channel estimate. In this way the DFT correction method improves the performance of LS channel estimation by filtering out noise in the time domain.

Fig. 1. DFT-based channel estimation

$$h(n) = \frac{1}{N}\sum_{k=0}^{N-1} H_{LS}(k)\, e^{j2\pi nk/N}, \qquad 0 \le n \le N-1 \tag{4}$$

Since the multipath delay is finite, h(n) beyond the multipath delay is regarded as noise or error. Only the first Ncp samples of h(n) are retained, and an FFT then gives the corrected channel estimate H′:

$$\tilde{h}_{LS}(n) = \begin{cases} h(n), & 0 \le n \le N_{cp}-1 \\ 0, & N_{cp} \le n \le N-1 \end{cases} \tag{5}$$

3 Proposed Methods After analyzing the LS channel estimation and performing IDFT, the position with strong energy leakage appears in the pre-Ncp length and the end Ncp length in the time domain. Figure 2 illustrates this leakage for a special case. The signal between Ncp þ 1 and N  Ncp  1 is mainly noise. Therefore, the noise power r2 can be estimated by using the signal of this segment. By setting a coefficient a, Filter threshold is ar2 . Check the first Ncp samples and the last Ncp samples of the channel estimation value after IDFT. If the sample is larger than the threshold, it is considered to be the multipath of the channel, and the sample value is reserved. Others are considered as noise signals and are filtered out. The power of the noise in the channel estimation is different under different SNR. The method can dynamically adjust the threshold according to the

482

X. Deng and X. M. Wu

estimation of the noise power, so the performance can be significantly improved under different SNR.

Fig. 2. Channel estimation IDFT

The architecture for Propose method is shown in Fig. 3.

Fig. 3. Proposed method

Assume that a MIMO-OFDM system has NT transmit antenna and NR receive antenna, and Noise and signal estimates are as follows p2i;j

1 ¼ 2Ncp r2i;j ¼

NX cp 1 n¼0

N 1  X    hi;j ðnÞ2 þ hi;j ðnÞ2

! ð6Þ

NNcp

NN cp 1 X   1 hi;j ðnÞ2 N  2Ncp n¼Ncp

ð7Þ

An Adaptive DFT-Based Channel Estimation Method for MIMO-OFDM

483

N is the FFT length and Ncp is the cyclic prefix length, i 2 ð1; 2    ; NT Þ, j 2 ð1; 2    ; NR Þ Define SNRi;j ¼

pi;j ri;j

ð8Þ

When SNR is large, the LS channel estimation is less affected by noise, so a threshold b is introduced.  ai;j ¼

a 0

SNRi;j \b SNRi;j  b

ð9Þ

  e ¼ IFFT ~hðtÞ H

ð10Þ

4 Simulation Results and Analysis

The MIMO-OFDM system is simulated with the ideal channel and with the LS, LS-DFT and proposed LS-DFT-DF channel estimation methods. The coding, modulation, multiplexing and channel in the simulation are based on an equivalent low-pass model. The simulation parameters are shown in Tables 1 and 2.

Table 1. System parameters

Parameters           | Value
TX antenna number    | 4
RX antenna number    | 4
Space layer number   | 4
Frame length         | 4096 byte
Frame number         | 2000
Channel              | Rayleigh fading channel
Channel equalization | MMSE
FFT                  | 256
Channel estimation   | LS/LS-DFT/LS-DFT-DF
Modulation           | BPSK, QPSK, 16QAM
Channel coding       | LDPC R = 1/2, R = 3/4, R = 5/8

Table 2. Modulation and coding

MCS index | Modulation | Coding rate
0         | BPSK       | 1/2
1         | QPSK       | 1/2
2         | QPSK       | 3/4
3         | 16QAM      | 1/2

484

X. Deng and X. M. Wu

Different threshold coefficients α were simulated, comparing the results for α equal to 2, 4, 6, 8 and 10, as shown in Fig. 4. With the proposed method, as the threshold coefficient α increases, the bit error rate of the system first decreases and then increases. When α = 2 its performance is close to that of LS-DFT; when α = 6 the bit error performance is the best among the tested thresholds. The reason is that a smaller coefficient α gives a lower filtering threshold, so less noise is filtered out; as α increases, the threshold rises and more noise is removed, improving performance; as α continues to rise, the threshold becomes so high that part of the useful signal is filtered out as noise, and system performance degrades. For a 4 × 4 MIMO system with 4 spatial streams, BPSK modulation and rate-1/2 LDPC coding, at a bit error rate of 10^-4 the proposed method gains about 3.3 dB over LS channel estimation and about 0.9 dB over LS-DFT. Adaptive time-domain filtering therefore reduces the influence of noise on channel estimation more effectively than the traditional LS-DFT method.

Fig. 4. BER performance of propose method with threshold setting a

The simulation was performed with the threshold coefficient set to 6, comparing the performance of the three channel estimates of LS, LS-DFT and LS-DFT-DF. As shown in Fig. 5 below, the LS-DFT method has performance degradation at high SNR, and bit



error floor occurs. Since the influence of noise decreases at high SNR, the energy leakage becomes stronger than the noise, and LS-DFT then filters out more leakage than noise, so its performance deteriorates. The LS-DFT-DF method can dynamically adjust the filtering threshold and retain more of the leakage components. At low SNR, where the noise power is much higher than the leakage power, the LS-DFT-DF mode filters out more noise than the LS-DFT mode. The system using the LS-DFT-DF method therefore performs better than LS and LS-DFT under different modulation-and-coding and SNR conditions.

Fig. 5. The BER comparison for various channel estimation methods

References 1. Kang Y, Kim K, Park H (2007) Efficient DFT-based channel estimation for OFDM systems on multipath channels. Commun IET 1(2):197–202 2. Tian Z, Zhou Q, Zhou M et al (2013) Channel estimation for LTE downlink using RLS-based threshold judgement. In: Wireless and optical communication conference IEEE, pp 272–277 3. Yang B, Letaief KB, Cheng RS et al (2000) Windowed DFT based pilot-symbol-aided channel estimation for OFDM systems in multipath fading channels. In: Vehicular technology conference proceedings, VTC 2000-Spring Tokyo, 2000 IEEE, IEEE Xplore, p 1480 4. Fukuhara H, Yuan H, Takeuchi Y et al (2003) A novel channel estimation method for OFDM transmission technique under fast time-variant fading channel. In: IEEE 57th VTC, Jeju, vo1 4, pp 2343–2347 5. van de Beek JJ, Edfors O, Sandell M, Wilson SK, Brjesson PO (1995) On channel estimation in OFDM systems. In: Vehicular technology conference 1995, pp 815–819



6. Chini A, Wu Y, El-Tanany M, Mahmoud S (1998) Filtered decision feedback channel estimation for OFDM-based DTV terrestrial broadcasting system. IEEE Trans Broadcast 44 (1):2–11 7. Li Y, Seshadri N, Ariyavisitakul S (1999) Channel estimation for OFDM systems with transmitter diversity in mobile wireless channels. IEEE J Sel Areas Commun 17(3):461–470 8. Li Y, Cimini LJ Jr, Sollenberger NR (1998) Robust channel estimation for OFDM systems with rapid dispersive fading channels. IEEE Trans Commun 46(7):902–915 9. Minn H, Bhargava VK (2000) An investigation into time-domain approach for OFDM channel estimation. IEEE Trans Broadcast 46(4):240–248

A Novel Gradient L0-Norm Regularization Image Restoration Method Based on Non-local Total Variation

Mingzhu Shi
Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin, China
[email protected]

Abstract. This paper proposes a novel image restoration method based on non-local total variation (TV). First, the image is divided into two types of regions by the gradient L0 norm: the regions containing edges and flat areas are regularized by the local TV term, while the regions containing rich image details are regularized by the non-local TV term. Then, to simplify the numerical algorithm, the alternating direction method of multipliers (ADMM) is adopted to optimize the objective function. Finally, comparative experiments with several recent state-of-the-art methods verify the performance of the proposed method. The experimental results show that the proposed method is more efficient and achieves a good balance between easing staircase effects and retaining image details.

Keywords: Non-local total variation

· L0 norm · Image restoration

1 Introduction

Image restoration has wide applications in imaging science and image processing tasks such as image denoising, image deblurring and image inpainting. The common degradation model is usually based on a spatially-invariant system and is formulated as

$$g = h * f + n \tag{1}$$

Here, g is the degraded image, f denotes the original image, and n denotes the additive noise, assumed to be Gaussian white noise with zero mean; h represents the blur kernel, usually modeled as a blurring matrix, which arises from the degraded imaging system. According to whether the blur kernel is known, the image restoration problem is divided into blind and non-blind restoration; in this work we assume that the PSF is known and focus only on the non-blind image restoration problem. Recently, the non-local total variation method has been successfully applied to many image processing problems [1, 2]. The non-local TV model uses the pixel information of the whole image rather than only the local neighbouring pixels.
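A minimal simulation of the degradation model of Eq. (1) is sketched below (function name and example values are our own assumptions; the blur/noise settings roughly mirror the experimental conditions described later):

```python
import numpy as np
from scipy.ndimage import convolve

def degrade(image, kernel, noise_sigma, rng=None):
    """Simulate Eq. (1): g = h * f + n, with a spatially-invariant blur
    kernel h and zero-mean Gaussian noise n of standard deviation noise_sigma."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = convolve(image, kernel, mode="reflect")
    return blurred + rng.normal(0.0, noise_sigma, image.shape)

# Example: 9x9 average blur and sigma = 3 noise on a simple test image f.
f = np.zeros((64, 64)); f[20:44, 20:44] = 255.0
h = np.ones((9, 9)) / 81.0
g = degrade(f, h, noise_sigma=3.0)
```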



In the non-local method, non-local self-similarity is constrained within the variational framework to recover image details, which is the major difference from the local TV model. However, if self-similarity is used as the only constraint, similar image structures still cannot be estimated accurately. Inspired by [3], both the non-local self-similarity constraint and the local TV model are imposed on the entire image. Moreover, compared with the local total variation, the non-local total variation is more time-consuming and requires more efficient algorithms, because it must weight the differences between pixels over the whole image, and existing combination methods are still highly time-consuming [4, 5]. To overcome this drawback, a popular numerical algorithm, the alternating direction method of multipliers (ADMM), has been widely used in various image restoration optimization problems [6]. This paper is organized as follows. Section 2 introduces the definition of the non-local total variation and the strategy of using gradient L0-norm regularization to divide the image into two types of regions. Section 3 presents comparative experiments to illustrate the performance of the proposed method. Finally, a conclusion is given in Sect. 4.

2 Models and Methods

2.1 The Non-local TV Model

Here, the definitions and notations of the local total variation are introduced referring to the definition given in the literature [7]. Let X  R2 , x 2 X, uðxÞ is assumed as a real function X ! R and is defined as a weight function with non-negative symmetric xðx; yÞ ¼ xðy; xÞ. rx uðxÞ denotes the local gradient and the vector of all partial differences rx uðxÞ, at x is defined as: rx uðx; yÞ ¼ ðuðyÞ  uðxÞÞ

pffiffiffiffiffiffiffiffiffiffiffiffiffiffi xðx; yÞ;

ð2Þ

where xðx; yÞ denotes the weight function between x and y. We define the graph divergence divx of a vector q as Z divx qðxÞ ¼

X

ðqðx; yÞ  qðy; xÞÞ

pffiffiffiffiffiffiffiffiffiffiffiffiffiffi xðx; yÞdy;

ð3Þ

As the non-local means weight function, xðx; yÞ is defined (

) Gr  ðkf ðx þ Þ  f ðy þ ÞkÞ2 xðx; yÞ ¼ exp  : 2h2

ð4Þ

Here, Gr denotes the Gaussian kernel, r is the standard deviation, h represents the noise-dependent filter parameter, and f ðx þ Þ denotes a square patch centered by point x. When the reference image f is known, the nonlocal means filter is a linear operator. Now, the nonlocal TV norm with the isotropic L1 norm of the weight graph gradient rx uðxÞ can be defined as

A Novel Gradient L0-Norm Regularization

Z TVx ðuÞ ¼

X

ffi Z sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi Z ðuðxÞ  uðyÞÞ2 xðx; yÞdydx: jrx uðxÞjdx ¼ X

X

489

ð5Þ

And we define the discrete gradient operator by  Dð1Þ f ¼

fi þ 1;j  fi;j fi þ 1;j  fm;j

if i\m ;D f ¼ if i ¼ m ð2Þ



fi;j þ 1  fi;j fi;1  fi;n

if j\n : if j ¼ n

Formulating the local gradient and divergence concepts into the local form is the main purpose of the non-local regularization. In general, a reference image that is as similar as possible to the original image and needs to obtain a more accurate weight. Unsatisfactory, the original image is degraded during the imaging process, it is hard to obtain accurate weights between degraded pixels. Thus, Therefore, we pre-process the image to calculate the weight. In this paper, we adopt an image smoothing scheme that optimizes the L0 norm of the image gradient to pre-process the degrade image. 2.2

Image Division Using L0-Norm of Image Gradient

Inspired by ref. [8], we use a similar image smoothing scheme that is L0-norm of image gradient to preprocess the image. As shown in Fig. 1, we take the experiments on ‘Barbara’ as an example. Figure 1a is the original image, Fig. 1b contains the salient edges and flat regions of (a), Details of (a) is obtained by (a) minus (b) as shown in

(a)Original image

(b) Edge and flat region of (a)

(d) Blur image

(e) Edge and flat region of (d)

(c) Details of (a)

(f) Details of (d)

Fig. 1. Image regions division using L0-norm of image gradient

490

M. Shi

Fig. 1c is the blurred image and the blur kernel with the size 19  19 are shown in Fig. 1d. Figure 1e show edges and constant regions of (d), an d details of Fig. 1d are shown in (f). There is not much significant difference in extracted edges and constant regions by comparing the information in Fig. 1b, e, although the blurred image (d) is seriously damaged. Moreover, from the comparison of Fig. 1c, f, details in the blurred image are more sharp than the original image in visual effect. 2.3

The Proposed Model

In this paper, we propose a novel non-local TV based image restoration model as follows: 

      k 2   min kh  fs þ h  fd  gk2 þ a u Dð1Þ fs þ u Dð2Þ fs þ krx fd k ; fs ;fd 2

ð6Þ

where k and a are regularization parameters. fs and fd are divided areas from the above gradient extraction scheme. fs denotes the salient edges and flat regions, and fd represents the image details. The ADMM is applied for numerical optimization and several auxiliary variables should be introduces, such as v1 ¼ Dð1Þ f , v2 ¼ Dð2Þ f and z ¼ f , and the Augmented Lagrangian function of the Eq. (6) is defined as follows: LA ðf ; v; z; e; gÞ X k b ¼ kh  fs þ h  fd  gk22 eðz  fs Þ þ 1 kz  fs k22 2 2      b2 2   gðv  Dfs Þ þ kv  Dfs k2 þ a u Dð1Þ fs þ u Dð2Þ fs þ krx fd k : 2

ð7Þ

where b1 ; b2 are introduced linear constraints for the auxiliary variables z and v. e and g are the corresponding Lagrange multipliers. Starting loop operation, f ¼ f k ; e ¼ ek and g ¼ gk and applying ADMM yields the iterative scheme: • Fixing fd to search fsk þ 1 

   k 2  k c  Dð1Þ fs  vk þ g1  fsk þ 1 ; vk þ 1 ¼ arg min kh  fs þ h  fd  gk22 þ ð1Þ c 2 2 2  fs ) ! 2  2        gk2  ek  k k       þ Dð2Þ fs  vð2Þ þ  þ fs  z þ  þ a u Dð1Þ fs þ u Dð2Þ fs c 2 c 2

ð8Þ In order to optimize the Eq. (8), we use the Fast Fourier Transform (FFT) to resolve such a least squares problem for fs is a least square problem. It only requires Oðn logðnÞÞ arithmetic operations. The optimization solution of Eq. (8) is

A Novel Gradient L0-Norm Regularization

fsk þ 1 ¼

HT g þ



i k gk gk DTð1Þ vk1  c1 þ DTð2Þ vk2  c2 þ zk  ec h i H T H þ kc DTð1Þ Dð1Þ þ DTð2Þ Dð2Þ þ I

c k

491

h

ð9Þ

• Fixing fsk þ 1 to obtain fdk þ 1 : fdk þ 1 ¼ arg min



fs

  k h  f k þ 1 þ h  fd  g2 þ krx fd k : s 2 2

ð10Þ

This is a typical image restoration problem. In each iteration, fsk þ 1 is used as a reference image in the image smoothing scheme. It is very effective to improve the accuracy of the algorithm [9, 10].

3 Experimental Results

Figure 2 shows experiments on the blurred "Barbara" image; the face region is cropped so that the details can be seen more clearly. Figure 2a is the blurred image, degraded by a blur kernel of size 19 × 19.

Fig. 2. Experiments on the blurred image: (a) blur image (PSNR = 18.17 dB); (b) the local/non-local TV [9] (PSNR = 22.89 dB, t = 26.77 s); (c) the non-local TV [5] (PSNR = 23.34 dB, t = 14.32 s); (d) our method (PSNR = 24.93 dB, t = 13.03 s)


The experimental result of the comparison method [9] is shown in Fig. 2b: details are retained but the computing time is too long. As shown in Fig. 2c, the method of [5] obtains a relatively satisfactory recovery and saves computing time effectively. The result of our proposed method is shown in Fig. 2d; it can be seen that our method eases staircase effects and retains more details. The blurred and noisy case is also verified, as shown in Fig. 3. Figure 3a is the blurry and noisy image "Cameraman", degraded by a 9 × 9 average kernel and Gaussian white noise with σ = 3. In Fig. 3b, staircase effects are relieved but the method takes too long. Comparing Fig. 3c, d, the proposed method obtains a better result and saves more computing time.

Fig. 3. Results on the blurry and noisy image: (a) blurred and noisy image (PSNR = 21.22 dB); (b) the local/non-local TV [9] (PSNR = 24.57 dB, t = 10.5 s); (c) the non-local TV [5] (PSNR = 25.69 dB, t = 6.3 s); (d) the proposed model (PSNR = 27.69 dB, t = 5.2 s)

4 Conclusions

A novel non-local total variation based image restoration method is proposed in this paper. To use the image information properly, the image is divided into two regions using L0-norm regularization of the image gradient: the salient edges and constant regions are handled by the local TV term, while the non-local term is applied to the details. In the numerical optimization, the ADMM algorithm is applied to the energy function to improve efficiency, and we discuss the


parameter selection criterion for two key parameters. Comparison with other state-of-the-art methods shows that the proposed method has higher efficiency and achieves a good balance between easing staircase effects and retaining image details.

Acknowledgements. Thanks to the reviewers for their valuable comments. This work is funded by the National Science Foundation of China (Grant No. 61501328).

References 1. Chen D, Cheng L (2010) Alternative minimisation algorithm for non-local total variational image deblurring. Image Process IET 4(5):353–364 2. Shi MZ, Liu SQ (2015) PSF estimation via gradient cepstrum analysis for image deblurring in hybrid sensor network. Int J Distrib Sens Netw 10:1–11 3. Yun S, Woo H (2011) Linearized proximal alternating minimization algorithm for motion deblurring by nonlocal regularization. Elsevier Science Inc. 44 (6):1312–1326 4. Xu H, Sun Q, Luo N et al (2013) Iterative nonlocal total variation regularization method for image restoration. Plos One 8(6):e65865 5. Tan DJ, Zhang A (2013) Nonlocal total variation based image deblurring using split bregman method and fixed point iteration. Appl Mech Mater 335:875–882 6. Shi MZ, Feng L (2017) A novel local and nonlocal total variation combination method for image restoration in wireless sensor networks. Eurasip J Wireless Commun Netw 1:167 7. Gilboa G, Osher S (2008) Nonlocal operators with applications to image processing. Siam J Multiscale Modell Simul 7(3):1005–1028 8. Xu L, Lu C, Xu Y et al (2011) Image smoothing via L 0 gradient minimization. ACM Trans Graph 30(6):174 9. Louchet C, Moisan L (2011) Total variation as a local filter. Siam J Imaging Sci 4(2):651– 694 10. Shi MZ, Han TT, Liu SQ (2016) Total variation image restoration using hyper-Laplacian prior with overlapping group sparsity. Sig Process 126:65–76

Study on Interference from 5G System to Earth Exploration Satellite Service System in High Frequency

Yi Wang, Baoju Zhang, and Wei Wang

Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin 300387, China
[email protected], [email protected]

Abstract. With the advent of the 5G era, research on 5G has gradually become a hot topic around the world. This paper analyzes the interference coexistence between the 5G system and the EESS (Earth Exploration Satellite Service) system from 24.25 to 27.5 GHz and obtains the aggregate interference power in different areas. In addition, this paper obtains the relationship between interference power and protection distance in the urban area when the antenna of the earth station is at different off-axis angles. According to the protection criterion and the simulation results, the minimum protection distance for the coexistence of the two systems in the same frequency band can be calculated.

Keywords: 24.25–27.5 GHz · 5G · Earth exploration satellite service · Protection distance · Antenna angle · Different areas



1 Introduction

With the coming of the era of the Internet of Everything, mobile communication, as one of the indispensable areas of daily life, is about to face unprecedented challenges. The increasing number of users on the Internet and the Internet of Things, as well as the explosive growth in traffic and data volume, are undoubtedly placing higher demands on future mobile communication systems [1]. In this environment, the fifth-generation mobile communication system has become one of the key research and development targets in the world. At the same time, the ITU (International Telecommunication Union) officially named the fifth-generation mobile communication system IMT-2020 in 2015. According to the vision in the 5G technology white paper, 5G will enable users to experience ultra-high access rates, ultra-low latency, a large number of connections, ultra-high traffic density and ultra-high mobility in the future [2]. However, a new problem arises with it: the scarcity of spectrum resources forces the 5G system to coexist with some original services in the same frequency band. Because the services in the low frequency bands are already too dense, there are not enough spectrum resources available. Therefore, the ITU turned its attention to the high frequency bands and held the first meeting of the ITU-R TG5/1 working group in Geneva in 2016. The meeting mainly




discussed how to make the 5G system coexist with the original services in the same high frequency band [3]. Against this background, this paper focuses on the interference from the 5G system to the EESS system in 24.25–27.5 GHz and calculates the aggregate interference from the 5G system to the EESS system in different areas, namely urban, suburb and countryside. Besides, this paper obtains the relationship between interference power and protection distance in the urban area when the antenna of the earth station is at different off-axis angles. According to the protection criterion and the simulation results, the minimum protection distance for the coexistence of the two systems in the same frequency band can be calculated.

2 System Modeling and Analytic Procedure

2.1 The Interference Model

In this paper, the interference model considers the aggregate interference from the 5G base stations in the circle closest to the earth station to the earth station of the EESS system in 24.25–27.5 GHz. The interference model is shown in Fig. 1.

Fig. 1. Aggregate interference model

2.2 Propagation Model

When electromagnetic waves are transmitted in the atmosphere, they are usually accompanied by a certain path loss. Therefore, we often use mathematical relations to establish a corresponding propagation model, indicating the relationship between the



propagation loss of radio waves and various factors [4]. The propagation model used in this paper is the LOS propagation model. The specific LOS model is given by formula (1):

$$PL_0 = 20\lg d + 20\lg f + 32.4 + A_g \quad (1)$$

where $PL_0$ is the path loss value (dB), $d$ is the transmission distance between the two systems (km), $f$ is the operating frequency of the electromagnetic wave (MHz), and $A_g$ is the atmospheric absorption loss (dB).
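As a minimal sketch of formula (1), assuming only that the distance is given in km and the frequency in MHz as stated above, the path loss can be evaluated as follows (the 9.1 km / 26 GHz numbers are purely illustrative):

```python
import numpy as np

def los_path_loss_db(d_km, f_mhz, a_g_db=0.0):
    # Eq. (1): PL0 = 20*lg(d) + 20*lg(f) + 32.4 + Ag, d in km, f in MHz, Ag in dB
    return 20.0 * np.log10(d_km) + 20.0 * np.log10(f_mhz) + 32.4 + a_g_db

# e.g. a 9.1 km path at 26 GHz with atmospheric absorption neglected
print(los_path_loss_db(9.1, 26000.0))   # roughly 140 dB
```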

2.3 Method of Deterministic Calculation

When studying co-channel interference, two typical interference scenarios are commonly used. The first is single-link interference, i.e. the interference from one 5G base station to a satellite earth station. The second is aggregate interference, i.e. the interference from multiple 5G base stations to one satellite earth station. The aggregate interference can be divided into two types, tunneling and remotening; the deterministic calculation method chosen in this paper is based on tunneling. The specific calculation of the aggregate interference is given by formula (2):

$$I = 10\log_{10}\sum_{n=1}^{N_{BS}} 10^{[P_t + A_A(\varphi,\theta) + G_R - PL_0 - L_T - L_P - C_L]/10} \quad (2)$$

where $I$ is the aggregate interference power (dBm) received by the satellite earth station receiver, $n$ is the index of the 5G base station, $N_{BS}$ is the total number of 5G base stations, $P_t$ is the total transmission power (dB) of the 5G system, $A_A(\varphi,\theta)$ is the beamforming antenna gain of the 5G transmitter, $G_R$ is the antenna gain of the earth station receiver, $PL_0$ is the path loss, $L_T$ is the feeder loss (dB) of the 5G transmitter, $L_P$ is the polarization loss (dB), and $C_L$ is the ground loss (dB).

$$N_{BS} = 2\pi d_{protection}/d_{spacing} \quad (3)$$

where $d_{protection}$ is the protection distance between the IMT base station and the satellite earth station, and $d_{spacing}$ is the distance between two adjacent base stations.

$$P_t = G_{SINGLE} + 10\lg(N_H \times N_V) \quad (4)$$

where $G_{SINGLE}$ is the transmit power of a single antenna element, and $N_H$ and $N_V$ are the numbers of y-axis and z-axis array antenna elements, respectively [5].
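A rough sketch of formulas (2)-(4) is given below, assuming for simplicity that every base station in the closest ring contributes the same term and that boresight (maximum) antenna gains are used, so the result is a pessimistic upper bound; the polarization and ground loss values are illustrative placeholders rather than the paper's settings:

```python
import numpy as np

def aggregate_interference_dbm(d_protection_km, d_spacing_km, f_mhz,
                               g_single_dbm=10.0, n_h=8, n_v=8,
                               a_a_db=23.16, g_r_db=59.8,
                               l_t_db=3.0, l_p_db=3.0, c_l_db=0.0, a_g_db=0.0):
    # Eq. (3), rounded up to an integer number of base stations in the ring
    n_bs = int(np.ceil(2.0 * np.pi * d_protection_km / d_spacing_km))
    # Eq. (4): total transmit power from one element and the array size
    p_t = g_single_dbm + 10.0 * np.log10(n_h * n_v)
    # Eq. (1): LOS path loss at the protection distance
    pl0 = 20.0 * np.log10(d_protection_km) + 20.0 * np.log10(f_mhz) + 32.4 + a_g_db
    # Eq. (2): power-sum of identical single-station contributions
    per_station = p_t + a_a_db + g_r_db - pl0 - l_t_db - l_p_db - c_l_db
    return 10.0 * np.log10(n_bs * 10.0 ** (per_station / 10.0))

print(aggregate_interference_dbm(9.1, 0.5, 26000.0))
```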



3 Simulation Experiment and Result Analysis

3.1 Parameters of Simulation Experiment

3.1.1 Parameters of 5G Base Station
According to the ITU-R WP5D research report [6] and Recommendation ITU-R M.2101 [7], the parameters of the 5G base station in 24.25–27.5 GHz are shown in Table 1.

Table 1. Parameters of 5G base station

Types of parameter                          | Urban          | Suburb         | Countryside
Base station spacing                        | 0.5 km         | 1 km           | 2 km
Downtilt                                    | 10°            | 6°             | 3°
Antenna pattern                             | ITU-R M.2101   | ITU-R M.2101   | ITU-R M.2101
Antenna polarization                        | Linear ±45°    | Linear ±45°    | Linear ±45°
Antenna array configuration (row × column)  | 8 × 8          | 8 × 8          | 8 × 8
Ohmic loss (dB)                             | 3              | 3              | 3
Feeder loss (dB)                            | 3              | 3              | 3
Conducted power per antenna element         | 10 dBm/200 MHz | 10 dBm/200 MHz | 10 dBm/200 MHz
Element gain (dBi)                          | 5              | 5              | 5
Front-to-back ratio (dB)                    | 30             | 30             | 30
Sidelobe level (dB)                         | 30             | 30             | 30
Element spacing                             | 0.5λ           | 0.5λ           | 0.5λ
3 dB bandwidth of single element            | 65°            | 65°            | 65°

3.1.2 Parameters of Earth Station
This paper refers to the liaison letter sent by ITU-R WP7B to TG5/1. The receiver characteristics of the Earth exploration satellite system are shown in Table 2.

Table 2. Parameters of earth station

Types of parameter                    | Parameters
Frequency                             | 24.25–27.5 GHz
Antenna gain                          | 59.8 dBi
Noise temperature of receiver system  | 433 K
Threshold of protection               | −120 dBm/MHz
Antenna pattern                       | ITU-R S.580-6



The earth station antenna pattern in this paper uses the receive antenna pattern in Recommendation ITU-R S.580-6 [8]. The specific model is given by formula (5):

$$G_R(\varepsilon) = \begin{cases} G_{\max} & \varepsilon \le \varepsilon_{\min} \\ 29 - 25\log_{10}(\varepsilon) & \varepsilon_{\min} \le \varepsilon \le 20^\circ \\ -3.5 & 20^\circ \le \varepsilon \le 26.3^\circ \\ 32 - 25\log_{10}(\varepsilon) & 26.3^\circ \le \varepsilon \le 48^\circ \\ -10 & 48^\circ \le \varepsilon \le 180^\circ \end{cases} \quad (5)$$

In the above model, $G_R$ is the receiving antenna gain of the earth station and $G_{\max}$ is its maximum gain. $\varepsilon$ is the off-axis angle of the earth station antenna. The size of $\varepsilon_{\min}$ is determined by the diameter $D$ (metres) of the antenna and the wavelength $\lambda$ (metres), as given by formula (6):

$$\varepsilon_{\min} = \begin{cases} \max\{1^\circ,\ 100\lambda/D\} & D/\lambda \ge 50 \\ \max\{2^\circ,\ 114(D/\lambda)^{-1.09}\} & D/\lambda < 50 \end{cases} \quad (6)$$
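The following sketch evaluates the piecewise pattern of formulas (5) and (6); the diameter-to-wavelength ratio used in the example call is an assumption, not a value taken from the paper:

```python
import numpy as np

def earth_station_gain_dbi(eps_deg, g_max=59.8, d_over_lambda=100.0):
    # Eq. (6): minimum off-axis angle of the main-beam region
    if d_over_lambda >= 50:
        eps_min = max(1.0, 100.0 / d_over_lambda)
    else:
        eps_min = max(2.0, 114.0 * d_over_lambda ** (-1.09))
    # Eq. (5): piecewise receive gain versus the off-axis angle (degrees)
    e = abs(eps_deg)
    if e <= eps_min:
        return g_max
    if e <= 20.0:
        return 29.0 - 25.0 * np.log10(e)
    if e <= 26.3:
        return -3.5
    if e <= 48.0:
        return 32.0 - 25.0 * np.log10(e)
    return -10.0

for off_axis in (0.5, 10.0, 30.0, 60.0):
    print(off_axis, earth_station_gain_dbi(off_axis))
```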

3.2 Simulation

In this paper, the simulation experiment is based on the Matlab 2014b platform. The beamforming antenna pattern of the 5G base station, which is based on massive MIMO, follows Recommendation ITU-R M.2101 [9, 10]. The antenna pattern of the 5G base station is shown in Fig. 2.
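As a rough cross-check of the peak value in Fig. 2, the sketch below evaluates the composite gain of a uniform 8 × 8 array with conjugate-phase steering; this is a simplified uniform-array model, not the full ITU-R M.2101 element-pattern formulation, and the sign convention is chosen so that the main beam points at the downtilt angle:

```python
import numpy as np

def composite_gain_dbi(theta_deg, n_v=8, n_h=8, d=0.5, downtilt_deg=10.0,
                       element_gain_dbi=5.0):
    # Elevation cut of an n_v x n_h uniform array, element spacing d (wavelengths)
    theta = np.radians(np.atleast_1d(theta_deg))
    tilt = np.radians(downtilt_deg)
    n = np.arange(n_v)
    # per-element weights steer the vertical column towards the tilt angle
    w = np.exp(-1j * 2 * np.pi * n * d * np.sin(tilt)) / np.sqrt(n_v * n_h)
    v = np.exp(1j * 2 * np.pi * np.outer(np.sin(theta), n * d))
    af = n_h * np.abs(v @ w)     # identical columns add coherently in this cut
    return element_gain_dbi + 20.0 * np.log10(np.maximum(af, 1e-12))

# peak is about 5 + 10*log10(64) ≈ 23.1 dBi, close to the 23.16 dBi quoted in the text
print(composite_gain_dbi(10.0))
```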

Fig. 2. Antenna pattern of base station



In Fig. 2, the beamforming antenna gain varies with the vertical direction of the antenna. When the antenna points in the horizontal direction, the antenna gain reaches its maximum value of 23.16 dBi. However, the antenna of a 5G base station usually has a certain physical downtilt, which is used to enhance the signal strength within the coverage area and to reduce the signal interference between adjacent base stations. Therefore, the antenna gain tends to decrease compared with the maximum value. In order to explore the aggregate interference caused by 5G base stations to satellite earth stations in different areas, three areas, urban, suburb and countryside, are selected for the simulation experiments. Based on the simulation parameters of the three areas, combined with the simulated beamforming antenna gain and the antenna pattern of the earth station, the aggregate interference power in the three areas is calculated. The specific simulation results are shown in Figs. 3 and 4.

Fig. 3. CDF plot of interference from 5G systems into earth station

Figure 3 is a CDF plot of the relationship between the protection distance and the aggregate interference power in different areas. As the protection distance between the two systems changes, the 5G base stations in the urban area generate more aggregate interference power at the earth station than those in the other two areas; in the countryside the aggregate interference power is the smallest. The probabilities that the aggregate interference stays below the protection threshold in urban, suburb and countryside are 91%, 93% and 95%, respectively. Figure 4 shows the relationship between the protection distance and the aggregate interference power. We can observe that the aggregate interference power



Fig. 4. Relationship between protection distance and interference power in different areas

decreases monotonically as the protection distance increases in all three areas. Combined with the protection threshold of the earth station, the minimum protection distances that ensure the earth station is not interfered with by the base stations in urban, suburb and countryside are 9.1, 7.2 and 5.8 km, respectively. In order to explore the relationship between the aggregate interference power and the protection distance when the antenna of the base station is at different angles, this paper takes the 10° downtilt of the 5G base station in the urban area as an example and simulates the relationship for earth station antenna off-axis angles of 10°, 20° and 30°. The specific simulation results are shown in Fig. 5. When the earth station receives the same amount of interference power, the protection distance decreases monotonically with the increase of the off-axis angle. The minimum protection distances that ensure the earth station is not interfered with by the base station (which maintains a 10° downtilt) in the urban area when the satellite earth station antenna off-axis angle is 10°, 20° and 30° are 9.1, 1.6 and 1.1 km, respectively.



Fig. 5. Relationship between protection distance and interference power at different off-axis angles of earth station

4 Conclusions

Aiming at the coexistence of 5G systems and Earth exploration satellite systems in 24.25–27.5 GHz, this paper obtains the relationship between interference power and protection distance in different areas when the antenna of each system is at a fixed angle. According to the protection threshold of the earth station and the simulation results, the minimum protection distances for coexistence in urban, suburb and countryside are 9.1, 7.2 and 5.8 km. Simultaneously, this paper calculates the minimum protection distance that ensures the earth station is not interfered with by the base station in the urban area when the satellite earth station antenna is at different off-axis angles.

Acknowledgements. This paper is supported by the Natural Youth Science Foundation of China (61501326, 61401310). It is also supported by the Tianjin Research Program of Application Foundation and Advanced Technology (16JCYBJC16500).

References
1. Sexton C, Kaminski NJ, Marquez-Barja JM et al (2017) 5G: adaptable networks enabled by versatile radio access technologies. IEEE Commun Surv Tutorials 19(2):688–720
2. Agiwal M, Roy A, Saxena N (2016) Next generation 5G wireless networks: a comprehensive survey. IEEE Commun Surv Tutorials 18(3):1617–1655



3. Zhu Yutao, Zhu Ying (2016) Progress and prospect of WRC-15 conference. Telecommun Netw Technol 261:1–4
4. Zhao Mingfeng (2013) Analysis of LTE propagation model. Telecommun Sci 9:117–121
5. Li X (2018) Research on the interference of IMT-2020 system to other systems in the frequency band 3–6 GHz. Beijing University of Posts and Telecommunications, Beijing
6. ITU-R (2017) Characteristics of terrestrial IMT systems for frequency sharing interference analyses in the frequency range between 24.25 and 86 GHz
7. ITU-R (2017) Modelling and simulation of IMT networks and systems for use in sharing and compatibility studies. In: Recommendation ITU-R M.2101-0
8. ITU-R (2004) Radiation diagrams for use as design objectives for antennas of earth stations operating with geostationary satellites. In: Recommendation ITU-R S.580-6
9. Wang CX, Haider F, Gao X et al (2014) Cellular architecture and key technologies for 5G wireless communication networks. IEEE Commun Mag 52(2):122–130
10. Hassan WA, Jo HS, Tharek AR (2017) The feasibility of coexistence between 5G and existing services in the 5G candidate bands in Malaysia. IEEE Access 5:14867–14888

Sparse Planar Antenna Array Design for Directional Modulation

Bo Zhang1, Wei Liu2, Yang Li1, Xiaonan Zhao1, and Cheng Wang1

1 Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin 300387, China
[email protected], liyang [email protected], [email protected], [email protected]
2 Communications Research Group, Department of Electronic and Electrical Engineering, University of Sheffield, Sheffield S1 4ET, UK
[email protected]

Abstract. Directional modulation (DM) has been applied to sparse linear antenna arrays to increase security of signal transmission. In this work, we extend the DM design to sparse planar antenna arrays and provide the corresponding design formulations. In previous studies, group sparsity technique was used for sparse antenna array design, but no quantitative analyses were given. In this paper, both designs with and without group sparsity are provided, and the corresponding optimised antenna locations are shown explicitly. Design examples are provided to verify the effectiveness of the proposed design.

Keywords: Directional modulation · Group sparsity · Planar antenna array

1 Introduction

Directional modulation (DM) as a physical layer security technique was introduced to keep known constellation mappings in a desired direction or directions, while scrambling them for the remaining ones [1]. Both reconfigurable arrays [2] and phased antenna arrays [3–5] were employed in its design. In [6], a phased antenna array was combined with a reflecting surface to achieve positional modulation (PM), where the signal can only be received at desired locations instead of directions. Similarly, multiple phased antenna arrays [7] were also proposed for the PM design. From the algorithm’s aspect, dual beam DM [8], bit error rate (BER) DM transmitter synthesis [9], artificial-noise-aided zero-forcing synthesis approach [10], a multi-relay design [11] and a pattern synthesis approach [12, 13] were introduced to the DM design area.




Recently, the so-called artificial noise (AN) was introduced with the aim of creating a noise component which does not affect the signal received by desired receivers but distorts the signal intercepted by eavesdroppers. Two methods were proposed for the AN design: the orthogonal vector method [14, 15], where the added AN vector is orthogonal to the steering vector of the desired direction, and the AN projection matrix method [16, 17], where, by designing an artificial noise projection matrix, the AN vector is projected into the null space of the derivative of the desired direction. However, to the best of our knowledge, almost all of the existing studies are focused on one-dimensional DM. In this work, we extend the DM design to a two-dimensional (2-D) planar antenna array, and to further reduce the number of antennas, a sparse planar antenna array design method is proposed with optimised antenna locations. In previous studies, group sparsity was used for sparse antenna array design, but no quantitative analyses were given. In this paper, both designs with and without group sparsity are provided, and the corresponding optimised antenna locations are shown explicitly. The remaining part of this paper is structured as follows. A review of planar antenna array based beamforming is given in Sect. 2. The proposed sparse planar antenna array design for DM is presented in Sect. 3. In Sect. 4, design examples are provided, with conclusions drawn in Sect. 5.

2 Review of Planar Antenna Array Based Beamforming

A narrowband planar antenna array for transmit beamforming is shown in Fig. 1, consisting of N omni-directional antennas with spacing d_{x,n} along the x-axis and K omni-directional antennas with spacing d_{y,k} along the y-axis, where d_{x,n} and d_{y,k} (n = 0, …, N − 1 and k = 0, …, K − 1) represent the spacing from the first antenna to its subsequent antennas, respectively. The elevation angle θ ∈ [0°, 180°], and the azimuth angle φ ∈ [0°, 180°] ∪ [0°, −180°]. For the antenna at the n-th position of the x-axis and the k-th position of the y-axis, the corresponding weight coefficient is represented by w_{n,k}. Here we gather all weight coefficients together to form a vector

$$\mathbf{w} = [w_{x_0,y_0}, w_{x_0,y_1}, \ldots, w_{x_0,y_{K-1}}, \ldots, w_{x_{N-1},y_{K-1}}]^T, \quad (1)$$

where $\{\cdot\}^T$ is the transpose operation. Then the corresponding steering vector is given by

$$\mathbf{s}(\omega,\theta,\phi) = [1,\ e^{j\omega(d_{x,0}\sin\theta\cos\phi + d_{y,0}\sin\theta\sin\phi)/c}, \ldots, e^{j\omega(d_{x,0}\sin\theta\cos\phi + d_{y,K-1}\sin\theta\sin\phi)/c}, \ldots, e^{j\omega(d_{x,N-1}\sin\theta\cos\phi + d_{y,K-1}\sin\theta\sin\phi)/c}]^T, \quad (2)$$

where c is the speed of propagation. The beam response of the array can be written as

$$p(\omega,\theta,\phi) = \mathbf{w}^H \mathbf{s}(\omega,\theta,\phi), \quad (3)$$

where $\{\cdot\}^H$ represents the Hermitian transpose.
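The steering vector and beam response above can be evaluated directly; the sketch below assumes the antenna offsets are given in metres and that the entries are stacked in the same (x_0, y_0), (x_0, y_1), …, (x_{N−1}, y_{K−1}) order as in (1):

```python
import numpy as np

def steering_vector(omega, theta, phi, dx, dy, c=3e8):
    # Eq. (2): entry for antenna (n, k) is exp(j*omega*(dx[n]*tx + dy[k]*ty)/c);
    # with dx[0] = dy[0] = 0 the first entry reduces to 1
    tx = np.sin(theta) * np.cos(phi)
    ty = np.sin(theta) * np.sin(phi)
    phase = (dx[:, None] * tx + dy[None, :] * ty) * omega / c
    return np.exp(1j * phase).reshape(-1)

def beam_response(w, s):
    # Eq. (3): p = w^H s
    return np.vdot(w, s)

# toy example: 3 x 3 array, half-wavelength spacing at an assumed 2.4 GHz carrier
f = 2.4e9
lam = 3e8 / f
dx = np.arange(3) * lam / 2
dy = np.arange(3) * lam / 2
s = steering_vector(2 * np.pi * f, np.radians(30), np.radians(45), dx, dy)
w = s / len(s)                      # matched weights for illustration
print(abs(beam_response(w, s)))     # equals 1 in the steered direction
```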



Fig. 1. A planar antenna array.

3 Sparse Planar Antenna Array Design for DM

The method to achieve DM is to find the corresponding weight vector of the antenna array for each symbol. Here we assume the weight vector for the m-th symbol is given by

$$\mathbf{w}_m = [w_{m,x_0,y_0}, w_{m,x_0,y_1}, \ldots, w_{m,x_0,y_{K-1}}, \ldots, w_{m,x_{N-1},y_{K-1}}]^T, \quad (4)$$

m = 0, …, M − 1. The desired response p_m(ω, θ, φ) for the m-th symbol can be categorised into two regions: the mainlobe response p_{m,ML} and the sidelobe response p_{m,SL}. Without loss of generality, we assume R elevation angles are sampled for each azimuth angle φ_v (v = 0, 1, …, V − 1), and the desired directions are θ_0, θ_1, …, θ_{r−1} with φ_0. Then, the desired beam responses and steering vectors for the mainlobe and sidelobe regions can be written as

$$\begin{aligned}
\mathbf{p}_{m,ML} &= [p_m(\omega,\theta_0,\phi_0), p_m(\omega,\theta_1,\phi_0), \ldots, p_m(\omega,\theta_{r-1},\phi_0)],\\
\mathbf{p}_{m,SL} &= [p_m(\omega,\theta_r,\phi_0), p_m(\omega,\theta_{r+1},\phi_0), \ldots, p_m(\omega,\theta_{R-1},\phi_0), p_m(\omega,\theta_0,\phi_1), \ldots, p_m(\omega,\theta_{R-1},\phi_1), \ldots, p_m(\omega,\theta_{R-1},\phi_{V-1})],\\
\mathbf{S}_{ML} &= [\mathbf{s}(\omega,\theta_0,\phi_0), \mathbf{s}(\omega,\theta_1,\phi_0), \ldots, \mathbf{s}(\omega,\theta_{r-1},\phi_0)],\\
\mathbf{S}_{SL} &= [\mathbf{s}(\omega,\theta_r,\phi_0), \mathbf{s}(\omega,\theta_{r+1},\phi_0), \ldots, \mathbf{s}(\omega,\theta_{R-1},\phi_0), \mathbf{s}(\omega,\theta_0,\phi_1), \ldots, \mathbf{s}(\omega,\theta_{R-1},\phi_1), \ldots, \mathbf{s}(\omega,\theta_{R-1},\phi_{V-1})].
\end{aligned} \quad (5)$$

The CS-based sparse antenna array design is to make the designed response close to the desired one with the minimum number of non-zero valued weight coefficients; those antennas with zero-valued coefficients can then be removed from the array, leading to a sparse solution. To achieve this goal, we employ the reweighted l1 norm minimisation method, which provides a closer approximation to the required l0 norm than the original l1 norm [18–20]. Then, the weight vector



$\mathbf{w}_m$ for the m-th symbol is given by

$$\begin{aligned}
\min_{\mathbf{w}_m^u}\ & \sum_{n=0}^{N-1}\sum_{k=0}^{K-1} \delta_{m,n,k}^u \|\mathbf{w}_{m,x_n,y_k}^u\|_2 \\
\text{subject to}\ & \|\mathbf{p}_{m,SL} - (\mathbf{w}_m^u)^H \mathbf{S}_{SL}\|_2 \le \alpha \\
& (\mathbf{w}_m^u)^H \mathbf{S}_{ML} = \mathbf{p}_{m,ML},
\end{aligned} \quad (6)$$

where $\|\cdot\|_2$ represents the $l_2$ norm, the superscript u indicates the u-th iteration, and $\delta_{m,n,k}^u$ is the reweighting term for the coefficient at the n-th location of the x-axis and the k-th location of the y-axis for the m-th symbol, given by $\delta_{m,n,k}^u = (\|\mathbf{w}_{m,x_n,y_k}^{u-1}\|_2 + \xi)^{-1}$ ($\xi > 0$ provides numerical stability to prevent $\delta_{m,n,k}$ becoming infinite). The inequality constraint keeps the difference between the desired and designed responses in the sidelobe regions under a given threshold value α, while the equality constraint sets the designed responses equal to the desired responses in the mainlobe directions. However, the corresponding optimised locations deduced from the weight vectors $\mathbf{w}_0, \ldots, \mathbf{w}_{M-1}$ in (6) may not be the same, i.e., the optimised antenna locations for one symbol may be redundant locations for other symbols. Therefore, a common set of active antenna locations for all symbols is needed, and the group sparsity technique can provide the solution [21]. Here, we introduce $\tilde{\mathbf{w}}_{x_n,y_k}$, representing the weight coefficients for all M symbols at the n-th location of the x-axis and the k-th location of the y-axis,

$$\tilde{\mathbf{w}}_{x_n,y_k} = [w_{0,x_n,y_k}, w_{1,x_n,y_k}, \ldots, w_{M-1,x_n,y_k}]. \quad (7)$$

Then the weight vectors and the corresponding optimised locations for all symbols based on group sparsity can be obtained by solving the following problem

$$\begin{aligned}
\min_{\mathbf{W}}\ & \sum_{n=0}^{N-1}\sum_{k=0}^{K-1} \delta_{n,k}^u \|\tilde{\mathbf{w}}_{x_n,y_k}^u\|_2 \\
\text{subject to}\ & \|\mathbf{P}_{SL} - (\mathbf{W}^u)^H \mathbf{S}_{SL}\|_2 \le \alpha \\
& (\mathbf{W}^u)^H \mathbf{S}_{ML} = \mathbf{P}_{ML},
\end{aligned} \quad (8)$$

where $\mathbf{W}$, $\mathbf{P}_{SL}$ and $\mathbf{P}_{ML}$ are three matrices for the weight coefficients, the beam responses in the sidelobe regions and the beam responses in the mainlobe directions,

$$\mathbf{W} = [\mathbf{w}_0, \mathbf{w}_1, \ldots, \mathbf{w}_{M-1}], \quad (9)$$
$$\mathbf{P}_{SL} = [\mathbf{p}_{0,SL}, \mathbf{p}_{1,SL}, \ldots, \mathbf{p}_{M-1,SL}]^T, \quad (10)$$
$$\mathbf{P}_{ML} = [\mathbf{p}_{0,ML}, \mathbf{p}_{1,ML}, \ldots, \mathbf{p}_{M-1,ML}]^T. \quad (11)$$

The above problem can be solved by the CVX toolbox in MATLAB, a package for specifying and solving convex problems [22, 23].
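For illustration only, a toy analogue of one reweighted iteration of problem (8) can be written with the cvxpy package in Python (the paper itself uses CVX in MATLAB); the steering matrices, desired responses and problem sizes below are random placeholders, not the design values of Sect. 4:

```python
import cvxpy as cp
import numpy as np

NK, M, D = 36, 4, 64     # hypothetical numbers of antennas, symbols, sidelobe samples
rng = np.random.default_rng(0)
S_ML = rng.standard_normal((NK, 1)) + 1j * rng.standard_normal((NK, 1))
S_SL = rng.standard_normal((NK, D)) + 1j * rng.standard_normal((NK, D))
P_ML = np.ones((M, 1), dtype=complex)                  # desired mainlobe responses
P_SL = 0.1 * np.exp(1j * 2 * np.pi * rng.random((M, D)))
alpha, delta = 4.0, np.ones(NK)                        # reweights: all ones at iteration 0

W = cp.Variable((NK, M), complex=True)
WH = cp.conj(W).T                                      # (W)^H
# group sparsity: weighted sum of l2 norms taken across the M symbols per antenna
objective = cp.Minimize(sum(delta[i] * cp.norm(W[i, :]) for i in range(NK)))
constraints = [cp.norm(P_SL - WH @ S_SL, 'fro') <= alpha,
               WH @ S_ML == P_ML]
cp.Problem(objective, constraints).solve()
print(W.value.shape)
```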



Fig. 2. Resultant beam pattern based on the sparse planar array design in (8).

Fig. 3. Resultant phase pattern based on the sparse planar array design in (8).

4 Design Examples

In this section, we consider a 5λ × 5λ uniform planar antenna array with a 0.1λ spacing between adjacent antennas. Without loss of generality, the desired direction θ_ML = 0° with φ = 90°. The sidelobe regions are θ_SL ∈ [5°, 90°] for φ = ±90°. The desired response in the mainlobe direction is a value of one (magnitude) with 90° phase shifts (QPSK), i.e.,

$$\frac{\sqrt{2}}{2} + i\frac{\sqrt{2}}{2},\ -\frac{\sqrt{2}}{2} + i\frac{\sqrt{2}}{2},\ -\frac{\sqrt{2}}{2} - i\frac{\sqrt{2}}{2},\ \frac{\sqrt{2}}{2} - i\frac{\sqrt{2}}{2} \quad (12)$$

for symbols '00', '01', '11', '10', and a value of 0.1 (magnitude) with random phase shifts over the sidelobe regions. The given threshold α = 2 for the design without group sparsity (location optimisation for each symbol separately) in



Fig. 4. Optimised locations for the planar antenna array without group sparsity in (6), one panel per symbol ('00', '01', '11', '10').

Fig. 5. Optimised locations for the planar antenna array with group sparsity in (8), one panel per symbol ('00', '01', '11', '10').

(6), while α = 4 for the design with group sparsity (a common set of optimised locations for all symbols) in (8). The bit error rate (BER) is also calculated based on which quadrant of the IQ complex plane the received signal lies in; 10^6 randomly generated bits are transmitted, with a signal-to-noise ratio of SNR = 12 dB in the desired direction. The resultant beam and phase patterns in (8) for all symbols are shown in Figs. 2 and 3, respectively, where we can see that all main beams point exactly at the desired direction 0° with a low sidelobe level, and that only in the desired direction does the phase follow the required QPSK modulation, with random values



Fig. 6. BER based on the sparse planar array design in (8) (BER of QPSK with AWGN versus elevation angle, for a fixed φ = 90°).

in other directions. Figure 4 shows the optimised locations for the design without the group sparsity technique. It can be seen that the sets of optimised locations for the different symbols are not the same, not even for a single location, which means all of these optimised locations have to be kept (the total number of optimised locations is 47: 11 for symbol '00', 14 for symbol '01', 10 for symbol '11' and 12 for symbol '10'), while Fig. 5 shows the common set of optimised locations for the design with group sparsity (the number of optimised locations is 14). The BER over all transmission angles is shown in Fig. 6; it is down to 10^-5 in the desired direction and around 0.5 in other directions.
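The quadrant-decision BER evaluation described above can be sketched as follows for the desired direction, assuming Gray-mapped QPSK and an AWGN-only channel (the DM array itself is not modelled here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, snr_db = 10**6, 12.0

# Gray-mapped QPSK symbols on the unit circle, as in (12)
bits = rng.integers(0, 2, size=n_bits).reshape(-1, 2)
symbols = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

# AWGN at the given per-symbol SNR in the desired direction
noise_var = 10 ** (-snr_db / 10)
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(len(symbols))
                                  + 1j * rng.standard_normal(len(symbols)))
rx = symbols + noise

# quadrant decision: the signs of I and Q recover the two bits
bits_hat = np.column_stack([(rx.real < 0).astype(int), (rx.imag < 0).astype(int)])
print(np.mean(bits_hat != bits))   # on the order of 1e-5, consistent with Fig. 6
```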

5 Conclusions

Directional modulation design has been applied to planar antenna arrays with optimised antenna locations for the first time. Satisfactory design results for the beam pattern, phase pattern and BER were provided, where the main beams for all symbols point at the desired direction with the given QPSK-modulation-based phase values, while in other directions the power level is low and the phase values are random. The BER pattern shows that the BER in the desired direction is the lowest, while in other directions the BER is about 0.5, indicating that it would be extremely difficult for eavesdroppers located in these regions to crack the information. Moreover, design examples with and without group sparsity were shown in comparison with each other, further demonstrating the effectiveness of the proposed formulations.



Acknowledgements. This work was supported by the Funding Program of Tianjin Higher Education Creative Team. The authors acknowledge the Natural Science Foundation of Tianjin City (18JCYBJC86000), and the Science & Technology Development Fund of Tianjin Education Commission for Higher Education (2018KJ153) for funding this work. C.W. acknowledges the Distinguished Young Talent Recruitment Program of Tianjin Normal University (011/5RL153).

References
1. Babakhani A, Rutledge DB, Hajimiri A (2009) Near-field direct antenna modulation. IEEE Microw Mag 10(1):36–46
2. Daly MP, Bernhard JT (2010) Beamsteering in pattern reconfigurable arrays using directional modulation. IEEE Trans Antennas Propag 58(7):2259–2265
3. Daly MP, Bernhard JT (2009) Directional modulation technique for phased arrays. IEEE Trans Antennas Propag 57(9):2633–2640
4. Zhang B, Liu W (2018) Multi-carrier based phased antenna array design for directional modulation. IET Microw Antennas Propag 12(5):765–772
5. Zhang B, Liu W, Li Q (2019) Multi-carrier waveform design for directional modulation under peak to average power ratio constraint. IEEE Access 7:37528–37535
6. Zhang B, Liu W (2018) Antenna array based positional modulation with a two-ray multi-path model. In: Proceedings of the sensor array and multichannel signal processing workshop 2018 (SAM 2018), Sheffield, UK, pp 203–207
7. Zhang B, Liu W (2019) Positional modulation design based on multiple phased antenna arrays. IEEE Access 7:33898–33905
8. Hong T, Song MZ, Liu Y (2011) Dual-beam directional modulation technique for physical-layer secure communication. IEEE Antennas Wirel Propag Lett 10:1417–1420
9. Ding Y, Fusco V (2013) Directional modulation transmitter synthesis using particle swarm optimization. In: 2013 Loughborough antennas and propagation conference, Loughborough, UK, pp 500–503
10. Xie T, Zhu J, Li Y (2017) Artificial-noise-aided zero-forcing synthesis approach for secure multi-beam directional modulation. IEEE Commun Lett PP(99):1–1
11. Zhu W, Shu F, Liu T, Zhou X, Hu J, Liu G, Gui L, Li J, Lu J (2017) Secure precise transmission with multi-relay-aided directional modulation. In: 2017 9th international conference on wireless communications and signal processing (WCSP), pp 1–5
12. Ding Y, Fusco V (2013) Directional modulation transmitter radiation pattern considerations. IET Microw Antennas Propag 7(15):1201–1206
13. Ding Y, Fusco V (2015) Directional modulation far-field pattern separation synthesis approach. IET Microw Antennas Propag 9(1):41–48
14. Ding Y, Fusco V (2014) A vector approach for the analysis and synthesis of directional modulation transmitters. IEEE Trans Antennas Propag 62(1):361–370
15. Ding Y, Fusco V (2014) Vector representation of directional modulation transmitters. In: The 8th European conference on antennas and propagation (EuCAP 2014), pp 367–371
16. Hu J, Shu F, Li J (2016) Robust synthesis method for secure directional modulation with imperfect direction angle. IEEE Commun Lett 20(6):1084–1087
17. Hu J, Yan S, Shu F, Wang J, Li J, Zhang Y (2017) Artificial-noise-aided secure transmission with directional modulation based on random frequency diverse arrays. IEEE Access 5:1658–1667



18. Candès EJ, Wakin MB, Boyd SP (2008) Enhancing sparsity by reweighted l1 minimization. J Fourier Anal Appl 14(5):877–905
19. Prisco G, D'Urso M (2012) Maximally sparse arrays via sequential convex optimizations. IEEE Antennas Wireless Propag Lett 11:192–195
20. Fuchs B (2012) Synthesis of sparse arrays with focused or shaped beampattern via sequential convex optimizations. IEEE Trans Antennas Propag 60(7):3499–3503
21. Shen Q, Liu W, Cui W, Wu SL, Zhang YD, Amin M (2015) Low-complexity direction-of-arrival estimation based on wideband co-prime arrays. IEEE Trans Audio Speech Lang Process 23(9):1445–1456
22. Grant M, Boyd S (2008) Graph implementations for nonsmooth convex programs. In: Blondel V, Boyd S, Kimura H (eds) Recent advances in learning and control, Lecture notes in control and information sciences. Springer, pp 95–110. http://stanford.edu/~boyd/graph_dcp.html
23. CVX Research (2012) CVX: Matlab software for disciplined convex programming, version 2.0 beta. http://cvxr.com/cvx

Research on the Linear Interpolation of Equal-Interval Fractional Delay Filter

Shen Zhao1, Yunwei Zhang2, XiWei Guo1, and Deliang Liu1

1 Shijiazhuang Campus of Army Engineering University, Shijiazhuang 050003, China
[email protected]
2 School of Mathematics and Statistics, University of Sydney, Sydney, NSW 2006, Australia

Abstract. To meet the demand for accurate time delay (TD) of digital broadband signals, the equal-interval fractional delay (EIFD) filter and its linear interpolation method are studied. Based on the theory of multi-rate signal processing, the design method of the EIFD filter is proposed, and its principle and implementation structure are demonstrated. The EIFD falls on a number of delay grids, which approximate a general FD by the delay grids. To enhance the accuracy of the TD, multiple groups of EIFD filters are interpolated to form the final FD filter, based on the relationship between the required TD and the EIFD. A simulation test on a linear frequency modulation signal shows that the linear interpolation of the EIFD filter can provide an accurate TD for digital broadband signals.

Keywords: Fractional delay filter · Equal-interval fractional delay · Linear interpolation · Multi-rate signal processing

1 Introduction

Accurate time delay (TD) techniques for digital signals have been widely used in sequential signal processing, voice signal processing, auditory localization and communication [1]. Especially in the broadband radar and sonar areas, the phase control technique suffers from a finite-bandwidth problem when scanning the wave beam. For wideband beam scanning, beams at different frequencies inside the band point in different directions; meanwhile, the transit time across the aperture distorts the echo. The effective solution is to apply the TD method in the beamforming [2]. Traditional digital TD methods are oversampling, digital interpolation in the time domain, linear phase weighting in the frequency domain, etc., whose principle is based on TD quantization [3]. However, this brings a huge calculation burden and a low-accuracy problem. The fractional delay (FD) filter, implemented with a finite impulse response (FIR) or an infinite impulse response (IIR) [4], achieves the required TD by digital filtering. The FIR FD filter has the advantages of stability and linear phase and is widely used in applications [1]. The method proposed in this article belongs to the FIR filter class.



2 Ideal Digital Fractional Delay Filter

For an equal-interval-sampled discrete-time signal (sampling period T) which satisfies the Nyquist theorem, the ideal delay system can be depicted as in Fig. 1 [5].

Fig. 1. Principle of the ideal digital delay filter: x(n) → D/C → xc(t) → delay τ → yc(t) → C/D (period T) → y(n)

The ideal discrete-to-continuous converter (D/C) reconstructs the continuous band-limited signal xc(t) from the digital sample series x(n), and the output of the continuous signal xc(t) passing through the ideal delay system is yc(t) = xc(t − τ). The ideal continuous-to-discrete converter (C/D) samples the continuous band-limited signal yc(t) into the digital sample series y(n). The output of the ideal digital delay filter is

$$y(n) = x_c(nT - \tau) \quad (1)$$

Based on the definition of the ideal reconstruction system, the relationship between the band-limited signal xc(t) and the digital series x(n) is [5]

$$x_c(t) = \sum_{k=-\infty}^{+\infty} x(k)\,\mathrm{sinc}[(t - kT)/T] \quad (2)$$

where sinc(t) = sin(πt)/(πt). By substituting Eq. (2) into Eq. (1), it can be derived that

$$y(n) = x(n) * \mathrm{sinc}[(n - D)T] \quad (3)$$

where "*" means convolution and D = τ/T is the digital FD. Based on this equation, the unit impulse response of the ideal digital FD filter is

$$h_{id}(n) = \mathrm{sinc}[(n - D)T] \quad (4)$$

When τ is a multiple of the sampling period, i.e. D is an integer, we have h_id(n) = δ(n − D), as shown for D = 2 in Fig. 2. More generally, when τ is not a multiple of the sampling period, i.e. D is a real number that can be divided into two parts D = D_int + d, where D_int = fix(D) is the integer TD and d ∈ [0, 1) is the FD, h_id(n) is non-zero at every time index, as shown for D = 2.7 in Fig. 2.

514

S. Zhao et al.

response are Hid(ejx) = 1, arg{Hid(ejx)} = −Dx respectively. In general case, hid(n) is infinite length non-causal filter. In practice, it has to restrict a certain number length causal system h(n) to approximate hid(n). 1 D=2 D = 2.7

0.8

H id (n)

0.6

0.4

0.2

0

-0.2

-0.4 -6

-4

-2

0

2

4

6

n

Fig. 2. Ideal digital FD filter (Partial) for different delays
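The two cases shown in Fig. 2 can be reproduced with a few lines of code; this sketch normalises the sampling period to T = 1 and uses the fact that np.sinc follows the same sin(πx)/(πx) convention as the text:

```python
import numpy as np

def ideal_fd_taps(D, n):
    # truncated samples of the ideal FD impulse response h_id(n) = sinc(n - D)
    return np.sinc(n - D)

n = np.arange(-6, 7)
print(ideal_fd_taps(2.0, n))   # reduces to a unit impulse at n = 2
print(ideal_fd_taps(2.7, n))   # non-zero at every index for a fractional delay
```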

3 Design of the Equal-Interval Delay Filter

The common FD filter design methods in [6] have various advantages, but they also have a common disadvantage: for a given length and FD, the filter parameters form a fixed set, which can only be applied to fixed-TD applications. In areas such as medical diagnosis, voice signal processing and acoustic localization, variable-TD filters are commonly required to achieve the desired performance. For a varying FD d, the methods proposed in [6] cannot be applied in practice. Reference [7] proposes the Farrow-structure filter that meets the demand of flexible TD. It has M groups of filters of order N, whose parameters are polynomials of the FD, so there is no need to reload the filter parameters when the TD changes. However, the Farrow FD filter suffers from a computational complexity that prevents its application in practice [8].

3.1 Ideal Equal-Interval FD Filter

The principle of EIFD filter is the multi-rate signal processing theory, as shown in Fig. 3.

Fig. 3. Principle of multi-rate signal processing on the FD filter (x(n) → ↑L → LPF1 → x1(n) → TD mT1 → x2(n) → LPF2 → ↓L → xd(n))



The original signal x(n) passes through the L-times up-sampling system and the reconstruction filter LPF1, giving the reconstructed signal x1(n) with sample period T1 = T/L; then, after m sample points, i.e. a TD of mT1, we have x2(n) = x1(n − m). x2(n) passes through the anti-aliasing filter LPF2 and the L-times down-sampling system, giving the output delayed signal

$$x_d(n) = \sum_{k=-\infty}^{\infty} x(k)\,\frac{\sin[\pi(n - k - m/L)T]}{\pi(n - k - m/L)T} \triangleq x(n) * h_{d,m}(n) \quad (5)$$

where h_{d,m}(n) is defined as the ideal EIFD filter with TD d = mT/L. Based on the multi-rate signal processing procedure above, the accuracy of the TD is 1/L of the original sampling period.

3.2 Equal-Interval FD Filter

According to the associative law of linear time-invariant systems, for a given TD d = mT/L, the up-sampling system, reconstruction filter, ideal delay filter, anti-aliasing filter and down-sampling system can be cascaded into a single FD filter, whose unit impulse response is

$$h_m(n) = \mathrm{sinc}[(n - m/L)T], \quad 0 \le m \le L - 1 \quad (6)$$

By comparison, it can be seen that h_m(n) is the special form of the ideal FD filter for d = mT/L. In this equation, h_m(n) is an infinite non-causal filter, so the LS method and the windowing principle are needed to derive a finite-length causal system. Without loss of generality, the LS method with a constrained bandwidth is adopted in this study. It can be obtained that [6]

$$h_{a,m}(n) = a\,\mathrm{sinc}[a(n - D_{int} - m/L)T] \quad (7)$$

where n ∈ [−M, N + M], m ∈ [0, L − 1], and M is the sampling delay. One can take M = 0, i.e. n ∈ [0, N], without affecting the FD filter; then the group delay is D_int = N/2. The EIFD filter h_{d,m}(n) is the discrete sample of the sinc(n − τ) function. Considering the realization of the delay filtering, we need to pre-store L groups of EIFD filter parameters. According to the desired TD, we choose the closest group of filters to apply the FD filtering to the signal.

3.3 Interpolation of the Equal-Interval Delay Filter

Through the up-sampling and down-sampling process, the EIFD filter divides the delay period [0, T] into L equally spaced lattices, which improves the accuracy to ±d_m/2. However, more precise TD control is needed in applications such as high-resolution beamforming and precise TD estimation. In order to improve the accuracy of the TD, a linear interpolation method for the coefficients of the EIFD filters is proposed.



Let the TD τ be decomposed into the integer delay D_int·T and the FD d. Since the integer delay doesn't affect the delay accuracy, we concentrate on the FD output here. Let the integer

$$l = \mathrm{fix}(d) \in \{0, 1, 2, \ldots, L - 1\} \quad (8)$$

The outputs of the signal x(n) passed through the l-th and (l + 1)-th EIFD filters are

$$x_l(n) = x(n) * h_l(n), \qquad x_{l+1}(n) = x(n) * h_{l+1}(n) \quad (9)$$

According to the physical meaning, x_d(n) must fall between x_l(n) and x_{l+1}(n), which can be approximated by linear interpolation:

$$x_d(n) \approx x_l(n) + [x_{l+1}(n) - x_l(n)]\Delta_l \quad (10)$$

Two sets of convolution operations are needed to calculate the FD output by direct application of Eq. (10). In order to reduce the computational complexity, formula (9) is substituted into formula (10), and the result is

$$x_d(n) = x(n) * [h_l(n)\Delta_{l+1} + h_{l+1}(n)\Delta_l].$$

The last term of the convolution is defined as the interpolated EIFD filter

$$h_d(n) = h_l(n)\Delta_{l+1} + h_{l+1}(n)\Delta_l \quad (11)$$

and then

$$x_d(n) = x(n) * h_d(n) \quad (12)$$

To sum up the above process, the linear combination of the l-th and (l + 1)-th EIFD filters is calculated to obtain the coefficients of the delay filter h_d(n), and the convolution of x(n) with h_d(n) gives the output signal x_d(n).
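A minimal sketch of this procedure is given below, with the sampling period normalised to T = 1; the pre-stored filters are plain truncated-sinc stand-ins for the LS-designed filters of Eq. (7) (bandwidth coefficient a = 1), and the weight on h_l is taken as Δ_{l+1} = 1 − Δ_l, which is one consistent reading of Eq. (11):

```python
import numpy as np

def eifd_bank(L=10, N=20):
    # L pre-stored EIFD filters: truncated sinc(n - N/2 - m/L), n = 0..N
    n = np.arange(N + 1)
    return np.array([np.sinc(n - N / 2.0 - m / L) for m in range(L)])

def interpolated_fd_filter(bank, l, delta_l):
    # Eq. (11) with the weight on h_l assumed to be 1 - delta_l (requires l < L - 1)
    return bank[l] * (1.0 - delta_l) + bank[l + 1] * delta_l

bank = eifd_bank()
h_d = interpolated_fd_filter(bank, 3, 0.4)       # delay of (3 + 0.4)/10 sample beyond N/2
x = np.cos(2 * np.pi * 0.05 * np.arange(200))    # any test signal
x_d = np.convolve(x, h_d, mode='same')           # Eq. (12)
print(h_d.shape, x_d.shape)
```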

4 Simulation and Verification

4.1 Simulation of Equal-Interval Delay Filter

Let the sampling frequency of the system be 100 kHz, the duration of the LFM signal be 5 ms, the center frequency be 11 kHz, and the bandwidth be 8 kHz. The interpolation coefficients are used to construct 10 EIFD filters. Taking L = 10 and the filter order N = 20, 10 groups of filters with a delay interval of 1 μs are obtained. The signal x(n) passes through the 10 equal-interval FD filters to give 10 output signals x(n − D_int − d), as shown in Fig. 4, where d = {0.0, 0.1, 0.2, …, 0.9}. The results of the D_int·T



and (Dint + 1) T delays implemented by direct delayed sampling are also plotted, which are expressed as x[n – Dint] and x[n – Dint – 1], respectively.

Fig. 4. The output of the direct sampling delay and the equal-interval fractional delay filters: (a) overall view; (b) partial view

Analysis of Fig. 4a shows that the waveforms of the 10 output signals are basically the same as that of the directly delayed signal x[n − D_int], which shows that the amplitude-frequency responses of the filters are consistent and have a linear phase in the effective frequency band. Figure 4b shows that the output signals are basically uniformly distributed between x[n − D_int] and x[n − D_int − 1], which satisfies the characteristic of the EIFD. At the beginning and end of the effective signal there is an obvious Gibbs phenomenon, which is caused by the truncation under the LS criterion. The time delay of each of the 10 output signals, i.e. the peak time of the cross-correlation function of the input and output signals, is determined by the parabolic fitting method in [9]. The results are shown in Fig. 5. The analysis shows that the time delays of the 10 output signals basically satisfy the equal-interval relationship.

Fig. 5. 10 output signal delay calculated based on cross-correlation



Further, the error between the actual and theoretical time delays of the output signals is calculated, as shown in Table 1. The error of the output signal varies with the time delay, and the maximum error is 0.18 μs (less than 2% of the sampling period), indicating that the EIFD filtering has high precision.

Table 1. Error between actual delay and theoretical delay (L = 10)

m      | 0    | 1     | 2     | 3     | 4     | 5     | 6     | 7    | 8    | 9
E (μs) | 0.00 | −0.08 | −0.14 | −0.18 | −0.18 | −0.11 | −0.02 | 0.02 | 0.03 | 0.02

4.2 Simulation of the EIFD Filter Interpolation Algorithm

For any variable FD d, the group number of the filter can be determined according to Eq. (8) and interpolated according to Eq. (11) to calculate the FD filter coefficients h_d(n); the FD can then be realised by Eq. (12). According to the simulation settings, the delay accuracy of the EIFD filter is 1 μs. The FD d is taken in the [0, T] interval with a step size of 0.1 μs for the simulation. The error between the output signal delay and the theoretical delay is evaluated using a method similar to that in Sect. 4.1. The results are shown in Fig. 6. The simulation result shows that the maximum delay error of 0.187 μs (less than 2% of the sampling period) occurs between 3 and 4 μs, coinciding with the results in Table 1. The validity of constructing the filter coefficients by interpolating the EIFD filters is verified, and the delay error is closely related to the constructed EIFD filters. The analysis shows that increasing the interpolation/decimation factor L and the filter order N can reduce the delay error, but this also increases the hardware storage and computation overhead. In practical engineering, the allowable delay error and the system resources need to be considered comprehensively.

Fig. 6. Error between actual delay and theoretical delay



5 Conclusion

This paper studies the principle and design method of the digital FD filter. To meet the requirement of variable delay, a design method for the EIFD filter is proposed, and its principle and implementation structure are demonstrated. In order to improve the accuracy of the time delay, a linear interpolation method of the EIFD filter is proposed to construct the desired FD filter. The simulation results on an LFM signal show that a delay accuracy of less than 2% of the sampling period can be achieved when the interpolation coefficient is 10 and the order of the filter is 20, which can meet the engineering requirement of high-precision variable FD filtering of broadband signals.

References
1. Tahar B, Abdelfatah C, Imen A (2018) A new approach for the design of fractional delay by an FIR filter. ISA Trans 82:73–78
2. Shen Z, Yi Y, Xiwei G et al (2015) Generating and time-delaying of narrow-band noise in array signal source. Comput Measur Control 23:2163–2166
3. Yongjun H, Wenjun C (2010) The implementation of time delay based on fractional delay filters for wideband signals. RADAR & ECM 30(2):37–40
4. Jong-Jy S, Soo-Chang P, Cheng-Han C (2009) Mini-max phase error design of allpass variable fractional-delay digital filters by iterative weighted least-squares method. Signal Process: 1774–1781
5. Oppenheim AV, Schafer RW (2009) Discrete-time signal processing, 3rd edn. Pearson
6. Välimäki V (1995) Discrete-time modeling of acoustic tubes using fractional delay filters. Helsinki University of Technology
7. Farrow CW (1988) A continuously variable digital delay element. In: IEEE international symposium on circuits and systems, Espoo, pp 2641–2645
8. Zhanchun F (2008) Research on digital delay and phase shifting technology of broadband digital array. University of Electronic Science and Technology of China
9. Fangyuan Q, Xiaoyan Y, Jianing Y (2015) Fractional delay estimation algorithm based on parabolic interpolation method. Inf Commun 149:285–287

Single-Channel Grayscale Processing Algorithm for Transmission Tissue Images Based on Heterogeneity Detection

Baoju Zhang1,2, Chengcheng Zhang1,2, Gang Li3,4, Ling Lin3,4, Cuiping Zhang1,2, and Fengjuan Wang1,2

1 Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin 300387, China
[email protected]
2 School of Electronics and Communication Engineering, Tianjin Normal University, Tianjin 300387, China
3 State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin University, Tianjin 300072, China
4 Tianjin Key Laboratory of Biomedical Detecting Techniques & Instruments, Tianjin University, Tianjin 300072, China

Abstract. Aiming at the problems of low contrast and unclear edges of the gray images in hyperspectral transmission imaging, a single-channel grayscale processing algorithm is applied to simulated images based on a simulation experiment. The experiment shows that this algorithm improves the contrast to a certain extent and enhances the image edges and grayscale image quality. While improving the quality of the grayscale images, this algorithm also triples the number of original images, providing a data enhancement method for heterogeneity detection using deep learning. Therefore, this experiment verifies the feasibility of the single-channel processing algorithm; it may provide a method for multispectral transmission biological tissue images, and it may serve as a data enhancement method applicable to deep learning for tissue image detection.

Keywords: Multispectral tissue image processing · Image graying · Grayscale image quality · Target detection

1 Introduction

In new medical applications, hyperspectral transmission breast imaging [1] may be used for the early self-examination of breast tumors. The self-examination of the human body is susceptible to the environment and to individual operation, which makes the light intensity weak; in addition, biological tissue has strong scattering characteristics, which may cause problems such as a weak image signal, low contrast and blurred edges. In response to these problems, denoising methods based on traditional filtering have mainly been proposed. However, the edge-smoothing characteristic of filtering is not conducive to edge positioning in tissue heterogeneity detection [2] (lesion tissue in the breast that is distinct from normal breast tissue). There are also some methods of



combining hardware and software: for example, adaptive multispectral imaging or acquisition systems are designed to achieve higher-quality multispectral images through the source characteristics, single-pixel imaging, drive signals, etc. [3–5]. In addition, the paper [6] found that the multi-wavelength "synergy effect" obtained by frequency-division modulation can be used to improve the image quality of each band in a light-emitting diode (LED) multispectral image acquisition system. Li [7] used the local Laplacian pyramid transform (LLP) and an adaptive cloud model (ACM) to significantly improve the details of high-contrast images. Li et al. [8] proposed a model-based pan-sharpening method, which uses a degradation model relating the low-resolution multispectral image to the unknown high-resolution multispectral image as a data-fitting term to maintain the spectrum, delivering better performance. Likewise, in our previous work, we used frame accumulation combined with image enhancement algorithms to greatly improve the grayscale resolution of transmission tissue grayscale images and the image quality of color images. Current medical images are mainly grayscale images; the amount of information provided by color is small (almost none) in most medical images, and the gradient is the key factor for identifying objects. After a color image is converted to grayscale, the matrix dimension is reduced, the operation speed is greatly improved, and the gradient information is retained. This is essential for medical image target detection under big data technology. So, on the basis of our previous work, we attempt in this paper a grayscale processing method that reconstructs a three-channel image from single-channel grayscale images. The experimental results show that this grayscale processing method yields higher contrast than the RGB-synthesized three-channel grayscale image in our previous work, and that the quality of the overall grayscale image is improved to some extent. At the same time, the total number of image samples is increased by a factor of three, which may provide a data enhancement method for deep learning [9] used in target detection [10].

2 Grayscale Processing and Experimental Preparation

In target detection algorithms, the most critical factor for identifying an object is the gradient, i.e. the edge, which is the most essential part; however, the calculation of the gradient requires a grayscale image. Therefore, the preprocessing of grayscale images is particularly important for the target detection of tissue images. There are many ways to perform grayscale processing: the component method, the maximum method, the average method, the weighted average method, and so on. The weighted average method converts the R, G and B components of a color image into a grayscale image according to a weighted average. In this paper, we first use the component method to obtain single-channel grayscale images, and then the grayscale image of each of the three channels is synthesized into a three-channel transmission phantom grayscale image after a series of enhancement processing steps. Moreover, blue and yellow transmission light of two wavelengths is selected in the experiment to transmit the phantom, according to the absorption spectrum of water.



3 Experiments

In this paper, based on the acquisition experiment for multispectral phantom images designed in our previous work, the single-channel graying method is tested on top of the frame-accumulation preprocessing method.

3.1 Experimental Device

Figure 1 depicts a schematic diagram of the experimental equipment. The device consists of the following components: an LED light source (0.5 W), a phantom [2], a mobile phone (model: Huawei Mate 9, frame rate: 59 fps, image resolution: 1080 × 1920), a computer for image handling, and shading cloth. The phantom is composed of a PMMA (polymethyl methacrylate) cuboid container with a transmittance of 96%, which contains a mixed solution of milk and water. We suspend pork and carrot cubes in the solution to represent heterogeneity tissues, in terms of the optical properties of different materials under different wavelengths. Blue light (460 nm) has the strongest transmission ability in water according to the absorption spectrum of water, so synthetic light of the two wavelengths 460 and 560 nm was used in the experiment to transmit the phantom.

Fig. 1. The schematic diagram of the experimental system device (LED, phantom with milk solution and heterogeneity, shading cloth, camera); the inset is a photo of an actual phantom example

3.2 Experimental Process

Based on the experimental setup that has been built, the phantom images are obtained. The specific experimental steps are as follows: (1) Adjust and fix the distance between the light source and the experimental phantom and the distance between the mobile phone and the experimental phantom, and set up the blackout cloth. Turn on the light source and the phone camera to record the



video of the phantom. Recording was carried out in groups: 12 min of video for each group, with a total of k = 36 groups. (2) Extract images from each video and remove the images with obvious errors in each group, giving k = 36 kinds and a total of n = 1,500,000 frames of original transmission phantom images. Each group of images is separated into R, G and B to obtain three single-channel images, finally giving k = 36 kinds and a total of m = 3n single-channel grayscale images. (3) Frame accumulation: after cropping the images, frame accumulation over every 100 single-channel frames increases the gray levels of the single-wavelength image, increases the signal-to-noise ratio, and allows the edges to be extracted accurately. For each group of single-channel images x^q_{i,j}, every N frames are accumulated and averaged; that is, for each group of images q = 1, 2, …, k, we have

$$\begin{cases}
x^q_{i,1} = x^q_{i,1} + x^q_{i+1,1} + x^q_{i+2,1} + \cdots + x^q_{i+N-1,1} \\
x^q_{i,2} = x^q_{i,2} + x^q_{i+1,2} + x^q_{i+2,2} + \cdots + x^q_{i+N-1,2} \\
x^q_{i,3} = x^q_{i,3} + x^q_{i+1,3} + x^q_{i+2,3} + \cdots + x^q_{i+N-1,3}
\end{cases}, \quad 0 \le i < n - 1,\ N > 1 \quad (1)$$

After the frame accumulation, 15,000 frames of R, G and B single-channel grayscale images are obtained. (4) The R, G and B single-channel images are further subjected to Gaussian filtering for denoising after the above processing, and then each is combined into a three-channel grayscale image. Let C(·) denote the synthesis function; then

$$C\big(x^q_{(i,j)}\big)\ (j = 1, 2, 3) = M(i, j), \quad M = R, G, B \quad (2)$$

where M(i, j) is composed of three R (or G or B) channels, i.e. of three single-channel matrices with a depth of 8 bits, so its depth is 24 bits. The composite grayscale images of R, G and B then comprise 15,000 frames each, that is, a total of 45,000 frames of three-channel grayscale images is obtained, which is an increase by a factor of three compared with the 15,000-frame color image set obtained in our previous work [2] (the number of corresponding grayscale images is also 15,000 frames). A sketch of the frame-accumulation step is given after these steps.
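The frame-accumulation step (1) can be sketched as follows; this is a minimal illustration with random frames standing in for one channel of one group, and the result is divided by N since the text describes an averaged accumulation (Eq. (1) itself writes the plain sum):

```python
import numpy as np

def frame_accumulate(frames, N=100):
    # average-accumulate every N consecutive single-channel frames, as in (1)
    usable = (len(frames) // N) * N
    grouped = frames[:usable].reshape(-1, N, *frames.shape[1:]).astype(np.float64)
    return grouped.sum(axis=1) / N     # higher effective gray resolution and SNR

# toy usage: 300 random 64 x 64 frames standing in for one channel of one group
frames = np.random.randint(0, 256, size=(300, 64, 64), dtype=np.uint8)
acc = frame_accumulate(frames)
print(acc.shape)                       # (3, 64, 64): three accumulated frames
```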

4 Analysis of Experimental Results

After the above experiments, we obtained the following results and analyzed them. Figure 2 compares the grayscale image of the three-channel color image from our previous work with the grayscale image produced by the single-channel grayscale processing in this experiment.



Fig. 2. Grayscale image of the three-channel color image and grayscale image of the single-channel grayscale processing in this experiment, and their histograms. a is the grayscale image of the three-channel color image. b is the grayscale image of the single-channel grayscale processing in this experiment. c, d are the histograms of (a), (b) respectively

As can be seen from Fig. 2, the number of gray levels between 150 and 256 in the histogram is increased and the histogram is more uniform, that is, the gray resolution is improved. The comparison of (a) and (c) also clearly reflects the change in contrast. The single-channel grayscale images of the original image are compared with the grayscale image after the grayscale processing of this experiment in Figs. 3 and 4. It can be found from the comparison between Figs. 3 and 4 that the grayscale image obtained after single-channel image synthesis clearly has higher contrast than a simple single-channel image and the original three-channel image from the perspective of visual effect.


Fig. 3. Each single-channel grayscale image of the original image and the grayscale image after the grayscale processing of the experiment. a is the B-channel image. b is the G-channel image. c is the R-channel image. d is the grayscale image after the grayscale processing of the experiment

Fig. 4. Grayscale images synthesized by BGR channels respectively. a is the grayscale images synthesized by B-channel. b is the grayscale images synthesized by G-channel. c is the grayscale images synthesized by R-channel

In this paper, we use a gradient threshold to extract the edges of the image and compare them with the edges of the gray image from the previous experiment: non-maximum suppression is applied with the same gradient threshold to the images before and after edge enhancement, the edges are extracted to obtain the set of boundary points, and the edge-detection image is then produced.


The gradient threshold algorithm is as follows:

$$m(i,j) = \begin{cases} 255, & g(i,j) > p \\ 0, & g(i,j) \le p \end{cases}, \qquad i = 0, 1, 2, \ldots, 502;\; j = 0, 1, 2, \ldots, 362;\; p > 0 \qquad (3)$$

$m(i,j)$ is the gray value at any point of the image, $g(i,j)$ represents the gradient value at that point, and $p$ is the threshold. In this paper, $p = 4$ is set to obtain the two edge-detection images in Fig. 5. The RGB single-channel grayscale images after N-frame accumulation and averaging are first synthesized into a color image, from which a grayscale image is obtained by the standard MATLAB grayscale conversion; this image and the grayscale image processed by the single-channel grayscale processing method of this experiment are shown in Fig. 5a, c, respectively.
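A minimal sketch of the gradient-threshold edge extraction in Eq. (3) is shown below, assuming the gradient magnitude is computed with simple finite differences; the non-maximum suppression step mentioned above is omitted, and p = 4 follows the value used in the paper.

```python
import numpy as np

def gradient_threshold_edges(gray, p=4):
    """Binary edge map per Eq. (3): 255 where the gradient magnitude exceeds p."""
    gray = gray.astype(np.float32)
    gy, gx = np.gradient(gray)              # finite-difference gradients
    g = np.hypot(gx, gy)                    # gradient magnitude g(i, j)
    return np.where(g > p, 255, 0).astype(np.uint8)
```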

Fig. 5. The grayscale image in this experiment and the grayscale image in our previous work, and their edge-detection images. a is the grayscale image in our previous work. c is the grayscale image processed by the single-channel grayscale processing method in this experiment. b, d are the edge-detection images of (a), (c) respectively

It can be seen from Fig. 5 that the set of boundary points of (d) is larger than that of (b) under the same gradient threshold, and (d) shows more details. That is to say, the grayscale image processed by the grayscale processing method of this experiment has clearer and richer edge details. Usually, the standard deviation of an image is an important indicator of the effect of image enhancement. The larger the standard deviation of the image pixel matrix, the higher the contrast of the image. Conversely, the lower the


contrast of the image is. It can be found from calculation that the average standard deviation of the RGB single-channel synthetic grayscale images in this experiment is 73.8714, which is about 0.3 higher than that of the images in our previous work [2].

5 Conclusions

This paper investigates the effects of different grayscale processing methods on tissue images and verifies the feasibility of the single-channel grayscale processing method proposed in this experiment based on the characteristics of multispectral tissue images. The experimental results show that the grayscale image processed by the single-channel grayscale processing method has clearer and richer edge details, obviously enhanced contrast, and improved grayscale resolution. Compared with the three-channel grayscale image, direct use of the single-channel grayscale eliminates the process of reconstructing the color map and then converting it to grayscale. The number of image samples is also tripled while improving the quality of the grayscale images, which better meets the large-data-set requirements of target detection based on deep learning. Therefore, the grayscale processing method in this paper can improve the quality of transmission phantom grayscale images to some extent, may be more suitable for the detection of heterogeneity images, and may also provide a grayscale preprocessing method for target detection in multispectral transmission tissue images.

Acknowledgements. The authors would like to acknowledge funding from NYSFC and TSF. This work was partially supported by the Natural Youth Science Foundation of China (NYSFC-61401310) and the Tianjin Science Foundation (TSF-18JCYBJC86400).

References
1. Yang X, Li G, Lin L (2016) Assessment of spatial information for hyperspectral imaging of lesion. In: SPIE, vol 10024, p 100242O
2. Zhang BJ, Zhang CC, Li G, Lin L et al (2019) Multispectral heterogeneity detection based on frame accumulation and deep learning. IEEE Access. https://doi.org/10.1109/ACCESS.2019.2897737
3. Rousset F, Ducros N et al (2018) Time-resolved multispectral imaging based on an adaptive single-pixel camera. Opt Express 26(8):10550–10558
4. Jian XH, Cui YY, Xiang YJ, Han ZL (2012) Adaptive optics multispectral photoacoustic imaging. Acta Phys Sin. https://doi.org/10.7498/aps.61.217801
5. Yang X, Hu Y, Li G, Lin L (2018) Optimized lighting method of applying shaped-function signal for increasing the dynamic range of LED-multispectral imaging system. Rev Sci Instrum. https://doi.org/10.1063/1.5022700
6. Li H, Li G, An WJ, He GQ, Lin L (2019) "Synergy effect" and its application in LED-multispectral imaging for improving image quality. https://doi.org/10.1016/j.optcom.2018.12.091


7. Li WS, Du J, Zhao ZM, Long JY (2019) Fusion of medical sensors using adaptive cloud model in local Laplacian pyramid domain. IEEE Trans Biomed Eng 66(4):1172–1183 8. Wang WQ, Liu H, Liang LL, Liu Q, Xie G (2019) A regularised model-based pansharpening method for remote sensing images with local dissimilarities. Int J Remote Sens 40(8):3029–3054 9. Yann LC, Bengio Y, Hinton G (2015) Deep learning. Nature 521:436–444 10. Girshick R, Donahue J, Darrell T, Malik J (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In: IEEE conference on computer vision and pattern recognition, pp 580–587

Handwriting Numerals Recognition Using Convolutional Neural Network Implemented on NVIDIA’s Jetson Nano Huan Chen, Songyan Liu(&), Haining Zhang, and Wang Cheng Electronic Engineering College, Heilongjiang University, Harbin 150080, China {chenhuan_1204,zhanghaining0229}@163.com, [email protected]

Abstract. An efficient handwriting numerals recognition structure based on a Convolutional Neural Network (CNN) with the RMSProp optimizer algorithm and the Adam optimizer algorithm is presented in this paper. The experiment is implemented on NVIDIA's Jetson Nano platform, where we compare the performance of CNN models with the two different optimizer algorithms. Experimental results show that the training accuracy of the model using the Adam optimization algorithm is better than that of the model with the RMSProp optimization algorithm, with a training accuracy of 98.25%. The Adam algorithm also converges faster than the RMSProp algorithm.

Keywords: Jetson Nano · MNIST dataset · Convolutional neural network

1 Introduction

In some cases, adaptive optimization algorithms (such as Adam and RMSProp) have better optimization performance than stochastic gradient descent (SGD). This motivates a comparison of the performance of the two optimizer algorithms for convolutional neural network training and classification on the MNIST dataset. Chang verified the performance of an Adam-optimized LSTM neural network [1]. Zhang proposed that the normalized direction-preserving Adam (ND-Adam) method can control the direction and step size more accurately, so as to update the weight vector and improve the generalization performance significantly [2]. Vijaya Kumar Reddy proposed a handwritten Hindi digit recognition structure based on a convolutional neural network (CNN) and the RMSProp optimization technique [3]. Though Adam and RMSProp are both used in these works, the performance of the two was not compared on an embedded platform. This article uses NVIDIA's Jetson Nano platform to experiment with the adaptive optimization algorithms (RMSProp and Adam), which are used to build a convolutional neural network, train it on the MNIST dataset, and compare performance. This paper is organized as follows. Section 2 presents some related work, including an introduction to the MNIST dataset and a basic profile. The experimental research, which contains the relevant parameters of the CNN network model, is presented in Sect. 3. Finally, Sect. 4 shows the experimental results and analysis.



2 Related Work

TensorFlow is an open-source software library developed by Google that uses data-flow graphs for numerical computation. Originally used for machine learning and deep neural networks, the system has been applied in many other computational areas owing to its versatility, so a TensorFlow classifier was chosen to compare the impact of several different optimizer algorithms on classification. In this study, the MNIST dataset was used to measure the performance of the TensorFlow library. MNIST is the abbreviation of "Modified National Institute of Standards and Technology"; it is a large dataset composed of handwritten digits and is widely used for training image processing systems. Example images of the MNIST dataset and the selected test picture are shown in Fig. 1.

Fig. 1. Examples of the MNIST dataset and a handwritten test sample of the MNIST data set

The MNIST dataset contains 70,000 black-and-white handwritten digit images, of which 55,000 form the training set, 5,000 the validation set, and 10,000 the test set [4]. Each image has a size of 28 * 28 pixels, with a value of 0 for pure black pixels and 1 for pure white pixels. The label of each sample is a one-dimensional array of length 10; the index of each element in the array indicates the probability of occurrence of the corresponding digit. The MNIST training set and test set are shown in Table 1.

Table 1. The MNIST training set and test set
File                          Content
train-images-idx3-ubyte.gz    Training set images: 55,000 training images and 5,000 verification images
train-labels-idx1-ubyte.gz    Digital labels corresponding to the training set images
t10k-images-idx3-ubyte.gz     Test set images: 10,000 images
t10k-labels-idx1-ubyte.gz     Digital labels corresponding to the test set images
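For illustration, the MNIST files in Table 1 can be loaded through TensorFlow's built-in dataset helper rather than by parsing the idx files directly; the short sketch below, with its illustrative variable names, simply loads and normalizes the data as described above.

```python
import tensorflow as tf

# Load the MNIST images and labels (the helper downloads the dataset on first use).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Scale pixel values from [0, 255] to [0, 1] so that 0 is pure black and 1 is pure white.
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# One-hot labels of length 10, matching the label format described in the text.
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)
```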


Loss function: used to indicate the difference between the predicted value ($y$) and the known answer ($y\_$). In the process of training the neural network, the loss function is reduced by adjusting all the parameters of the neural network, so as to train a neural network model with higher accuracy. Frequently used loss functions include mean square error, custom losses, and cross entropy. Cross entropy measures the distance between two probability distributions. In a program written with the TensorFlow library, the classification performance on the dataset is measured by selecting different optimizer algorithms. The selected optimizer types and gradient update rules are shown in Table 2.

Table 2. Optimizer algorithm and gradient update rule

RMSProp:
$$E[g^2]_t = 0.9\,E[g^2]_{t-1} + 0.1\,g_t^2, \qquad \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{E[g^2]_t + \epsilon}}\, g_t$$
Hinton recommends setting the decay rate to 0.9 and the learning rate $\eta$ to 0.001.

Adam:
$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2$$
$$\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon}\, \hat{m}_t$$
Recommended values are $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$.
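The update rules in Table 2 can be written as a short NumPy sketch; the function names and the single-state-dictionary interface below are illustrative, and the default hyperparameters follow the recommendations quoted in the table.

```python
import numpy as np

def rmsprop_step(theta, grad, state, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSProp update: state holds the running average E[g^2]."""
    state["Eg2"] = decay * state.get("Eg2", 0.0) + (1.0 - decay) * grad**2
    return theta - lr * grad / np.sqrt(state["Eg2"] + eps)

def adam_step(theta, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with bias-corrected first and second moments."""
    state["t"] = state.get("t", 0) + 1
    state["m"] = b1 * state.get("m", 0.0) + (1.0 - b1) * grad
    state["v"] = b2 * state.get("v", 0.0) + (1.0 - b2) * grad**2
    m_hat = state["m"] / (1.0 - b1 ** state["t"])
    v_hat = state["v"] / (1.0 - b2 ** state["t"])
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps)
```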

RMSProp is an upgraded version of Momentum and is an adaptive learning rate method proposed by Geoff Hinton. Adam is an upgraded version of RMSProp, providing a distinct way of calculating the adaptive learning rate for each parameter [5]. In addition to storing the exponentially decaying average of past squared gradients $v_t$ like RMSProp, it also maintains the exponentially decaying average of past gradients $m_t$ like Momentum. The loss in this paper adopts cross entropy. Cross entropy represents the distance between two probability distributions: the larger the cross entropy, the farther apart and the more different the two probability distributions are; the smaller the cross entropy, the closer and more similar they are. The cross-entropy loss has the form

$$H(y', y) = -\sum y' \cdot \log y \qquad (1)$$

A Softmax classifier is used for handwritten digit classification of the dataset. Softmax is a classifier and function that is often used in deep learning applications. Its formula is shown in Eq. (2); for $n$ outputs $(y_1, y_2, \ldots, y_n)$, Softmax produces values that satisfy the following probability distribution requirements.


$$\forall x,\; P(X = x) \in [0, 1], \qquad \sum_x P(X = x) = 1$$

denoted as Softmax:

$$\mathrm{softmax}(y_i) = \frac{e^{y_i}}{\sum_{j=1}^{n} e^{y_j}} \qquad (2)$$

The ReLU activation function is shown in Eq. (3). ReLU is selected to introduce a nonlinear activation factor and improve the expressive power of the model; the reason for using ReLU instead of sigmoid and tanh as the activation function is to accelerate convergence.

$$f(x) = \max(x, 0) = \begin{cases} 0, & x \le 0 \\ x, & x > 0 \end{cases} \qquad (3)$$
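A compact NumPy rendering of Eqs. (1)–(3) is given below for reference; the function names are illustrative, and a small constant is added inside the logarithm for numerical stability, which the paper does not discuss.

```python
import numpy as np

def softmax(y):
    """Eq. (2): exponentiate and normalize the network outputs."""
    e = np.exp(y - np.max(y))      # subtract the max for numerical stability
    return e / e.sum()

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Eq. (1): H(y', y) = -sum(y' * log y)."""
    return -np.sum(y_true * np.log(y_pred + eps))

def relu(x):
    """Eq. (3): f(x) = max(x, 0)."""
    return np.maximum(x, 0)
```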

3 Handwritten Digits Recognition

The flow chart of the whole system based on NVIDIA's Jetson Nano is shown in Fig. 2: a convolutional neural network is used for feature extraction, and the network model parameters are trained through forward propagation and back propagation.


Fig. 2. System flowchart


The parameters of the TensorFlow library for the phase-2 deep learning are specified as follows. In this study, 10,000 iterations are processed. Since each image consists of 28 × 28 pixels, the network input size is 784. The number of output classes is 10, because the digits lie between 0 and 9. For the weights of the third stage, a 5 × 5 convolution layer with 32 outputs is selected, followed by a 5 × 5 convolution layer with 64 outputs. In the fully connected layer, the 512-dimensional output of the previous layer is taken as the input to the output layer, which has 10 output classes in total. A bias value is added to the output of each weight layer. In the fourth step, the performance of the classifier is measured over the selected iterations by selecting different activation functions. Figures 3 and 4 describe the convolution layers and convolution kernels; the layer sizes in Fig. 3 are: Input 28 × 28 → Conv1 32@28 × 28 → Pooling1 32@14 × 14 → Conv2 64@14 × 14 → Pooling2 64@7 × 7 → Fc1 512 → Output 10.

Fig. 3. Convolutional parametric model

Fig. 4. Convolution kernels
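As an illustration of the architecture in Fig. 3 and the optimizer comparison described above, here is a minimal tf.keras sketch; it is not the authors' code, and details such as the padding choice and the training call are assumptions.

```python
import tensorflow as tf

def build_model(optimizer="adam"):
    """CNN matching Fig. 3: two 5x5 conv/pool stages, a 512-unit dense layer, 10 outputs."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 5, padding="same", activation="relu",
                               input_shape=(28, 28, 1)),                 # 32 @ 28x28
        tf.keras.layers.MaxPooling2D(2),                                  # 32 @ 14x14
        tf.keras.layers.Conv2D(64, 5, padding="same", activation="relu"), # 64 @ 14x14
        tf.keras.layers.MaxPooling2D(2),                                  # 64 @ 7x7
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer=optimizer,                # "adam" or "rmsprop"
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example: train the same network once with each optimizer and compare the accuracy curves.
# for opt in ("rmsprop", "adam"):
#     build_model(opt).fit(x_train[..., None], y_train, epochs=5, batch_size=32)
```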

The fourth step is to measure the performance of the optimizer algorithm in the selected iteration by selecting different optimizer algorithms. The training accuracy of Adam optimizer is 98.25%, and that of RMSProp optimizer is 98.29%.


Fig. 5. Accuracy and cross-entropy during training

Fig. 6. Accuracy and cross-entropy during testing

As can be clearly seen from Fig. 5, the final training accuracies of the two algorithms are extremely close. The Adam optimization algorithm performs better than the RMSProp optimization algorithm, and the Adam algorithm converges faster. Figure 6 shows the test results. In the test process, the Adam optimization algorithm is also better than the RMSProp optimization algorithm. Both the Adam and RMSProp algorithms achieve high test accuracy, but Adam converges faster. Therefore, under the same conditions, it is recommended to select the Adam optimizer algorithm.

4 Results and Discussion

In recent years, deep learning has been widely used in research and applications at home and abroad. Deep learning provides efficient and fast solutions, especially in the field of big-data analysis. This study classifies the MNIST dataset on NVIDIA's Jetson Nano platform. Different optimization algorithms, RMSProp and Adam, are selected to test the accuracy and performance of the classification, and Softmax is used as the classifier function. The results show that high precision can be obtained by selecting the RMSProp optimization algorithm: in the


experiments with the RMSProp optimization algorithm, the classification accuracy on the test data reaches 98.29%. Increasing the number of iterations increases the accuracy, but the total classification time also increases. However, in the training and testing process, the Adam optimizer algorithm has better convergence, with an accuracy of 98.25%, so the Adam optimizer algorithm is preferable to the RMSProp optimizer algorithm. In future research, we aim to improve the accuracy and speed of training by applying different convolutional neural network structures.

References
1. Chang Z, Zhang Y, Chen W (2018) Effective Adam-optimized LSTM neural network for electricity price forecasting. In: 2018 IEEE 9th international conference on software engineering and service science (ICSESS), pp 245–248
2. Zhang Z (2018) Improved Adam optimizer for deep neural networks. In: 2018 IEEE/ACM 26th international symposium on quality of service (IWQoS), pp 1–2
3. Vijaya Kumar Reddy R, Srinivasa Rao B, Prudvi Raju K (2018) Handwritten Hindi digits recognition using convolutional neural network with RMSprop optimization. In: 2018 second international conference on intelligent computing and control systems (ICICCS), pp 45–51
4. Ertam F, Aydın G (2017) Data classification with deep learning using Tensorflow: a definition. In: 2017 international conference on computer science and engineering (UBMK), pp 755–758
5. Liu CL, Nakashima K, Sako H, Fujisawa H (2004) Handwritten digit recognition: investigation of normalization and feature extraction techniques. Pattern Recogn 37(2):265–279

Implementation of Image Recognition on Embedded Systems Haining Zhang, Songyan Liu(&), Huan Chen, and Wang Cheng Electronic Engineering College, Heilongjiang University, Harbin 150080, China {zhanghaining0229,chenhuan_1204}@163.com, [email protected], [email protected]

Abstract. Image recognition technology is becoming more and more widely used and is getting closer to people's lives. This article applies the Jetson Nano embedded system and uses the ImageNet dataset as the training set. Image recognition is implemented on the TensorFlow platform using the Softmax regression algorithm, and a CNN is added to improve the recognition accuracy.

Keywords: Image recognition · CNN · TensorFlow · Jetson Nano

1 Introduction

AI technology has developed rapidly in recent years, and image recognition, as an important branch of AI, has become more and more widely used, along with a series of classic datasets such as the MNIST dataset dedicated to handwritten digit recognition [1], the ImageNet dataset dedicated to image recognition [2], and the PTB dataset dedicated to natural language processing [3]. Because neural network training has high hardware requirements, most training is based on a PC or workstation. This experiment addresses the problem faced by people who lack such a hardware environment by training on the Jetson Nano embedded system.

2 Technical Background

Accurately identifying images is a key link in the technical fields of target tracking, robot navigation, assisted driving, and product testing. This paper uses the ImageNet dataset and a convolutional neural network to build a recognition network model on the TensorFlow platform.

2.1 ImageNet Dataset

ImageNet is a large image database based on WordNet, which addresses the problems of excessive image resolution in real life and of too many types of objects in a single image. In ImageNet, nearly 15 million images are linked to approximately 2000 WordNet noun synonym sets. Each WordNet-related synonym set represents an



entity in the real world and can be considered a category in a classification problem. In an ImageNet image, there may be multiple entities, each represented by a synonym set. In object recognition problems, the rectangle used to frame an entity is generally referred to as a bounding box. In Fig. 1 there are a total of four entities, including two cats, one duck, and one dog. In some of ImageNet's images, similar entity outlines are also labeled for more accurate image recognition [4].

Fig. 1. ImageNet sample image and the marked outline of the entity

2.2 Jetson Nano Embedded Development Board

The Jetson Nano is a small AI computer for makers, learners, and developers published by NVIDIA. It is powered by a quad-core Cortex-A57 processor, and the GPU is an NVIDIA Maxwell graphics card with 128 NVIDIA CUDA cores; the memory is 4 GB LPDDR4, the storage is 16 GB eMMC 5.1, and 4K 60 Hz video decoding is supported. The Jetson Nano provides 472 gigaflops of computing power at a low power of 5 W. It is very small, ideal for use in robots or smart speakers, can run Linux directly, and supports a large number of AI frameworks. For these reasons, the Jetson Nano embedded device is used as the hardware environment, and the TensorFlow platform is built on it for image recognition.

2.3 Convolutional Neural Networks

Since a fully connected network processing an image has too many parameters, computation is slow and overfitting easily occurs. Therefore, a more reasonable neural network structure is needed to effectively reduce the number of parameters in the neural network. Convolutional neural networks can achieve this goal.


Fig. 2. A CNN architecture diagram for image classification problems

In a convolutional neural network that processes images, the input layer is typically the pixel matrix of a picture. As shown in Fig. 2, the length and width of the leftmost three-dimensional matrix represent the size of the image, and the depth represents the number of color channels. Starting from the input layer, the convolutional neural network transforms the three-dimensional matrix of one layer into the three-dimensional matrix of the next layer through different neural network structures, until the final fully connected layer. The pooling layer does not change the depth of the three-dimensional matrix, but it can reduce the size of the matrix. The pooling operation can be thought of as converting a picture with a higher resolution into a picture with a lower resolution. As can be seen in Fig. 2, the number of nodes in the last fully connected layer can be further reduced by the pooling layers. Convolutional layers are an important part of convolutional neural networks and are generally used for more in-depth analysis to obtain more abstract features. In general, the node matrix becomes deeper after processing by a convolution layer; in Fig. 2, the depth of the node matrix increases after each convolution layer. As shown in Fig. 3, the forward propagation of a convolution filter is the process of computing the nodes of the output unit matrix on the right from the nodes of the small matrix on the left. Suppose $w^i_{x,y,z}$ denotes the weight of the filter for input node $(x, y, z)$ corresponding to the $i$-th node of the output unit matrix, and $b^i$ denotes the bias corresponding to the $i$-th output node. Then the value $g(i)$ of the $i$-th node of the output unit matrix is

$$g(i) = f\left(\sum_{x=1}^{2}\sum_{y=1}^{2}\sum_{z=1}^{3} a_{x,y,z} \cdot w^i_{x,y,z} + b^i\right) \qquad (1)$$

where $a_{x,y,z}$ is the value of node $(x, y, z)$ in the filter's input, and $f$ is the activation function.
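The computation in Eq. (1) for a single output node can be sketched in NumPy as follows; the 2 × 2 × 3 patch size matches the figure, and the ReLU activation is an assumption made for illustration.

```python
import numpy as np

def conv_node(patch, weights, bias, activation=lambda x: np.maximum(x, 0)):
    """Eq. (1): value of one output node of a convolution filter.

    patch:   input sub-matrix a[x, y, z] of shape (2, 2, 3)
    weights: filter weights w^i[x, y, z] of the same shape
    bias:    scalar bias b^i for this output node
    """
    return activation(np.sum(patch * weights) + bias)

# Example with random values, just to show the shapes involved.
rng = np.random.default_rng(0)
value = conv_node(rng.random((2, 2, 3)), rng.random((2, 2, 3)), 0.1)
```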


Fig. 3. Schematic diagram of the convolution layer filter

In TensorFlow, Softmax acts as an additional processing layer that transforms the output of the neural network into a probability distribution, so that the probability that the current sample belongs to each class can be obtained. Assuming the original neural network outputs are $y_1, y_2, \ldots, y_n$, the output after Softmax regression processing is

$$\mathrm{softmax}(y_i) = y'_i = \frac{e^{y_i}}{\sum_{j=1}^{n} e^{y_j}} \qquad (2)$$

3 Method

3.1 Convolutional Neural Network Classification Model

In traditional image recognition, it is necessary to extract the feature indicators of the image through a series of preprocessing steps for classification and output. In a convolutional neural network, there is no need to consider the design and extraction of features. It is only necessary to input the image directly into the network as data, and the result can be obtained at the output, as shown in Fig. 4.

(Fig. 4 pipeline: input → simple pretreatment → CNN → Softmax → output)

Fig. 4. Convolutional neural network classification model

3.2 Convolutional Neural Network Model Construction

The convolutional network constructed in this paper has 5 convolutional layers, 3 pooling layers, and 3 fully connected layers. The specific network structure is shown in Fig. 5.



Fig. 5. Convolutional neural network structure

The neural network constructed in this paper consists of 11 layers. The input layer requires a 227 * 227 * 3 image as input. After the convolution layers, pooling layers and fully connected layers work together, the output result is finally obtained (the layer outputs are shown in Table 1).

Table 1. Neural network output results
Type    Convolution kernel   Output feature map size   Number of output features
conv1   11 * 11              55 * 55                   96
pool2   3 * 3                27 * 27                   96
conv3   5 * 5                27 * 27                   256
pool4   3 * 3                13 * 13                   256
conv5   3 * 3                13 * 13                   256
conv6   3 * 3                13 * 13                   512
conv7   3 * 3                13 * 13                   512
pool8   3 * 3                6 * 6                     512
fc9     1 * 1                1                         4096
fc10    1 * 1                1                         4096
fc11    1 * 1                1                         1000

In the conv1 layer, 96 convolution kernels of size 11 * 11 and stride 4 are used to convolve the image, giving 96 feature maps of size 55 * 55. In the pool2 layer, with a non-all-zero-padding mode, pooling is performed with a 3 * 3 kernel and stride 2, giving 96 feature maps of size 27 * 27. In the conv3 layer, 256 convolution kernels of size 5 * 5 and stride 1 are used, giving 256 feature maps of size 27 * 27. In the pool4 layer, again with a non-all-zero-padding mode, a 3 * 3 kernel with stride 2 is used for pooling, giving 256 feature maps of size 13 * 13. In the conv5, conv6, and conv7 convolutional layers, 256, 512, and 512 convolution kernels of size 3 * 3 and stride 1, respectively, are used, and the resulting feature map size is 13 * 13 * 512; the pool8 layer then yields 512 feature maps of size 6 * 6.


In the fully connected layers fc9, fc10, and fc11, the feature-map data of the pool8 layer is vectorized and connected to the output layer, and finally a 1 × 1 × 1000 output is obtained for identification.
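The network in Table 1 can be sketched with tf.keras as follows; this is an illustrative reconstruction rather than the authors' code, and the padding choices are assumptions made so that the layer output sizes match the table.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_network():
    """11-layer CNN following Table 1 (input 227x227x3, output 1000 classes)."""
    return tf.keras.Sequential([
        layers.Conv2D(96, 11, strides=4, activation="relu",
                      input_shape=(227, 227, 3)),                       # conv1: 55x55x96
        layers.MaxPooling2D(pool_size=3, strides=2),                    # pool2: 27x27x96
        layers.Conv2D(256, 5, padding="same", activation="relu"),       # conv3: 27x27x256
        layers.MaxPooling2D(pool_size=3, strides=2),                    # pool4: 13x13x256
        layers.Conv2D(256, 3, padding="same", activation="relu"),       # conv5: 13x13x256
        layers.Conv2D(512, 3, padding="same", activation="relu"),       # conv6: 13x13x512
        layers.Conv2D(512, 3, padding="same", activation="relu"),       # conv7: 13x13x512
        layers.MaxPooling2D(pool_size=3, strides=2),                    # pool8: 6x6x512
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),                          # fc9
        layers.Dense(4096, activation="relu"),                          # fc10
        layers.Dense(1000, activation="softmax"),                       # fc11
    ])

model = build_network()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```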

4 Result

In this design, all RGB pictures keep their original aspect ratio and are cropped to a size of 227 * 227 as the input-layer data of the experiment. The experiment was completed using TensorFlow in the hardware environment of the Jetson Nano embedded development board, and the experimental dataset is the ImageNet dataset. The learning rate of the training is set to 0.001 and the batch size to 32; every 100 training steps are recorded as one iteration, and the learning rate is recorded once per iteration. A total of 50 iterations are performed to ensure a training accuracy above 98%, and the model is then saved. The first test uses an image containing only one entity; the image is input and the results are shown in Fig. 6:

(Fig. 6 top-3 probabilities: tabby cat 0.8653, tiger cat 0.6592, Egyptian cat 0.4592)

Fig. 6. Single entity image and recognition result

It can be seen that the model correctly recognizes that the entity in the picture is a cat and identifies the cat's breed. Next, an image with two entities is input into the model for identification, as shown in Fig. 7.

(Fig. 7 top-3 candidates: golden retriever, British shorthair and St Bernard, with probabilities 0.8106, 0.8032 and 0.3092)

Fig. 7. Two entity images and recognition results


It can be seen that the model recognizes the two entities as a cat and a dog, respectively, and gives the top three breed possibilities. Compared with the identification of a single entity, we find that the accuracy is slightly reduced. Next, an image containing three entities is identified, as shown in Fig. 8.

(Fig. 8 top-3 results: French bulldog 0.7565, tabby cat 0.6869, bench 0.6689)

Fig. 8. Three entity images and recognition results

It can be seen that there are more than three entities in the picture: in addition to the cat, the dog, and the bench, there are buildings, trees, and other entities. However, since the model only outputs the three entities with the highest confidence, the output contains only the cat, the dog, and the bench. In addition, the recognition accuracy is reduced compared with the single- and dual-entity images.

5 Conclusion

After testing, the model designed in this experiment achieves a good recognition rate. Supported by the ImageNet dataset, the model can perform most image classification operations. It can be seen from the experimental results that a relatively high accuracy is achieved when identifying an image with a single entity. When identifying a picture with multiple entities, although each entity in the picture can be identified fairly comprehensively, the recognition accuracy is relatively low. This is a direction for improving the model established in this experiment.

References 1. Peres AA, Vieira SM, Caldas Pinto JR (2018) Hybrid neural models for automatic handwritten digits recognition. In: 2018 international joint conference on neural networks (IJCNN), pp 1–8 2. Rahman MA, Paul SP et al (2019) Convolutional neural networks based multi-object recognition from a RGB image. In: 2019 international conference on electrical, computer and communication engineering (ECCE), pp 1–6


3. Hentschel M, Delcroix M, Ogawa A et al (2018) Factorised hidden layer based domain adaptation for recurrent neural network language models. In: 2018 Asia-Pacific signal and information processing association annual summit and conference (APSIPA ASC), pp 1940–1944
4. Deng J, Dong W, Socher R et al (2009) ImageNet: a large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition, pp 248–255

A Precise 3-D Wireless Localization Technique Using Smart Antenna Shuang Feng, Desheng Chi, Jingyu Dai, and Xiaorong Zhu(&) College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, People’s Republic of China [email protected]

Abstract. Three-dimensional (3-D) wireless localization is a significant technology for wireless networks. Two key factors in improving localization accuracy are the locations of the beacons and the precision of the measurements. Aiming at these problems, we propose a precise 3-D wireless localization technique based on angle-of-arrival (AOA) measurements obtained with smart antennas. Concretely, we use a linearization approach based on Taylor-series expansion for 3-D positioning to obtain the initial position estimates. We then propose Precision-Weighted Aggregation to define the final estimated position of the object from the mean of the initial position estimates, or the centroid of a polygon formed by those points. Simulation results show that the equilateral-triangle configuration of the selected beacons is optimal among all configurations, with the minimum positioning error, and the average positioning error is about 0.15 m.

Keywords: Smart antenna · 3D space · AOA · Indoor localization · Optimal placement of beacons

1 Introduction

In modern life, localization technology is in the ascendant and widely used in many fields, such as tourism, mining and the petrochemical industry, intelligent transportation, fugitive hunting, and so on. Especially with the arrival of 5G technology, research on real-time positioning technology becomes increasingly important. In general, there are two main types of positioning technology based on radio signals: one is the traditional geometric method [1–4], and the other is the probability method [5–9]. At present, popular geometric methods for locating mobile users include measuring the time of arrival (TOA), time difference of arrival (TDOA) and angle of arrival (AOA) of wireless signals transmitted by mobile stations (MS), which are received at a number of fixed beacons/base stations (BSs). The geometric positioning method has the advantages of high computational efficiency and real-time positioning. The positioning technology proposed in this paper is based on a probability model: the mapping function between signal and spatial position is directly established using machine learning technology. In reality, the environment is complex, and signal propagation is affected by many factors, including the path loss effect, the shadow effect and the


multipath effect, which are comprehensive non-linear and noisy effects. The model integrates the advantages of different technologies, such as fingerprint-based positioning [5], machine learning [6, 7], Bayesian rules [8, 9], and factor graphs [10, 11], to improve positioning accuracy. In practical applications, a position estimation system can capture signals and user dynamics in signal and location space by combining the geometric method and the probability method. Combining the above two methods, we propose a new three-dimensional accurate positioning method based on AOA measurements from smart antennas. We use four antenna arrays, which yield four unique combinations of three arrays each; the ambiguity in the positioning results can be removed by the fourth antenna, which generates an additional cone. Firstly, the angle information is transformed into position coordinates, and then the initial position estimation is carried out using the linearized optimization results. Standard methods use supervised learning techniques; however, for real-time or larger-scale applications, supervised learning methods such as support vector machines and neural networks prove impractical. After that, we discuss several factors affecting localization accuracy, such as the orientation of the antenna array, the influence of the distance between the point to be located and the base station, and the shape of the service region of the base station. Finally, we propose a complete localization algorithm.

2 3D Estimation of an Object's Location by AOA Measurement

In this paper we propose a 3-D wireless precise positioning technique based on smart antennas, which builds on our previous research on a precise 2-D wireless localization technique [12]; this work achieves the leap from two-dimensional space to three-dimensional space. AOA-based localization requires the use of antenna arrays at the receivers. In two-dimensional space, as shown in Fig. 1, the antenna arrays at the BSs measure the angle-of-arrival information. Node A represents the location to be measured: when it receives two signals from the nearby base stations B1 and B2, its position can be calculated. By analogy, in 3-D space there will be three tangents, which may give intersection points; therefore, at least three antenna arrays are required to obtain the intersection points, as shown in Fig. 2. In this paper, only linear antenna arrays are considered. Because the problem is considered in the far field, the locations of the antenna arrays are known and can be considered as fixed points at $(x_i, y_i, z_i)$. The orientations of the different antenna arrays are fixed as $(a_i, b_i, c_i)$. The estimated location of the object is denoted $(\hat{x}, \hat{y}, \hat{z})$. This location is estimated from the following function, which aims at finding the intersection point of the N cones generated by the N antenna arrays:

$$\sin\theta_i = \frac{d_i}{\sqrt{(\hat{x} - x_i)^2 + (\hat{y} - y_i)^2 + (\hat{z} - z_i)^2}}, \qquad i = 1, 2, \ldots, N \qquad (1)$$


Fig. 1. Node A can be located by B1 and B2

Fig. 2. Cones generated by the antenna

where

$$d_i = \frac{\sqrt{\begin{vmatrix} \hat{y} - y_i & \hat{z} - z_i \\ b_i & c_i \end{vmatrix}^2 + \begin{vmatrix} \hat{z} - z_i & \hat{x} - x_i \\ c_i & a_i \end{vmatrix}^2 + \begin{vmatrix} \hat{x} - x_i & \hat{y} - y_i \\ a_i & b_i \end{vmatrix}^2}}{\sqrt{a_i^2 + b_i^2 + c_i^2}}, \qquad i = 1, 2, 3, \ldots, N \qquad (2)$$

is the distance between a point and a line in space. Equation (1) shows that $\sin\theta_i$ is the ratio of the distance from the object to the extended line of the antenna array to the distance between the object and the antenna array.
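The intersection of the N cones defined by Eqs. (1)–(2) can be found numerically; below is a minimal SciPy least-squares sketch under the assumption that the array positions, unit orientation vectors, and measured angles are given. It illustrates the geometry only and is not the Taylor-series linearization used by the authors.

```python
import numpy as np
from scipy.optimize import least_squares

def aoa_residuals(p, positions, orientations, sin_thetas):
    """Residuals of Eq. (1) for a candidate object position p = (x, y, z)."""
    res = []
    for pos, v, s in zip(positions, orientations, sin_thetas):
        diff = p - pos
        v = v / np.linalg.norm(v)
        d_i = np.linalg.norm(np.cross(diff, v))      # point-to-line distance, Eq. (2)
        res.append(s - d_i / np.linalg.norm(diff))   # measured minus predicted sin(theta_i)
    return res

def estimate_position(positions, orientations, sin_thetas, x0=None):
    """Least-squares estimate of the object location from N >= 3 AOA measurements."""
    positions = np.asarray(positions, dtype=float)
    orientations = np.asarray(orientations, dtype=float)
    if x0 is None:
        x0 = positions.mean(axis=0) + 1.0            # rough initial guess away from the arrays
    return least_squares(aoa_residuals, x0,
                         args=(positions, orientations, sin_thetas)).x
```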


3 The Position Estimation

The position of the source can be found from the centroid of a polygon formed by the intersection points. In this section, we propose Precision-Weighted Aggregation to define the estimated position of the object. The main sources of error in AOA-based localization are the multipath propagation effect and the actual measurement error; the final position estimate is obtained by weighted aggregation. In general, multipath propagation affects the signal propagation between the mobile station and the base station in the form of scattering, thus affecting the accuracy of the final location estimate. In the three-dimensional case, we model the macro-cellular propagation environment as a scattering sphere around the MS, with the BS outside the sphere. Figure 3 illustrates this spatial structure, in which we assume that the MS uses an omnidirectional antenna, so the main scatterers are located in a sphere of radius R centered on the MS. Let (x, y, z) represent the position of a scatterer. It is convenient to use spherical coordinates to represent the pdf of the scatterer AOA; the conversion formulas between rectangular and spherical coordinates are as follows:

Fig. 3. Calculating AOA

$$r = \sqrt{x^2 + y^2 + z^2}, \qquad x = r\sin\theta\cos\varphi, \qquad y = r\sin\theta\sin\varphi, \qquad z = r\cos\theta \qquad (3)$$

The joint pdf $f_{r,\theta,\varphi}(r,\theta,\varphi)$ is defined by

$$f_{r,\theta,\varphi}(r,\theta,\varphi) = \left.\frac{f_{x,y,z}(x,y,z)}{|J(x,y,z)|}\right|_{\,x = r\sin\theta\cos\varphi,\; y = r\sin\theta\sin\varphi,\; z = r\cos\theta} \qquad (4)$$


where $J(x,y,z)$ is the Jacobian of the transformation, given by

$$J(x,y,z) = \begin{bmatrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \theta} & \frac{\partial x}{\partial \varphi} \\ \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \theta} & \frac{\partial y}{\partial \varphi} \\ \frac{\partial z}{\partial r} & \frac{\partial z}{\partial \theta} & \frac{\partial z}{\partial \varphi} \end{bmatrix}^{-1} = \begin{vmatrix} \sin\theta\cos\varphi & r\cos\theta\cos\varphi & -r\sin\theta\sin\varphi \\ \sin\theta\sin\varphi & r\cos\theta\sin\varphi & r\sin\theta\cos\varphi \\ \cos\theta & -r\sin\theta & 0 \end{vmatrix}^{-1} = \frac{1}{r^2\sin\theta} \qquad (5)$$

Substituting Eq. (5) into Eq. (4) gives

$$f_{r,\theta,\varphi}(r,\theta,\varphi) = r^2\sin\theta\, f_{x,y,z}(r\sin\theta\cos\varphi,\; r\sin\theta\sin\varphi,\; r\cos\theta)$$

If the scatterers are evenly distributed within a sphere $R_V$ of volume $V = \frac{4\pi R^3}{3}$, the scatterer density function is

$$f_{x,y,z}(x,y,z) = \begin{cases} \dfrac{1}{V}, & x, y \text{ and } z \in R_V \\ 0, & \text{else} \end{cases} \qquad (6)$$

Therefore, Eq. (6) reduces the above to

$$f_{r,\theta,\varphi}(r,\theta,\varphi) = \frac{r^2\sin\theta}{V} \qquad (7)$$

Thus, the pdf of the AOA at the BS, $\theta$, can be calculated as

$$f_\theta(\theta) = \int_0^{2\pi}\!\!\int_{D-R}^{D+R} f_{r,\theta,\varphi}(r,\theta,\varphi)\, dr\, d\varphi = \int_0^{2\pi}\!\!\int_{D-R}^{D+R} \frac{r^2\sin\theta}{V}\, dr\, d\varphi = \frac{(3D^2 + R^2)\sin\theta}{R^2} \qquad (8)$$

where $\theta_0 - \sin^{-1}\frac{R}{D} \le \theta \le \theta_0 + \sin^{-1}\frac{R}{D}$, $D$ is the distance between the MS and the BS, and $\theta_0$ is the angle between the MS and the z axis. Let $\Delta\theta_1 = \theta_0 - \theta$. Then the pdf of $\Delta\theta_1$ is

$$f_{\Delta\theta_1}(\Delta\theta_1) = \frac{(3D^2 + R^2)\sin(\theta_0 - \Delta\theta_1)}{R^2}, \qquad -\sin^{-1}\frac{R}{D} \le \Delta\theta_1 \le \sin^{-1}\frac{R}{D} \qquad (9)$$

The Final Position Estimation by Precision-Weighted Aggregation. Let $\hat{\theta}_i$ denote the true value of the AOA at the $i$th BS, let $\Delta\theta_{i1}$ denote the noise from multipath propagation, and let $\Delta\theta_{i2}$ denote the measurement noise. Then the measured value $\theta_i$ is given by

$$\theta_i = \hat{\theta}_i + \Delta\theta_{i1} + \Delta\theta_{i2} \qquad (10)$$

The pdf of $\Delta\theta_{i1}$ is given by Eq. (9), and the mean and variance of $\Delta\theta_{i2}$ can be found from [13]:


$$\Delta_i = \sqrt{\left(\frac{\lambda}{2\pi S\,\sin(\theta_i)\,M}\right)^2 \frac{6\,\sigma_n^2}{M P_i N}} \qquad (11)$$

$\Delta_i$ is the bias in the estimation of one source AOA, where $S$ is the sensor separation, $M$ denotes the number of sensor elements, $N$ denotes the number of independent snapshots, $\sigma_n^2$ is the noise power level, and $P_i$ is the source power level. The analysis shows that the larger the impinging angle, the smaller the bias error. We can then obtain the weight of $\theta_i$ as $w_i = \mathrm{SNR} = \frac{\theta_i}{\Delta\theta_{i1} + \Delta\theta_{i2}}$. Obviously, different combinations of the antenna arrays will produce different combinations of $\theta_i$, and hence different weights $\omega_{S_i}$ on the four sets of solutions. The weight of each set $\omega_{S_i}$ is calculated as

$$\omega_{S_1} = \prod_i w_i \cdot r_i,\; i = 1,2,3; \quad \omega_{S_2} = \prod_i w_i \cdot r_i,\; i = 1,2,4; \quad \omega_{S_3} = \prod_i w_i \cdot r_i,\; i = 1,3,4; \quad \omega_{S_4} = \prod_i w_i \cdot r_i,\; i = 2,3,4 \qquad (12)$$

Therefore, the final estimated position of the object (x, y, z) is

$$x = \frac{\sum_i \omega_{S_i}\hat{x}_i}{\sum_i \omega_{S_i}}, \qquad y = \frac{\sum_i \omega_{S_i}\hat{y}_i}{\sum_i \omega_{S_i}}, \qquad z = \frac{\sum_i \omega_{S_i}\hat{z}_i}{\sum_i \omega_{S_i}}, \qquad i = 1, \ldots, 4 \qquad (13)$$
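A minimal sketch of the precision-weighted aggregation in Eq. (13) is given below, assuming the four sub-solutions and their set weights have already been computed; the function and variable names are illustrative.

```python
import numpy as np

def precision_weighted_aggregate(estimates, set_weights):
    """Eq. (13): weighted average of the four sub-solution positions.

    estimates:   array of shape (4, 3), the (x, y, z) estimate of each 3-array subset
    set_weights: array of shape (4,), the weights omega_Si of Eq. (12)
    """
    estimates = np.asarray(estimates, dtype=float)
    w = np.asarray(set_weights, dtype=float)
    return (w[:, None] * estimates).sum(axis=0) / w.sum()

# Example: four slightly different sub-solutions combined into one position estimate.
position = precision_weighted_aggregate(
    [[1.02, 2.01, 0.98], [0.99, 1.97, 1.01], [1.05, 2.03, 1.00], [0.97, 2.00, 0.99]],
    [0.9, 1.2, 0.7, 1.1])
```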

4 Consider the Influence of Distance

If the centroid of the service region is known, all the antenna arrays should face the centroid directly. The distance between the centroid and the antenna arrays should then be as small as possible, because $\sin\theta_i$ does not influence the result there; the larger the SNR, the smaller the bias $\Delta_i$. But for objects other than the centroid, the antenna arrays are not normal to them, so the influence of $\sin\theta_i$ needs to be considered. In order to obtain the smallest overall bias over the whole service region, we therefore investigate how to choose the optimal distance d between the object and the antenna array. As shown above, $\Delta_i$ is influenced by $\cos\theta_i$ and $\frac{\sigma_n^2}{P^2}$; to obtain a smaller bias $\Delta_i$, either $\cos\theta_i$ should be made larger, or the SNR should be made larger, i.e., $\frac{\sigma_n^2}{P^2}$ made smaller. On one side (Fig. 4),


Fig. 4. Influence of d

$$\sin\theta_i = \frac{d}{d_i} = \frac{d}{\sqrt{d^2 + r_i^2}} \qquad (14)$$

$$d_i^2 = d^2 + r_i^2 \qquad (15)$$

So, for a fixed $r_i$, the larger $d$ is, the larger $d_i$ and $\sin\theta_i$ become. On the other side, the path loss in dB is usually expressed by $(L)_{dB} = 10\,n\log_{10}(d_i) + C$, where $d_i$ is the distance between the transmitter and the receiver, usually measured in meters, and $C$ is a constant which accounts for system losses; for free space $n = 2$, and for the flat-earth model $n = 4$. Since $(\mathrm{SNR})_{dB} = (P_t)_{dB} - L_{dB} - (\sigma_n^2)_{dB}$, we have $P_i = P_t \cdot d_i^{-n}$, where $P_t$ is the transmission power of the object. In application, we can assume $P_t$ to be a fixed value, so the smaller $d_i$ is, the larger the SNR. There is thus a tradeoff between $\cos\theta_i$ and the SNR. In order to get the best $\Delta_i$, we take the differential with respect to $d_i$. We assume a service region with a centroid that is normal to all the antenna arrays, and all the other objects are uniformly distributed in the service region. The total bias of the service region for one antenna array is

$$\Delta = \sum_{1}^{\theta_{max}} \Delta_i(\theta_i)\left(\frac{A_i}{A_{total}}\right) \qquad (16)$$

where $A_i$ is the area of a tiny sub-region with arrival angle $\theta_i$, $A_{total}$ is the total area of the service region, and $\Delta_i(\theta_i)$ is the bias at arrival angle $\theta_i$:

$$\Delta_i(\theta_i) = \sqrt{\left(\frac{\lambda}{2\pi S\,\sin(\theta_i)\,M}\right)^2 \frac{6\,\sigma_n^2}{M P_i N}} \qquad (17)$$

Because all the other parameters are fixed, we can consider $\Delta_i(\theta_i)$ as


$$\Delta_i(\theta_i) \propto \sqrt{\frac{d_i^{\,n}}{\sin^2\theta_i}} = \sqrt{\frac{d_i^{\,n}(r_i^2 + d^2)}{d^2}} = \sqrt{\frac{(r_i^2 + d^2)^{1+\frac{n}{2}}}{d^2}} = \frac{(r_i^2 + d^2)^{\frac{n+2}{4}}}{d} \qquad (18)$$

The parameter $r_i$ is fixed for a given service region. Here, if we consider free space, $n = 2$, then $\Delta_i(\theta_i)$ becomes

$$\Delta_i(\theta_i) \propto \frac{r_i^2 + d^2}{d^2} = \frac{r_i^2}{d^2} + 1 \qquad (19)$$

Therefore, in order to minimize the overall bias, we differentiate with respect to d after obtaining the total bias Δ. This can be done by numerical computer analysis for the problem at hand.

5 Simulations and Results Analysis

The localization technique described in this paper is simulated in an indoor environment with a size of 10 × 8 × 5 m to determine its performance. The beacons are placed in three typical triangle shapes, and the distance between beacons is kept similar in each placement. Fourteen sample points with the same theoretical values were taken in each configuration for comparative measurement. The errors of the different placements are shown in Fig. 5.

Fig. 5. Error difference of placement


Fig. 6. The result comparison

As can be seen, the average positioning error is about 0.15 m. Based on the study of positioning error versus the distance between the beacons/base stations, the equilateral-triangle placement is the best one. Figure 6 shows the comparison between the actual position of the object and the position obtained by the localization system when the distance between the beacons is 3 m and the placement is an equilateral triangle. It can be seen that the maximum difference is only 20 cm, which indicates that the system has great advantages in localization accuracy and is reliable to a certain extent. Figure 7 shows the comparison of various algorithms; the RMS error of our algorithm is as low as 20 cm, which gives it an obvious accuracy advantage over the other algorithms.

Fig. 7. Influence of algorithms


Fig. 8. Track error diagram

Figure 8 shows the real moving trajectory of an object and the location information computed by our system. The points G to O are sampling points of the localization result. There is only a modest difference, as low as 20 cm, between the real positions and the calculated ones. In other words, the localization accuracy is maintained even when our localization system is applied to a dynamic object.

6 Conclusion

In this paper, the proposed three-dimensional wireless precise localization method based on smart antennas, which uses the angle of arrival and an innovative algorithm, achieves higher positioning accuracy. Through simulation analysis, the optimal BS placement and accurate localization results are obtained. The detection time is insignificant, and the antenna load is considered in optimizing the algorithm. The simulation results show that our system and solution are more accurate.

Acknowledgements. This work was supported by the Natural Science Foundation of China (61871237), the National Science & Technology Key Project of China (2017ZX03001008) and the Key R&D Plan of Jiangsu Province (BE2019017).

References 1. Roberts GW, Ashkenazi V (2015) Experimental monitoring of the Humber Bridge using GPS. Civ Eng 120(4):177–182 2. Mautz R (2012) Indoor positioning technologies 3. Liu W, Ding H, Huang X et al (2012) TOA estimation in IR UWB ranging with energy detection receiver using received signal characteristics. IEEE Commun Lett 16(5):738–741 4. Al-Jazzar S, Ghogho M, McLernon D (2009) A joint TOA/AOA constrained minimization method for locating wireless devices in non-line-of-sight environment. IEEE Trans Veh Technol 58(1):468–472 5. Bayesian filtering for location estimation 6. A statistical modelling based location determination method using fusion technique in WLAN 7. Learning adaptive temporal radio maps for signal-strength-based location estimation


8. Madigan D, Ju WH, Krishnan P, Krishnakumar AS, Zorych I (2006) Location estimation in wireless networks: a Bayesian approach. Stat Sin 16:495–522 9. Youssef MA, Agrawala A, Udaya Shankar A (2003) WLAN location determination via clustering and probability distributions. In: Proceedings of the first IEEE international conference on pervasive computing and communications, 2003. (PerCom 2003), pp 143– 150 10. Chieh Chen J, Cheng Wang Y, Shyang Maa C, Tsair Chen J (2006) Network-side mobile position location using factor graphs. IEEE Trans Wirel Commun 5(10):2696–2704 11. Mensing C, Plass S (2007) Positioning based on factor graphs. EURASIP J Adv Signal Process 2007. Article ID 41348, 11p 12. Zhu X, Wang Y, Zhu H (2010) A precise 2-D wireless localization technique using smart antenna. In: 2010 international conference on cyber-enabled distributed computing and knowledge discovery (CyberC). https://doi.org/10.1109/CyberC.2010.21 13. Jiang L, Tan SY (2004) Simple geometrical-based AOA model for mobile communications systems. Electron Lett 40(19). 16 Sept 2004

A Two-Phase Fault Diagnosis Algorithm Based on Convolutional Neural Network for Heterogeneous Wireless Yong Wang1, Lei Zhang2, and Xiarong Zhu2(&) 1

Nanjing Institute of Industry Technology, Nanjing, People’s Republic of China [email protected] 2 Nanjing University of Posts and Telecommunication, Nanjing, People’s Republic of China [email protected], [email protected]

Abstract. Considering the high complexity of fault diagnosis in heterogeneous wireless networks (HWNs), we propose a two-phase fault diagnosis algorithm based on a convolutional neural network (CNN), which includes a monitoring phase and a diagnosis phase. In the monitoring phase, based on an analysis of the causes of failures in HWNs, feature selection is used to select the network parameters that have the greatest influence on network nodes; the timing distribution characteristics of these network parameters are then monitored so that suspected faults can be detected. Once a suspected fault is detected, the second-stage diagnosis program is triggered: the Operation Administration and Maintenance (OAM) system requests the detailed network KPI data of neighboring base stations to provide more comprehensive diagnostic information, and the CNN is used to confirm and locate the faults. Simulation results show that the proposed algorithm achieves short diagnostic delay and high fault recall and precision ratios.

Keywords: Heterogeneous wireless networks (HWNs) · Fault diagnosis · Convolutional neural network (CNN)

1 Introduction

The future wireless communication system should connect not only human beings but also things such as machines, vehicles, sensors, etc. To satisfy different kinds of service requirements, the next-generation wireless network will be heterogeneous, integrating multiple Radio Access Technologies (RATs) such as LTE, WLAN and 5G [1]. In the context of complex heterogeneous wireless networks (HWNs), dynamic and adaptive network management methods are urgently needed. Many studies have proposed the use of graph theory, model-traversing techniques, and artificial intelligence to model fault localization in computer networks [2, 3], and recently many researchers have paid increasing attention to intelligent fault detection and localization algorithms for wireless networks [4–9]. In [2], aiming at the problems of configuration overhead and large link load in distributed fault detection, a probe detection model with an adaptive probe interval is


proposed, which can change the probe interval dynamically based on changes in probe response delay and packet loss rate. Reference [3] establishes a fault diagnosis model for large-scale IP networks based on dynamic Bayesian networks by improving a representative exact algorithm, which greatly reduces the computational complexity of the algorithm while only slightly reducing the accuracy of diagnosis. Reference [4] proposes KPI grading for the automatic detection and diagnosis problem and forms reports by analyzing previous failure cases; a scoring system with KPI levels and reports as input parameters is designed to realize automatic detection and diagnosis of network faults. With the increase of network complexity, however, the establishment of a fault report database becomes a challenge. Reference [5] proposes a super-learning algorithm to solve the problem of anomaly detection in cellular networks. The algorithm is actually an ensemble learning method: it uses an optimal weighted combination of multiple learning algorithms to approach the true model in a gradual manner, and the predictive model provides a very powerful detection method. Reference [6] proposes a protocol for distributed spatio-temporal event correlation. This method uses a gossip-like protocol to aggregate spatio-temporally related events over the entire overlay stack based on the time sequence of the events and their order relative to the layers; however, it requires the underlying network to know the overlay topology. Reference [7] proposes an algorithm for internal evaluation of virtual network and underlying network faults based on trust evaluation; the algorithm ignores the internal correlation between network symptoms, and the diagnostic time and diagnostic accuracy need to be improved. Reference [8] proposes a cell outage management framework with data and control separation in heterogeneous networks: the control layer uses the k-nearest-neighbor or local outlier factor (LOF) algorithm to calculate the distance between current data and benchmark data under normal network conditions to determine whether it is abnormal, while data-layer detection uses the GM algorithm to detect anomalies. Reference [9] analyzes the relationship between service symptoms and network failures and, based on the fuzzy symptom-failure relationships, proposes a multi-dimensional fuzzy association rule mining method for communication network fault early warning. This method is suitable for scenarios where the symptom set is not too large; as the scale of network services increases, the set of service symptoms becomes very large and the accuracy of the algorithm decreases.

2 CNN-Based Diagnosis Model Aiming at locating and predicting the faults of HWNs, we propose a two-stage fault diagnosis model based on a CNN. As shown in Fig. 1, the first stage is the monitoring stage: the monitoring program performs timing pattern analysis on a small number of network feature parameters selected in advance and matches them against the fault characteristics stored in the database. If the matching degree in the first stage is high, the second stage, the model diagnosis stage, is triggered. The fault diagnosis model trained on historical data then requests detailed network parameter information from the Operation Administration and Maintenance


(OAM) system. OAM processes the detailed network parameter data of the most recent time period and sends it to the diagnosis model, which finally outputs the classification result.

Fig. 1. Two-stage fault diagnosis diagram

2.1 The First Stage: Monitoring Phase

2.1.1 Feature Selection The monitoring procedure must run at all times to receive parameter data from nodes such as macro BSs, small BSs and relays. If all collected data were transmitted to the server, wireless resources would be occupied and the server load would also be affected. Therefore, the feature selection algorithm needs to traverse the different types of network nodes and obtain an optimal subset of network parameters for each node based on its characteristics. This paper uses mRMR [10] to select typical features. The algorithm maximizes the degree of association between the classification labels and the data features while taking the association among features into account. The idea is to maximize the usefulness of the features within a limited dimension, which helps make full use of strong feature combinations to complete the fault monitoring work in a resource-constrained wireless network environment. 2.1.2 Diagnosis of Abnormal Symptoms Based on the parameter selection, we obtain the optimal k parameters for each node. Assume that the length of the data collection time window is T; then the time window parameter matrix X formed from the optimal parameter set at time t is

$$X = \begin{bmatrix} KPI_1^{t-T+1} & KPI_1^{t-T+2} & \cdots & KPI_1^{t} \\ KPI_2^{t-T+1} & KPI_2^{t-T+2} & \cdots & KPI_2^{t} \\ \vdots & \vdots & \ddots & \vdots \\ KPI_k^{t-T+1} & KPI_k^{t-T+2} & \cdots & KPI_k^{t} \end{bmatrix} \quad (1)$$

The normalized weight value of the ith parameter at time j in time window is [11]:


$$KPI_{norm,i}^{\,j} = \frac{KPI_i^{\,j}}{\sum_{m=t-T+1}^{t} KPI_i^{\,m}}, \quad j = t-T+1, \ldots, t \quad (2)$$

The weight of the KPI set distribution at time j is expressed by [11]

$$KPI_{weight}^{\,j} = \frac{1}{k}\sum_{i=1}^{k} KPI_{norm,i}^{\,j}, \quad j = t-T+1, \ldots, t \quad (3)$$

Then the distribution weight vector over the time window is

$$w = \left[\, KPI_{weight}^{\,t-T+1}, \; KPI_{weight}^{\,t-T+2}, \; \ldots, \; KPI_{weight}^{\,t} \,\right]^{T} \quad (4)$$

The centroid distance (similarity) between the online KPI parameter distribution and that of a fault state stored in the database is

$$dis = \left\| Xw - \hat{X}\hat{w} \right\|_2 \quad (5)$$

where $\hat{X}$ is the KPI parameter distribution matrix of the historical fault data and $\hat{w}$ is its distribution weight vector. A data distribution similarity threshold $\alpha$ is defined; when $dis < \alpha$, the observation is regarded as a suspected fault symptom and a command to initiate diagnosis is sent to the machine-learning model.
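As an illustration of the monitoring-stage computation in Eqs. (1)-(5), the following minimal Python sketch normalizes the KPI window matrix, forms the distribution weight vector, and compares the weighted distribution with a stored fault template; the variable names (X, X_fault, w_fault, alpha) are illustrative and not part of the original algorithm description.

```python
import numpy as np

def suspected_fault(X, X_fault, w_fault, alpha):
    """X: k x T matrix of the k selected KPIs over the time window (Eq. 1).
    X_fault, w_fault: stored fault-state KPI matrix and its weight vector.
    Returns True when the centroid distance of Eq. (5) falls below alpha."""
    kpi_norm = X / X.sum(axis=1, keepdims=True)      # Eq. (2): normalize each KPI over the window
    w = kpi_norm.mean(axis=0)                        # Eqs. (3)-(4): weight vector over the window
    dis = np.linalg.norm(X @ w - X_fault @ w_fault)  # Eq. (5): centroid distance
    return dis < alpha
```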

2.2 Second Stage: Diagnosis Stage

2.2.1 Base Station Selection Based on RSRP and RSRQ, the neighboring base station set $BS_{RSRP}$ with the strongest RSRP and the neighboring base station set $BS_{RSRQ}$ with the best RSRQ are selected respectively [12]. Three base stations are selected for each set:

$$BS_{RSRP} = \{ BS_{RSRP1}, BS_{RSRP2}, BS_{RSRP3} \} \quad (6)$$

$$BS_{RSRQ} = \{ BS_{RSRQ1}, BS_{RSRQ2}, BS_{RSRQ3} \} \quad (7)$$

If a certain base station is selected by both sets simultaneously, the union of the two sets is taken:

$$BS_{neighbours} = BS_{RSRP} \cup BS_{RSRQ} \quad (8)$$
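A minimal sketch of the neighbour-set construction in Eqs. (6)-(8), including the zero-padding to six neighbouring BSs described in the next paragraph; the dictionary inputs and the dummy identifier 0 are illustrative assumptions.

```python
def select_neighbours(rsrp, rsrq, n_top=3, n_fixed=6):
    """rsrp, rsrq: dicts mapping BS id -> measured RSRP / RSRQ value."""
    bs_rsrp = sorted(rsrp, key=rsrp.get, reverse=True)[:n_top]   # Eq. (6): strongest RSRP
    bs_rsrq = sorted(rsrq, key=rsrq.get, reverse=True)[:n_top]   # Eq. (7): best RSRQ
    neighbours = list(dict.fromkeys(bs_rsrp + bs_rsrq))          # union, Eq. (8)
    neighbours += [0] * (n_fixed - len(neighbours))              # zero-pad to six neighbours
    return neighbours[:n_fixed]
```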


To ensure a consistent data format, each data sample contains six neighboring BSs; samples with fewer than six BSs are padded with zeros. This avoids missing neighboring BSs when base stations are sparsely deployed.

2.2.2 Fault Diagnosis Model After the monitoring stage, suspicious network symptoms may be discovered. The system then obtains all relevant network parameters of the most recent time interval, including the parameter sets of the current BS and the neighboring BSs, as the feature data. Based on these features, a CNN is used to classify the failure. The change of a single parameter is a temporal feature, and a high-dimensional feature represents the simultaneous change of multiple parameters in the time dimension. In recent years, convolutional neural networks have achieved remarkable classification results in computer vision and image processing. An image is essentially a distribution of pixel values, and the temporal structure of network data has a similar distribution, so a CNN can be used to mine local structural feature combinations of the input matrix. Therefore, before the collected data are input into the network, the following dimension transformation is performed on the features. For each BS, the parameters at time t are

$$X_{cur} = \left[\, X_{cur}^1, \; X_{cur}^2, \; \ldots, \; X_{cur}^N \,\right] \quad (9)$$

where N is the number of parameters and $X_{cur}^1$ is the first parameter. When the neighboring BSs are selected, the network characteristic matrix at time t can be expressed as

$$X(t) = \begin{bmatrix} X_{cur}(t) \\ X_1(t) \\ X_2(t) \\ X_3(t) \\ X_4(t) \\ X_5(t) \\ X_6(t) \end{bmatrix} = \begin{bmatrix} X_{cur}^1(t) & X_{cur}^2(t) & \cdots & X_{cur}^N(t) \\ X_1^1(t) & X_1^2(t) & \cdots & X_1^N(t) \\ X_2^1(t) & X_2^2(t) & \cdots & X_2^N(t) \\ X_3^1(t) & X_3^2(t) & \cdots & X_3^N(t) \\ X_4^1(t) & X_4^2(t) & \cdots & X_4^N(t) \\ X_5^1(t) & X_5^2(t) & \cdots & X_5^N(t) \\ X_6^1(t) & X_6^2(t) & \cdots & X_6^N(t) \end{bmatrix} \quad (10)$$

where $X_{cur}^i(t)$ denotes the ith feature of the current BS and $X_i(t)$ denotes the ith neighboring BS of the current BS. Since network fault parameters change over time, the input data should capture this time variation, so the model input is taken over a time interval n, i.e.

$$Input = \left[\, X(t-n+1), \; X(t-n+2), \; \ldots, \; X(t) \,\right] \quad (11)$$

This completes the data pre-processing of the network faults and yields the parameter matrix of the heterogeneous network. We then use a CNN to complete the fault diagnosis, as shown in Fig. 2: after the above transformation of the network parameters, local feature transformations are performed by the CNN convolution layers [13], and the pooling layers provide a non-linear mapping of the features. After several feature transformations, the resulting features converge to the fully connected layer, and finally the network faults are classified and output.
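The paper does not give the exact network architecture, so the following PyTorch sketch only illustrates the structure of Fig. 2 under assumed layer sizes: the n parameter matrices of Eq. (11) are stacked as input channels, followed by convolution, pooling and a fully connected classification layer. The channel counts, kernel sizes and number of fault classes are placeholders.

```python
import torch
import torch.nn as nn

class FaultCNN(nn.Module):
    def __init__(self, n_steps=8, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_steps, 16, kernel_size=3, padding=1),  # local feature extraction
            nn.ReLU(),
            nn.MaxPool2d(2),                                   # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool2d(1),                           # collapse each feature map
        )
        self.classifier = nn.Linear(32, n_classes)             # fully connected output layer

    def forward(self, x):                  # x: (batch, n_steps, 7, N)
        return self.classifier(self.features(x).flatten(1))

# Example: 4 samples, n = 8 time steps, 7 BS rows (current + 6 neighbours), N = 20
logits = FaultCNN()(torch.randn(4, 8, 7, 20))   # -> tensor of shape (4, 5)
```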

Fig. 2. Fault diagnosis model based on convolutional neural network

3 Simulation and Performance Evaluation

3.1 Simulation Environment

The simulation hardware environment is an Intel(R) Xeon(R) E7-4870 CPU (2 processors) with 3 GB of memory running Windows 7 Server, and the simulation is carried out in OPNET 18.6. A cellular network consisting of 7 macro BSs is used. The overall coverage of the network is 5 km × 5 km, and each macro BS covers a hexagonal cell with a radius of approximately 600 m. A number of low-power BSs are deployed in terminal-intensive areas, and users are randomly distributed in the respective cells. Recall and precision are introduced to measure the performance of the fault diagnosis algorithm.
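For reference, the two metrics are computed in the standard way from the diagnosis confusion counts; the function below is a trivial sketch with illustrative argument names.

```python
def recall_precision(tp, fp, fn):
    recall = tp / (tp + fn)        # detected faults / all actual faults
    precision = tp / (tp + fp)     # correctly reported faults / all reported faults
    return recall, precision
```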

3.2 Performance Analysis

We compare the proposed two-stage CNN algorithm with the K-nearest neighbor (KNN) classification algorithm of [14] and with a direct CNN that does not use the monitoring stage. Figure 3 shows that the diagnostic performance of the KNN algorithm is lower than that of the two deep learning models: faults occur with low probability, so the generated training and test samples are unbalanced, and since KNN is sensitive to unbalanced data sets its recall ratio is low. The recall ratio of the two-phase CNN fault diagnosis algorithm is lower than that of the direct CNN when the number of input historical records is small. However, as the number of input historical records increases, its recall becomes very close to that of the direct CNN, because the recall of the first-stage abnormal symptom diagnosis increases greatly and the recall performance of the second stage improves accordingly.


Fig. 3. Comparison of different algorithm recalls

Figure 4 compares the precision ratio of the different algorithms. The diagnostic precision of the KNN algorithm is lower than that of the deep learning models; as the amount of historical input data increases, the performance of KNN improves but remains relatively low. The CNN algorithms show little difference in performance when more historical data are input. When the

Fig. 4. The comparison of precision ratio of different algorithms


number of historical data records is between 300 and 450, the precision of the two-phase CNN fault diagnosis model is very close to that of the direct CNN model. Therefore, considering both recall ratio and precision ratio, the proposed two-phase CNN fault diagnosis model offers better overall diagnostic performance.

4 Conclusions In this paper we propose a two-stage fault diagnosis algorithm for HWNs based on a deep convolutional neural network. Based on an analysis of the causes of network failures, the algorithm takes the usage of wireless network resources into account, selects the network parameters with the greatest influence for each node by feature selection, and monitors the variation of the timing distribution of these parameters. Based on the monitoring results, a CNN is used to classify network failures. Simulation results show that the proposed algorithm achieves short diagnostic delay and high fault recall and precision ratios. Acknowledgements. This work was supported by the Natural Science Foundation of China (61871237), the Natural Science Foundation of the Higher Education Institutions of Jiangsu Province (16KJA510005), and the Science Foundation for Young Scholars of NIIT (YK18-02-12).

References
1. Andrews JG (2014) What will 5G be? IEEE JSAC 32(6)
2. Steinert R, Gillblad D (2010) Towards distributed and adaptive detection and localisation of network faults. In: Advanced international conference on telecommunications. IEEE, pp 384–389
3. Li ZQ, Cheng L, Qiu XS et al (2013) Fault diagnosis for large-scale IP networks based on dynamic Bayesian model. Sci Technol Inf 6:67–71
4. Szilagyi P, Novaczki S (2012) An automatic detection and diagnosis framework for mobile communication systems. IEEE Trans Netw Serv Manage 9(2):184–197
5. Casas P, Vanerio J (2017) Super learning for anomaly detection in cellular networks. In: IEEE international conference on wireless and mobile computing, networking and communications. IEEE Computer Society, pp 1–8
6. Steinert R, Gestrelius S, Gillblad D (2011) A distributed spatio-temporal event correlation protocol for multi-layer virtual networks. In: Global telecommunications conference. IEEE, pp 1–5
7. Liu N, Zhang S, Wang X (2016) Virtual network fault diagnosis using trust evaluation. Video Eng
8. Onireti O, Zoha A, Moysen J (2016) A cell outage management framework for dense heterogeneous networks. IEEE Trans Veh Technol 65(4):2097–2113
9. Yu L, Sun Y, Li K, Xu M (2016) An improved genetic algorithm based on fuzzy inference theory and its application in distribution network fault location. In: 2016 IEEE 11th conference on industrial electronics and applications (ICIEA)
10. Peng H, Long F, Ding C (2005) Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy. IEEE Computer Society
11. Kusner MJ (2015) From word embeddings to document distances. In: International conference on machine learning. JMLR.org, pp 957–966
12. Turkka J (2012) An approach for network outage detection from drive-testing databases. J Comput Netw Commun
13. Lee H, Kwon H (2017) Going deeper with contextual CNN for hyperspectral image classification. IEEE Trans Image Process 26(10):4843
14. Xue W (2014) Classification-based approach for cell outage detection in self-healing heterogeneous networks. In: Wireless communications and networking conference. IEEE, pp 2822–2826

A Wireless Power Transfer System with Switching Circuit of Power Grid and Solar Energy Ze Song1, Xin Zhang1(&), Xiu Zhang1, Ruiqing Xing1, and Lei Wang2 1

2

Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin 300387, China [email protected] Institute of Biomedical Engineering, Chinese Academy of Medical Science & Peking Union Medical College, Tianjin 300192, China

Abstract. In recent years, wireless power transfer technology has been recognized by users for unique advantages such as convenience and contactless transfer over a distance, and it has been popularized and developed rapidly. This paper designs a wireless charging system with two charging modes. The system mainly includes an electricity grid charging module, a solar energy charging module, and a power switching device for the two charging modes. Experimental analysis shows that the system can switch between the two power supply modes and realize the wireless charging function, which has practical application value, and the wireless transfer characteristics are verified through experiments. Finally, based on the design and experiments of the wireless energy transfer system, its characteristics are summarized. The developed system is expected to be useful in real-world applications.

Keywords: Wireless power transfer · Solar energy · Circuit switching · Charging system

1 Introduction Wireless power transfer (WPT), also known as wireless power transmission, is a kind of non-contact power transfer: a transmitting device converts electrical energy into another energy form such as magnetic field energy, which is then converted back into electrical energy by a receiving device, so that electric energy is transferred over a certain distance [1]. Wireless energy transfer can be divided into electromagnetic induction, electromagnetic resonance and electromagnetic radiation; the three transfer modes suit different power and distance requirements [2]. With the continuous development of science and technology, different wireless energy transfer principles have been adapted to production and daily life. For example, low-power mobile phone charging uses the principle of electromagnetic induction, which not only greatly changes the charging mode but also unifies the charging interface, solving the problem that the various wired charging interfaces are not uniform. As another example, electromagnetic radiation can be applied to the transfer of electrical energy in


deserts, islands and other special occasions. Therefore, the development of wireless energy transfer technology is becoming more and more urgent [3]. As early as 1889, Tesla proposed the idea of wireless power transmission and tested it. At his laboratory on Long Island, USA, he built a high-frequency oscillation coil connected to a tower carrying a circular coil 0.9 m in diameter and 60 m in height, and supplied the coil with 300 kW of electric energy at a resonance frequency of about 150 kHz. In 1964, Brown designed a microwave-powered helicopter and succeeded [4]. In this paper, a system with two wireless charging modes is designed. It includes an electricity grid wireless power transfer module, a solar energy wireless power transfer module, and a power supply switching module. The feasibility and power transfer efficiency of the system are verified by experiments, and the influencing factors and general laws of wireless energy transfer are obtained.

2 Wireless Power Charging System Abroad, Intel used two large coils at a distance of 3 in. with a 10 MHz oscillation frequency to transfer electric energy and successfully lit an incandescent lamp; Sony launched a wireless energy transfer system using magnetic resonance technology in 2009 that successfully delivered 60 W of power [5]; in the same period, Dell introduced a notebook computer that supports wireless charging; in 2013, the Korean Dongyuan OLEV project completed the design and experiment of a high-power electric bus that runs on an actual line, with the transfer efficiency of the entire system reaching 85%. In 2015, Solace Power cooperated with Boeing to develop wireless energy transfer technology for UAVs, using a resonant capacitive coupling method to convert electric energy into an electric field over a small range and realizing hover charging with an effective charging distance of about 25 cm. Domestically, work on wireless transfer started late but has developed rapidly and has played a role in promoting wireless energy transfer in China and the world [6]. Since 2002, Sun Yue's team at Chongqing University has carried out a large number of theoretical explorations and practical experiments on contactless power transfer (CPT) technology and has taken the lead in solving key problems of inductive wireless energy transfer [7]; on the basis of a sound theoretical system and experimental prototypes, this work has promoted the development of wireless energy transfer technology. The Chinese company Haier presented a tailless TV at the 2010 International Electronics Show; this TV uses the wireless energy transfer technology provided by WiTricity, with an effective transfer distance of about 1 m and a transfer power of about 100 W [8]. This is a new attempt by Chinese companies in the field of wireless energy transfer. At the same time, domestic wireless transfer technology has been applied in the civilian sector: Guangxi Power Grid Corporation's project "Research on Key Technologies for Wireless Energy Transfer for Smart Grid" has passed its acceptance test [9]. The project provides power for vehicles by laying the first wireless power supply lane for electric vehicles; it is fast and convenient and has strong environmental adaptability. It is a new type of


charging for electric vehicles, laid a theoretical foundation for this approach, and let the world see a wireless "Tesla" in China. In this paper, a wireless power charging system is designed as shown in Fig. 1. It consists of an electricity grid charging mode and a solar energy charging mode, which are switched through a mode switching device realized by a switching circuit.

Fig. 1. Model of wireless charging system

Next, the two charging modes and the switching circuit are presented in detail. Finally, a physical experiment is conducted to verify the designed system.

3 Electricity Grid Charging Mode The grid wireless charging model uses the principle of electromagnetic induction to achieve power transfer within a centimeter-scale range [10]. It consists of a transmitting circuit module, a receiving circuit module and a pair of coils. The XKT-801 high-frequency high-power resonant wireless power supply chip and the XKT-3169 wireless power receiving chip are the core components of the transmitting and receiving circuit modules; their output power is large, they can be used in a variety of wireless charging solutions, and the circuit is simple and easy to apply [10].


Fig. 2. Model of electricity grid charging subsystem

As shown in Fig. 2, the charging module is composed of a transmitting circuit module, a transmitting coil, a receiving circuit module, a receiving coil and a load, which respectively perform voltage conversion, rectification and filtering, and electric energy conversion, thereby realizing power transfer over a certain distance. When the system works in an ideal state, the input of the transmitting end is 24 V DC and the output of the receiving end is 5 V DC.

4 Solar Energy Charging Mode Solar energy is an extremely important basic energy source among the renewable energies; biomass energy, wind energy, ocean energy and hydropower all derive from it, and solar energy can be said to nourish all things on earth. Solar energy utilization here refers to the direct conversion and use of solar radiation: converting solar radiant energy into electrical energy through a conversion device not only realizes the direct utilization of solar energy but also saves resources, greatly reducing the consumption of fossil energy and the associated pollution.

Fig. 3. Design module of solar energy charging subsystem

The solar wireless charging model is mainly composed of a wireless transmitting circuit, a wireless receiving circuit, a charging interface circuit, a 5 V step-down circuit, a single-chip microcomputer circuit and a solar panel. The single-chip circuit obtains the solar panel voltage and the output voltage of the buck regulator circuit through a multi-channel voltage acquisition chip; if both voltages are normal, the liquid crystal display indicates normal power supply, and if a voltage output is incorrect, an alert is issued. The overall design of this circuit is shown in Fig. 3.


5 Charging Modes Switching Circuit In the overall charging framework, a relay is used to switch between the circuits. As shown in Fig. 4, when the relay is energized its electromagnet attracts the armature, so the relay acts like a circuit switch to achieve circuit switching. In this design the relay has double-changeover contacts: when energized, both contact sets close onto the channel-A supply, and when de-energized they close onto the channel-B supply. The channel-A supply is the solar power circuit and the channel-B supply is the household 220 V mains. When solar energy is sufficient, channel A is energized, the relay operates, the internal electromagnet attracts the switch contacts and channel A is closed, so the system is powered from the solar supply and resources are saved.
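The following Python sketch only illustrates the supply-selection logic described above; the voltage thresholds, tolerance and I/O are hypothetical placeholders rather than values from the actual single-chip implementation.

```python
SOLAR_MIN_V = 3.3       # assumed minimum usable solar panel voltage
BUCK_NOMINAL_V = 5.0    # assumed nominal buck-regulator output

def select_supply(v_panel, v_buck, tolerance=0.25):
    """Return 'A' (solar, relay energized) or 'B' (220 V grid, relay released)."""
    buck_ok = abs(v_buck - BUCK_NOMINAL_V) <= tolerance
    if v_panel >= SOLAR_MIN_V and buck_ok:
        return "A"                                   # energize relay: solar supply
    if not buck_ok:
        print("alert: regulator output abnormal")    # corresponds to the LCD alert
    return "B"                                       # de-energize relay: grid supply
```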

Fig. 4. Charging modes switching circuit

6 Experiments In order to verify the rationality of the system and the feasibility of the technology, a radio energy transfer system was designed. The input is a household 220 V power supply, and then the 220 V step-down circuit module is connected to the transmitting module of the wireless charging system. The transmitting device on the left side is composed of a transmitting circuit module and a primary coil, and the receiving module on the right side is composed of a receiving circuit module and a secondary coil, and the receiving module circuit outputs a voltage to the load [11]. The transmitting coil is composed of two layers of copper wires having a diameter of 1.25 mm, and the receiving coil is composed of a single layer of hard copper wire, and the diameter of the copper wire is 1.23 mm. When the diameter of both coils is 10 cm and the distance between the two coils is greater than 5 cm, the parameters are shown in Table 1.


Table 1. Parameters of transmitting coil and receiving coil in the wireless charging system

Coil                Number of plies    Copper wire diameter
Transmitting coil   2                  1.25 mm
Receiving coil      1                  1.23 mm

In the experiment, the transmitting circuit is supplied with 24 V DC, and the output is connected to a measuring instrument for voltage and current measurement. The distance between the transmitting coil and the receiving coil is kept greater than or equal to 5 cm, with the initial distance set to 5 cm, and a measurement is taken every 1 cm while real-time data are recorded. The measurement results are shown in Table 2.

Table 2. Energy transfer efficiency changes with the distance between transmitting coil and receiving coil

Distance (mm)   Transmitting voltage (V)   Transmitting current (mA)   Receiving voltage (V)   Receiving current (mA)   Efficiency (%)
50              24                         1000                        5                       1100                     22.90
60              24                         1000                        5                       500                      17.70
70              24                         1000                        5                       400                      10.40
80              24                         1000                        5                       300                      8.30
90              24                         1000                        5                       200                      6.20
100             24                         1000                        5                       180                      4.20
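The efficiencies in Tables 2 and 3 follow from the ratio of received to transmitted electrical power; the short sketch below reproduces the first row of Table 2 as a check (the function name is illustrative).

```python
def transfer_efficiency(v_tx, i_tx_ma, v_rx, i_rx_ma):
    """Efficiency (%) = received power / transmitted power."""
    return 100.0 * (v_rx * i_rx_ma) / (v_tx * i_tx_ma)

print(round(transfer_efficiency(24, 1000, 5, 1100), 2))   # ~22.92, matching the 22.90 % entry
```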

Similarly, we measured the transfer data of the solar charging system, as shown in Table 3.

Table 3. Solar energy transfer efficiency changes with the distance between transmitting coil and receiving coil

Distance (mm)   Transmitting voltage (V)   Transmitting current (mA)   Receiving voltage (V)   Receiving current (mA)   Efficiency (%)
0               3.56                       880                         1.21                    700                      27.03
10              3.56                       880                         1.21                    600                      23.17
20              3.56                       880                         1.21                    350                      13.50

The experiments show that wireless energy transfer achieves a considerable efficiency, but as the distance between the transmitting coil and the receiving coil increases the transfer efficiency drops; the relationship is nonlinear, and the rate at which the efficiency decreases grows as the coil separation increases. At the same time, inserting an intermediate medium such as plastic between the coils has little effect on the transfer efficiency of the system.

7 Conclusion In this paper, the grid-powered wireless energy transfer, the solar wireless energy transfer and the power supply switching of the whole system are designed. The feasibility and power transfer efficiency of the system are verified by experiments, and the influencing factors and general laws of wireless energy transfer are obtained. (1) The solar charging model is feasible; its efficiency is derived through actual operation and data measurement. At the same time, the system realizes effective switching between grid power and solar power through the relay, thereby reducing resource consumption and protecting the environment. (2) Through experiments we derive the general law of wireless energy transfer: the transfer efficiency of the system varies with the distance between the transmitting coil and the receiving coil, and the greater the distance, the lower the efficiency and the faster the efficiency falls. (3) In the experiments, the efficiency of the grid-powered system reaches about 22% and that of the solar energy transfer system reaches about 27%, which shows the approach is feasible. The solar wireless energy transfer system described in this paper can play a role in promoting production and daily life, and wireless energy transfer also offers a practical alternative to problems such as the large-scale laying of power grid cables. Acknowledgements. This research was supported in part by the National Natural Science Foundation of China (Project No. 61601329, Project No. 61603275, Project No. 61701345, Project No. 61801327), the CAMS Innovative Fund for Medical Science (Project No. 2018-I2MAI-012), the Natural Science Foundation of Tianjin (Project No. 18JCQNJC70900, Project No. 18JCZDJC31900), and the Tianjin Higher Education Creative Team Funds Program.

References 1. Zhang X, Ho SL, Fu WN (2012) Quantitative design and analysis of relay resonators in wireless power transfer system. IEEE Trans Magn 48(11):4026–4029 2. Zhang X, Zhang X, Fu WN (2017) Fast numerical method for computing resonant characteristics of electromagnetic devices based on finite element method. IEEE Trans Magn 53(6). Article No. 16599203 3. Kumwenda B, Mwaku W, Mulongoti D, Louie H (2017) Integration of solar energy into the Zambia power grid considering ramp rate constraints. In: IEEE PES Power Africa, Accra, Ghana, IEEE, pp 254–259 4. Pan H, Qi L, Zhang X et al (2017) A portable renewable solar energy-powered cooling system based on wireless power transfer for a vehicle cabin. Appl Energy 195:334–343


5. Boardin C, Anita HO, Crossland A et al (2017) A linear programming approach for battery degradation analysis and optimization in off grid power systems with solar energy integration. Renew Energy 101:417–430 6. Cardenas R, Perez MA, Clare JC (2017) Guest editorial control and grid integration of MWrange wind and solar energy conversion systems. IEEE Trans Ind Electron 64(11):8786– 8789 7. Zhang L, Li F, Sun B et al (2019) Integrated optimization design of combined cooling, heating, and power system coupled with solar and biomass energy. Energies. Article No. 12 8. Boma M, Galbraith DC, White RL (1987) Radio-frequency coils in implantable devices: misalignment analysis and design procedure. IEEE Trans Biomed Eng 34(4):276–282 9. Frazier CM, Chairman ES (1996) Geometric approach for coupling enhancement of magnetically coupled coils. IEEE Trans Biomed Eng 43(7):708–714 10. Thong WX, Hui SYR (2015) Maximum energy efficiency tracking for wireless power transfer sytems. IEEE Trans Power Electron 30(7):4025–4034 11. Beh TC, Kato M, Imura T et al (2013) Automated impedance matching system for robust wireless power transfer via magnetic resonance coupling. IEEE Trans Industr Electron 60 (9):3689–3698

A Fiber Bragg Grating Stress Sensor for Hull Local Strength Measurement Chuanqi Liu(&), Wei Wang(&), Yuliang Li, Libo Qiao, and Jingping Yang Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin 300387, China [email protected], [email protected]

Abstract. In this paper, a fiber grating stress sensor for measuring the local strength of a ship hull is introduced. The optical fiber fixed between the plates on the two sides of the sensor is deformed when the part of the hull in which the sensor is mounted is subjected to an external force, causing the wavelength of the fiber grating to change. The local strength variation of the hull can then be solved from the wavelength variation of the fiber grating. The experiment shows that the deformation of the fiber grating is linear with its wavelength change, so the compensation algorithm of the demodulator can convert the wavelength change of the fiber grating into the stress of the hull structure. The sensor therefore enables real-time monitoring of local stresses in the ship's structure in harsh marine environments.

Keywords: Fiber Bragg grating · Short base stress sensor · Reducer

1 Introduction Among the various types of transportation vehicles, the ship has a huge cargo capacity that other vehicles cannot match, together with a cost advantage, which makes ships increasingly in demand. However, the complex water conditions faced by ocean-going vessels in operation bring many challenges to the safety of ships. On the ever-changing water surface, many dangerous situations cannot be judged by personal experience alone, and the ship's driver needs to monitor the actual condition of the ship in real time through sensors. It is therefore essential to measure the stress of the hull structure under normal and load conditions. The research on traditional electrical sensor monitoring systems is quite mature, and many hull strain monitoring systems have been successfully developed at home



and abroad. However, due to the limited size of traditional sensor systems, it is impossible to comprehensively monitor the real-time status of key parts of the hull [1]. At present, the use of fiber gratings to monitor the state of the hull structure in real time is receiving more and more extensive and systematic attention. Fiber grating sensors have very good reliability and stability, because a fiber grating encodes the sensed information in wavelength, and the wavelength is not affected by power loss of the light source or by system loss caused by fiber bending. On the other hand, the fiber grating sensor head has a simple structure and small volume, which makes it convenient to form a sensor network; the fiber itself resists electromagnetic interference and corrosion and is suitable for working in an environment as harsh as the ocean [2]. A fiber grating stress sensor system for measuring the local strength of a ship places stress sensors in the hull structure to realize real-time automatic monitoring of local stress changes of the hull structure [3]. It can provide true and reliable data for the ship's drivers, measure the strength of the ship's parts, ensure the safety of the ship during transportation, and help resist accidents that may occur at any time.

2 The Basic Principle of Fiber Grating Sensor According to fiber mode theory, when broadband light is transmitted in an FBG, mode coupling is generated, while light at other wavelengths is not significantly attenuated during transmission. For a short-period FBG, the incident light wave satisfying the Bragg condition is coupled into the reflection mode, and the Bragg wavelength of the fiber grating can be expressed by formula (1):

$$\lambda_B = 2\,n_{eff}\,\Lambda \quad (1)$$

In the formula, $\lambda_B$ is the Bragg wavelength without strain, $\Lambda$ is the grating fringe period, and $n_{eff}$ is the effective refractive index of the fiber [4]. When the FBG is affected by an external force or a temperature change, the geometry of the fiber grating changes accordingly, causing the grating fringe period and the effective refractive index to change and hence shifting the wavelength of the reflected light that satisfies the Bragg condition. From the Bragg condition we obtain formula (2):

$$\Delta\lambda_B = 2\,\Delta n_{eff}\,\Lambda + 2\,n_{eff}\,\Delta\Lambda \quad (2)$$

Figure 1 is a schematic diagram of the principle of FBG sensing [5].


Fig. 1. Schematic diagram of fiber Bragg grating sensing

The sensor system uses a reference grating method for temperature compensation to eliminate the effect of temperature on the measured local strength of the hull. One fiber grating is glued to the middle of the strained portion of the sensor and is affected by both stress and temperature; another fiber grating is attached to the sensor base fixing plate on one side and is affected only by temperature. The wavelength variations of the two fiber gratings are recorded on the demodulator, and the wavelength shifts of the two gratings are denoted $\Delta\lambda_1$ and $\Delta\lambda_2$ respectively. The two reflection wavelength changes can be expressed as

$$\Delta\lambda_1 = \Delta\lambda_1(\Delta T) \quad (3)$$

$$\Delta\lambda_2 = \Delta\lambda_2(\Delta T, \varepsilon) \quad (4)$$

In this way, temperature and strain can be separated. The detection device of this method is relatively simple, the detection cost is low, the influence of temperature on the strain measurement can be well separated, and the corresponding measurement accuracy is high [6]. It is therefore particularly suitable for use in environments such as ships. The strain of the sensor is calculated as shown in Eq. (5):

$$\varepsilon_0 = \frac{\Delta l}{L} \quad (5)$$

where $\Delta l$ represents the displacement change of the elastomer and L represents the reference length for the strain. According to formula (5), the strain $\varepsilon_0$ of the sensor is calculated, the wavelength change $\Delta\lambda_0$ of the sensor is recorded, and the sensitivity coefficient $s_0$ of the short-base stress sensor can then be calculated by the following formula. The average strain sensitivity coefficient can be obtained by repeating the measurement several times and averaging.

$$s_0 = \frac{\Delta\lambda_0}{\varepsilon_0} \quad (6)$$

Since the average sensitivity coefficient is constant within a certain range, the strain $\varepsilon$ of the sensor can be obtained from the wavelength change $\Delta\lambda_0$ of the sensor [7].
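The following sketch illustrates, under assumed numbers, how Eqs. (3)-(6) are used in practice: the temperature-only shift of the reference grating is subtracted from the shift of the strained grating, and the remainder is divided by the calibrated sensitivity coefficient. The wavelength shifts and sensitivity value are placeholders, not measured data.

```python
def hull_strain_microstrain(d_lambda_strain_pm, d_lambda_temp_pm, s0_pm_per_microstrain):
    """Temperature-compensated strain from the two grating wavelength shifts (pm)."""
    compensated = d_lambda_strain_pm - d_lambda_temp_pm    # remove the Delta-T contribution
    return compensated / s0_pm_per_microstrain             # invert Eq. (6)

# Placeholder numbers: 40 pm and 3 pm shifts, 3.7 pm per microstrain sensitivity
print(hull_strain_microstrain(40.0, 3.0, 3.7))             # ~10 microstrain
```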

3 Short Base Stress Sensor Structure

Fig. 2. Schematic diagram of the new short-base stress sensor

Fig. 3. Plane view of the new short-base stress sensor

See Figs. 2 and 3. The structure of the new short-base stress sensor mainly comprises: a sensor base fixing plate, a fixing screw hole, a sensor strain part, a stress sensing sensitive component, a sensor connecting part, a temperature compensation grating, a sealing shell, sealed bottom plate, fiber waterproof plug, air plug interface and anchor hole. The fixing portion includes a sensor base fixing plate and a fixing screw hole, and the sensor base fixing plate is located at two sides of the sensor, and two fixing screw holes exist on the fixing plate on each side.


The sensitive part comprises the sensor strain part, the stress-sensing sensitive component, the sensor connecting part and the temperature compensation grating. The stress-sensing sensitive component is a single fiber grating: the optical fiber passes through the waterproof aerial plug and is attached to the sensor base fixing plates on both sides, and after fixing, the fiber is pre-tensioned at both ends. The strained portion of the sensor deforms when the sensor is subjected to an external force, causing a change in the grating region of the fiber grating. The sensor connecting portion connects the sensor base fixing plates on the two sides. The temperature compensation grating is attached to the sensor base fixing plate on one side and is not affected by stress but only by temperature; through the compensation algorithm, temperature compensation can be performed to eliminate the influence of temperature on the measured local strength of the hull. The sealing part is composed of the sealing shell, the sealing bottom plate, the fiber-optic waterproof plug, the aerial plug interface and the plug fixing holes, with multi-layer sealing rings arranged in the sealing ports. The sealing shell and the sealing bottom plate together form a sealed watertight box that encloses the strain portion of the sensor base, while the aerial plug interface and its fixing holes are located on the side of the sealing case so that the fiber waterproof plug can be mounted stably on the sensor and the fiber is also protected watertightly.

4 Sensor Experiment Process The experiment uses a reducer to simulate local strength changes in the hull. By continuously turning the wheel at the front end of the reducer, the screw of the reducer drives the slider left and right, so that the short-base stress sensor fixed between the reducer slider and the fixed block is subjected to different external forces, simulating different degrees of strength change in a part of the hull. A diagram of the test setup is shown in Fig. 4.

Fig. 4. Experimental diagram of the short base stress sensor


First, to ensure the accuracy of the experimental data, the fiber grating is pre-tensioned by 2 nm. The short-base stress sensor is fixed on the reducer, the optical fiber is connected to the demodulator, and the initial wavelength of the fiber grating is observed and recorded by a computer. Next, the wheel is turned 10 times; after waiting for the sensor to stabilize, all wavelength readings over ten seconds are captured and averaged, and the average value is recorded. Then, following the same procedure, the wavelengths of the fiber grating after 20, 30 and up to 100 turns are obtained. Since the reduction ratio of the reducer is 500:1, turning the wheel 500 turns shifts the slider left or right by 1 mm, so the corresponding displacement of the slider can be calculated from the number of turns. Finally, a line graph is drawn from the ten sets of data obtained, as shown below.
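A sketch of how such data could be processed is shown below: wheel turns are converted to slider displacement using the 500:1 reduction ratio and a least-squares line is fitted to obtain a slope and a linearity figure. The wavelength readings are synthetic placeholders, not the measured values behind Fig. 5.

```python
import numpy as np

turns = np.arange(10, 101, 10)                      # 10, 20, ..., 100 wheel turns
displacement_mm = turns / 500.0                     # 500:1 reduction ratio -> mm of slider travel
# synthetic wavelength readings around 1550 nm (placeholders only)
wavelength_nm = 1550.0 + 0.02 * displacement_mm + np.random.normal(0, 1e-4, turns.size)

slope, intercept = np.polyfit(displacement_mm, wavelength_nm, 1)
fit = slope * displacement_mm + intercept
r2 = 1 - np.sum((wavelength_nm - fit) ** 2) / np.sum((wavelength_nm - wavelength_nm.mean()) ** 2)
print(f"slope = {slope:.4f} nm/mm, linearity R^2 = {r2:.4f}")
```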

Fig. 5. Experimental data line graph

According to Fig. 5, as the wheel of the reducer is turned continuously in the forward direction, the center wavelength of the fiber displayed by the demodulator keeps increasing in an essentially linear manner, and the linearity of the sensor is 97.57%. It can be concluded that the sensor has good linearity. According to formula (6), the sensitivity of the sensor is approximately 3.69 pm/µε. Therefore, in an actual hull measurement environment, the short-base stress sensor can capture local strength variations well and display them through the change of the center wavelength of the fiber.

5 Conclusion In the short-base stress sensor designed to measure local strength variations of the hull, the fiber grating is attached to the strain portion of the sensor and connected to the transmission cable through a waterproof flange. The final strength variation of the hull is reflected in the wavelength of the fiber grating, and the change is shown on the demodulator. The short-base stress sensor uses a reference grating method for temperature compensation to eliminate the effect of temperature on the measured local strength of the hull. The sensor structure is easy to install, calibrate and use. It is equipped with a stainless steel sealing device for the harsh engineering environment on board, which ensures the watertightness and damage resistance of the sensor and also improves its durability and practicability. The sensor therefore provides stable and reliable support for the long-term effective monitoring of the hull structure of ocean-going vessels. Acknowledgements. This paper is supported by the Natural Youth Science Foundation of China (61501326, 61401310). It is also supported by the Tianjin Research Program of Application Foundation and Advanced Technology (16JCYBJC16500).

References 1. Wang W (2007) Study of key technology on ship hull structure health monitoring with fiber Bragg grating. Tianjin University, Tianjin 2. Jiang D, He W (2002) Review of applications for fiber Bragg grating sensors. J Optoelectron Laser 4(13):420–430 3. Chen X (2004) Distributed measurement system and its model for ship structure. Ship Eng 5 (26):62–66 4. Li J (2016) Research on a stress sensor based on fiber Bragg grating. J Chongqing Norm Univ (Nat Sci Edit) (4):23 5. Liu Y, Zhang Y, Jin Z, Liu J, Li M, Li X, Geng B, Dong B (2017) Research on the gear operating state detection based on the fiber Bragg grating sensing technology. IOP Conf Ser: Mater Sci Eng 6. Wan L, Wang D (2006) Temperature compensation of fiber Bragg grating strain sensor based on reference grating. Optoelectron Laser 17(1):50–53 7. Qiao X, Jia Z, Fu H et al (2004) Theory and experiment of fiber grating temperature sensing

Direct Wave Parameters Estimation of Passive Bistatic Radar Based on Uncooperative Phased Array Radar Jiameng Pan(B) , Panhe Hu, Qian Zhu, and Qinglong Bao National Key Laboratory of Science and Technology on ATR, National University of Defense Technology, Changsha, China [email protected]

Abstract. For passive bistatic radar based on uncooperative phased array radar, it is necessary to estimate the parameters of direct wave signal to achieve the time and frequency synchronization of the passive bistatic radar system. By using the template library of the radar waveform parameters obtained by the long-term monitoring and analysis of direct wave signals, a method based on template matching is proposed to estimate the direct wave parameters in real time, including carrier frequency, pulse width, bandwidth and time of arrival. The experiment demonstrates the process of direct wave parameters estimation, and the results verify the effectiveness of the method.

Keywords: Passive bistatic radar · Parameters estimation · Template matching · Dechirp

1 Introduction

Introduction

Passive bistatic radar (PBR) exploits emitters of opportunity, such as FM radio [1], DVB-T [2,3], GNSS [4], ISDB-T [5] and Wi-Fi [6], as radar transmitters to perform detection and localization of targets. Compared with conventional radar, PBR has the advantages of low cost, covert surveillance and no requirement for frequency allocation. Furthermore, a dedicated radar transmitter usually has higher transmitting power than civil illuminators, so we constructed a PBR system based on an uncooperative phased array radar (PAR). As shown in Fig. 1, the PBR system consists of two antennas and receivers: the reference antenna is directed toward the transmitter to receive the direct wave signal, and the surveillance antenna is directed toward the target to receive the scattered wave signal. Because the transmitter and receiver of a bistatic radar are separated, the receiver must match all the parameters of the transmitted signal in order to obtain accurate target information. It is therefore necessary to extract the


Fig. 1. Geometry of passive bistatic radar

parameters of the transmitted signal by analyzing the direct wave signal to achieve time and frequency synchronization of the system. As the illuminator used in our PBR system is an uncooperative PAR, the radar transmits linear frequency modulated (LFM) signals and the parameters of the signal are agile. How to extract the parameters of the direct wave accurately and in real time to achieve the time and frequency synchronization of the PBR system is the problem we need to solve. The parameters to be estimated include bandwidth (BW), pulse width (PW), carrier frequency (CF) and time of arrival (TOA). For LFM signal detection and parameter estimation, the literature offers many algorithms for different applications, including Ensemble Empirical Mode Decomposition-Fractional Fourier Transform (EEMD-FRFT) [7], the Wigner-Hough Transform [8] and Lv's Distribution (LVD) [9]. However, to apply an algorithm in engineering practice, a method that can estimate the parameters quickly is needed. Since the signals transmitted by the phased array radar are deterministic, we can monitor the uncooperative radar and receive the direct wave signal over a long period, establish a template library of radar waveform parameters, and use it as prior information for real-time estimation of the direct wave parameters. Therefore, we propose a direct wave parameter estimation method based on template matching; using the parameter template library, the direct wave parameters are estimated in real time, and the time and frequency synchronization of the PBR system can then be realized.

2 Signal Model

Suppose the uncooperative PAR transmits an LFM pulse train with agile carrier frequency, pulse width and bandwidth, which can be expressed as

$$s_t(t_m, \hat{t}) = A\,\mathrm{Rect}\!\left(\frac{\hat{t}}{\tau_m}\right) \exp\!\left[\,j2\pi f_m\!\left(t_m + \hat{t}\right)\right] \exp\!\left(j\pi\mu_m \hat{t}^{\,2}\right) \quad (1)$$

where $t_m$ is the slow time, m indicates the pulse number, $\hat{t}$ is the fast time, Rect(·) is the window function, $\tau_m$ is the pulse width, $f_m$ is the carrier frequency, and $\mu_m = B_m/\tau_m$ is the chirp rate with bandwidth $B_m$. Suppose L is the distance between the PAR and the reference antenna. Then the direct wave signal received by the reference antenna can be represented as

$$s_r(t_m, \hat{t}) = A\,\mathrm{Rect}\!\left(\frac{\hat{t} - L/c}{\tau_m}\right) \exp\!\left[\,j2\pi f_m\!\left(t_m + \hat{t} - L/c\right)\right] \exp\!\left[\,j\pi\mu_m \left(\hat{t} - L/c\right)^{2}\right] \quad (2)$$

The parameters to be estimated are $f_m$, $\tau_m$, $B_m$ and the time of arrival. Based on the long-term monitoring and analysis of direct wave signals of the uncooperative radar in previous field experiments, and using signal reconnaissance technology and statistical methods, we can build a template library of radar parameters: a carrier frequency template library $\{f^{(k)} \mid k = 1, 2, \ldots, N_f\}$, a pulse width template library $\{\tau^{(k)} \mid k = 1, 2, \ldots, N_\tau\}$, and a bandwidth template library $\{B^{(k)} \mid k = 1, 2, \ldots, N_B\}$, where $f^{(k)}$, $\tau^{(k)}$ and $B^{(k)}$ are specific carrier frequency, pulse width and bandwidth templates and $N_f$, $N_\tau$ and $N_B$ are the numbers of templates. Based on this prior information, we can use a template matching method to estimate the parameters of the direct wave signal in real time.
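For illustration, the complex envelope of one direct-wave pulse of the model in Eqs. (1)-(2) can be generated as below, using one entry from each template library of Table 1 and the 15 MHz sampling rate and 210 MHz mixing frequency given in Sect. 4; the propagation delay and amplitude are omitted for brevity.

```python
import numpy as np

fs = 15e6                        # sampling rate (Hz), as given in Sect. 4
f_if = 213e6 - 210e6             # IF carrier after mixing with 210 MHz (Hz)
tau = 100e-6                     # pulse width taken from the template library (s)
bw = 1.25e6                      # bandwidth taken from the template library (Hz)
mu = bw / tau                    # chirp rate mu_m = B_m / tau_m

t = np.arange(int(tau * fs)) / fs                             # fast time within one pulse
direct_wave = np.exp(1j * (2 * np.pi * f_if * t + np.pi * mu * t ** 2))
```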

3 Proposed Method

Figure 2 shows the flow chart of the proposed method; we now introduce the direct wave parameter estimation step by step. First, take the first point of the direct wave signal to be processed as the starting point and intercept a sequence of length n, denoted x(1); take the second point of the signal as the starting point and intercept a sequence of length n denoted x(2); and so on, with n ≪ N, where N is the number of sampling points of the direct wave signal in one pulse repetition interval. The segmental autocorrelation function is defined as

$$R(i) = \frac{1}{N}\sum_{m=1}^{N} x(i+m)\,x^{*}(i+m+1) \quad (3)$$
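A direct implementation of the segmental autocorrelation in Eq. (3) is sketched below; x is the sampled complex direct-wave signal and n_avg is the averaging length (the paper's N).

```python
import numpy as np

def segmental_autocorrelation(x, n_avg):
    """R(i) = (1/N) * sum_{m=1..N} x(i+m) * conj(x(i+m+1)), cf. Eq. (3)."""
    r = np.empty(len(x) - n_avg - 1, dtype=complex)
    for i in range(len(r)):
        r[i] = np.mean(x[i + 1:i + n_avg + 1] * np.conj(x[i + 2:i + n_avg + 2]))
    return r
```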

In order to detect the LFM signal, it is necessary to determine the detection threshold. Since the signal-to-noise ratio (SNR) of direct wave signals is generally


Fig. 2. Flow chart of direct wave parameter estimation method

high, after calculating the noise energy $\sigma^2$, a threshold $V_T$ slightly larger than $\sigma^2$ is set. Threshold detection is used to detect the pulses, yielding the preliminary estimates of the arrival time $\hat{T}_s$, the end time $\hat{T}_e$ and the pulse width $\tau$. Comparing $\tau$ with the PW template library $\{\tau^{(k)} \mid k = 1, 2, \ldots, N_\tau\}$ and choosing the template $\tau_0$ closest to $\tau$ gives the precise estimate of the PW. The PW estimation error is $\varepsilon = \tau_0 - \tau$, and the arrival


time and end time are compensated as $\hat{T}_{start} = \hat{T}_s - \varepsilon/2$ and $\hat{T}_{end} = \hat{T}_e + \varepsilon/2$. Then, combined with the BW template library, the dechirp method is used to accurately estimate the BW and preliminarily estimate the CF of the direct wave signal. The specific steps are as follows. Intercept the signal $s_p$ over the interval $[\hat{T}_{start}, \hat{T}_{end}]$; $s_p$ contains $M = \tau_0 \cdot F_s$ points, where $F_s$ is the sampling rate of the signal. As the BW template library is $\{B^{(k)} \mid k = 1, 2, \ldots, N_B\}$, the possible chirp rates of the signal are $\mu_1 = B^{(1)}/\tau_0, \mu_2 = B^{(2)}/\tau_0, \ldots, \mu_{N_B} = B^{(N_B)}/\tau_0$. Reference signals with the different chirp rates are constructed as

$$s_{ref_k}(i) = e^{-j\pi\mu_k (i/F_s)^2}, \quad k = 1, \ldots, N_B;\; i = 1, 2, \ldots, M \quad (4)$$

Multiply $s_p$ by the different reference signals $s_{ref_1}, s_{ref_2}, \ldots, s_{ref_{N_B}}$ to obtain the dechirped signals $s_{dc_1}, s_{dc_2}, \ldots, s_{dc_{N_B}}$, and then perform a fixed-length FFT on each to obtain the spectra $S_{dc_1}, S_{dc_2}, \ldots, S_{dc_{N_B}}$. By measuring and comparing the peak values of $S_{dc_1}, S_{dc_2}, \ldots, S_{dc_{N_B}}$, the chirp rate $\mu_{max}$ corresponding to the spectrum with the largest peak is the chirp rate of the signal, and the accurate BW is $B_0 = \mu_{max} \cdot \tau_0$. At the same time, the frequency position of the peak in that spectrum is the preliminary CF estimate of the signal, denoted $\hat{f}_c$. Comparing $\hat{f}_c$ with the CF template library $\{f^{(k)} \mid k = 1, 2, \ldots, N_f\}$ and choosing the template $f_0$ closest to $\hat{f}_c$ gives the precise estimate of the CF. Finally, combining the estimated parameters of the signal $B_0$, $\tau_0$ and $f_0$, a reference signal is reconstructed as

$$s_{mf}(i) = e^{\,j\left(2\pi f_0 \frac{i}{F_s} + \pi \frac{B_0}{\tau_0}\left(\frac{i}{F_s}\right)^{2}\right)}, \quad i = 1, 2, \ldots, M \quad (5)$$

The reference signal $s_{mf}$ is then used to perform matched filtering on the whole signal, and the peak position of the output gives the accurate TOA of the pulse.
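The dechirp-and-match procedure described above can be condensed into the following sketch: each bandwidth template is tried, the template whose dechirped spectrum has the strongest peak gives the BW and a coarse CF, and the CF is then snapped to the closest template entry. The FFT length is an arbitrary choice, and the carrier-frequency templates are assumed to be referenced to the same IF as the intercepted signal sp.

```python
import numpy as np

def estimate_bw_cf(sp, tau0, fs, bw_templates, cf_templates, nfft=8192):
    i = np.arange(len(sp)) / fs
    best = None
    for bw in bw_templates:                                     # candidate chirp rates mu_k = B(k)/tau0
        s_ref = np.exp(-1j * np.pi * (bw / tau0) * i ** 2)      # reference signal of Eq. (4)
        spec = np.abs(np.fft.fft(sp * s_ref, nfft))             # dechirped spectrum
        k = int(np.argmax(spec))
        if best is None or spec[k] > best[0]:
            best = (spec[k], bw, k)
    _, bw0, k = best
    fc_coarse = np.fft.fftfreq(nfft, d=1.0 / fs)[k]             # peak position -> coarse CF estimate
    cf0 = min(cf_templates, key=lambda f: abs(f - fc_coarse))   # snap to the CF template library
    return bw0, cf0
```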

4 Experimental Results

Based on the long-term monitoring of the uncooperative PAR, and in order to better illustrate the proposed method, only the transmitted signals in the tracking mode are analyzed; the parameter template library is thus established as in Table 1.

Table 1. Parameter template library of direct wave signal

System parameters (Unit)    Values
Carrier frequency (MHz)     210.25, 213, 214.65, 216, 217.5, 218.75, 220.25, 222.35
Bandwidth (MHz)             0.25, 0.5, 1.25, 1.5
Pulse width (µs)            40, 80, 100, 120, 150, 200


Since the carrier frequency of the radar signal lies within a certain range, the received radar signal is down-converted by mixing with a 210 MHz signal to obtain the intermediate frequency (IF) signal, which is sampled at 15 MHz. The imaginary part of the time-domain waveform and the time-frequency diagram of the direct-wave IF signal received by the PBR system are shown in Fig. 3. It can be seen that the CF, PW and BW of the signal are agile, and there is a fixed-frequency interference signal throughout the period.

Fig. 3. Imaginary part of time domain waveform and time-frequency diagram of direct wave

Fig. 4. The pulse before and after threshold detection

Figure 4a shows the amplitude of the last pulse in Fig. 3, and the red part in Fig. 4b is the pulse after threshold detection. The preliminary estimate of the PW of the pulse is 104.2 µs, and the accurate PW after template matching is 100 µs.


The signal in Fig. 4b is intercepted and the dechirp method is applied with the different BW templates; the results are shown in Fig. 5. The peak value of the signal spectrum in Fig. 5c is the largest, so the accurate BW of the signal is 1.25 MHz. The CF of the signal is obtained from the position of the peak point, and the accurate CF obtained after template matching is 213 MHz.

Fig. 5. Dechirp results using different BW templates

Finally, the reconstructed reference signal is obtained from the estimated CF, BW and PW of the signal, and the direct wave signal is matched filtered with this reference signal; the result is shown in Fig. 6. The peak position in Fig. 6 corresponds to the accurate TOA of the direct wave signal.


Fig. 6. Signal after matched filtering

5 Conclusion

In this paper, we propose a method to deal with the direct wave parameters estimation of passive bistatic radar based on uncooperative phased array radar. Based on the long-term monitoring of the transmitted signal of the radar, we can build a template library of radar parameters, including carrier frequency, bandwidth and pulse width. Combining with the parameter template library, a method based on template matching is proposed to estimate the parameters in real time and achieve time-frequency synchronization of the system. Experimental results verify the effectiveness of the method.

References 1. Howland PE, Maksimiuk D, Reitsma G (2005) FM radio based bistatic radar. IEE Proc Radar Sonar Navig 152(3):107–115 2. Langellotti D, Sedehi M, Colone F et al (2013) Experimental results for passive bistatic radar based on DVB-T signals. In: 14th international radar symposium (IRS). IEEE, Dresden, pp 178–183 3. Derakhtian M, Sheikhi A (2017) Dynamic clutter suppression and multitarget detection in a DVB-T-based passive radar. IEEE Trans Aerosp Electron Syst 53(4):1812– 1825 4. Clemente C, Soraghan JJ (2014) GNSS-based passive bistatic radar for microdoppler analysis of helicopter rotor blades. IEEE Trans Aerosp Electron Syst 50(1):491–500 5. Honda J, Otsuyama T (2016) Feasibility study on aircraft positioning by using ISDB-T signal delay. Antennas Wirel Propag Lett 15:1787–1790


6. Falcone P, Colone F, Bongioanni C et al (2010) Experimental results for OFDM WiFi-based passive bistatic radar. In: 2010 IEEE radar conference. IEEE, Washington, pp 516–521 7. Huiyan H (2013) Multi component LFM signal detection and parameter estimation based on EEMD-FRFT. Optik Int J Light Electron Opt 124(23):6093–6096 8. Barbarossa S (1995) Analysis of multicomponent LFM signals by a combined Wigner-Hough transform. IEEE Trans Signal Process 43(6):1511–1515 9. Lv X, Bi G, Wan C, Xing M (2011) Lv’s distribution: principle, implementation, properties, and performance. IEEE Trans Signal Process 59(8):3576–3591

Noncooperative Radar Illuminator Based Bistatic Receiving System

Caisheng Zhang, Hai Zhang, and Xiaolong Chen
Naval Aviation University, Yantai 264001, Shandong, China
[email protected]

Abstract. Target detection and tracking systems using illuminators of opportunity have received significant interest in the past few years. The passive bistatic radar system under investigation in this paper exploits a non-cooperative navigation radar. In order to provide useful surveillance or cueing information, certain data must be collected from the direct-path signal of the illuminator. Detection of aircraft is an important step to demonstrate its potential in remote surveillance. The PBR demonstrates the detection of civil passenger aircraft in the airspace by receiving a bistatic return when they are illuminated by non-cooperative emitters. The results show that target detections have been achieved from real data. "Air-truth" data obtained by a Mode S ADS-B receiver is used to verify the results of this bistatic system.

Keywords: Passive bistatic radar (PBR) · Illuminators of opportunity · Surveillance and cueing · ADS-B

1 Introduction Generally speaking, bistatic radar is defined as one that uses antennas at different locations for transmission and reception. PBR is one kind of bistatic radar in which there is no cooperative emitter; instead it exploits existing, non-cooperative radio-navigation transmissions known as illuminators of opportunity [1–5]. A scenario with a non-cooperative radar illuminator is illustrated in Fig. 1. The design and trial system of a PBR framework will be presented, and the PBR receiver performance will be analyzed using measurements of field data. The data presented has been processed coherently. Experimental results of this passive bistatic radar are shown for the purpose of aircraft detection. Detection plots of real aircraft targets are presented and compared with 'air-truth' data obtained from a Mode S/ADS-B receiver.



Fig. 1. Schematic of passive bistatic pulsed radar system (opportunity illuminator with a transmit antenna scanned anti-clockwise, target, direct signal path, target-scattered signal path, and the passive bistatic receiving system)

2 Framework of the Bistatic Receiving System The PBR includes two receive channels: one is for reception from an area of interest (surveillance channel); the other receives a signal directly from the transmitter of opportunity (reference channel), which provides a reference for correlation-based matched filtering. A block diagram of the system hardware is shown in Fig. 2.

Fig. 2. Block illustration of the passive receiving system (two receive channels, each with LNA, LPF, amplifier and quadrature A/D; local oscillator/frequency synthesizer, 10 MHz timer, high-speed I/O card, buffer and MATLAB processing)


3 Experimental Scheme, Hardware Setup and Results The system described in this paper was built on a low budget and is one of the simplest architectures that can be used to explore this technology.

Fig. 3. Photo of noncooperative trial system

Fig. 4. Photo of double-channel receiving antennas

Fig. 5. Signal processing block diagram (reference and surveillance channels, pulse deinterleaving, DSI suppression, range-Doppler processing, CFAR, peak detection, target state estimation; location tracking marked as future work)

Fig. 6. Waveforms of the raw data

This receiving system hardware was set up in a room at the top of a building, as shown in Fig. 3. The AIS subsystem is used to aid the tracking of ship-borne navigation radars, while the ADS-B subsystem records the "air-truth" data of the aircraft of interest, which is used to demonstrate the remote surveillance feasibility of the PBR. The picture in Fig. 4 shows the trial system situated on the roof of a building near the Yellow Sea. A block diagram of the processing algorithm is shown in Fig. 5.

3.1 Deinterleaving of Direct Path Signals

This section details the pre-processing of the raw data obtained from the field test. The initial data gathered can be seen in Fig. 6, which shows power level against the rotation angle of the radar. The direct-path reference signal is intercepted from different lobes of the transmit antenna radiation, and the propagation factors of the direct-path channel and the target channel are found to differ during the coherent dwell time. At the same time, as shown in Fig. 6, signals from other illuminators in the same channel are inevitably intercepted. Therefore, deinterleaving of the radar pulses is required before further processing.


An improved dynamic correlation deinterleaving algorithm based on Ref. [6] is used to sort the raw data. The result in Fig. 7 is the waveform of interest that will be used for detection. Zooming into the area around the main beam shows the variation of power with respect to the radar transmitter direction, as shown in Fig. 8. Zooming in further, to the region around the main-beam illumination corresponding to the approximate point where aircraft are illuminated, a series of power levels against time can be obtained for the groups of pulses, as shown in Fig. 9.

Fig. 7. Deinterleaving pulses waveform (amplitude (V) versus time (µs))

Fig. 8. Radar mainlobe deinterleaving pulses waveform (amplitude (V) versus time (µs))


Fig. 9. Main beam illumination deinterleaving pulses waveform (amplitude (V) versus time (µs))
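The deinterleaving idea described in Sect. 3.1 can be illustrated with a very reduced TOA-difference sketch. This is not the improved dynamic correlation algorithm of Ref. [6], only a greedy pulse-train extraction under assumed PRI hypotheses and tolerance.

```python
import numpy as np

def extract_train(toas, pri, tol=5e-6, min_len=4):
    """Greedily extract one pulse train whose TOA differences match `pri`.

    toas : times of arrival (s); pri : PRI hypothesis (s)
    tol and min_len are assumed tuning values, not taken from the paper.
    """
    toas = np.sort(np.asarray(toas))
    for i, seed in enumerate(toas):
        train, last = [seed], seed
        for t in toas[i + 1:]:
            if abs((t - last) - pri) < tol:
                train.append(t)
                last = t
        if len(train) >= min_len:
            return train
    return []

# toy example: two interleaved pulse trains (PRFs of 550 Hz and 700 Hz)
mixed = np.concatenate([np.arange(50) / 550.0, 3e-4 + np.arange(50) / 700.0])
for pri in (1 / 550.0, 1 / 700.0):
    print(f"PRI {pri*1e3:.3f} ms -> {len(extract_train(mixed, pri))} pulses")
```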

3.2 Coherent Processing Results

The detection performance of the PBR is examined in a complex signal environment. The direct-path data examined were sampled from a marine radar, which is an incoherent pulse radar. Once the desired signal has been extracted and fed to the adaptive filter for DSI and clutter suppression, the 2-D detection plots are constructed in accordance with the bistatic ambiguity function [7].

Fig. 10. Amplitude-range-Doppler of the cross-ambiguity function (normalized to the peak value); three targets, A, B and C, are marked
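The 2-D detection surface mentioned above can be sketched as a direct evaluation of the delay-Doppler cross-ambiguity between the reference and surveillance channels. The toy reference waveform, sample rate and target parameters below are assumptions, and practical systems use FFT-based batch algorithms instead of this brute-force loop.

```python
import numpy as np

def cross_ambiguity(surv, ref, fs, n_delay, doppler_bins):
    """Delay-Doppler (cross-ambiguity) surface:
    CAF(tau, fd) = sum_n surv[n] * conj(ref[n - tau]) * exp(-j*2*pi*fd*n/fs)."""
    n = np.arange(len(ref))
    caf = np.empty((len(doppler_bins), n_delay))
    for i, fd in enumerate(doppler_bins):
        steered = surv * np.exp(-2j * np.pi * fd * n / fs)   # Doppler compensation
        for tau in range(n_delay):
            caf[i, tau] = np.abs(np.vdot(np.roll(ref, tau), steered))
    return caf

# toy example: a noise-like reference, echo delayed by 30 samples at +100 Hz Doppler
fs, N = 10_000.0, 4_096
rng = np.random.default_rng(0)
ref = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
n = np.arange(N)
surv = 0.5 * np.roll(ref, 30) * np.exp(2j * np.pi * 100.0 * n / fs) \
       + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
dopplers = np.arange(-200.0, 201.0, 25.0)
caf = cross_ambiguity(surv, ref, fs, n_delay=64, doppler_bins=dopplers)
i, j = np.unravel_index(np.argmax(caf), caf.shape)
print("peak at Doppler", dopplers[i], "Hz, delay bin", j)    # 100.0 Hz, 30
```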


The above analysis concerning detection and parameter estimation was conducted with actual measured data, i.e. raw I/Q data samples for an entire transmit antenna rotation. Such data have since become available and are examined here with respect to the above analysis. The direct-path interference is suppressed according to the algorithms in [8–10]. The Fourier transform is applied to each group of 16 pulses, taking the similarly located range bins across the 16 pulse repetition intervals. The PRF is 550 Hz, which can be estimated from the direct-path signal. The main difference between the basic algorithm and this modification is the presence of two additional steps: CIC filter decimation and low-pass FIR filtering. Figure 10 shows the result of the coherent processing described above. The detection plot is compared against 'air-truth' data from a Mode S/ADS-B receiver, as shown in Fig. 11.

Fig. 11. Targets are displayed at the corresponding air-truth positions (transmitter Tx and receiver Rx locations are marked)
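The 16-pulse coherent integration described above amounts to a slow-time FFT per range bin. The sketch below assumes simulated data with the quoted 550 Hz PRF, an arbitrary range-bin count and an on-grid target Doppler, and omits the CIC/FIR decimation stages.

```python
import numpy as np

prf, n_pulses, n_range = 550.0, 16, 200       # PRF from the text; range-bin count assumed
rng = np.random.default_rng(0)
data = 0.1 * (rng.standard_normal((n_pulses, n_range))
              + 1j * rng.standard_normal((n_pulses, n_range)))

# inject a target in range bin 80 at an on-grid Doppler of 3*PRF/16 = 103.125 Hz
fd = 3 * prf / n_pulses
m = np.arange(n_pulses)
data[:, 80] += np.exp(2j * np.pi * fd * m / prf)

# FFT along the pulse (slow-time) axis for every range bin
range_doppler = np.fft.fftshift(np.fft.fft(data, axis=0), axes=0)
dop_axis = np.fft.fftshift(np.fft.fftfreq(n_pulses, 1.0 / prf))
i, j = np.unravel_index(np.argmax(np.abs(range_doppler)), range_doppler.shape)
print(f"detected Doppler = {dop_axis[i]:.3f} Hz in range bin {j}")
```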

4 Conclusions In this paper, we have presented the design and framework of an experimental PBR. The PBR demonstrates the detection of civil passenger aircraft in the airspace by receiving a bistatic return when they are illuminated by non-cooperative emitters. The performance of the PBR receiver is examined using measurements of field data.


The data presented has been processed coherently. The results show that target detections have been achieved from real data. Experimental results of this bistatic hitchhiking radar are shown for the purpose of aircraft detection. Detection plots of real aircraft targets are presented and corroborated by air-truth data from a Kinetic Mode S/ADS-B IFF receiver.

References
1. O'Hagan DW, Baker CJ (2008) Passive bistatic radar (PBR) using FM radio illuminators of opportunity. In: 2008 IEEE radar conference, Rome, Italy, pp 1–6
2. Ringer MA, Frazer GJ (1999) Waveform analysis of transmissions of opportunity for passive radar. In: Surveillance Systems Division, DSTO. ISSPA'99, pp 511–514
3. He Y, Zhang CS, Ding JH et al (2010) The impact of time synchronization error on passive coherent pulsed radar system. Sci China Inf Sci 53:2664–2674
4. Willis NJ, Griffiths HD (eds) (2007) Advances in bistatic radar. SciTech Publishing Inc., Raleigh, NC, pp 1–23
5. Griffiths HD (2008) New directions in bistatic radar. In: IEEE international radar conference, Rome, Italy, 18–20 Oct 2008, pp 1–6
6. Milojevec DJ, Korbyashi M (1992) Improved algorithm for the deinterleaving of radar pulses. IEE Proc F Commun Radar Signal Process 139(1):98–104
7. Howland PE (ed) (2005) Special issue of IEE proceedings radar sonar and navigation on passive radar systems 152(3)
8. Colone F, O'Hagan DW, Baker CJ (2009) A multistage processing algorithm for disturbance removal and target detection in passive bistatic radar. IEEE Trans AES 45(2):698–722
9. Chen XL, Wang G, Dong Y, Guan J (2015) Sea clutter suppression and micromotion marine target detection via Radon-linear canonical ambiguity function. IET Radar Sonar Navig 9(6):622–631
10. Chen XL, Chen BX, Guan J, Huang Y, He Y (2018) Space-range-Doppler focus-based low-observable moving target detection using frequency diverse array MIMO radar. IEEE Access 6:43892–43904

Research on Simulation Technology for Remote Sensing Image Quality

Hezhi Sun1, Yugao Li2, Xiao Mei2, Yuting Gao2, and Dong Yang2
1 School of Astronautics, Harbin Institute of Technology, Harbin, China
[email protected]
2 China Academy of Space Technology, Beijing, China
[email protected]

Abstract. Full-link image quality simulation analysis is an important part of the satellite development process. It can not only predict the satellite image quality before launch, but also help to adjust the related satellite design according to the simulation results. This paper expounds the construction idea, system scheme and composition of each subsystem of the full-link image quality simulation system for optical remote sensing satellites, and carries out a serial full-link closed-loop simulation test on the preliminary simulation system to verify the reasonableness and feasibility of the system scheme. The system is constructed with a functional modular design, and the model algorithms have good openness, good adaptability to different tasks and high application value.

Keywords: Optical remote sensing · Full-link · Image quality simulation

1 Introduction Image quality simulation analysis is a crucial link in the development of transmission-type high-resolution remote sensing satellites. It can not only fulfil the basic task of predicting and analysing the satellite image quality before launch, but also allow the relevant design of the satellite to be adjusted in time based on the simulation results, in order to ensure that the in-orbit image quality meets the expected requirements. According to the available information [1–3], the major aerospace powers such as the United States, France and Germany have already put great effort into the study of full-link image quality simulation systems for remote sensing satellites, and commercial, generalized simulation software has also been developed in these countries. However, the key models and algorithms are not available in China. Domestic research in this field is generally highly targeted; only qualitative estimation and analysis of images are carried out, which makes it difficult to meet the requirements of the quantitative overall design and verification of high-resolution remote sensing satellites [4–6]. Under this background, the research work on a full-link remote sensing image quality simulation system based on optical remote sensing was carried out. Since the full-link image quality simulation system involves a wide range of fields, the implementation of accurate quantitative analysis is difficult, and a self-verification method for the model algorithms is lacking. Therefore, the idea of piecewise and staged


modular construction is adopted. In the first stage, a typical visible-spectrum remote sensing satellite with a panchromatic Time Delay Integration (TDI) CCD is taken as the simulation analysis object, and the Modulation Transfer Function (MTF), Signal-to-Noise Ratio (SNR) and positioning accuracy of the image are quantitatively analysed. At the same time, intuitive simulation images are output to complete the system construction and closed-loop test. In the later stage, the model precision and functions are further improved on the basis of the original system, and the application field is expanded. The simulation system will be developed into an optical remote sensing simulation product with independent intellectual property rights at the Beijing Institute of Control Engineering and in China, so that it can be widely used.

2 System Construction Ideas The system takes the image as the final output product and conducts systematic modeling and analysis for each link that affects image quality during satellite imaging. Through modeling, the actual state of the satellite's on-orbit imaging and the transmission of the image data stream can be simulated realistically, and the simulation results can be evaluated to obtain intuitive images and related image elements. Considering that the full-link image quality simulation system should be oriented to the development characteristics of remote sensing satellites such as serialization and multi-tasking, the principle of construction by stages and batches should be adopted, the system should have good universality and expansibility, and the model algorithms should have good openness. In this way, the simulation system has good adaptability to different tasks and high application value, so as to provide a guarantee for the pre-research and model development of remote sensing satellites such as the high-resolution Earth observation system. Due to the huge code rate and data volume of the in-orbit output image, it is difficult to conduct online dynamic real-time simulation. Therefore, an offline simulation method is used at this stage to perform the analysis and calculation for each link and obtain the output image. The image quality in the whole satellite imaging link can be affected by many sub-links. It is unrealistic to integrate all possible influencing factors into the simulation system for modelling. Among these influencing links, some are multi-stage coupled and thus difficult to model independently; some cannot be accurately expressed mathematically or have poor simulation accuracy; and some need huge and complex special analysis tools for support. Therefore, the full-link image quality simulation system for satellites needs to carry out multi-level and in-depth analysis and modelling work for the first time, which is an exploratory innovation.

3 System Scheme and Composition In the whole process of the generation and transmission of satellite image products, the main links that affect the image quality are the ground target, the atmosphere, the impact of the satellite imaging system and the platform, image compression, ground


processing system and so on. After research and demonstration, the geometric simulation subsystem, the radiation simulation subsystem, the compression and decompression simulation subsystem, and the ground processing and subjective evaluation subsystem are determined to be the four components of the integrated modeling scheme for the full-link image quality simulation system. The input/output interface relationships are stipulated in the system scheme, forming the overall system modeling plan and the respective technical requirements of the subsystems. The system modeling block diagram and the input/output interface are shown in Fig. 1.

Fig. 1. The block diagram of the full-link image quality simulation system with the input/output interface (DOM/DEM input; geometric simulation subsystem producing discrete static images; radiation simulation subsystem producing continuous push-broom-integrated images; compression and decompression subsystem; ground processing subsystem; model boundary conditions and parameters and satellite auxiliary data as inputs; objective and subjective evaluation results and positioning accuracy as outputs)

3.1 Geometric Simulation Subsystem

The geometric simulation mainly projects the image points onto the ground based on the imaging geometric model of the CCD linear array. According to the parameters of the satellite camera and the integral time of the push broom imaging, the corresponding ground plane projection image of each discrete imaging moment is given for the use of radiation simulation. The geometric plane image simulation considers the following main links: DOM and DEM reference source database, conversion between space and satellite coordinate system, parameter setting of the onboard camera and the integral time, simulation of orbit and attitude measurement, plane image generation, etc.
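The projection idea of the geometric simulation can be sketched in a heavily simplified form: a single detector element's line of sight is intersected with a flat ground plane. The real subsystem works with Earth-fixed coordinates, DOM/DEM data and measured orbit and attitude; the focal length, pixel size and nadir-looking attitude below are assumptions.

```python
import numpy as np

def project_to_ground(sat_pos, R_body_to_ground, focal_len, det_x):
    """Intersect the line of sight of one linear-array detector element
    with a flat ground plane z = 0.

    sat_pos          : satellite position [x, y, z] in a local ground frame (m)
    R_body_to_ground : 3x3 rotation from camera/body axes to the ground frame
    focal_len        : camera focal length (m)
    det_x            : detector element coordinate on the focal plane (m)
    """
    # line-of-sight direction in camera axes (pinhole model), then rotated
    los_cam = np.array([det_x, 0.0, -focal_len])
    los = R_body_to_ground @ (los_cam / np.linalg.norm(los_cam))
    s = -sat_pos[2] / los[2]            # scale factor reaching the plane z = 0
    return sat_pos + s * los

# toy example: nadir-looking camera at 500 km altitude, 1.7 m assumed focal length
sat = np.array([0.0, 0.0, 500e3])
R = np.eye(3)                           # body axes aligned with the ground frame
ground_pt = project_to_ground(sat, R, 1.7, det_x=7e-6 * 1000)   # 1000th 7-um pixel
print(ground_pt)                        # approx. [2058.8, 0, 0] m cross-track offset
```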


3.2 Radiation Simulation Subsystem

The radiation simulation system consists of three parts, namely the atmospheric model, the optical system model of the camera and the CCD sampling system model, and its main task is to complete the simulation process from the input image of the target plane to the camera output image. Among them, the atmospheric and optical system models are static systems: for continuous static input images, they perform the same simulation process and output static images. The CCD sampling system, based on its sampling integral method, carries out a multistage push-broom integral over the multiple continuous static images that have passed through the atmospheric and optical system; its output is described by continuous images after integration. In addition to the traditional frequency-domain analysis method based on MTF, the image simulation of the optical system is realized using a ray-tracing method. With the point-by-point tracing method of the ZEMAX software, the influence of camera distortion and the image quality in different fields of view can be described quantitatively. Thus, the optical system model not only has universality but can also reflect the detailed characteristic parameters concerned by users.
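For reference, the traditional MTF-based frequency-domain degradation mentioned above can be sketched in a few lines (the system itself goes further with ZEMAX ray tracing); the Gaussian MTF and the random scene below are placeholders, not the system's models.

```python
import numpy as np

# The scene spectrum is multiplied by an assumed Gaussian system MTF and
# transformed back, which attenuates high spatial frequencies.
rng = np.random.default_rng(1)
scene = rng.random((256, 256))

fy = np.fft.fftfreq(scene.shape[0])[:, None]       # cycles / pixel
fx = np.fft.fftfreq(scene.shape[1])[None, :]
mtf = np.exp(-((fx**2 + fy**2) / (2 * 0.15**2)))   # assumed Gaussian MTF

degraded = np.real(np.fft.ifft2(np.fft.fft2(scene) * mtf))
print(degraded.std(), "<", scene.std())            # high frequencies attenuated
```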

3.3 Compression and Decompression Subsystem

In the compression and decompression subsystem, the JPEG2000 and SPIHT algorithms used in typical applications are employed to carry out subjective and objective evaluation of image quality. Quantitative simulation evaluation of image elements such as MTF and SNR is also considered in order to produce the simulation output image. Since mature methods exist for the subjective and objective evaluation of the impact of compression and decompression on image quality, and the simulation process is relatively independent, the simulation modeling work can be carried out independently and then integrated into the system links.

3.4 Ground Processing and Subjective Evaluation Subsystem

The main purpose of the image ground processing is to complete the geometric precision processing and analysis of the simulated image output from the satellite. Based on the simulated image output, the satellite's orbit and attitude are acquired from the auxiliary data files. Meanwhile, the CCD sampling position of the camera and its installation and rotation parameters are set, and the output image is projected onto the Earth's surface through a series of spatial coordinate system transformations. Then, by comparison with the control point information in the original DEM image data, the positioning accuracy of the satellite simulation image can be analyzed. In addition, in order to verify the influence of the main parameters of the satellite imaging links on subjective image quality, subjective evaluation of the simulated image is carried out after compression and decompression and after changing the typical sensitive parameters of the satellite and camera.


4 Applications According to the subsystem modeling and software completed in the first stage, the main typical parameters used are as follows:
• A 0.8 m resolution DEM data set from Beijing as the original ground data source.
• An optical system model of an on-axis three-mirror anastigmatic space camera for typical satellite applications.
• A TDICCD model for typical satellite applications.
Combining the model parameters under the above typical design state with the typical on-orbit imaging environment, the simulation output image with 1 m resolution, as well as the MTF, SNR and positioning accuracy of each link, are obtained through the serial full-link closed-loop simulation test, as shown in Figs. 2, 3, 4, 5, 6, 7, 8 and 9. As can be seen from the output image, the system completes the full-link simulation task with the expected functionality and performance. In addition, other sensitive parameters, such as the focal plane position of the camera, the stability of the satellite's attitude (including high-frequency buffeting) and the RMS noise of the camera, can also be input to the system through the open interface of the model. Thus, the system can be adjusted and tested under different parameters and conditions, and the influence of key parameters on the three objective indicators of image quality and on the subjective output images can be verified quantitatively.

Fig. 2. Original image


Fig. 3. Image transmitted through the atmosphere

Fig. 4. Image from an optical system

Fig. 5. Image transmitted by TDICCD


Fig. 6. MTF of the optical system

Fig. 7. MTF of the camera (the optical system and CCD)

Fig. 8. SNR results of each link after radiation simulation (original image, after atmosphere, after optical system, after CCD)


Fig. 9. Analysis of positioning accuracy after ground processing (positioning accuracy in m versus satellite position error in m, for the plane and orbit cases)

5 Subsequent Improvement and Consideration Through the construction of the full-link image quality simulation system for optical remote sensing satellites in the first stage, the closed-loop test of the four subsystems has been completed. The test results are as expected, but improvements are still needed in the following aspects:
• System integration and productization.
• Universalization of the I/O interface.
• Further improvement of key models (e.g. atmospheric models, mechanical and thermal analytical models of the camera, etc.).
• Modeling of new satellite platforms and payloads.
Therefore, in the following work, the key models and algorithms of the system need to be optimized by comparison with on-orbit satellite images, in order to improve the simulation accuracy of the full-link image quality simulation system on the basis of the research results of the first stage. The system function modules need to be supplemented to meet the requirements of multi-task simulation analysis beyond the task of an optical remote sensing satellite with a panchromatic TDICCD camera, in order to provide a more accurate and extensive application basis for model development.

6 Conclusion For the first time, the construction of a full-link image quality simulation system for optical remote sensing satellites is oriented to all the main links of the full imaging chain. All the internal and external links, such as ground scenery, atmosphere, satellite, and ground reception and processing, are systematically modeled. The preliminary establishment of the whole system is completed strictly following the phased, hierarchical, universal and modular construction ideas. The algorithm model has good openness, which gives it good adaptability and high application value for different tasks. The closed-loop simulation test of the serial full link is completed under the condition of


typical application parameters. The simulation output image with 1 m resolution, as well as the three main image objective factors, MTF, SNR and positioning accuracy of each link, are obtained. The feasibility and rationality of the construction scheme for the full-link image quality simulation system are verified through typical application test work by developers and users in this stage, which provides primary reference basis and verification means for the overall design of the optical remote sensing satellites. Through continuous improvement in the later period, this simulation system will gradually become a powerful assistant and necessary tool for the overall design and task analysis of transmission optical remote sensing satellites.

References
1. Hulst HC, van de Hulst HC (1981) Light scattering by small particles. Courier Corporation
2. Wiscombe WJ (1979) Mie scattering calculations: advances in technique and fast, vector-speed computer codes, vol 10. National Technical Information Service, US Department of Commerce
3. Wiscombe WJ (1980) Improved Mie scattering algorithms. Appl Opt 19(9):1505–1509
4. Yan HP, Shen TS et al (1998) The application of LOWTRAN7 in the dynamic IR imagery simulation system and integration of the system. Infrared Laser Eng 4:14–17 (in Chinese)
5. Mao KB, Tan ZH (2004) The transmission model of atmospheric radiation and the computation of transmittance of MODTRAN. Geomat Spat Inf Technol 27(40):1–3
6. Jiang WQ, Zhao YF, Yuan SP (2009) Modeling and open GL simulation of sea surface infrared images. Electron Opt Control 16(11):19–21

Distributed Measurement of Micro-vibration and Analysis of the Influence on Imaging Quality

Yugao Li1, Hezhi Sun2, Chen Ni1, Xiang Li1, and Dong Yang1
1 China Academy of Space Technology, Beijing, China
[email protected]
2 School of Astronautics, Harbin Institute of Technology, Harbin, China
[email protected]

Abstract. The micro-vibration environment of optical remote sensing satellites is one of the main factors affecting the quality of in-orbit imaging and has become a research hotspot in the field of remote sensing. In order to ensure the in-orbit imaging quality of the satellite, this paper analyzes the requirements that imaging quality places on the isolation and suppression of satellite micro-vibration, adopts the method of measuring the angular displacement of the main optical elements to analyze the impact of micro-vibration on imaging quality, and analyzes and verifies the impact of turning on the control moment gyroscope (CMG) group of a certain type of satellite on the camera transmission.

Keywords: Optical remote sensing · Image quality simulation · Micro-vibration

1 Introduction When an optical remote sensing satellite is imaging in orbit, the motion of moving parts such as the CMG, momentum wheel, solar wing and relay/ground data transmission antenna, and the firing of thrusters, will cause the satellite platform to produce "jitter" with high frequency and small amplitude, which cannot be measured or suppressed by the control system. Domestic scholars usually call it "flutter" or "vibration"; in order to distinguish it from "flutter" in the aviation field, this paper calls it "micro-vibration". The main payload of an optical remote sensing satellite is usually a space camera using a TDICCD as the photosensitive imaging device. The premise of its normal operation is that the transfer speed of the photoelectric charge packet is synchronized with the transfer speed of the image on the focal plane. However, the matching error caused by micro-vibration will produce image motion, which will lead to image distortion or even blurring and reduce the quality of the in-orbit imaging of the satellite [1]. With the increasing resolution and focal length of the cameras of optical remote sensing satellites in recent years, the requirement on the micro-vibration environment of the satellite platform is becoming higher and higher. At the same time, in order to


improve the on-orbit rapid attitude maneuverability of satellites, more satellites are using CMGs with large torque output and other moving parts with large disturbance torque, which in turn makes the problem of vibration isolation and suppression of micro-vibration more and more prominent. With more and more research in the field of micro-vibration and a deepening understanding of micro-vibration in China, a series of means such as accurate measurement, micro-vibration suppression and high-precision simulation have been proposed in engineering practice. In this paper, based on the research results of micro-vibration at home and abroad, and in order to ensure the quality of in-orbit imaging of the satellite, the demand that imaging quality places on the satellite micro-vibration mechanical environment is analyzed. A method of analyzing the impact of micro-vibration on image quality by measuring the micro-vibration response of the main optical components is proposed and tested on a certain satellite.

2 Requirements of Micro-vibration Based on Imaging Quality

2.1 Division of Micro-vibration Frequency

The effects of micro-vibration at different frequencies on the imaging quality of optical remote sensing satellites are different [2]. Therefore, when considering the influence of micro-vibration on imaging quality, it is necessary to divide the vibration frequency range according to the imaging sampling frequency. The following definitions are given in this paper.
Camera sampling frequency: For optical remote sensing cameras that use TDI CCD devices for integral imaging, the camera sampling frequency is defined as the reciprocal of the total exposure time of the multi-stage imaging. For example, the single-stage minimum integration time of the WorldView-1 satellite is about 63.6 µs. Assuming that the satellite uses 48-stage integration for imaging, its sampling frequency is 1.0/(48 × 63.6 × 10⁻⁶) = 327.5 Hz. Obviously, different integration stages correspond to different imaging frequencies.
Low-frequency micro-vibration: A vibration whose frequency is less than the sampling frequency of the camera is defined as low-frequency micro-vibration. For satellite systems, low-frequency micro-vibrations typically have larger amplitudes that can distort the image and affect the geometric quality of the image. Since the vibration has not completed a cycle during camera imaging, the image motion is usually small, and the effect on the imaging radiation quality is not as remarkable as that of high-frequency micro-vibration.
High-frequency micro-vibration: High-frequency micro-vibration refers to vibration whose frequency is larger than or equal to the sampling frequency of the camera. "High frequency with small amplitude" is its distinctive feature. One or more vibration periods are included during camera exposure imaging, resulting in a decrease in imaging modulation and a significant effect on imaging radiation quality.
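The worked example above can be reproduced in one line; the figures are the WorldView-1 values quoted in the text.

```python
# Reproduces the worked example above: 48-stage TDI with a 63.6 us
# single-stage integration time gives the camera sampling frequency.
stages, single_stage_integration = 48, 63.6e-6
f_sample = 1.0 / (stages * single_stage_integration)
print(f"{f_sample:.1f} Hz")    # approx. 327.5 Hz
```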


2.2 Characteristics of Optical Remote Sensing Camera Imaging

The imaging sampling frequency and angular resolution of typical optical remote sensing satellites are shown in Table 1. The satellites have different integration times for different orbital heights, latitudes and attitude maneuvers. In the case where the imaging task conditions are not yet specified, the camera sampling frequency is defined by the average of the integration time.

Table 1. Partial imaging characteristics of typical optical remote sensing satellites

Satellites   | Ground pixel resolution (m) | Orbital altitude (km) | Focal length of camera (m) | Angular resolution (″) | Integration time (µs) | Imaging frequency for 48-level (Hz)
Pleiades     | 0.70 | 695 | 12.9 | 0.208 | 103.4–147.7 | 141–201
IKONOS-2     | 0.82 | 681 | 10.0 | 0.248 | 120.7–172.5 | 120–172
GEOEYE-1     | 0.41 | 681 | 13.5 | 0.124 | 60.4–86.2   | 241–345
WorldView-1  | 0.45 | 494 | 8.8  | 0.188 | 63.6–90.9   | 229–327
WorldView-2  | 0.46 | 770 | 13.3 | 0.123 | 69.0–98.6   | 211–302
ZY-3         | 2.1  | 505 | 1.7  | 1.2   | 281–387     | 53.8–74.1
ZY-1 (03)    | 2.36 | 778 | 3.3  | 0.732 | 246.7–450   | 46.3–84.4

As shown in Table 1, the imaging frequency of the optical remote sensing satellite is usually in the range of 40–350 Hz under the 48-stage TDI integral imaging. Considering that the high-frequency micro-vibration has a more significant influence on the imaging quality, the range of the measurement of satellite micro-vibration frequency can be tentatively set to 1–1000 Hz.

2.3 Calculation of the Effect of Micro-vibration on Imaging Quality

At present, researchers have formed mature theories and models in the field of analysis of the influence of micro-vibration on the image transmission function of TDICCD imaging system [2] and have a clear calculation method for the influence of the change in the diameter of the speckle on the CCD image surface caused by micro-vibration on the image transmission function [3]. 1. Low frequency micro-vibration. Low-frequency vibration can be considered to cause unilateral linear image motion during imaging, thus leading to the decrease of MTF as follows:


MTF_low-frequency vibration = sin(π·N_TDI·d·v) / (N_TDI·sin(π·d·v))    (1)

where d = (D/2)·M·√(5 − 2cos(tπ/T_0) − 3cos²(tπ/T_0)), T/T_0 < 1, T is the n-order integration time, T = N·t, T_0 is the vibration period, f_0 is the vibration frequency, D is the amplitude, and t is the start time of the TDICCD integration.

2. High-frequency micro-vibration. The formula for the influence of high-frequency vibration on the image MTF is as follows:

MTF(N) = J_0[2πND]    (2)

where J_0 is the 0-order Bessel function, N is the sampling frequency of the camera, and D is the amplitude of the high-frequency micro-vibration [4].
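Equation (2) can be evaluated directly with the zero-order Bessel function. The sketch below assumes units in which the product N·D is dimensionless (e.g. D in pixels and N in cycles per pixel, with an assumed N = 0.5) and finds the amplitude at which the MTF factor falls to 0.95.

```python
import numpy as np
from scipy.special import j0
from scipy.optimize import brentq

N = 0.5                                    # assumed normalized sampling frequency

def mtf_high_freq(D, N=N):
    """Eq. (2): MTF(N) = J0(2*pi*N*D) for high-frequency micro-vibration."""
    return j0(2 * np.pi * N * D)

# amplitude at which the MTF degradation factor drops to 0.95
D_limit = brentq(lambda D: mtf_high_freq(D) - 0.95, 1e-6, 0.5)
print(f"MTF(D=0.05) = {mtf_high_freq(0.05):.4f},  D for MTF = 0.95: {D_limit:.4f}")
```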

2.4 Requirements of Imaging Quality for Micro-vibration

According to the analysis results of the whole-link image quality of optical remote sensing satellites, in order to ensure the in-orbit imaging quality, the image motion caused by low-frequency micro-vibration is required to be no more than 0.3 pixel, the image motion caused by high-frequency micro-vibration no more than 0.1 pixel, and the MTF factor contributed by micro-vibration no less than 0.95 [5]. If this is taken as the threshold for the micro-vibration of the whole satellite, and a certain type of satellite is taken as an example, the specific requirements on the amplitude-frequency characteristics of the micro-vibration in the rolling and pitching directions of the satellite can be obtained, as shown in Table 2.

Table 2. Amplitude requirements of satellite vibration in rolling and pitching directions (rolling + pitching: maximum amplitude (″))

Frequency (Hz) | TDI stage = 12 | TDI stage = 24 | TDI stage = 48
1.00–2.00      | 125.478 | 62.739 | 31.370
2.00–10.0      | 5.019   | 2.510  | 1.255
10.0–50.0      | 2.510   | 1.255  | 0.627
50.0–100.0     | 1.255   | 0.627  | 0.314
≥ 100          | 0.251   | 0.126  | 0.063

According to the data in Table 2, in order to ensure that the influence factor of the in-orbit imaging quality index MTF in the micro-vibration link is not less than 0.95, under the typical imaging parameters of level 48, it is necessary to ensure that the micro-vibration amplitude of the satellite in the high-frequency region greater than 100 Hz is not greater than 0.063″.


3 Micro-vibration Measurement Based on Optical System At present, measurement methods for micro-vibration disturbance sources and structural transmission characteristics are basically available in China, such as the six-component force measurement platform, diaphragm force sensor, permanent magnet sensor and laser gyroscope. However, the measurement technology inside the optical remote sensing camera is still immature. The traditional acceleration measurement method can obtain the vibration response of the satellite structure, but it cannot directly establish the relationship with the imaging optical system, and the intermediate conversion introduces a large error, which cannot provide a reliable basis for imaging quality assessment and correction [6]. In the micro-vibration measurement test of a certain type of optical remote sensing satellite, on the basis of simulating free-free boundary conditions, micro-vibration measurement sensors are arranged directly on the primary mirror, secondary mirror, plane mirror and focal plane of the camera to obtain the angular displacement data of each component of the optical system under micro-vibration. Substituting these data into the optical design model gives the time-domain variation of the optical axis position of the optical system, and the influence of micro-vibration on imaging quality can then be analyzed. This verifies whether the on-board micro-vibration environment meets the imaging quality requirements and provides a quantitative basis for vibration suppression design. Owing to the limited installation space of the optical system components, strict requirements are placed on the volume and structure of the micro-vibration measurement components, so a distributed measurement scheme is adopted for the satellite. The laser gyro is selected as the high-precision angular vibration sensor because of its small size, light weight and high precision, and a micro-vibration accelerometer is used to measure the transmission characteristics of the structure to micro-vibration. The block diagram of the micro-vibration measurement sensors is shown in Fig. 1.

Fig. 1. Micro-vibration measurement sensors (sensor network of laser gyros 1–n, optical-coupler isolation, pulse counting with DSP+FPGA, FIR filtering, LVDS data transmission and LVDS-to-USB conversion to a computer for data reception, display and analysis; power system: AC 220 V with AC/DC and DC/DC conversion supplying +5 V, −5 V and +15 V)


The position and parameters of each sensor in the micro-vibration measurement process are shown in Table 3.

Table 3. Micro-vibration measuring equipment

Sensor                                   | Performance parameters                                                                                                              | Position of sensors
Angular displacement sensor KD110-5A     | Maximum sampling rate: 20 kHz; measuring frequency: 0–1000 Hz; accuracy of angular displacement measurements: 0.007″ (3σ); non-linear error: ≤ 1% | The primary mirror, secondary mirror, reflecting mirror and focal plane of the camera
Micro-vibration accelerometer PCB 356B18 | Sensor sensitivity: about 1000 mV/g; measuring range: ±5 g; frequency band: 0.5–3000 Hz                                             | The upper and lower interfaces of the vibration isolator of the CMG, the connection between the bearing cylinder and the wall, and the installation surface of the camera

4 Results and Analysis of Micro-vibration Test

4.1 Results of Micro-vibration Measurement

The main moving parts in the satellite's micro-vibration measurement are the CMGs. The disturbances of moving parts such as the solar wing and the ground/relay data transmission antenna are difficult to reproduce in the test state, so simulation is used to verify their influence on the imaging quality. The measurement results when the CMGs are turned on are shown in Fig. 2.

Fig. 2. Micro-vibration measurement data (arcseconds versus time in s) of each optical component of the camera: (a) primary mirror, (b) secondary mirror, (c) reflecting mirror, (d) focal plane

4.2 Effect of Micro-vibration on Image Quality

The whole camera is regarded as a flexible body, and the angular displacement data measured on the remote sensing camera and its focal plane are combined with the optical system model to simulate the angular displacement of the visual axis. Specifically, the CODE V software is used with the experimentally obtained time-domain angular displacement of each optical component to calculate the time-domain change of the optical axis of the optical system, so as to obtain the optical axis variation caused by the angular displacement of each lens within the integration time. The results are shown in Fig. 3.

Fig. 3. Image motion and frequency-domain curve caused by micro-vibration: (a) image motion (pixel) versus time (s); (b) amplitude (mm) versus frequency (Hz)

It can be seen from the figure above that after the CMGs are turned on, the image motion caused by micro-vibration is about 0.01 pixel. Compared with the data in Table 2, this meets the requirements of imaging quality. The effect of micro-vibration on the camera MTF can be calculated through Eqs. (1) and (2), and the results are shown in Table 4.

Table 4. Calculations of the effect of micro-vibration on camera MTF (full chromatographic section)

Integration stage | Integration time (s) | Image motion after multi-stage integration (pixel) | Impact factor of MTF
8                 | 0.00051 | 0.0215 | 0.9925
16                | 0.00102 | 0.0365 | 0.9869
32                | 0.00205 | 0.0464 | 0.9788
48                | 0.00307 | 0.0513 | 0.9742
64                | 0.00410 | 0.0545 | 0.9709
96                | 0.00614 | 0.0584 | 0.9667

According to the above results, the impact of the turning on of the CMG group on the satellite MTF is about 0.967 at 96-stage integration, which is better than the coefficient of 0.95 usually adopted in the full-link imaging quality analysis.

5 Conclusion In this paper, aiming at ensuring the imaging quality of satellites in orbit, the demand that imaging quality places on the satellite micro-vibration mechanical environment is analyzed by referring to the theoretical formulas that describe the influence of micro-vibration on the MTF of a space camera optical system using a TDICCD. By measuring the angular displacement data of the camera optical system components, the image motion of the


optical axis of the camera optical system due to micro-vibration was obtained by CODE V software simulation. On this basis, the influence on the image MTF of a certain satellite after the CMG group is turned on is analyzed. This method provides a reference for the vibration-reduction design of optical remote sensing cameras while exploring the micro-vibration measurement method, and offers a new idea for subsequent research on the micro-vibration of high-precision optical remote sensing satellites.

References
1. Yang L, Pang SW, Qu GJ (2007) Modelling of micro-vibration and integrated evaluation of high-precision spacecraft—progress review and research ideas. In: Chinese symposium on structural dynamics
2. Hadar O, Fisher M, Kopeika NS et al (1992) Image resolution limits resulting from mechanical vibrations. Part 2: experiment. Opt Eng 30(5):577–588
3. Xu P (2003) Study on system simulation technologies for space-borne optical remote sensors. Doctoral dissertation, Beijing Institute of Technology
4. Raiter S, Hadar O, Kopeika NS (2002) Influence of motion sensor error on image restoration from vibrations and motion. Opt Eng 41(12):3276–3282
5. Holst GC (1997) CCD arrays, cameras, and displays. SPIE Optical Engineering Press
6. Li N, Han XJ, Li JH (2011) Ground test method for micro-vibration signals of spacecraft. Spacecr Environ Eng 28(1)

Analysis and Verification of the Effect of Space Debris on the Output Power Decline of Solar Array

Enzhu Bao, Li Ma, Peng Tian, Linchun Fu, and Shijie Chen
Beijing Institute of Spacecraft System Engineering, Beijing 100094, China
[email protected]

Abstract. As a large component directly exposed to the space environment, the solar array suffers significantly from the impact of space debris. The cumulative effect of space debris will lead to the performance decline of solar cells. The effect of space debris on the decline of the output power of the solar array is often accompanied by an abnormal attitude. Taking the output power decline of the solar array of a medium-high orbit satellite as an example, this article analyzes the main factors affecting the decline of the output power of the solar array. Based on the analysis of satellite attitude telemetry data and the verification results of simulation experiments, the effect of space debris impact on the solar array and its mechanism are analyzed in detail. The influence of space debris on the output power of the solar array is highly correlated with the attitude of the satellite; therefore, it is necessary to analyze the output power of the solar array together with the satellite attitude data to guide the design of the solar array, as well as for on-orbit monitoring and early warning.

Keywords: Space debris · Output power · Decline

1 Introduction The solar cell is a power generation device based on photoelectric conversion, which is used to satisfy the power requirements of a spacecraft. The solar cells convert solar energy into electricity during the illumination period, supplying power to the spacecraft and charging the batteries. Because the solar array is exposed outside the cabin, it faces a harsh outer-space environment, which aggravates the decline of its output power. There are many factors affecting the output power of the solar array, among which the main ones are solar intensity, solar incidence angle, satellite attitude, offset angle of the solar wing, space environment, working temperature of the solar array, antenna shielding, etc. [1, 2]. With the development of human space exploration activities, the increasing number of secondary collisions between orbital objects has resulted in a large amount of space debris in orbit. According to data published at home and abroad, there have been eight relatively serious collisions between on-orbit satellites in the past 20 years [3]. As a large part of the satellite directly exposed to the space environment, the solar array suffers significantly from the impact of space debris. The cumulative effect of space debris will lead to the


performance degradation of solar cells and may even damage certain series of solar cells. A hypervelocity impact can produce plasma, and the diffused plasma will induce discharge, causing arc discharge in the solar cells. In addition, the combined effects of space radiation, atomic oxygen and ultraviolet rays will lead to the decline of the output power of the solar array [4]. Space debris causes the output power of solar arrays to decline, and this is often accompanied by satellite attitude anomalies. Taking the output power decline of the solar array of a medium-high orbit satellite as an example, this article analyzes the main factors affecting the output power decline of the solar array. Based on the analysis of satellite attitude telemetry data and ground simulation tests, the effect of space debris impact on the solar array and its mechanism are analyzed in detail.

2 Analysis of Output Power Decline Process of the Solar Array

2.1 Method for Calculating Output Power of Solar Array

On-orbit phase of satellites, for satellites with solar array output power measurement, the actual output power of solar array during illumination period is inquired directly. For satellites without solar array output power measurement, the actual output power of solar array is represented by the sum of load current, shunt current and charging current. The calculation formula is as follows: P ¼ ðI1 þ I2 þ I3 Þ  V P I1 I2 I3 V

ð1Þ

Formula: Output power of solar array, unit (W); Load current, unit (A); Shunt current, unit (A); Charging current, unit (A); Bus voltage, unit (V).

In Formula (1), for on-orbit satellites, I1, I3 and V are satellite telemetry data. For some satellites, I2 needs to be obtained from the bus error voltage signal. The relationship between the bus error signal and the shunt current is described in Sect. 2.2.
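Formula (1) is a simple sum-and-multiply; the sketch below uses illustrative current and voltage values rather than real telemetry.

```python
# Minimal sketch of Formula (1): output power of the solar array from the
# telemetered load, shunt and charging currents and the bus voltage.
def solar_array_power(load_a, shunt_a, charge_a, bus_v):
    return (load_a + shunt_a + charge_a) * bus_v

# illustrative numbers, not real telemetry
print(solar_array_power(load_a=20.0, shunt_a=5.0, charge_a=8.0, bus_v=42.0), "W")
```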

2.2 Shunting Principle of Solar Array

The shunt regulator adopts the local-linear-sequential shunt mode to regulate the power, so that the bus voltage in the illumination period can be stabilized within the nominal voltage range. After the output power of solar array is regulated by shunt, the current and signal are transmitted to the satellite through the driving mechanism of solar array. When the output power of solar array increases, the main error amplifier (MEA) signal increases, and the shunt current of the first shunt circuit increases until the first shunt circuit is saturated. The further increase of the error signal would cause


the second shunt circuit to work, and so on, until the output power of the solar array is balanced with the power required by the load. The working principle of the shunt regulator is shown in Fig. 1.

Fig. 1. Schematic diagram of the shunt regulator (shunt controllers 1 to 20 with SR_n UP/DOWN control signals between Bus(+) and Bus(−); PCN1, thermal control circuit and measurement terminals)

2.3 Analysis of the Decline Process of Output Power

In the long illumination period, the output power of the solar array is expressed as the sum of the load current, shunt current and charging current. When the output power of the solar array and the load current remain unchanged and no charging takes place, the shunt current should remain unchanged. Taking a medium-high orbit satellite as an example, the output current of the solar array decreases by approximately 3 A at time T0. The bus error voltage signal reflects the value of the shunt current; as the error voltage of the −Y bus decreases,


two shunt stages exit and the shunt current of the satellite decreases by approximately 3 A. The abnormal curve of the output power decay process is shown in Fig. 2.

Fig. 2. Abnormal curve of output power decay process

At the same time, the yaw angle of the satellite fluctuates by up to 0.12°, while the pitch and roll angles remain normal; the attitude returns to normal after 2 min of autonomous control, and the speed of the reaction wheels tends to be stable. Before the problem occurs and after the system stabilizes, the angular velocity of the satellite body remains basically unchanged. Therefore, the variation of the satellite angular momentum is equal to the variation of the wheel angular momentum. According to the simulation results, when a disturbing moment is applied to the z-axis, the roll and pitch attitudes remain unchanged, the yaw attitude changes by 0.11°, and the rotational speed of the momentum wheels changes. The yaw angle, the change of wheel speed and the synthetic angular momentum of the wheels are close to the real on-orbit curves of the satellite. With regard to the 0.12° fluctuation of the satellite yaw attitude, the orbit changes before and after the abnormal decline of output power are analyzed. After the abnormal decline of the solar array output power, the total RMS value of the orbital R (radial), T (tangential) and N (normal) components changes greatly compared with the normal situation. According to the principles of orbital dynamics, the reason is that the satellite is affected by some external force.

2.4 Summary

The reason why the output power of solar array decreases and the yaw attitude of satellite fluctuates by 0.12° is that the Z-axis of the satellite is subjected to a certain disturbing moment. According to the analysis of orbital data, the total RMS value of RTN of the satellite has changed greatly compared with the normal situation.


The reason is that the satellite is subjected to a certain external force, which is caused by the impact of space debris.

3 Simulation Validation

3.1 Attitude Variation Mechanism of Space Debris Impact Disturbance

When the output power of the solar array decreases at T0, the solar panels are in a vertical state; therefore, the normal direction of the solar panels basically coincides with the X-axis of the satellite body. The satellite attitude diagram is shown in Fig. 3. When the −Y solar wing is impacted by space debris or a micrometeoroid, the impact force produces small disturbing moments about the roll and pitch axes, but the yaw axis is obviously affected by the disturbing moment. The diagram of the external moment acting on the back of the −Y solar wing is shown in Fig. 4.

Fig. 3. Diagram of the satellite attitude

3.2 Mechanics Analysis of Space Debris Impact Disturbance

The momentum change of the space debris impacting the solar array is as follows:

m · ΔV = 0.0824 N·s    (2)

According to Formula (2), the inverse problem of analyzing the space debris impact effect can be solved by analyzing the combinations of m and ΔV. Table 1 gives some of the m and ΔV combinations obtained after converting debris with diameters from 0.01 mm to 10 cm into the corresponding masses.

Fig. 4. Schematic diagram of the solar array impacted by space debris (satellite orbit, orbital coordinate system, yaw and rolling axes, normal direction of the −Y solar wing)

Table 1. Mass-velocity characteristics of space debris which may impact with satellites

Diameter | Mass (g)   | Velocity variation ΔV (km/s)
0.01 mm  | 1.414E−09 | 5.831E+07
0.1 mm   | 1.414E−06 | 5.831E+04
1 mm     | 1.414E−03 | 5.831E+01
2.5 mm   | 2.209E−02 | 3.732E+00
5.0 mm   | 1.767E−01 | 4.665E−01
7.5 mm   | 5.964E−01 | 1.382E−01
1 cm     | 1.414E+00 | 5.831E−02


In Table 1, the labeled combinations of debris mass m and velocity change ΔV are the more likely ones; each pair of data corresponds to a space debris impact that would lead to a sudden change in attitude. According to the simulation results, the most physically significant combination is a particle with a diameter of 2.5 mm and a ΔV of 3.7 km/s.
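The m-ΔV combinations of Table 1 follow directly from Eq. (2) once a debris shape and density are assumed; the values below use spherical particles with a density of 2.7 g/cm³, an assumption inferred from the tabulated masses rather than stated in the paper.

```python
import numpy as np

impulse = 0.0824            # N*s, from Eq. (2)
rho = 2700.0                # kg/m^3, assumed (aluminum-like, matches Table 1 masses)

for d_mm in (0.01, 0.1, 1.0, 2.5, 5.0, 7.5, 10.0):
    d = d_mm * 1e-3
    mass = rho * np.pi / 6.0 * d**3          # spherical-particle mass (kg)
    dv = impulse / mass                      # velocity change (m/s)
    print(f"d = {d_mm:6.2f} mm  m = {mass*1e3:.3e} g  dV = {dv/1e3:.3e} km/s")
```

For the 2.5 mm particle this reproduces the ΔV of about 3.7 km/s identified above.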

4 Impact Effect of Space Debris on the Solar Array The impact effects of space debris on the solar array mainly include attenuation of the optical properties of solar cells, physical damage of solar cells, mechanical damage of solar cells, plasma discharge, and damage to electrical connectors and cables [4].
(1) Attenuation of the optical properties of solar cells. Attenuation of optical properties means that the incident light of the solar cells is partially dissipated or absorbed, so less optical energy is converted into electricity. Optical damage usually occurs only on the cover glass and does not damage the solar cells themselves. The mechanism is that the surface of the cover glass is damaged and micro-cracks or scratches are formed [5].
(2) Physical damage of solar cells. Solar cells can be damaged by the high-speed impact of large space debris particles, and may even be broken down or short-circuited [5]. The decrease of output power caused by physical damage is much greater than that caused by damage to the optical surfaces. If the solar cell itself is damaged, the lost current is proportional to the damaged area.
(3) Mechanical damage of solar cells. Solar panels usually consist of carbon fibre face sheets and aluminum honeycomb cores. Large particles at higher speeds may penetrate the solar panels, leaving a clear hole. Because of the different impact velocities, the shape of the perforation is irregular; at the same time, the faster the impact speed, the larger the damaged area of the aluminum honeycomb core.
(4) Plasma discharge. The impact between space debris and the solar wing may cause electrical damage, because the localized high-concentration plasma produced by the high-speed impact induces discharge, which can lead to a parasitic-capacitance short circuit and high-current discharge between solar cell strings [5]. In an experiment conducted by the Kyushu Institute of Technology in Japan, when space debris impacted the solar array directly, the solar array circuit was almost instantly short-circuited to the substrate.
(5) Damage to electrical connectors and cables. Electrical connectors and cables on solar arrays are the carriers of current transmission: the cables transmit the output power of the solar array to the satellite through the electrical connectors. Once electrical connectors and cables are impacted by space debris, two failure modes can occur. The first is that the electrical connectivity is destroyed, resulting in open cables and the loss of the output power of the corresponding solar array circuit. The second is continuous discharge between adjacent cables, where the heat generated by the arc burns the insulation of adjacent healthy cables, resulting in the loss of the output power of part or all of the solar array.


5 Conclusions

The influence of space debris on the output power of a solar array is highly correlated with the attitude of the satellite. Based on the analysis of on-orbit data of the solar array output power and the satellite attitude, and on the verification of ground simulation experiments, a method for judging the influence of space debris on the output power of the solar array is proposed. It can be used to guide the design of solar arrays and also serve as an on-orbit monitoring and early-warning means.

References
1. Wan X, Lu Q, Liu P (2017) Method for calculating sunlight incident angle of solar array slanted. Spacecr Eng 26(2):38–43 (in Chinese)
2. Zuo Z, Jin D, Tian D (2017) Fitting algorithm research on solar array output of GEO satellite. Spacecr Eng 26(2):85–89 (in Chinese)
3. Huang J, Han J (2008) Investigation on the surface damage to solar cells by impacts of space microdebris on low earth orbit. Acta Phys Sin 57(12):7950–7954 (in Chinese)
4. Jiang D, Zheng S, Ma N (2017) Study of space debris and meteoroid impact effects on spacecraft solar array. Spacecr Eng 26(2):114–120 (in Chinese)
5. Moussi A, Drolshagen G, McDonnell JAM (2005) Hypervelocity impacts on HST solar arrays and the debris and meteoroids population. Adv Space Res 35:1243–1253

A New Nonlinear Method for Calculating the Error of Passive Location Shuncheng Tan(&), Guohong Wang, Chengbin Guan, Hongbo Yu, Siwen Li, and Qian Cao Naval Aviation University, erma Road, No. 188, Yantai 264001, China [email protected]

Abstract. The existing literature usually adopts a linear method to obtain the location error, which is usually evaluated by the geometrical dilution of precision (GDOP), in nonlinear passive location systems. However, the linear method is not always suitable, and the obtained GDOP may deviate severely from the true location errors when the location system is strongly nonlinear. Thus, a new nonlinear method for the calculation of the location error based on the unscented transformation (UT) is proposed and verified in this paper.

Keywords: Nonlinear method · Geometrical dilution of precision · Unscented transformation · Location error

1 Introduction

The location error of a system depends on the geometric distribution of the target and the location stations. In order to evaluate the performance of a passive location system, the geometrical dilution of precision (GDOP) is used as the evaluation criterion [1–3]. By calculating the GDOP, we can evaluate the impact that the measurement precision and the geometric distribution of the location stations have on the performance of target location. For a nonlinear system, the calculation of GDOP is usually a severely nonlinear problem, and the general approach is to use a linear method [3–6]. However, as far as we know, whether the linear method is reliable and suitable for the calculation of GDOP when the location system is strongly nonlinear has not been reported in the existing literature. Therefore, this paper focuses on the reliability of the linear method, and a numerical example is presented to show that the linear method is not always suitable for the calculation of GDOP. To deal with this problem, an unscented transformation (UT) based method is proposed and compared with the linear method. The rest of the paper is organized as follows. Section 2 analyzes the reliability of GDOP based on the linear method by a numerical example. Section 3 proposes a UT-based method for the calculation of GDOP. Section 4 concludes the paper.



2 Analysis of GDOP Based on the Linear Method

To verify the reliability of the linear method, the time difference of beam scan (TDOBS) passive location system in [1] is considered in this section. Figure 1 shows the schematic diagram of TDOBS passive location. Three passive sensors, namely sensor 1, sensor 2 and sensor 3, are placed at A, O and B respectively.


Fig. 1. Schematic diagram of TDOBS passive location

According to [1], the location equations are given by

x = \frac{k\, l_1 (1 + k k_1)}{k_1 (1 + k^2)}, \qquad y = \frac{l_1 (1 + k k_1)}{k_1 (1 + k^2)}    (1)

where

k = \frac{k_1 l_2 (k_2 \sin\theta + \cos\theta) - k_2 l_1}{k_1 k_2 (l_1 + l_2 \cos\theta) - k_1 l_2 \sin\theta}    (2)

k_1 = \tan\theta_1    (3)

and

k_2 = \tan\theta_2    (4)

where (x, y) is the position of the emitter, l_1 and l_2 are the lengths of OA and OB respectively, θ is the angle between the vectors OA and OB, and θ_1, θ_2 are the angles formed by the beam scan arriving at the sensors in order.


Suppose that T is the scanning period of the emitter, Δt_{12} is the TDOBS between sensor 1 and sensor 2, and Δt_{23} is that between sensor 2 and sensor 3. Then

\theta_1 = 2\pi \Delta t_{12}/T    (5)

and

\theta_2 = 2\pi \Delta t_{23}/T    (6)

The errors of l_1, l_2, Δt_{12} and Δt_{23} are supposed to be independent zero-mean Gaussian noises with standard deviations σ_{l_1}, σ_{l_2}, σ_{Δt_{12}} and σ_{Δt_{23}} respectively. For calculation convenience, set θ = 0°, i.e., the three sensors are in line. Then the GDOP based on the linear method is given by

GDOP_L = \sqrt{P_{L,y}(1,1) + P_{L,y}(2,2)}    (7)

where

P_{L,y} = A^{-1}\left( B \begin{bmatrix} \sigma_{l_1}^2 & 0 \\ 0 & \sigma_{l_2}^2 \end{bmatrix} B^T + C \begin{bmatrix} \sigma_{\Delta t_{12}}^2 & 0 \\ 0 & \sigma_{\Delta t_{23}}^2 \end{bmatrix} C^T \right) \left(A^{-1}\right)^T    (8)

A = \begin{bmatrix} l_1(x^2 + l_1 x - y^2) & y l_1 (2x + l_1) \\ l_2(x^2 - l_2 x - y^2) & y l_2 (2x - l_2) \end{bmatrix}    (9)

B = \begin{bmatrix} y(x^2 + y^2) & 0 \\ 0 & y(x^2 + y^2) \end{bmatrix}    (10)

and

C = \begin{bmatrix} P_1 (x^2 + l_1 x + y^2)^2 & 0 \\ 0 & P_2 (x^2 - l_2 x + y^2)^2 \end{bmatrix}    (11)

where

P_i = \frac{2\pi}{T \cos^2\theta_i}, \quad i = 1, 2    (12)
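A minimal sketch of the linear (first-order) method of Eqs. (7)–(8) is given below. Rather than transcribing the closed-form matrices A, B and C of Eqs. (9)–(11), it forms the Jacobian of a measurement-to-position function numerically; the function locate() here is a hypothetical stand-in (a simple two-station bearing fix), not the TDOBS equations, and the parameter values are assumptions for illustration only.

import numpy as np

def locate(z):
    # Hypothetical stand-in for the location equations: a two-station bearing
    # fix with stations at (0, 0) and (d, 0).  Replace with the TDOBS mapping
    # of Eqs. (1)-(6) to reproduce the paper's setting.
    d = 80e3                                   # baseline between the two stations (m)
    t1, t2 = np.tan(z[0]), np.tan(z[1])        # bearings measured from the x-axis (rad)
    x = d * t2 / (t2 - t1)                     # intersection of the two bearing lines
    return np.array([x, x * t1])

def linear_gdop(z, Pz, eps=1e-6):
    # GDOP by the linear method: propagate the measurement covariance Pz
    # through a numerically formed Jacobian (first-order Taylor expansion).
    y0 = locate(z)
    J = np.zeros((2, len(z)))
    for i in range(len(z)):                    # forward-difference Jacobian column i
        dz = np.zeros(len(z)); dz[i] = eps
        J[:, i] = (locate(z + dz) - y0) / eps
    Py = J @ Pz @ J.T                          # linearized position covariance, cf. Eq. (8)
    return np.sqrt(Py[0, 0] + Py[1, 1])        # cf. Eq. (7)

z = np.array([np.deg2rad(60.0), np.deg2rad(120.0)])         # assumed true bearings
Pz = np.diag([np.deg2rad(0.5) ** 2, np.deg2rad(0.5) ** 2])  # assumed measurement covariance
print("GDOP (linear):", linear_gdop(z, Pz))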

Set T = 10 s, l_1 = 80 km, l_2 = 40 km, σ_{l_1} = σ_{l_2} = 50 m and σ_{Δt_{12}} = σ_{Δt_{23}} = 10 ms. The scatter of the average true location errors (based on 200 Monte Carlo runs) and the scatter of GDOP based on the linear method are shown in Figs. 2 and 3 respectively. Half of each figure is given for the sake of symmetry. It can be seen from Figs. 2 and 3 that the scatter of GDOP based on the linear method is close to the scatter of the average location errors when the emitter is far away from the sensors. However, the obtained GDOP deviates severely from the average location errors when the emitter is near the sensors, where the nonlinearity is very strong. Thus,


Fig. 2. Scatter of the average true location errors based on 200 Monte Carlo runs

Fig. 3. Scatter of GDOP based on the linearization method

GDOP based on the linear method may be unreliable, and it must be used cautiously for evaluating the precision of a location system when the system has severe nonlinearity.


3 GDOP Based on UT

To deal with the problem mentioned above, a UT-based method is proposed in this section. The UT is a method for calculating the statistics of random variables that undergo a nonlinear transformation, and it is popular because it can easily be used for state estimation of nonlinear systems [7–13]. The basic idea of the UT is that approximating the probability density function (PDF) of a nonlinear function is much easier than approximating the function itself [9]. Thus, the UT approximates the PDF of the nonlinear function by a series of sigma sampling points which sufficiently reflect the actual mean and covariance of the Gaussian density. For discussion convenience, define a nonlinear function f : R^{n_z} → R^2 corresponding to the location equations, so that

y = f(z)    (13)

where y = [x, y]^T denotes the position of the emitter, and z = [l_1, l_2, Δt_{12}, Δt_{23}]^T is the measurement vector with n_z = 4 as its dimension. Similar to [11], the general process for the calculation of GDOP based on the UT method can be described as follows. Firstly, select a sigma point-set Γ_z = {z_i, w_{z,i}}_{i=0}^{2n_z}, where

\begin{cases} z_0 = \bar{z} \\ z_i = \bar{z} + \left(\sqrt{(n_z + \gamma) P_z}\right)_i, & i = 1, 2, \ldots, n_z \\ z_{i+n_z} = \bar{z} - \left(\sqrt{(n_z + \gamma) P_z}\right)_i, & i = 1, 2, \ldots, n_z \end{cases}    (14)

and

\begin{cases} w_{z,0} = \gamma/(n_z + \gamma) \\ w_{z,i} = 1/[2(n_z + \gamma)], & i = 1, 2, \ldots, 2n_z \end{cases}    (15)

where γ is a scaling parameter which can take any value satisfying n_z + γ ≠ 0, \bar{z} and P_z are the mean and covariance of z respectively, and (\sqrt{(n_z + \gamma) P_z})_i is the i-th row or column of the square root of (n_z + γ)P_z. Then, a new sigma point-set Γ_y = {y_i, w_{y,i}}_{i=0}^{2n_z} is obtained through the nonlinear transformation y_i = f(z_i) with w_{y,i} = w_{z,i} for i = 0, 1, ..., 2n_z. The mean and covariance of y can then be calculated from Γ_y:

\bar{y} = \sum_{i=0}^{2n_z} w_{y,i}\, y_i    (16)

and

P_{UT,y} = \sum_{i=0}^{2n_z} w_{y,i} (y_i - \bar{y})(y_i - \bar{y})^T    (17)


Then, the GDOP based on the UT method is given by

GDOP_{UT} = \sqrt{P_{UT,y}(1,1) + P_{UT,y}(2,2)}    (18)

Set γ = 0, take the example and simulation parameters given in Sect. 2, and then the scatter of GDOP based on the UT method is shown in Fig. 4.
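The UT steps of Eqs. (14)–(18) can be sketched as follows. As in the earlier sketch, locate() is a hypothetical stand-in for the location equations (a two-station bearing fix), and the parameter values are assumptions; only the UT machinery itself follows the equations above.

import numpy as np

def locate(z):
    # Hypothetical stand-in (two-station bearing fix); substitute the TDOBS
    # mapping of Eqs. (1)-(6) for the paper's scenario.
    d = 80e3
    t1, t2 = np.tan(z[0]), np.tan(z[1])
    x = d * t2 / (t2 - t1)
    return np.array([x, x * t1])

def ut_gdop(z_mean, Pz, gamma=0.0):
    # GDOP by the UT method, Eqs. (14)-(18).
    nz = len(z_mean)
    S = np.linalg.cholesky((nz + gamma) * Pz)            # square root of (nz + gamma) Pz
    sigma = [z_mean] + [z_mean + S[:, i] for i in range(nz)] \
                     + [z_mean - S[:, i] for i in range(nz)]     # Eq. (14)
    w = np.full(2 * nz + 1, 1.0 / (2 * (nz + gamma)))    # Eq. (15)
    w[0] = gamma / (nz + gamma)
    Y = np.array([locate(zi) for zi in sigma])           # transformed sigma points, Eq. (13)
    y_bar = w @ Y                                        # Eq. (16)
    P = sum(wi * np.outer(yi - y_bar, yi - y_bar) for wi, yi in zip(w, Y))   # Eq. (17)
    return np.sqrt(P[0, 0] + P[1, 1])                    # Eq. (18)

z_mean = np.array([np.deg2rad(60.0), np.deg2rad(120.0)])
Pz = np.diag([np.deg2rad(0.5) ** 2, np.deg2rad(0.5) ** 2])
print("GDOP (UT):", ut_gdop(z_mean, Pz))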

Fig. 4. Scatter of GDOP based on the UT

It can be seen from Figs. 2, 3 and 4 that the scatter of GDOP based on the UT method is much closer to the scatter of the average location errors than that based on the linear method. This may be explained by the fact that the UT-based method is accurate up to the second-order terms when computing the GDOP, while the linear method only keeps the first-order term of the Taylor expansion to approximate the GDOP, which may result in large errors by ignoring higher-order terms when the nonlinearity of the location system is severe.

4 Conclusions

In this paper, GDOP based on the linear method has been shown to be unreliable when the location system has severe nonlinearity, and a UT-based method has been proposed as an alternative to solve this problem. Simulation results demonstrate that the UT-based method is much better suited than the linear method for the calculation of GDOP in the case of severe nonlinearity. Thus, the linearization-based method should be used


cautiously, and the UT-based method may be a better alternative for calculating the GDOP of severely nonlinear systems.

Acknowledgements. This work is supported by the National Natural Science Foundation of China (61671462, 61731023, and 61501489) and the Special Foundation Program for Mountain Tai Scholars.

References
1. Xu HL (1998) Study on TDOA passive accuracy location of three stations. Electron Warf Technol 13(1):15–23
2. Sharp I, Yu K, Guo YJ (2009) GDOP analysis for positioning system design. IEEE Trans Veh Technol 58(7):3371–3382
3. Wang Z, Zhang J (2007) Analysis and simulations of GDOP in the location of GPS. Proc SPIE 6795:679572
4. Xiu JJ, He Y, Wang GH, Xiu JH (2005) Constellation of multisensors in bearing-only location system. IEE Proc-Radar Sonar Navig 152(3):215–218
5. Trasi K (2007) Localization algorithms for passive targets in radar networks. Master thesis, The University of Texas at Arlington
6. Beck A, Stoica P, Li J (2008) Exact and approximate solutions of source localization problems. IEEE Trans Signal Process 56(5):1770–1778
7. Julier S, Uhlmann J, Durrant-Whyte H (1995) A new approach for filtering nonlinear systems. In: Proceedings of the American control conference, Seattle, WA, pp 1628–1632
8. Wan EA, van der Merwe R, Nelson AT (2000) Dual estimation and the unscented transformation. Adv Neural Inf Process Syst 12:666–672
9. Julier SJ, Uhlmann J, Durrant-Whyte HF (2000) A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans Autom Control 45(3):477–482
10. Lefebvre T, Bruyninckx H, De-Schutter J (2002) Comment on "A new method for the nonlinear transformation of means and covariances in filters and estimators". IEEE Trans Autom Control 47(8)
11. Julier SJ (2002) The scaled unscented transformation. In: Proceedings of the American control conference, Anchorage, AK, 8–10 May 2002
12. Julier SJ (2003) The spherical simplex unscented transformation. In: Proceedings of the American control conference, Denver, CO, 4–6 June 2003
13. St-Pierre M, Gingras D (2004) Comparison between the unscented Kalman filter and the extended Kalman filter for the position estimation module of an integrated navigation information system. In: 2004 IEEE intelligent vehicles symposium, University of Parma, Parma, Italy, 14–17 June 2004

A Static Method for Stack Overflow Detection Based on SPARC V8 Architecture Tao Zhang(&), Rui Zhang, Ruijun Li, Yanfang Fan, and Hongjing Cheng Beijing Institute of Spacecraft System Engineering, Beijing, China [email protected]

Abstract. With the rapid development of space technology, on-board software plays an increasingly important role in spacecraft. The stack is an important storage resource for on-board software. If the allocated stack space is not large enough, stack overflow may occur and crash the software. Based on the SPARC V8 architecture, this paper introduces a static method for the detection of stack overflow. This method does not need to run the on-board software dynamically or to design complex test cases. By directly analyzing the assembly file generated by the compiler, the stack usage and the call relationships of functions can be obtained. Taking the entry function of each task as the starting point of the stack depth analysis, the function call paths are traversed with a stack data structure, and the maximum stack depth of each task is finally calculated. An instance of task stack detection shows that by analyzing the static assembly file, the maximum stack depth can be obtained directly, the risk of stack overflow can be avoided, and the reliability and security of on-board software can be improved.

Keywords: Static method · SPARC V8 · On-board software · Stack used depth · Stack overflow

1 Introduction SPARC V8 (Scalable Processor ARChitecture Version 8) is a CPU instruction set architecture (ISA), derived from a reduced instruction set computer (RISC) lineage. It was designed as a target for optimizing compilers and easily pipelined hardware implementations. The processors based on SPARC V8 are widely used in the field of aerospace. With the support of RTEMS operating system, on-board software runs stably on SPARC V8-based processors and completes the telemetry, telecommand, attitude-orbit control and self-management functions [1]. The on-board software is usually classified as the key level software which is very critical to the satellite. Onboard software functions are coordinated by different tasks. Each task has its own independent stack space. The stack not only stores the parameters and return address of the calling function, but also stores the local variables of the function. If the stack allocation is too much, it will cause serious waste of space resources, and if the stack space allocation is not enough, the stack overflow may occur during the software running process which will lead to software crash and threaten the safety of spacecraft.



Stack depth detection methods include static methods and dynamic methods [2]. Static detection methods use software tools such as Stack Analyzer for automated analysis. Stack Analyzer is commercial software and its source code is not available to the user, so the tool is not applicable in the aerospace and military industries. The RTInsight tool of Shanghai Chuangjing Science and Technology Company is based on a dynamic detection method. It requires special hardware tools to support the test and places specific requirements on the hardware interface of the embedded system. In the meantime, if the design of the test cases is not reasonable, the maximum stack usage cannot be obtained and stack overflow may still occur while the software is running. To solve these problems, this paper analyzes the characteristics of register windows and stack allocation in the SPARC V8 architecture and proposes a static overflow detection method. The maximum stack depth can be obtained directly by analyzing the static assembly file. The method is also applicable to the stack overflow detection of on-board software running on SPARC-V7-based processors.

2 Introduction of SPARC V8

2.1 Register Windows

The greatest feature of the SPARC V8 embedded processor is its register windows [3]. A processor such as the BM3803 has eight sets of window registers, each consisting of eight input registers (%i0–%i7), eight local registers (%l0–%l7) and eight output registers (%o0–%o7), plus eight global registers (%g0–%g7). The in and out registers are used primarily for passing parameters to subroutines and receiving results from them, and for keeping track of the memory stack. When a procedure is called, the caller's outs become the callee's ins. Each window shares its ins and outs with the two adjacent windows. Figure 1 shows the overlapping of windows.


Fig. 1. Overlapping of windows



There are two registers related to stack management. One of a procedure's out registers, %o6, is used as its stack pointer %sp. It points to an area in which software can store %l0–%l7 and %i0–%i7 when the register window overflows, and it is used to address most values located on the stack. The input register %i6 is used as the frame pointer %fp; addressable automatic variables on the stack are addressed with negative offsets relative to %fp. The caller's stack pointer %sp (%o6) automatically becomes the current procedure's frame pointer %fp (%i6) when the SAVE instruction is executed.

2.2 Stack Management

The stack allocation is accomplished through the "SAVE %sp, -NUM, %sp" instruction in the SPARC V8 architecture. The SAVE instruction can be used to atomically allocate a new window in the register file and new software stack space in memory. The size of the new stack allocation is NUM bytes. The stack is delimited by the frame pointer and the stack pointer, and it grows from high addresses toward low addresses. The stack space occupied by each function is called a stack frame [4]. For example, if function A calls function B, and function B then calls function C, the stack allocation is as shown in Fig. 2.

Fig. 2. The user stack frame: space (if needed) for automatic arrays; space dynamically allocated via alloca(), if any; space (if needed) for compiler temporaries and saved floating-point registers; outgoing parameters past the sixth, if any; 6 words into which the callee may store register arguments; a one-word hidden parameter (the address at which the callee should store an aggregate return value); and 16 words in which to save the register window (in and local registers). The stack grows toward decreasing memory addresses.


Fig. 3. The task stack

With the support of the real-time operating system, the functions of on-board software such as telecommand and telemetry services are usually implemented by several tasks. Every task has its own stack. Figure 3 shows the task stack allocation. When the stack of one task overflows, it might overwrite the stack of another task, which may cause a software crash.

3 Static Method of Stack Overflow Detection

3.1 Principle Introduction

Considering the stack overflow detection problems discussed above, this paper proposes a static method which detects the stack depth and overflows based on the SPARC V8 architecture. The detection principle is:

(a) Analyze the assembly code produced by GCC, search for all functions and calculate the stack depth of every function.
(b) According to the specified on-board software task entry functions, parse out the function call relationships for each task.
(c) Traverse all paths of the function call relationships and calculate the stack depth on each path. Once the stack is found to be using more stack space than was allocated when the task was created, a stack overflow error is reported and the function call path that caused the stack overflow is logged.

The analysis ends up with the maximum stack depth and a record of all function call paths that caused a stack overflow. Figure 4 shows the flowchart of the static stack overflow detection method.


Fig. 4. The flowchart of the static stack overflow detection method

3.2 Analysis of Function Stack Usage

The analysis of function stack usage is done by parsing the assembly code. To adjust the top and bottom stack pointers, the compiler generates a special piece of assembly code from which we can get the stack size of each function. Figure 5 shows the assembly code of TASK1. The stack allocation is accomplished through the "save %sp, -104, %sp" instruction, and the allocated stack size is 104 bytes. The destruction of the stack is done by the "restore" instruction. Through a regular expression matching the function name and the "save" instruction, the stack size of each function can be obtained easily as the underlying data for stack overflow detection.

4000e2cc <TASK1>:
4000e2cc:  9d e3 bf 98   save %sp, -104, %sp
……
4000e2f4:  7f ff e1 c1   call 400069f8
4000e2f8:  d6 34 20 02   sth %o3, [ %l0 + 2 ]
4000e2fc:  d0 34 20 3e   sth %o0, [ %l0 + 0x3e ]
4000e300:  01 00 00 00   nop
4000e304:  81 c7 e0 08   ret
4000e308:  81 e8 00 00   restore

Fig. 5. Example for stack allocation of the TASK1 function
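The extraction of per-function stack sizes described above can be sketched as follows; the regular expressions, symbol-line format and function names are assumptions based on objdump-style listings such as Fig. 5, not the exact tool used by the authors.

import re

# Match a symbol line such as "4000e2cc <TASK1>:" and, within a function,
# a "save %sp, -NUM, %sp" instruction; NUM is the stack frame size in bytes.
SYM_RE = re.compile(r'^[0-9a-f]+\s+<(?P<name>[\w.]+)>:')
SAVE_RE = re.compile(r'\bsave\s+%sp,\s*-(?P<size>\d+),\s*%sp')

def stack_sizes(disassembly_lines):
    # Return {function_name: stack_frame_size_in_bytes}.
    sizes, current = {}, None
    for line in disassembly_lines:
        m = SYM_RE.match(line)
        if m:
            current = m.group('name')
            sizes.setdefault(current, 0)      # leaf functions without SAVE use 0
            continue
        m = SAVE_RE.search(line)
        if m and current is not None:
            sizes[current] = max(sizes[current], int(m.group('size')))
    return sizes

demo = [
    "4000e2cc <TASK1>:",
    "4000e2cc:  9d e3 bf 98   save %sp, -104, %sp",
    "4000e304:  81 c7 e0 08   ret",
]
print(stack_sizes(demo))   # {'TASK1': 104}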

3.3 Analysis of Function Call Relationships

The analysis of function call relationships is also done by parsing the assembly code. The functionality of on-board software is accomplished by the cooperation of different tasks. The entry function of each task acts as the root node, and each called function acts as a tree node. Figure 6 shows the compiled assembly code and the corresponding function call relationship diagram. A function call is made by the "call" instruction in the assembly code, and the function return is made by the "ret" instruction. After the entry function of each task is specified, the tracing and analysis of the function call relationships can be solved easily through regular expressions matching the function names and the "call" and "ret" instructions.

Fig. 6. Example of function call relationship
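Building the call graph from the "call" instructions can be sketched as follows; the target-resolution strategy (matching call addresses against the function symbols in the listing) and the handling of indirect or external calls are assumptions, not the authors' exact procedure.

import re

SYM_RE = re.compile(r'^(?P<addr>[0-9a-f]+)\s+<(?P<name>[\w.]+)>:')
CALL_RE = re.compile(r'\bcall\s+(?P<target>[0-9a-f]+)')

def call_graph(disassembly_lines):
    # Return {caller: set(callees)} by recording 'call <address>' instructions
    # and resolving the addresses against the function symbols in the listing.
    addr_to_name, order = {}, []
    for line in disassembly_lines:            # first pass: function start addresses
        m = SYM_RE.match(line)
        if m:
            addr_to_name[m.group('addr')] = m.group('name')
            order.append(m.group('name'))
    graph, current = {name: set() for name in order}, None
    for line in disassembly_lines:            # second pass: call edges
        m = SYM_RE.match(line)
        if m:
            current = m.group('name')
            continue
        m = CALL_RE.search(line)
        if m and current is not None:
            callee = addr_to_name.get(m.group('target'))
            if callee:                        # ignore calls outside the listing
                graph[current].add(callee)
    return graph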


3.4 Implementation of Stack Overflow Detection Algorithm

Establish a stack data structure, and push and pop the related function information when a function is called and when it returns. Figure 7 shows the algorithmic flow of the stack overflow detection.

Fig. 7. Flow of stack overflow detection algorithm

636

T. Zhang et al.

(a) The entry function of the task is pushed onto the stack first, and then the assembly code is traversed. When a "call" instruction is matched, the function name after the "call" instruction is taken out, the related information of that function is pushed onto the stack, and the current stack depth is calculated. The analysis then jumps to the code location of the called function and continues to match "call" instructions.
(b) If a "ret" instruction is matched, the function is popped from the top of the stack and the current stack depth is recalculated. The analysis returns to the code location of the calling function and continues to match.
(c) If a "ret" instruction is matched and the stack is empty, the analysis is complete.
(d) The current stack depth is monitored during the entire traversal to check whether the stack uses more space than was allocated when the task was created; if so, the current stack size and the overflow information are recorded.
(e) All push and pop operations can be printed out during the analysis to obtain the complete function call relationships and the stack overflow detection information.

A simplified sketch of this traversal is given below.
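The sketch below walks every call path from a task entry function with an explicit path stack, accumulating the frame sizes collected earlier and flagging paths that exceed the task's allocation. The recursion guard, the helper names and the example numbers are assumptions for illustration; they are not the figures reported by the authors.

def max_stack_depth(entry, graph, frame, allocated):
    # Walk every call path from `entry`, tracking the accumulated frame sizes;
    # report paths whose depth exceeds `allocated` bytes.
    worst, overflows = 0, []
    path, depth = [entry], [frame.get(entry, 0)]
    def visit(func):
        nonlocal worst
        worst = max(worst, depth[-1])
        if depth[-1] > allocated:
            overflows.append((list(path), depth[-1]))
        for callee in sorted(graph.get(func, ())):
            if callee in path:                # skip recursion to avoid endless paths
                continue
            path.append(callee)
            depth.append(depth[-1] + frame.get(callee, 0))
            visit(callee)
            path.pop(); depth.pop()
    visit(entry)
    return worst, overflows

# Hypothetical data: a task with 10,752 bytes of allocated stack space.
frame = {"TASK_A": 104, "f": 6000, "g": 5700}
graph = {"TASK_A": {"f"}, "f": {"g"}, "g": set()}
print(max_stack_depth("TASK_A", graph, frame, allocated=10_752))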

4 Case Verification and Result Analysis

Taking the on-board software of an integrated electronic system as the test object, the stack usage of ten tasks is tested, and the test results are compared with those obtained using the RTInsight dynamic testing tool. The test results are shown in Table 1.

Table 1. A case for stack depth detection

TASK ID   Stack space/byte   RTInsight dynamic tool             Static method
                             Use depth/byte   Utilization/%     Use depth/byte   Utilization/%
TASK1     41,472             18,104           43.7              18,104           43.7
TASK2     10,752             5296             49.2              5296             49.2
TASK3     6656               996              15.0              996              15.0
TASK4     20,992             10,264           48.9              10,264           48.9
TASK5     102,912            20,892           20.3              21,004           20.3
TASK6     20,992             10,008           47.7              10,008           47.7
TASK7     20,992             9608             45.8              9608             45.8
TASK8     10,752             3692             34.3              3692             34.3
TASK9     10,752             3824             35.6              3824             35.6
TASK10    25,088             12,080           48.2              12,080           48.2

The test results show that the stack depth can be obtained by using the static method and the accuracy is 4 bytes. According to the requirement of software task, the stack space of some tasks can be adjusted to realize the reasonable allocation of stack.


At the same time, the stack used depth of TASK5 detected by the static method is larger than that detected by the dynamic testing tool in Table 1. This is because the RTInsight test cases for TASK5 did not trigger the deepest function call path. After modifying the test cases for TASK5, the detected stack depth is equal to that detected by the static method. In the above test case, TASK2 is selected to illustrate how a task stack overflow can be detected, by calling the following "fun_test" function:

void fun_test(void)
{
    int test_array[1380];
    test_array[1379] = 0;
    return;
}

Through the static stack overflow detection method, the maximum stack depth is calculated to be 10,816 bytes when the "fun_test" function is called. It exceeds the 10,752 bytes of stack space allocated to TASK2. The stack overflow is detected and reported.

5 Conclusion Based on the SPARC V8 architecture, a static method for stack overflow detection is proposed to solve the problem of reasonable stack resource allocation. The advantages of this method are: the principle is simple and easy to implement, and it does not need to run the on-board software dynamically or design complex test cases. The maximum stack usage depth for each task can be obtained by directly analyzing the assembly file generated by the compiler. This method has been applied to the test of spacecraft onboard software which can greatly improve the reliability and security of on-board software.

References
1. He X, Sun Y (2007) Engineering realization of software in central terminal unit of satellite data management system. Spacecr Eng 16(5):47–53
2. Yuan Y (2011) The research of static analysis for stack overflow of embedded software. Beijing Jiaotong University, Beijing
3. Beijing Microelectronics Technology Institute (2017) BM3803FMGRH 32-bit space processor user manual. Beijing Microelectronics Technology Institute, Beijing
4. SPARC International, Inc (1991) The SPARC architecture manual version 8. SPARC International, Inc., USA

Enhanced Double Threshold Based Energy Detection Omar Aitmesbah1(&) and Zhuoming Li1,2 1

School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China 2 Peng Cheng Laboratory, Shenzhen 518052, China [email protected]

Abstract. Over the past few decades, the use of wireless communication devices has increased exponentially and invaded all domains, and the limited amount of spectrum can cause overcrowding or interference in communication between different users. The implementation of cognitive radio in the network can successfully deal with the huge demand and the lack of wireless spectrum. Energy detection is a well-known method used in cognitive radio to sense the state of the primary user (PU). In this study, cooperative spectrum sensing together with a double threshold is used. Two thresholds are considered and compared with a test statistic, and accordingly a decision regarding the presence or absence of the primary user is made. The simulation results show that, under cooperative sensing, the proposed double threshold method performs better than both the energy detection and the conventional double threshold detection methods, considering different parameters such as SNR, number of samples, and probabilities of detection and false alarm.

Keywords: Cognitive radio · Cooperative spectrum sensing · Energy detection technique · Double threshold detection technique

1 Introduction

Cognitive radio (CR) improves spectrum efficiency by implementing dynamic spectrum access (DSA) [1], where cognitive users operate as secondary users (SUs) in frequency bands (FBs) originally allocated to primary users (PUs), in an attempt to enhance the exploitation of under-utilized FBs. Relevant standardization already exists, e.g. IEEE 802.22 and ECMA-392 [2]. Spectrum sensing has the fundamental role of avoiding interference to the primary user while opportunistically accessing the spectrum [3]. Several spectrum sensing techniques have been used in the literature, with different complexities and requirements on knowledge of the primary user signal [4], such as energy detection, feature-based detection, matched-filter-based detection, and eigenvalue-based techniques. Among the cited methods, energy detection is widely used due to its simplicity [5]. However, this method also has drawbacks: its performance decreases under low SNR, which is known as the noise uncertainty problem. As is well known, the aim of any spectrum sensing technique is to increase the detection performance. The detection performance


is determined by two parameters: the probability of miss detection and the probability of false alarm. The probability of miss detection is the probability of missing a primary user signal when it is actually present. The false alarm probability represents the probability of detecting the presence of a primary user signal when it is not actually present. Preferably, the probability of miss detection has to be minimized (equivalently, the probability of detection must be maximized), and the probability of false alarm should be as low as possible. However, several issues such as multipath fading, shadowing and noise uncertainty affect the detection performance. Therefore, cooperative spectrum sensing is used to enhance the detection performance through cooperation, i.e. each user shares its individual sensing information and a combined decision is made using the AND/OR rule, which is more accurate than the individual decisions of each CR [6–9]. In this work, a new algorithm for spectrum sensing based on double-threshold energy detection is proposed, which checks the presence or absence of the signal by comparing a test statistic Y(i), equal to the average of the squared energies of the received signal, with the thresholds. The idea is that when the received energy lies between the two thresholds, instead of simply re-sensing, we propose to increase the number of samples in order to get a more accurate energy estimate, which leads to better detection.

2 System Model

2.1 General Model

Let’s consider a cognitive radio network with N cognitive users which senses a particular spectrum independently to detect the presence of primary user. Then the signal received under hypothesis H1 and H0 i.e. when primary user is active and when user is inactive respectively, can be described as  yð k Þ ¼

nð k Þ sðk Þ þ nðkÞ

H0 H1

ð1Þ

where y(k) represents the received signal at the SU's receiver; s(k) and n(k) denote the PU signal and the additive noise, respectively; H_1 and H_0 stand for the hypotheses of the PU's presence and absence, respectively.

2.2 Energy Detection (Single Threshold Based Detection)

Traditional ED, also known as radiometry, is commonly used due to its low computational complexity and hardware simplicity [10, 11]. Additionally, ED, which requires no prior information about the PU, is more generic compared to many advanced spectrum sensing techniques. The signal is detected by comparing the measured energy with the threshold value, which is determined according to the assumed noise variance and desired false alarm probability (Fig. 1).


Fig. 1. Conventional energy detector

In conventional energy detection (CED), whether a target PU is present is determined by comparing the received energy with preset thresholds, which can be modeled as the binary hypothesis test described above. The test statistic of CED is given by

Y(i) = \frac{1}{N} \sum_{n=1}^{N} |y_i(n)|^2    (2)

where N denotes the number of samples. Expressions for the probability of detection and the probability of false alarm can be given as

P_d = P(Y > \lambda \mid H_1) = Q\!\left(\frac{\lambda - N(\delta_s^2 + \delta_w^2)}{\sqrt{2N(\delta_s^2 + \delta_w^2)^2}}\right)    (3)

P_{fa} = P(Y > \lambda \mid H_0) = Q\!\left(\frac{\lambda - N\delta_w^2}{\sqrt{2N\delta_w^4}}\right)    (4)

Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} \exp\!\left(-\frac{y^2}{2}\right) dy    (5)

and

\lambda = Q^{-1}(P_{fa})\sqrt{2N\delta_w^4} + N\delta_w^2    (6)

N = 2\left[\left(Q^{-1}(P_{fa}) - Q^{-1}(P_d)\right)\mathrm{SNR}^{-1} - Q^{-1}(P_d)\right]^2    (7)

where λ is the threshold to which the energy Y is compared to determine the presence or absence of the PU, and δ_w^2 and δ_s^2 are the noise and signal variances respectively. Based on cooperative spectrum sensing, the cooperative probability of false alarm Q_f and probability of detection Q_d can be obtained as follows:

Q_f = 1 - \prod_{i=1}^{M} \left(1 - P_{f,i}\right)    (8)

and

Q_d = 1 - \prod_{i=1}^{M} \left(1 - P_{d,i}\right)    (9)

where P_{d,i} and P_{f,i} are the probability of detection and the probability of false alarm of the i-th SU respectively, and M is the number of CRs participating in CSS.
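A small numeric sketch of Eqs. (3)–(9) is given below: it chooses λ from a target false alarm probability via Eq. (6), evaluates P_d from Eq. (3), and fuses M identical users with the OR rule of Eqs. (8)–(9). The parameter values are assumptions for illustration only.

import numpy as np
from scipy.stats import norm

Q, Qinv = norm.sf, norm.isf          # Gaussian tail function and its inverse

def single_user(N, snr_db, pfa, noise_var=1.0):
    # Threshold and detection probability of one energy detector, Eqs. (3)-(6).
    sig_var = noise_var * 10 ** (snr_db / 10.0)
    lam = Qinv(pfa) * np.sqrt(2 * N * noise_var ** 2) + N * noise_var        # Eq. (6)
    pd = Q((lam - N * (sig_var + noise_var)) /
           np.sqrt(2 * N * (sig_var + noise_var) ** 2))                      # Eq. (3)
    return lam, pd

def or_fusion(pd, pfa, M):
    # Cooperative OR-rule probabilities, Eqs. (8)-(9), for M identical SUs.
    return 1 - (1 - pd) ** M, 1 - (1 - pfa) ** M

lam, pd = single_user(N=1000, snr_db=-10, pfa=0.1)
print("lambda=%.1f  Pd=%.3f  (Qd, Qf)=%s" % (lam, pd, or_fusion(pd, 0.1, M=5)))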

2.3 Double Threshold

The conventional energy detector uses a single threshold for ascertaining the presence of the PU, so identifying an optimal value of λ is essential for reducing missed detections. Noise estimation errors may degrade the system performance, and under low SNR the traditional energy detector might not be able to distinguish the PU signal from noise. In order to improve the reliability of the decision, a double threshold scheme is implemented [12, 13]. The two thresholds λ_1 and λ_2 are illustrated in Fig. 2. Hypothesis H_1 is declared if Y is greater than λ_2, hypothesis H_0 is declared if Y is less than λ_1, and in case the observed energy Y lies between the two thresholds, λ_1 < Y < λ_2, no decision is taken and the CR senses again. The thresholds λ_1 and λ_2 can be found as

\lambda_1 = (1 - \rho)\lambda    (10)

and

\lambda_2 = (1 + \rho)\lambda    (11)

where ρ is the width of the uncertainty region.

Fig. 2. Double threshold based spectrum sensing

But the simple double threshold is still weak under low SNR; in the next section we will discuss the proposed algorithm, which aims to enhance the probability of detection under low SNR. For the double threshold detection method, the probabilities of detection and false alarm are given below:

P_d = P(Y > \lambda_2 \mid H_1) = Q\!\left(\frac{\lambda_2 - N(\delta_s^2 + \delta_w^2)}{\sqrt{2N(\delta_s^2 + \delta_w^2)^2}}\right)    (12)

P_{fa} = P(Y > \lambda_2 \mid H_0) = Q\!\left(\frac{\lambda_2 - N\delta_w^2}{\sqrt{2N\delta_w^4}}\right)    (13)

3 Proposed Algorithm

In our proposed algorithm, a method for re-sensing is used, so the decision regarding the presence or absence of the primary user can be taken even in the uncertainty (confusion) region. The two thresholds are calculated as shown in Fig. 3.

Fig. 3. Proposed flowchart
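The flow of Fig. 3 can be sketched as below: when the energy falls between λ_1 and λ_2, extra samples are added and the channel is sensed again, up to a bounded number of iterations. The values of ρ, the sample increment, the iteration cap, the Gaussian signal model and the fallback decision after the last iteration are assumptions here, not the paper's exact settings.

import numpy as np
from scipy.stats import norm

def energy(n, snr_db, pu_active, rng, noise_var=1.0):
    # Test statistic of Eq. (2) over n samples (sum form).
    sig_var = noise_var * 10 ** (snr_db / 10.0) if pu_active else 0.0
    y = rng.normal(scale=np.sqrt(sig_var + noise_var), size=n)
    return np.sum(y ** 2)

def proposed_decide(snr_db, pu_active, pfa=0.1, rho=0.2,
                    N=1000, step=500, max_iter=3, noise_var=1.0, rng=None):
    # Compare Y with lam1/lam2; if it falls in the uncertainty region, add
    # `step` samples and re-sense, up to `max_iter` times.
    rng = rng or np.random.default_rng()
    n = N
    for _ in range(max_iter + 1):
        lam = norm.isf(pfa) * np.sqrt(2 * n * noise_var ** 2) + n * noise_var   # Eq. (6) for n samples
        lam1, lam2 = (1 - rho) * lam, (1 + rho) * lam                           # Eqs. (10)-(11)
        Y = energy(n, snr_db, pu_active, rng)
        if Y > lam2:
            return 1                  # decide H1: PU present
        if Y < lam1:
            return 0                  # decide H0: PU absent
        n += step                     # still uncertain: enlarge the observation
    return int(Y > lam)               # assumed fallback: single threshold after the last iteration

rng = np.random.default_rng(0)
decisions = [proposed_decide(-10, pu_active=True, rng=rng) for _ in range(200)]
print("empirical Pd at -10 dB:", np.mean(decisions))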


4 Simulations and Results

In this section, we compare the conventional double threshold based detection with our proposed double threshold based detection using the performance metrics of probability of false alarm, probability of detection and SNR, under cooperative spectrum sensing. The 'OR' rule is used for cooperative spectrum sensing, and 5 secondary users are considered for simulation purposes.


Fig. 4. Receiver operating characteristic of Pf versus Pd for ED

Figure 4 illustrates the ROC curves of the three methods at distinct values of the probability of false alarm. For Pf between 0.1 and 1 and a fixed SNR of −10 dB, the curve of the proposed method shows a significant improvement compared with the double threshold method and the single threshold method. The number of samples is increased by 500 every time re-sensing is performed, and 3 iterations are allowed, after which the decision is taken. Figure 5 shows the variation of the probability of detection as a function of SNR in dB. The SNR value is varied from 0 to −20 dB over 1000 Monte Carlo simulations, and the probability of false alarm is 0.1. At −10 dB SNR the value of Pd for the proposed approach is 0.92, whereas it is 0.71 for the double threshold method and approximately 0.5 for the single threshold method.


Fig. 5. Receiver operating characteristic of SNR versus Pd for ED

5 Conclusion

In summary, a double threshold based detection method has been proposed which successfully raises the probability of detection. The 'OR' rule is used for the decision about the presence or absence of the primary user, and the three techniques, i.e. conventional double threshold based detection, single threshold based detection, and the proposed scheme, are compared using three main performance metrics: probability of detection, probability of false alarm, and SNR. Re-sensing is performed in case of uncertainty in the decision, i.e. if the test statistic lies between the two thresholds, which are taken above and below the uncertainty region of width Δ. Comparing the other methods with our results, we notice that the proposed method gives better performance than the other two methods, as well as more reliable decisions about the absence or presence of the primary user.

Acknowledgements. This work is supported by the project "The Verification Platform of Multi-tier Coverage Communication Network for Oceans (PCL2018KP002)".


References
1. Mitola J, Maguire GQ Jr (1999) Cognitive radio: making software radios more personal. IEEE Pers Commun 6(4):13–18
2. Kordali AV, Cottis PG (2017) A contract-based spectrum trading scheme for cognitive radio networks enabling hybrid access. IEEE Access 3:1531–1540
3. Axell E et al (2012) Spectrum sensing for cognitive radio: state-of-the-art and recent advances. IEEE Sig Process Mag 29(3):101–116
4. Ali A, Hamouda W (2017) Advances on spectrum sensing for cognitive radio networks: theory and applications. IEEE Commun Surv Tutorials 19(2):1277–1304
5. Patil VM, Patil SR (2016) A survey on spectrum sensing algorithms for cognitive radio. In: 2016 international conference on advances in human machine interaction (HMI). IEEE
6. Arjoune Y et al (2018) Spectrum sensing: enhanced energy detection technique based on noise measurement. In: 2018 IEEE 8th annual computing and communication workshop and conference (CCWC). IEEE
7. Medina EEA, Barbin SE (2018) Performance of spectrum sensing based on energy detection for cognitive radios. In: 2018 IEEE-APS topical conference on antennas and propagation in wireless communications (APWC). IEEE
8. Nasrallah A et al (2018) Energy detection with adaptive threshold for cognitive radio. In: 2018 international conference on communications and electrical engineering (ICCEE). IEEE
9. Sahu AK, Singh A, Nandakumar S (2018) Improved adaptive cooperative spectrum sensing in cognitive radio networks. In: 2018 2nd international conference on electronics, materials engineering & nano-technology (IEMENTech). IEEE
10. Ruttik K, Koufos K, Jäntti R (2009) Detection of unknown signals in a fading environment. IEEE Commun Lett 13(7):498–500
11. Herath SP, Rajatheva N, Tellambura C (2011) Energy detection of unknown signals in fading and diversity reception. IEEE Trans Commun 59(9):2443–2453
12. Liu X, Zhang C, Tan X (2014) Double-threshold cooperative detection for cognitive radio based on weighing. Wireless Commun Mob Comput 14(13):1231–1243
13. Charan C, Pandey R (2017) Double threshold based spectrum sensing technique using sample covariance matrix for cognitive radio networks. In: 2017 2nd international conference on communication systems, computing and IT applications (CSCITA). IEEE

Self-generating Topology Coloring Scheduling for Interference Mitigation in Wireless Body Area Networks Jiasong Mu1(&), Yunna Wei1, and Xiaorun Yang2 1

2

College of Physical and Electronic Information, Tianjin Normal University, Tianjin, China [email protected] College of Communication and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, China

Abstract. Wireless body area networks (WBANs) are a key means to provide real-time health monitoring. If WBANs are deployed in a dense environment without inter-WBAN coordination, network performance will be seriously reduced. It is therefore crucial to coordinate co-existing WBANs. Based on the idea of link resource allocation, a self-generating topology coloring scheduling is proposed in this paper. The experimental results show that the self-generating topology coloring scheduling effectively mitigates inter-WBAN interference. In mobile WBAN scenarios, low interference and high frequency utilization are always maintained.

Keywords: Inter-WBAN interference mitigation · Self-generating topology · Colouring scheduling

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 646–653, 2020 https://doi.org/10.1007/978-981-13-9409-6_76


Inter-WBAN interference usually occurs when multiple WBANs are adjacent to each other and transmit and receive information simultaneously without CPN scheduling. How to effectively suppress or eliminate this interference, thus enhancing the stability and robustness of the entire network, is an extremely challenging issue which has received extensive attention from researchers at home and abroad. Existing schemes for mitigating inter-WBAN interference are mainly based on link resource allocation and power control.

Fig. 1. The typical WBAN structure and WBAN interference

In link-resource-allocation based techniques, link resources are divided over the time domain, the frequency domain and other dimensions to avoid interference. Reference [2] proposed suppressing inter-WBAN interference by using a partially orthogonal channel allocation method, in which an interference list is generated for each WBAN coordinator. Although it improves the channel reuse rate, the generation of the list significantly increases the energy consumption of the nodes and reduces the lifetime of the WBAN. Reference [3] models the anti-interference scheduling problem of WBANs as a graph coloring problem, using a random incomplete coloring algorithm to map the spatial-reuse scheduling of dense sensor networks to a spatial-reuse coloring problem, improving spatial reusability with fast algorithm convergence. However, [3] can only be applied to a specific network topology and cannot reschedule in real time as the WBAN topology changes dynamically. In power-control based methods, inter-WBAN interference suppression is converted into maximizing the signal-to-interference ratio (SIR) while minimizing the transmission power. Reference [4] uses a game-theoretic algorithm and introduces social interaction, proposing a power control strategy based on social interaction information: a WBAN interference model is established, and on this basis the game controls the transmission power of the WBANs so that the system achieves the best performance while consuming the least energy.


In this paper, we use the link resource allocation idea to solve the interference problem of coexisting WBANs in dense environments, while also considering frequency utilization. A self-generating topology coloring scheduling is proposed. The algorithm automatically regenerates the topology map as the human bodies move, and coordinates the network frequency bands in real time based on network density, thereby suppressing interference between WBANs. The rest of this paper is arranged as follows: the second section introduces inter-WBAN interference and the resource scheduling scheme, the third section proposes the self-generating topology coloring scheduling, the fourth section presents the experimental analysis, and the fifth section concludes the paper.

2 The Inter-WBAN Interference and Resource Scheduling Scheme

2.1 Inter-WBAN Interference

With the widespread use of WBANs, inter-WBAN interference has become a huge challenge when there is no coordination between multiple WBANs. While receiving the useful information of its own network, a CPN is likely to also receive data sent by other WBANs on the same frequency, which constitutes interference. Therefore, interference suppression and resource scheduling between WBANs are very significant. Figure 2 shows the interference between three WBANs; each WBAN is represented by a circle of radius r, where the CPN is at the centre of the circle and the WSNs are randomly distributed within the circle. When the CPN of WBAN1 receives the information sent by its own WSNs, it is affected by the adjacent coexisting WBANs. For example, if the e-node in WBAN2 or the F-node in WBAN3 simultaneously transmits information at the same frequency as WBAN1, the CPN of WBAN1 receives a large amount of interference from the e-node and the F-node. Similarly, interference occurs in WBAN2 and WBAN3. This seriously affects network performance. Due to the mobility of WBANs, the presence of multiple WBANs within a CPN's receiving radius occurs from time to time; in addition, crowded places are typical scenes that generate inter-WBAN interference.

Fig. 2. The typical WBAN structure and WBAN interference


2.2 Inter-WBAN Interference Resource Scheduling Scheme

The idea of the inter-WBAN interference suppression scheduling scheme is to allocate channels in different frequency bands to WBANs that coexist in close proximity. Each WBAN is abstracted into a particle represented by its CPN, whose receiving radius is denoted by r. The above idea can then be restated as allocating different frequency bands to mutually adjacent points, where two points are adjacent if the distance between them is less than r. A set of adjacent or non-adjacent points forms a network topology map, and the problem of assigning frequency bands to the points of the topology map can be further abstracted as a vertex coloring problem. In fact, in sensor networks and MANETs, modelling the network as a graph and applying distributed graph coloring is a mature scheduling approach for suppressing interference [5–7]. Compared with sensor networks and MANETs, which focus on static or low-mobility scenarios, WBANs move faster and their topology changes more frequently. Therefore, it is necessary to study a coloring scheme suitable for WBANs. Reference [3] gives a theoretical proof and simulation verification of the feasibility of applying the graph coloring idea to WBANs. Based on the above ideas, this study proposes a self-generating topology coloring scheduling. Unlike other coloring schemes, the main contributions of this paper are as follows:

1. The network topology is generated from actual network location information. Previous research predicted a network interference model from historical data or social interaction information [4]; such predictions contain errors that can have unpredictable effects on a given WBAN.
2. The scheme remains adaptable when the network density changes suddenly. Due to the randomness of human movement, an originally empty area may become densely populated in a short time. The proposed scheme copes well with such scenarios and maintains a good frequency reuse rate.
3. To reduce latency and save energy, the idea of cloud processing is introduced. Running the main program in the cloud reduces the energy consumption of the WBAN nodes and prolongs the network lifetime.

3 Self-generating Topology Colouring Scheduling

The self-generating topology coloring scheduling is mainly composed of two parts: (1) real-time generation of the network topology, and (2) coloring of the network topology. The network topology map is the basis of the coloring, and the soundness of the coloring scheme has a direct impact on the correctness of the frequency allocation. However, the authors of [8, 9] and [10] pointed out that fast coloring and optimal spatial-reuse coloring cannot be achieved at the same time, which is a classical NP-hard problem. This study does not pursue the optimal spatial reuse rate; it is designed to generate a global solution that suppresses interference. The algorithm structure diagram is shown in Fig. 3.


Fig. 3. The algorithm structure diagram

The WBANs are responsible for collecting and transmitting real-time network location information and for receiving frequency allocation commands from the remote server. A mobile phone or smart bracelet acts as the CPN; its built-in position sensor collects the location of the network and transmits the location information to the remote server via the wireless access network. After processing by the remote server, the frequency allocation command is transmitted back to the CPN via the Internet, and the CPN coordinates its WSNs to operate in the allocated frequency band, achieving interference suppression. The remote server is responsible for generating the network topology and the coloring scheme, which is the key part of the self-generating topology coloring scheduling. First, the remote server generates a network topology map from the location information collected by the CPNs; this study uses an adjacency edge table, Graph, to represent the graph structure. Second, a vertex coloring algorithm is used to color the adjacency edge table Graph so that vertex i and its adjacent vertices j receive different colors, which meets the requirement that the other WBANs within the receiving radius of WBAN i use different frequency bands. The algorithm process of this part is shown in Fig. 4.
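The two server-side steps described above can be sketched as follows: build the adjacency table from the CPN positions with interference radius r, then greedily color the vertices so that neighbours receive different bands. The greedy ordering and data layout are assumptions for illustration, not the exact procedure of Fig. 4.

import numpy as np

def build_topology(positions, r):
    # Adjacency edge table: WBANs i and j are neighbours if their CPNs are
    # closer than the receiving (interference) radius r.
    n = len(positions)
    graph = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < r:
                graph[i].add(j)
                graph[j].add(i)
    return graph

def greedy_coloring(graph):
    # Assign each vertex the smallest band index not used by its neighbours.
    color = {}
    for v in sorted(graph, key=lambda v: -len(graph[v])):   # densest WBANs first
        used = {color[u] for u in graph[v] if u in color}
        color[v] = next(c for c in range(len(graph) + 1) if c not in used)
    return color

rng = np.random.default_rng(1)
positions = rng.uniform(0, 10, size=(100, 2))    # 100 CPNs in a 10 m x 10 m area
color = greedy_coloring(build_topology(positions, r=2.0))
print("bands used:", len(set(color.values())))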

4 Experimental Analysis

This section verifies the soundness of the self-generating topology coloring scheduling. Experiments demonstrate its feasibility in two respects: the anti-interference performance of the system and the frequency reuse rate under different body-distribution densities. The anti-interference performance of the system is measured by the signal-to-interference ratio (SIR); the larger the SIR, the better the anti-interference performance. The frequency reuse rate is measured by the number of colors used; the fewer colors used, the higher the color reuse between WBANs.


Fig. 4. Algorithm process of self-generating topology coloring scheduling

The simulation parameters of the experimental scene are set as follows: n CPN nodes are randomly distributed in a 10 m × 10 m square area, with n taking the values 50, 100, 150 and 200. The receiving radius (interference distance) r of the CPN is set to 2 m, and the sensor transmission distance r0 within a WBAN is set to 0.5 m. As can be seen from Fig. 5, as the deployment density of WBANs increases, the SIR tends to decrease but still remains stable. The distance between co-frequency networks affects the SIR exponentially, while the number of co-frequency nodes affects the SIR only linearly; compared with the effect of distance, the influence of the number of co-frequency networks on the SIR is negligible. Therefore, the increase in the number of co-frequency nodes has a relatively small impact on the average SIR. The experiment shows that the SIR remains stable under irregular node distributions and topologies, which meets the stability requirement of the system. As can be seen from Fig. 6, as the WBAN deployment density increases, the number of colors used increases significantly. This is because the denser the WBAN distribution, the more coexisting WBANs fall within the interference distance; to ensure that adjacent WBANs coexist without interference, these WBANs must occupy different frequency bands, resulting in an increase in the number


Fig. 5. SIR under different density conditions

of colors. However, even if the number of WBANs increases to 200, the number of bands used still fluctuates around 20. The experimental results show that the self-generating topology coloring scheduling still achieves a good frequency resource reuse rate when the network deployment density is very high.

Fig. 6. Number of bands used under different density conditions

5 Conclusion

In this paper, a self-generating topology coloring scheduling is proposed, which realizes link resource allocation for interference suppression between WBANs. Different from traditional coloring schemes that work on pre-set network topology maps, the algorithm automatically regenerates the topology as the human bodies move. In addition, it coordinates the network frequency bands in real time based on network density. The experimental results show that the proposed self-generating topology coloring scheduling maintains the stability of the network and a high frequency reuse rate under a varying network topology, while taking into account the time


complexity. The impact of the algorithm on network throughput and node energy consumption will be further studied in the future.

References
1. Martelli F, Buratti C, Verdone R (2011) On the performance of an IEEE 802.15.6 wireless body area network. In: 17th European wireless 2011 — sustainable wireless technologies, VDE, pp 1–6
2. Zhang Z et al (2013) Interference mitigation for cyber-physical wireless body area network system using social networks. IEEE Trans Emerg Top Comput 1(1):121–132
3. Cheng SH, Huang CY (2013) Coloring-based inter-WBAN scheduling for mobile wireless body area networks. IEEE Trans Parallel Distrib Syst 24(2):250–259
4. Movassaghi S, Abolhasan M, Smith D (2014) Smart spectrum allocation for interference mitigation in wireless body area networks. In: 2014 IEEE international conference on communications (ICC). IEEE, pp 5688–5692
5. Gandham S, Dawande M, Prakash R (2008) Link scheduling in wireless sensor networks: distributed edge-coloring revisited. J Parallel Distrib Comput 1122–1134
6. Fonseca BJB Jr (2006) An augmented graph-based coloring scheme to derive upper bounds for the performance of distributed schedulers in CSMA-based mobile ad hoc networks. In: Proceedings of the international wireless communication and mobile computing conference, pp 761–766
7. Ramanathan S (1999) A unified framework and algorithm for channel assignment in wireless networks. Wireless Networks 81–94
8. Bjorklund A, Husfeldt T, Koivisto M (2009) Set partitioning via inclusion-exclusion. SIAM J Comput 546–563
9. Johansson Ö (1999) Simple distributed Δ+1-coloring of graphs. Inf Process Lett 229–232
10. Kothapalli K, Scheideler C, Onus M, Schindelhauer C (2006) Distributed coloring in O(√(log n)) bit rounds. In: Proceedings of the 20th international parallel and distributed processing symposium, p 10

Smart Parking and Recommendation System Under Fog Calculation

Jiasong Mu1, Yunna Wei1, and Xiaorun Yang2

1 College of Physical and Electronic Information, Tianjin Normal University, Tianjin, China
[email protected]
2 College of Communication and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, China

Abstract. Parking is becoming a major problem in urban traffic, and an intelligent parking system is an effective way to address it. An intelligent parking system requires highly accurate local information and strong real-time feedback, which poses challenges to traditional cloud computing. In this context, a fog computing framework based on the idea of "layered processing" is proposed. The fog layer sits close to the data source so that requests and information from users and terminals can be processed and answered by the system quickly. This paper designs a smart parking system with parking space and road condition information collection, road condition analysis, parking space reservation, road navigation and parking space guidance functions, and, based on the fog computing framework, introduces two typical application scenarios in the intelligent parking system: video detection of license plates and parking spaces, and driver parking space recommendations based on big data.

Keywords: Fog computing · Intelligent parking · Parking space recommendations

1 Introduction

High-speed cars push the boundaries of time and space, bringing closer interaction and more open traffic. At the same time, "parking difficulty" has become a new problem for people travelling and for urban management. The current parking mode is dominated by traditional parking, and the reasons for the difficulty are as follows. Firstly, the absolute number of parking spaces is insufficient, which leads to tension between the supply and demand of parking space in China; the growth in the number of parking spaces lags far behind the growth in vehicle ownership, which is the direct cause of this tension. Secondly, the parking system is not perfect, and information about parking spaces is difficult to transmit to drivers in a timely and effective way. The information is highly asymmetric, and the utilization rate of parking spaces is low.



The shortage of parking space puts pressure on urban traffic and encourages illegal parking behavior among citizens [1]. To address the difficulty of parking, smart parking systems have gradually come into view. The core of a smart parking system is Internet of Things technology, which applies vehicle identification, communication, mobile terminal, positioning and GIS (Geographic Information System) technologies to complete the collection, management, inquiry, reservation and navigation services for urban parking spaces. The system senses the status of parking spaces through sensors, chooses an appropriate communication mode for each scene to transmit the status information to the cloud platform, and the user acquires parking space information through the cloud platform [2]. However, the number of vehicles and the amount of parking information in such a system are large and complex, and the real-time requirements are high. These problems pose challenges to cloud computing for intelligent parking. Firstly, because of the huge number of parking areas and smart cars in cities, the amount of data collected by parking sensors and vehicle terminals is enormous. These data are uploaded to a distant cloud center, where the processing is computationally light but highly repetitive; this occupies cloud resources and network bandwidth. Secondly, the data uploaded to the cloud center must be processed and sent back to the car owners looking for a parking space, which introduces delay and impairs the effectiveness of the information. Finally, data are transmitted from terminals such as sensors to data centers over a long network path, during which the information of car owners and parking lots is exposed to hackers and is vulnerable to hijacking, so security is poor. In this context, fog computing came into being and provides strong support for improving the performance of smart parking systems. Fog computing makes full use of the computing, communication and storage capacity of fog nodes distributed in the network and shares the workload of the cloud; fog nodes perform the tasks that are computationally light and highly repetitive. As a result, the communication data volume and delay of the system are reduced; at the same time, the network path is shortened and information security is improved.

2 Fog Computing

To make the smart parking system develop in a more intelligent direction, it is necessary to make full use of the various communication devices, such as routers and gateways, distributed in the system. They are capable of communication, storage, calculation and control, and can adopt a pattern of cooperative processing and hierarchical control to meet the high real-time requirements of intelligent parking. This is typical fog computing. The concept of fog computing was first proposed by Cisco in 2011 and has been echoed by industry. The OpenFog Consortium was established in November 2015 by IoT leaders such as ARM, Cisco, Dell, Intel, Microsoft and Princeton University.


According to the OpenFog Alliance definition [3, 4], fog computing is a system architecture that uses resources from the cloud to the things (vertical direction) and across different networks (horizontal direction) to perform computation, storage, control, communication and the configuration of acceleration resources, which improves execution efficiency. Fog computing is a supplement to cloud computing. Compared with cloud computing, it relies mainly on distributed computing resources closer to local devices rather than on remote servers in a central location, so that computing resources are used efficiently. It introduces a fog layer and a core network layer on top of the three-layer cloud computing architecture, giving five layers: the end user layer, the access network layer, the fog layer, the core network layer and the cloud layer. The fog layer is the core of fog computing. It is equipped with fog nodes, e.g. border routers and intelligent gateways, which are close to the user and have computing, communication and storage capability. According to their deployment locations and functions, fog nodes can be divided into three categories: fog edge nodes, micro fog and fog servers [5]. The fog edge node is an indispensable part of the fog layer and is the fog node closest to the terminal device. It can perform simple calculation and analysis, store and transmit data, and it connects directly with terminal equipment to process the simple data the terminals send. Compared with the fog edge node, the fog server has stronger computing and storage capacity and can handle more complex instructions; it connects with the cloud server through the core network layer, and requests beyond the processing capacity of the fog server are sent on to the cloud data center. The micro fog sits between the fog edge node and the fog server and plays the role of middleware. In relatively simple fog-layer structures, fog edge nodes process the data and return the results. In more complex structures, the computing power of the fog edge node alone cannot meet the requirements of the system, so pre-analyzed data must be transmitted upward to the fog server for further processing; for even more complex systems, micro fog is required [6]. The three types of fog nodes are combined to handle both trivial and complex data tasks and to share the workload of the cloud server, thereby reducing transmission delay, shortening the network path and improving security [7].
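To make the division of labor among these node types concrete, the following purely illustrative sketch escalates a task up the hierarchy until some node has enough capacity to process it; the class names, capacity numbers and Task fields are assumptions and not part of the paper.

```python
# Illustrative sketch only: escalate a task up the fog hierarchy described above
# (fog edge node -> micro fog -> fog server -> cloud) until a node has enough
# capacity to process it. The capacity numbers and Task fields are assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    load: float          # abstract computational load required

class Node:
    def __init__(self, name, capacity, upstream=None):
        self.name, self.capacity, self.upstream = name, capacity, upstream

    def handle(self, task):
        if task.load <= self.capacity:
            return f"{task.name} processed at {self.name}"
        if self.upstream is None:
            return f"{task.name} rejected: no node can process it"
        return self.upstream.handle(task)   # forward beyond local capability

cloud      = Node("cloud data center", capacity=float("inf"))
fog_server = Node("fog server", capacity=100.0, upstream=cloud)
micro_fog  = Node("micro fog", capacity=10.0, upstream=fog_server)
fog_edge   = Node("fog edge node", capacity=1.0, upstream=micro_fog)

print(fog_edge.handle(Task("parse sensor reading", 0.5)))   # stays at the edge
print(fog_edge.handle(Task("train demand model", 500.0)))   # escalates to the cloud
```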

3 Fog Computing Architecture of Smart Parking System

3.1 Hierarchical Framework of Cloud Computing for Smart Parking Systems

Figure 1 shows the hierarchical structure of fog computing in the smart parking system.


(Figure 1 layers, top to bottom: cloud layer; core network layer; fog layer with fog server, micro fog and fog edge node; access network; end user layer.)

Fig. 1. A hierarchical structure diagram of smart parking systems

(1) End user layer: In the smart parking system, the end user layer mainly consists of smartphones, cameras, sensors, on-board terminals and other devices. Cameras and sensors collect traffic congestion and parking space usage conditions; smartphones and on-board terminals send location information and reservation requests, and these data are transmitted to the fog nodes for calculation and processing. In addition, some data carrying decision or guidance information are transmitted back to the terminals over the network path, e.g. a smartphone receiving parking reservation information and road information. The terminal equipment is therefore not only a data producer but also a data consumer [8].
(2) Access network: Because parking environments differ, the access network takes various forms depending on construction cost.
(a) Where the infrastructure is relatively complete, a wired network serves as the access network of the system. For example, the video monitoring systems deployed along roads and around parking lots can transmit the original video or images collected by the cameras over wired links to fog nodes such as gateways or routers, where further information extraction takes place.


(b) Where the wired network does not provide full coverage, wireless networking is the best choice for the access network because of its advantages in deployment cost and system scalability, e.g. ZigBee, Wi-Fi, Bluetooth and other local networking methods.
(c) Where parking spaces are scarce and the construction cost of a wired network or a wireless local area network is too high, a WAN (Wide Area Network) is used as the access network, and the sensor data are transmitted to the fog node through operator base stations.
(3) Fog layer: The fog layer is composed of fog nodes with computing, storage and transmission capability. Its existence is very important for improving the performance of the smart parking system [9].
(a) The fog layer has computing power and takes over most of the computation that would otherwise run in the cloud. Taking advantage of the large number of fog nodes and their proximity to data sources, the system processes data and requests near the information source, improving real-time performance and information security. After image and video information is processed by the fog layer, the amount of extracted data is far smaller than the images or video themselves, which is more conducive to further transmission to the cloud or the terminal.
(b) The fog layer has storage capacity. Data in the cloud can be backed up in fog-layer data centers for the smart parking system [10]. When the system receives a driver's parking request, the fog node can make decisions directly using the data held locally, avoiding the network delay of requesting such information from the cloud, as sketched after this list.
(4) Cloud layer: The cloud servers involved in the system include the map service provider cloud, the urban traffic cloud, the parking provider cloud and the smart parking cloud. These clouds sink part of their functions into the fog layer so that regional information is processed in real time, which is commonly known as cloud offloading, i.e. offloading some of the capabilities of the cloud to the fog.
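The sketch below illustrates the local decision path described in (3)(b): the fog node first answers a parking request from its locally cached parking-space data and only falls back to the cloud on a miss. The cache contents and the query_cloud() placeholder are assumptions made only for illustration.

```python
# Sketch of the decision in (3)(b): a fog node answers a parking request from
# its locally cached parking-space data and only queries the cloud on a miss.
# The cache contents and query_cloud() stand-in are illustrative assumptions.
local_cache = {"zone-A": ["A-12", "A-15"], "zone-B": []}

def query_cloud(zone):
    # placeholder for a (slow) round trip to the smart parking cloud
    return ["B-03"]

def find_free_space(zone):
    spaces = local_cache.get(zone)
    if spaces:                      # cache hit: no network delay to the cloud
        return spaces[0]
    spaces = query_cloud(zone)      # cache miss: fall back to the cloud
    local_cache[zone] = spaces      # refresh the fog-layer backup
    return spaces[0] if spaces else None

print(find_free_space("zone-A"))    # served directly by the fog node
print(find_free_space("zone-B"))    # requires the cloud
```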

3.2 Cloud Computing Architecture for Smart Parking Systems

In the fog computing framework of the smart parking system, layered processing and coordinated cooperation are the most important features. As shown in Fig. 2, the fog layer in the system can be roughly divided into two categories. First, the fog layers near the terminal equipment, e.g. the road fog layer and the parking lot fog layer, are composed of fog edge nodes and collect and process simple information about the road and the parking lot. Second, the regional fog layers, e.g. urban traffic fog, parking provider fog, map service provider fog and smart parking fog, are composed of fog servers and represent all road, parking, map and algorithm scheduling information within the region. The fog layer is the core part of system operation and scheduling. In the road traffic subsystem, fog nodes in the road fog layer collect information and perform local processing. Traffic analysis is carried out in the urban traffic fog, e.g. determining the traffic condition on a given road in a certain period, and the analysis results are recorded in the urban traffic cloud.


At the same time, the urban traffic fog monitors traffic conditions within its range in real time, provides local traffic information, and runs allocation and scheduling algorithms that take users' needs into account.

(Figure 2 components: urban traffic cloud, smart parking cloud, map service provider cloud and parking provider cloud at the top; urban traffic fog, map service provider fog, smart parking fog and parking provider fog in the regional fog layer; the road fog layer with its sensors in the road traffic subsystem and the parking lot fog layer in the parking subsystem.)

Fig. 2. The fog computing architecture of smart parking system

In the parking subsystem, parking information collected in various environments is aggregated in the parking provider fog layer and uploaded to the smart parking fog for its use. The parking provider fog holds the parking space information within its area. For parking spaces equipped with terminal actuators, the parking provider fog is also responsible for issuing control commands to the actuators through the parking lot fog layer, controlling the state of each space via its actuator. The information stored by the map service provider fog and the smart parking fog is offloaded from the corresponding clouds. Map information and navigation services for the area are stored in the map service fog, and the parking information provided by parking providers is integrated and displayed on the map. A data distribution and scheduling algorithm in the smart parking fog provides users with convenient and accurate parking guidance. Besides vertical transmission between fog layers, data can also be transmitted horizontally between fog layers at the same level.


As shown in Fig. 2, the smart parking fog is the core of the fog layer: it can call on information from the other fog layers to satisfy a parking request. The parking provider fog supplies parking space information, the urban traffic fog supplies road conditions and traffic information, and the map service fog supplies road information. This information is processed in the smart parking fog, which provides parking solutions to the user and guides the user to park reasonably, improving the utilization of parking resources and increasing the income of parking lots. In addition, the fog layer communicates with the cloud through the core network. The smart parking fog uploads each driver's parking history to the cloud, where deep learning is used to train demand models for different driver groups; the trained models are returned to the smart parking fog, which uses them to further improve its operational efficiency. The cloud has a larger database and more computing power to process these data, training on historical user data, traffic conditions and historical parking space usage to mine information for future predictions, and the smart parking system can allocate resources in advance according to these predictions. Furthermore, based on historical parking space usage in a region, areas where parking spaces will be saturated are predicted, and resource allocation is adjusted by market means: the parking fee is raised in saturated areas and reduced in unsaturated areas, guiding owners to make full use of parking resources.
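As a toy illustration of this market-based adjustment (not part of the paper), the following sketch raises the fee where predicted occupancy is near saturation and lowers it elsewhere; the thresholds and the 10% step are assumptions.

```python
# Sketch of the market-based adjustment described above: raise the parking fee
# where predicted occupancy is near saturation and lower it elsewhere.
# The 0.9/0.5 thresholds and the 10% step are illustrative assumptions.
def adjust_price(base_price, predicted_occupancy):
    if predicted_occupancy >= 0.9:      # predicted saturated area
        return base_price * 1.10
    if predicted_occupancy <= 0.5:      # under-used area
        return base_price * 0.90
    return base_price

for zone, occ in [("downtown", 0.95), ("suburb", 0.35), ("campus", 0.7)]:
    print(zone, round(adjust_price(5.0, occ), 2))
```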

3.3 Key Technologies of Fog Calculation in Smart Parking System

(1) Parking lot video detection technology. Video equipment installed at the entrances and exits records the vehicle license plate number and the entry and exit times to charge vehicles automatically by time. The image acquisition unit is triggered when the vehicle detection algorithm detects an arriving vehicle and captures the current video frame. The license plate recognition unit processes the image, locates the license plate, segments the characters on the plate for identification, and outputs the plate number. When fog computing is applied to parking lot license plate recognition, the algorithm can be split into several parts placed on different fog nodes: the fog edge node stores the image preprocessing algorithm, while the parking lot fog server node stores the feature extraction algorithm that identifies the license plate number.
(2) Personalized parking space recommendation. An intelligent scheduling model in the fog layer of the smart parking system analyzes each driver's historical parking behavior and considers the different parking demands of different drivers in order to recommend the most suitable parking space. For example, an office worker in a hurry prefers a parking space that can be reached quickly, whereas someone going shopping is more inclined to choose a space within a short walking distance.


The algorithm running on the central fog node can fully meet the particular needs of different drivers while satisfying the basic parking requirements of users, embodying the intelligence of the smart parking system and improving user satisfaction.
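A minimal sketch of such profile-weighted recommendation is given below; the profiles, weights and candidate tuples are illustrative assumptions, not the model in the paper.

```python
# Sketch of profile-weighted parking-space recommendation: each candidate is
# scored by a weighted cost of time-to-park and walking distance, with the
# weights depending on the driver profile. All numbers are illustrative.
profiles = {
    "office_worker": {"time": 0.8, "walk": 0.2},   # in a hurry to park
    "shopper":       {"time": 0.2, "walk": 0.8},   # wants a short walk
}

def recommend(candidates, profile):
    w = profiles[profile]
    # candidates: list of (space_id, minutes_to_park, walking_distance_m)
    return min(candidates, key=lambda c: w["time"] * c[1] + w["walk"] * c[2] / 100)

spaces = [("P1", 2, 600), ("P2", 6, 150), ("P3", 4, 300)]
print(recommend(spaces, "office_worker"))   # favours the quickly reached space
print(recommend(spaces, "shopper"))         # favours the short walk
```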

4 Conclusions

Fog computing is a new generation of distributed networking that makes full use of resources from the cloud to the fog to perform data analysis and information transmission; it is an extension and supplement of cloud computing. A smart parking system can use fog computing to improve the real-time performance of parking guidance services and help drivers park more quickly and conveniently.

References
1. Truong N, Cao Q, Um T, Lee G (2016) Leverage a trust service platform for data usage control in smart city. In: Proceedings of IEEE global communications conference. IEEE Press, Piscataway, Dec 2016, pp 4–8
2. Satpathy S, Sahoo B, Turuk A (2018) Sensing and actuation as a service delivery model in Cloud edge centric internet of things. Future Gener Comput Syst 86:281–296
3. Chiang M, Zhang T (2017) Fog and IoT: an overview of research opportunities. IEEE Internet Things J 3:854–864. https://doi.org/10.1109/JIOT.2016.2584538
4. Bonomi F, Milito R, Zhu J, Addepalli S (2012) Fog computing and its role in the internet of things. In: Proceedings of the first edition of the MCC workshop on mobile cloud computing, Aug 2012, pp 13–16
5. Li J, Jin J, Yuan D, Palaniswami M, Moessner K (2015) EHOPES: data-centered fog platform for smart living. In: Proceedings of international telecommunication networks and applications conference. IEEE Press, Piscataway, Nov 2015, pp 18–20
6. Jutila M (2016) An adaptive edge router enabling internet of things. IEEE Internet Things J 3:1061–1069
7. Lopez P, Montresor A, Epema D, Datta A (2015) Edge-centric computing: vision and challenges. ACM Sigcomm Comput Commun Rev 45:37–42
8. Yi S, Hao Z, Qin Z, Li Q (2015) Fog computing: platform and applications. In: Proceedings of hot topics in web systems and technologies. IEEE Press, Piscataway, Nov 2015, pp 12–13
9. Bhatt V, Brutti A, Burns M, Frascella A (2017) An approach to provide shared architectural principles for interoperable smart cities. In: Proceedings of international conference on computational science and its applications. Springer Press, New York, July 2017, pp 415–426
10. Shwe H, Jet T, Chong P (2016) An IoT-oriented data storage framework in smart city applications. In: Proceedings of international conference on information and communication technology convergence. IEEE Press, Piscataway, Oct 2016, pp 19–21

Speech Synthesis Method Based on Tacotron + WaveNet

Yingnan Liu1, Qitao Ma2, and Yingli Wang1

1 College of Electronic Engineering, Heilongjiang University, Harbin 150080, China
[email protected]
2 The Hong Kong Polytechnic University, No. 11 Yucai Road, Hong Kong, People's Republic of China

Abstract. The Griffin-Lim algorithm used in the Tacotron speech synthesis system recovers phase information poorly, so the synthesized speech carries obvious artificial-processing artifacts and low fidelity. This paper proposes a speech synthesis method based on a Tacotron + WaveNet network architecture. The method follows the Seq2Seq sequence-to-sequence structure: the input text is first converted into one-hot vectors, an attention mechanism is introduced to generate Mel spectrograms, and a WaveNet vocoder is then used as the back-end network to reconstruct the phase information of the speech signal, converting the input text into a waveform. The experiments were conducted on English using the LJ-Speech corpus. The results show a mean opinion score (MOS) of 4.23, which is higher in naturalness than the Tacotron end-to-end speech synthesis method.

Keywords: Speech synthesis · Tacotron · WaveNet · Vocoder · Seq2Seq

1 Introduction

Speech synthesis (text-to-speech, TTS) transforms arbitrary text into speech of a certain voice quality and intelligibility after computer analysis and processing. Speech synthesis is one of the important directions of modern speech signal processing and an essential module of human-computer intelligent speech interaction systems. The mechanism of speech production is very complex, and traditional speech synthesis is based on the analysis of speech characteristics and pronunciation models. Waveform synthesis [1, 2] is a relatively simple speech synthesis technique: the speech waveforms of human utterances are stored directly, or encoded and then stored, and segments are edited and concatenated as required to produce the output. A synthesizer of this kind is limited by the content of the corpus, so it places high demands on the waveform splicing algorithm.



Parametric synthesis [3] is a further step towards compressed storage and uses vocoder technology. To save storage, the speech signal is analyzed and speech parameters are extracted and stored in compressed form. This method reduces storage space through efficient coding, but errors inevitably occur in parameter extraction or coding, so the sound quality of the synthetic speech is poor. With the rapid development of artificial intelligence, the field of speech synthesis has gained new technical support from deep learning. In the Tacotron + WaveNet speech synthesis approach, much of the hand-crafted speech feature engineering, which can introduce heuristic errors and fragile design choices, is removed, and more attention is paid to a speech synthesis system that maps directly from text to speech [4, 5]. Compared with traditional methods, this technique allows the computer to produce smoother and more robust speech from the training data set. In this paper, we describe a unified neural network speech synthesis method [6]: a Tacotron-style model generates Mel spectrograms and is followed by a WaveNet vocoder, allowing the system to be trained directly from text character sequences and speech waveform data; the naturalness of its synthetic speech approaches that of real speech.

2 Speech Synthesis Model Based on Tacotron

The main part of Tacotron is a Seq2Seq model with an attention mechanism. As shown in Fig. 1, it contains an encoder, an attention-based decoder and a post-processing network; the model takes text characters as input, and the resulting spectrogram frames are then converted into waveforms.

(Figure 1 components: character embeddings, pre-net, CBHG encoder, attention RNN and decoder RNN stacks, CBHG post-processing network, and linear/Mel spectrogram output.)

Fig. 1. Tacotron generates linear or Mel spectrograms


2.1 CBHG Module

The CBHG module contains a bank of 1-D convolution filters, followed by a highway network and a bidirectional gated recurrent unit (GRU) recurrent neural network (RNN), as shown in Fig. 2. CBHG is a powerful module for extracting feature representations from text sequences. First, K groups of one-dimensional convolution kernels are convolved with the input sequence, and the convolution outputs are stacked together; max pooling along time is then applied to increase local invariance. The resulting sequence is passed through several fixed-length one-dimensional convolutions, and the output is added to the original input sequence through a residual connection. In addition, batch normalization is used in all convolution layers so that the data follow a unified standard. The output of the convolutions is fed into a multi-layer highway network to extract high-level features, and finally a bidirectional GRU RNN is stacked on top to extract bidirectional sequence features.

(Figure 2 blocks, bottom to top: Conv1D bank + stacking, max pooling (stride 1), Conv1D projections, residual connection, highway layers, bidirectional RNN.)

Fig. 2. The CBHG (1-D convolution bank + highway network + bidirectional GRU) module

2.2 Encoding-Decoding Model

The encoder-decoder architecture is a Seq2Seq model. In this model, the encoder transforms an input sequence of indefinite length into a background (context) vector of fixed length that encodes the information of the input sequence.


The decoder then converts the fixed-length vector produced by the encoder into an output sequence. Encoders and decoders are commonly built from recurrent neural networks. Suppose the input sequence is $\langle x_1, x_2, \ldots, x_{T_x} \rangle$. At time step $t$, the recurrent network transforms the input feature vector $x_t$ and the hidden state $h_{t-1}$ of the previous time step into the hidden state $h_t$ of the current time step. The transformation of the recurrent hidden layer is expressed by a function $f$:

$$h_t = f(x_t, h_{t-1}) \quad (1)$$

The encoder turns the hidden states of all time steps into the background variable $c$ through a custom function $q$:

$$c = q(h_1, h_2, \ldots, h_{T_x}) \quad (2)$$

The custom function $q$ is a nonlinear function; when $q(h_1, h_2, \ldots, h_{T_x}) = h_{T_x}$ is chosen, the background variable is simply the hidden state of the final time step of the input sequence:

$$c = h_{T_x} \quad (3)$$

The background variable $c$ output by the encoder thus encodes the entire input sequence. Given the output sequence $\langle y_1, y_2, \ldots, y_{T_y} \rangle$ of a training sample, for each time step $t'$ the conditional probability of the decoder output $y_{t'}$ depends on the previously generated outputs $\langle y_1, y_2, \ldots, y_{t'-1} \rangle$ and the background variable $c$:

$$P(y_{t'} \mid y_1, y_2, \ldots, y_{t'-1}, c) \quad (4)$$

At output time step $t'$, the decoder takes the previous output $y_{t'-1}$, the background variable $c$ and the previous hidden state $s_{t'-1}$ as input and transforms them into the current hidden state $s_{t'}$. The transformation of the decoder hidden layer is expressed by a function $g$:

$$s_{t'} = g(y_{t'-1}, c, s_{t'-1}) \quad (5)$$

where $g$ is a nonlinear transformation. Given the decoder hidden state, a custom output layer followed by a softmax is used to compute Eq. (4), i.e. the probability distribution of the current output $y_{t'}$, based on the current decoder hidden state $s_{t'}$, the previous output $y_{t'-1}$ and the background variable $c$.

2.3 Attention Mechanism

The encoding-decoding architecture resolves the mismatch between the lengths of the input and output sequences; however, the focus differs each time a speech frame is generated. To make up for this limitation of the encoding-decoding model, a content-based tanh attention mechanism is introduced to act on the decoder.


In this decoder, a stateful recurrent layer generates an attention query at each time step, and the background vector is concatenated with the output of the attention RNN unit to form the input of the decoder. If the encoder hidden state at time step $t$ is $h_t$ and the total number of time steps is $T_x$, then the background variable of the decoder at time step $t'$ is a weighted average of all encoder hidden states, i.e.

$$c_{t'} = \sum_{t=1}^{T_x} \alpha_{t' t}\, h_t \quad (6)$$

where, for a given $t'$, the weights $\alpha_{t' t}$ over $t = 1, 2, \ldots, T_x$ form a probability distribution. A softmax operation is used to obtain this distribution:

$$\alpha_{t' t} = \frac{\exp(e_{t' t})}{\sum_{k=1}^{T_x} \exp(e_{t' k})}, \quad t = 1, 2, \ldots, T_x \quad (7)$$

where $e_{t' t}$ takes the decoder hidden state $s_{t'-1}$ at time step $t'$ and the encoder hidden state $h_t$ at time step $t$ as input and is computed by a function $a$:

$$e_{t' t} = a(s_{t'-1}, h_t) \quad (8)$$

If the two input vectors have the same length, their inner product $a(s, h) = s^{\top} h$ can be used. The first paper to propose the attention mechanism used a multilayer perceptron with a single hidden layer:

$$a(s, h) = v^{\top} \tanh(W_s s + W_h h) \quad (9)$$

where $v$, $W_s$ and $W_h$ are all learnable model parameters.
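The following NumPy sketch evaluates Eqs. (6)-(9) once for a single decoder step; the dimensions and random parameters are placeholders rather than a trained model.

```python
# Minimal NumPy sketch of the additive (tanh) attention of Eqs. (6)-(9); the
# dimensions and random parameters are placeholders, not the trained model.
import numpy as np

rng = np.random.default_rng(0)
Tx, enc_dim, dec_dim, att_dim = 5, 8, 8, 16
H = rng.standard_normal((Tx, enc_dim))       # encoder hidden states h_t
s_prev = rng.standard_normal(dec_dim)        # decoder state s_{t'-1}
Ws = rng.standard_normal((att_dim, dec_dim))
Wh = rng.standard_normal((att_dim, enc_dim))
v = rng.standard_normal(att_dim)

e = np.array([v @ np.tanh(Ws @ s_prev + Wh @ h) for h in H])   # Eqs. (8)-(9)
alpha = np.exp(e - e.max()); alpha /= alpha.sum()              # Eq. (7), softmax
c = alpha @ H                                                  # Eq. (6), context
print(alpha, c.shape)
```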

2.4 Griffin-Lim Algorithm

In Tacotron, the Griffin-Lim algorithm is used to estimate the phase of the predicted spectrogram and then synthesize the waveform. Through Eq. (10), the squared error between the magnitude of the Fourier transform of the estimated signal and that of the original signal is minimized:

$$x^{(i)}(n) = \frac{\sum_{m=-\infty}^{\infty} w(mS - n)\, \dfrac{1}{2\pi} \int_{-\pi}^{\pi} X^{(i)}(m, \omega)\, e^{j\omega n}\, d\omega}{\sum_{m=-\infty}^{\infty} w^{2}(mS - n)} \quad (10)$$

where $w(mS - n)$ is a window sequence, $S$ is the window shift, and $x^{(i)}(n)$ is the speech signal reconstructed at the $i$-th iteration. By iteratively re-estimating the phase while keeping the known magnitude of the signal, an estimate of the speech signal is obtained, and at each iteration the magnitude of the Fourier transform of the estimated speech signal is compared with that of the original signal.


However, the traces of artificial synthesis in the resulting speech remain obvious, and the fidelity is low.
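For illustration, a magnitude-only reconstruction with librosa's Griffin-Lim implementation might look like the sketch below; the test tone and STFT parameters are assumptions and not taken from the paper.

```python
# Sketch of magnitude-only waveform reconstruction with the Griffin-Lim
# iteration via librosa; the test signal and STFT parameters are assumptions.
import numpy as np
import librosa

sr = 16000
t = np.arange(sr) / sr
y = 0.5 * np.sin(2 * np.pi * 220 * t)                      # 1 s test tone instead of real speech
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=200))    # keep only the magnitude
y_hat = librosa.griffinlim(S, n_iter=60, hop_length=200)   # iterative phase estimation
print(y.shape, y_hat.shape)
```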

3 Speech Synthesis Model Based on WaveNet

To address the shortcomings of the Griffin-Lim algorithm, this paper replaces it with a neural network vocoder to recover the phase information of the signal. Based on the predicted Mel spectrum, the WaveNet architecture is adopted to learn and generate time-domain waveform samples for different languages, improving the attention-based Seq2Seq speech synthesis system.

3.1 Feature Selection

In this paper, the two parts of the system are connected by a low-level acoustic representation, the Mel-frequency spectrogram. The Mel spectrogram is related to the linear spectrogram: it is obtained by applying a nonlinear transformation to the frequency axis of the STFT and compressing the frequency range into fewer dimensions. The Mel spectrogram is easily computed from the time-domain waveform, and with such a representation the two components can be trained independently. The Mel spectrum is smoother than the raw waveform samples and, because each frame is phase-invariant, it is easier to train with an MSE loss.
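A sketch of extracting such a Mel spectrogram with librosa is shown below; the 50 ms frame length and 12.5 ms shift at 16 kHz follow the experimental setup in Sect. 4.2, while the test signal, n_fft and n_mels are assumptions.

```python
# Sketch of computing a (log-)Mel spectrogram as the intermediate feature.
# 50 ms frames and 12.5 ms shift at 16 kHz follow Sect. 4.2; the rest is assumed.
import numpy as np
import librosa

sr = 16000
y = 0.5 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)     # placeholder signal
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=1024, win_length=800, hop_length=200, n_mels=80)
log_mel = np.log(np.clip(mel, 1e-5, None))                 # log compression, easier MSE target
print(log_mel.shape)                                       # (n_mels, frames)
```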

3.2 WaveNet Network Architecture

In this paper, the WaveNet architecture is adopted as the speech vocoder to improve the quality of the synthesized speech. WaveNet is a neural network that can generate audio signals directly. The predicted spectrogram frames are fed into the network, the autoregressive nature of the WaveNet architecture is used to recover the phase information lost in the spectrogram, and the speech waveform is finally obtained. WaveNet mainly consists of one-dimensional dilated causal convolution layers; the input passes through these convolution layers and a gated activation function, and finally a softmax outputs the posterior probability of the waveform sample value encoded by the mu-law algorithm. The gated activation function is

$$z = \tanh(W_f * y) \odot \sigma(W_g * y) \quad (11)$$

where $y$ and $z$ are the input and output of the activation, respectively, $\sigma(\cdot)$ is the sigmoid function, $W$ denotes convolution weights, $*$ denotes convolution, $\odot$ denotes element-wise multiplication, and the subscripts $f$ and $g$ indicate the filter and the gate. From the perspective of model generation, the joint probability of the waveform sample points $y_1, y_2, \ldots, y_{T_y}$ can be written as

$$P(y \mid \lambda) = \prod_{t=1}^{T_y} P(y_t \mid y_1, y_2, \ldots, y_{t-1}, \lambda) \quad (12)$$

given the model parameters $\lambda$. In this case, the network input is the waveform samples generated in the past. Through the WaveNet network structure, the lost phase is reconstructed and the Mel spectrum features are inverted into time-domain waveform samples.
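A small NumPy sketch of the gated activation of Eq. (11) with a dilated causal convolution is shown below; the kernel length, dilation schedule and random weights are illustrative assumptions.

```python
# NumPy sketch of the gated activation of Eq. (11): a dilated causal 1-D
# convolution for the filter and gate branches, combined as tanh(.) * sigmoid(.).
# Kernel length, dilation and random weights are illustrative assumptions.
import numpy as np

def causal_dilated_conv(y, w, dilation):
    # y: (T,) input; w: (K,) kernel; output depends only on current/past samples
    T, K = len(y), len(w)
    pad = np.concatenate([np.zeros((K - 1) * dilation), y])
    return np.array([sum(w[k] * pad[t + (K - 1 - k) * dilation] for k in range(K))
                     for t in range(T)])

rng = np.random.default_rng(0)
y = rng.standard_normal(64)
wf, wg = rng.standard_normal(2), rng.standard_normal(2)        # W_f, W_g (K = 2)
dilations = [2 ** (k % 10) for k in range(30)]                 # as in Sect. 4.2
f = causal_dilated_conv(y, wf, dilations[3])
g = causal_dilated_conv(y, wg, dilations[3])
z = np.tanh(f) * (1.0 / (1.0 + np.exp(-g)))                    # Eq. (11)
print(z.shape)
```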

4 Experiment and Results

4.1 Experimental Data

To test the audio modeling performance of the model, the public English LJ-Speech corpus is used for evaluation. The total corpus length is 23.9 h, comprising 13,100 sentences; each sentence contains 17 words on average, and the average utterance length is 6.6 s.

4.2 Experimental Process

Feature extraction and preprocessing: the corpus speech is sampled at 16 kHz with 16-bit resolution and processed with a Hamming window, using a frame length of 50 ms, a frame shift of 12.5 ms and a pre-emphasis coefficient of 0.97. Training process: first, the Seq2Seq network is trained to predict the Mel spectrum frame sequence from the input character sequence. The character embeddings are 256-dimensional, and a pre-net module (a nonlinear transformation) is added after the embedding layer; it consists of two fully connected hidden layers, both with ReLU activations and regularized with a dropout rate of 0.5. The encoder and decoder each use two layers of residual RNNs with 256 GRU units per layer to improve the generalization ability of the model. The batch size is 32; the Adam optimizer is used with parameters β1 = 0.9, β2 = 0.99 and ε = 10^-6; the initial learning rate is 0.002 and is reduced to 10^-5 after 20,000 iterations; and L2 regularization with weight 10^-6 is applied. In the vocoder, about 30 dilated convolution layers are used; the k-th layer (k = 0, ..., 29) has dilation rate 2^(k mod 10), so the dilations form three cycles in the WaveNet architecture, which finally learns the time-domain waveform samples.

4.3 Experimental Results

After 75,000 training iterations the loss has essentially converged and the encoder-decoder attention approximately reaches alignment, as shown in Fig. 3. Figure 4 compares the predicted and actual Mel spectrograms for the sentence: "Have now come into general use and are obviously a great improvement on the ordinary 'modern style' in use in England which is in fact the Bodoni type."


Fig. 3. Network alignment

(a) Actual Mel spectrogram

(b) Predicted Mel spectrogram

Fig. 4. Actual and predicted Mel spectrogram

4.4 Contrast Experiment

To compare the linear predicted amplitude spectrum and the Mel spectrum generated by Tacotron, as well as the ability of the Griffin-Lim algorithm and the WaveNet vocoder to recover the phase of the spectrogram, we conducted a cross experiment and collected MOS scores (on a 5-point scale). The comparison results are shown in Table 1.

Table 1. MOS contrast experiment

Model                                          MOS
Linear prediction amplitude + Griffin-Lim      3.42
Linear prediction amplitude + WaveNet          3.76
Mel spectrum + WaveNet (this paper)            4.23


The experimental results show that the Mel spectrum is a more compact representation that takes human auditory characteristics into account, so using the Mel spectrogram as the prediction target gives better results, and WaveNet recovers phase information more effectively than the Griffin-Lim algorithm.

References
1. Hunt AJ, Black AW (1996) Unit selection in a concatenative speech synthesis system using a large speech database. In: IEEE international conference on acoustics
2. Lin D (1998) Automatic retrieval and clustering of similar words. In: Meeting of the association for computational linguistics & international conference on computational linguistics
3. Tokuda K, Yoshimura T, Masuko T, Kobayashi T, Kitamura T (2000) Speech parameter generation algorithms for HMM-based speech synthesis. In: Proceedings of ICASSP, Istanbul, Turkey, vol 3, pp 1315–1318
4. Zen H, Tokuda K, Black AW (2009) Statistical parametric speech synthesis. Speech Commun 51(11):1039–1064
5. Ze H, Senior A, Schuster M (2013) Statistical parametric speech synthesis using deep neural networks. In: IEEE international conference on acoustics
6. Arik S, Diamos G, Gibiansky A, Miller J, Peng K, Ping W et al (2017) Deep voice 2: multispeaker neural text-to-speech

A Novel Spatial Domain Based Steganography Scheme Against Digital Image Compression

Zheng Hui and Quan Zhou

Academy of Space Electrical Information Technology, Xi'an 710100, China
[email protected]

Abstract. Spatial domain based steganography is well known for its high embedding capacity, simple implementation and low computational complexity. However, it is vulnerable to digital image compression. In this paper, a robust spatial data hiding scheme is proposed that quantizes selected pixels with a given integer quantization value. In addition, inspired by reversible data hiding, a post-process aimed at recovering the stego image is proposed as well. Experimental results show that, under various JPEG/JPEG2000 compression settings, the bit error rate (BER) of the extracted secret data remains controllable and can be corrected by error correcting codes; moreover, the PSNR of the recovered stego image stays above 35 dB, which is fairly acceptable to the human visual system.

Keywords: Spatial domain · Robust steganography · Image compression · Quantization · Stego image recovery

1 Introduction

Steganography, also known as data hiding or data embedding, is the technology of hiding secret data inside a publicly transmitted carrier by exploiting the redundancy of multimedia and the insensitivity of human vision or hearing, so that the hidden data remain imperceptible and their security during transmission is guaranteed [1]. In general, according to the domain in which the secret data are embedded, steganography can be roughly categorized into spatial domain and transform domain data hiding. In spatial domain steganography, data are embedded into the cover image by directly manipulating its pixels. Classic spatial methods include least significant bit (LSB) substitution [2], pixel value differencing (PVD) [3] and exploiting modification direction (EMD) [4]. Another direction of spatial data embedding is reversible data hiding (RDH), in which the cover image can be fully recovered without any distortion once the secret data have been extracted. RDH mainly comprises two approaches: algorithms based on histogram shifting (HS) [5] and on difference expansion (DE) [6]. Spatial steganography generally offers simple implementation, low computational complexity and high embedding capacity (easily 1 bpp or more). However, in most schemes the hidden data are vulnerable to attacks such as compression or geometric transforms. On the other hand, transform domain steganography, including discrete wavelet transform (DWT) [7], discrete cosine transform (DCT) [8] and singular value decomposition (SVD) [9] methods, significantly outperforms spatial methods in robustness and withstands various attacks efficiently, yet its embedding capacity is very limited compared with spatial algorithms [10].


In practice, transmitting raw images poses a great challenge to transmission bandwidth and storage, so in most scenarios digital images need to be compressed. As mentioned, compression challenges the robustness of steganography methods, while the embedding capacity of transform domain approaches is hardly adequate; in this paper, therefore, a spatial domain based robust data embedding scheme is proposed in which the secret data can survive different compression schemes. In addition, inspired by RDH methods, we also propose a post-process to restore the quality of the stego image after data extraction. The rest of this paper is organized as follows: Sect. 2 describes the proposed method, Sect. 3 gives the experimental results, and Sect. 4 concludes the paper.

2 Proposed Method

As mentioned, the secret data are embedded into the cover image without causing visually significant change. Meanwhile, most digital image compression in use is lossy, so distortion is introduced into the stego image, which challenges the robustness of the data hiding algorithm. Therefore, besides the basic data embedding and extracting processes of regular steganography schemes, and inspired by RDH methods, a cover image recovery procedure is proposed in this paper as a compromise between the imperceptibility and the robustness of the data embedding scheme.

Fig. 1. Framework of proposed steganography scheme


Figure 1 illustrates the framework of the proposed scheme, in which the bold boxes, namely the embedding and extracting processes and the cover image recovery, are the essential parts, while the dotted ones are optional.

2.1 Embedding Procedure

Given an input image $I$ of $M \times N$ pixels, it is first partitioned into non-overlapping blocks of size $m \times n$. In each block, one or more pixels (in this paper, only one pixel) are manipulated to embed secret bits; the embedding is realized by quantization. The steps of the embedding procedure are as follows:
1. Divide the input image $I$ into non-overlapping blocks $B_{i,j}$, where $1 \le i \le R$, $1 \le j \le C$ and $R = M/m$, $C = N/n$.
2. A pair of randomly generated integers $(p, q)$, $1 \le p \le m$, $1 \le q \le n$, is used as the key at both sender and receiver. Taking one bit from the secret data $W = \{w_k \in \{1, 0\},\ k = 1, 2, \ldots, RC\}$, the pixel $B_{i,j}(p, q)$ is altered into the watermarked pixel $B^{*}_{i,j}(p, q)$ according to

$$B^{*}_{i,j}(p,q) = \begin{cases} 2d \cdot \mathrm{round}\left(\dfrac{B_{i,j}(p,q)}{2d}\right) - \dfrac{d}{2}, & \text{if } w_k = 0 \\ 2d \cdot \mathrm{round}\left(\dfrac{B_{i,j}(p,q)}{2d}\right) + \dfrac{d}{2}, & \text{if } w_k = 1 \end{cases} \quad (1)$$

where $d$ is the quantization step and $\mathrm{round}(\cdot)$ denotes the rounding function.
3. Repeat step 2 until all blocks $B_{i,j}$ have been scanned, yielding the modified image $I^{*}$.
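A NumPy sketch of this embedding rule, under the block size, key position and quantization step assumed below, might look as follows (illustrative only).

```python
# NumPy sketch of the embedding rule in Eq. (1): in each m x n block the pixel
# at the secret position (p, q) is quantized to a multiple of 2d offset by
# +/- d/2 according to the secret bit. The parameters below are illustrative.
import numpy as np

def embed(img, bits, m=4, n=2, p=1, q=1, d=16):
    stego = img.astype(float).copy()
    R, C = img.shape[0] // m, img.shape[1] // n
    bits = iter(bits)
    for i in range(R):
        for j in range(C):
            r, c = i * m + p - 1, j * n + q - 1          # pixel (p, q) of block B_ij
            base = 2 * d * np.round(stego[r, c] / (2 * d))
            stego[r, c] = base + d / 2 if next(bits) else base - d / 2   # Eq. (1)
    return np.clip(stego, 0, 255).astype(np.uint8)

cover = np.random.default_rng(1).integers(0, 256, (512, 512), dtype=np.uint8)
secret = np.random.default_rng(2).integers(0, 2, (512 // 4) * (512 // 2))
stego = embed(cover, secret)
```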

2.2 Extracting Procedure

The extracting procedure is essentially the inverse of the embedding procedure. The watermarked image $I^{*}$ is first divided into $m \times n$ blocks; based on the shared integers $(p, q)$, the quantized pixels are located, and each hidden secret bit is then obtained by a discriminant function. The steps of the extracting procedure are as follows:
1. Partition $I^{*}$ into $m \times n$ blocks $B^{*}_{i,j}$.
2. According to the randomly generated pair $(p, q)$, apply the following rule to the pixel $B^{*}_{i,j}(p, q)$ to extract the hidden secret bit $w^{*}_k \in \{1, 0\}$, $k = 1, 2, \ldots, RC$:

$$w^{*}_k = \begin{cases} 1, & \text{if } B^{*}_{i,j}(p,q) - 2d \cdot \mathrm{round}\left(\dfrac{B^{*}_{i,j}(p,q)}{2d}\right) > 0 \\ 0, & \text{if } B^{*}_{i,j}(p,q) - 2d \cdot \mathrm{round}\left(\dfrac{B^{*}_{i,j}(p,q)}{2d}\right) \le 0 \end{cases} \quad (2)$$


3. Repeat step 2 until all pixels $B^{*}_{i,j}(p, q)$ have been scanned. Each obtained $w^{*}_k$ is placed at its corresponding position, yielding the extracted secret data $W^{*}$.
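The corresponding extraction rule of Eq. (2) can be sketched as below; the block size, key position and quantization step must match the embedding side (the same assumed values are used).

```python
# NumPy sketch of the extraction rule in Eq. (2): the sign of the residual of
# the quantized pixel with respect to the nearest multiple of 2d gives the bit.
# Block size, key position and d must match the embedding side (assumed values).
import numpy as np

def extract(stego, m=4, n=2, p=1, q=1, d=16):
    R, C = stego.shape[0] // m, stego.shape[1] // n
    bits = np.empty(R * C, dtype=np.uint8)
    k = 0
    for i in range(R):
        for j in range(C):
            v = float(stego[i * m + p - 1, j * n + q - 1])
            residual = v - 2 * d * np.round(v / (2 * d))
            bits[k] = 1 if residual > 0 else 0            # Eq. (2)
            k += 1
    return bits

# recovered_bits = extract(stego)   # with 'stego' from the embedding sketch above
```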

2.3 Cover Image Recovery Procedure

To compensate for the distortion introduced by quantizing pixels in the stego image, a filtering scheme is proposed that replaces each modified pixel with the average value of its adjacent pixels within a $3 \times 3$ mask:
1. According to the pair $(p, q)$, a series of coordinates $(u, v)$ is obtained, where

$$u = (i - 1) \cdot m + p - 1, \qquad v = (j - 1) \cdot n + q - 1 \quad (3)$$

2. Each border of $I^{*}$ is padded with zeros (two rows and two columns in total), so that its size becomes $(M + 2) \times (N + 2)$, and the new coordinates are $(u', v') = (u + 1, v + 1)$.
3. For each $(u', v')$, a $3 \times 3$ mask is applied to collect the non-zero elements among the adjacent pixels of $I^{*}(u', v')$, denoted $A = \{a_t \ne 0,\ t = 1, 2, \ldots, T,\ T \le 8\}$. The watermarked pixel $I^{*}(u', v')$ is replaced by the mean value of $A$:

$$I_r(u', v') = \frac{1}{T} \sum_{t=1}^{T} a_t \quad (4)$$

4. Substitute $I^{*}(u', v')$ with the obtained $I_r(u', v')$ until all coordinates $(u', v')$ have been scanned. Finally, the padded rows and columns are removed, yielding the recovered host image $I_r$.
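A NumPy sketch of this recovery filter (Eqs. (3)-(4)) follows; the block size and key position again assume the values used in the embedding sketch.

```python
# NumPy sketch of the recovery step of Eqs. (3)-(4): every embedded pixel is
# replaced by the mean of its non-zero neighbours in a 3 x 3 window of the
# zero-padded image. Parameters again assume the embedding sketch above.
import numpy as np

def recover(stego, m=4, n=2, p=1, q=1):
    padded = np.pad(stego.astype(float), 1)               # (M+2) x (N+2) zero padding
    out = stego.astype(float).copy()
    for i in range(stego.shape[0] // m):
        for j in range(stego.shape[1] // n):
            u, v = i * m + p - 1, j * n + q - 1            # Eq. (3), 0-based
            win = padded[u:u + 3, v:v + 3].copy()
            win[1, 1] = 0                                  # exclude the centre pixel
            neigh = win[win != 0]
            if neigh.size:
                out[u, v] = neigh.mean()                   # Eq. (4)
    return np.clip(out, 0, 255).astype(np.uint8)

# recovered = recover(stego)   # with 'stego' from the embedding sketch above
```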

3 Experimental Results

In this paper, five standard gray-scale images of size 512 × 512 serve as test images, namely 'Lena', 'Pepper', 'Boat', 'Jet' and 'Tiffany', shown in Fig. 2.


Fig. 2. Test standard images: a lena; b pepper; c boat; d jet; e tiffany

In addition, to evaluate the performance of the proposed scheme, several metrics are introduced: the peak signal-to-noise ratio (PSNR) to measure imperceptibility and the bit error rate (BER) to evaluate robustness. The PSNR between images $I_1$ and $I_2$ of size $M \times N$ is defined as

$$\mathrm{PSNR} = 10 \log_{10}\left(\frac{255^2}{\mathrm{MSE}}\right) \ (\mathrm{dB}) \quad (5)$$

where MSE denotes the mean square error, defined as

$$\mathrm{MSE} = \frac{1}{MN} \sum_{x=1}^{M} \sum_{y=1}^{N} \left(I_1(x, y) - I_2(x, y)\right)^2 \quad (6)$$

The BER between the secret data $w$ and the extracted data $w^{*}$ of length $L$ is defined as

$$\mathrm{BER} = \frac{1}{L} \sum_{l=1}^{L} \left(w(l) \oplus w^{*}(l)\right) \quad (7)$$

where $\oplus$ is the exclusive-or operation.
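These two metrics are straightforward to compute; a NumPy sketch is given below for reference (names such as cover, recovered and recovered_bits refer to the earlier sketches and are assumptions).

```python
# NumPy sketch of the evaluation metrics in Eqs. (5)-(7).
import numpy as np

def psnr(i1, i2):
    mse = np.mean((i1.astype(float) - i2.astype(float)) ** 2)   # Eq. (6)
    return 10 * np.log10(255.0 ** 2 / mse)                      # Eq. (5), in dB

def ber(w, w_hat):
    return np.mean(np.asarray(w) ^ np.asarray(w_hat))           # Eq. (7)

# e.g. psnr(cover, recovered), ber(secret, recovered_bits) with the sketches above
```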

Figure 3 shows the BER/PSNR curves of the test images when embedding 0.125 bpp (m = 4, n = 2). Sub-figures (a)/(b) show the BERs/PSNRs of images compressed by JPEG2000 with compression ratio (CR) 4, while (c)/(d) show the same curves for images compressed by JPEG with quality (Q) 90. In (b)/(d), the lines with the larger slope correspond to the compression-reconstructed stego images, while the flatter lines of the same color correspond to the stego images recovered by the proposed post-processing method. Evidently, under both JPEG2000 and JPEG compression, as the quantization value grows the PSNR of the reconstructed stego image decreases significantly while the BER improves. For d in [14, 18] the image quality and the secret data error rate reach an equilibrium, and the proposed recovery method gives its maximum benefit, with PSNRs approximately 2–5 dB better than the original reconstructed stego image. Therefore, the quantization level d = 16 is chosen in the subsequent experiments.


Fig. 3. Curves of BER/PSNRs varied with different quantization value d. a, b are stego image via JPEG2000 of CR = 4; c, d are stego image via JPEG of Q = 90

To further evaluate the performance of the proposed scheme, besides varying d, two capacity levels are employed, namely 0.125 bpp (m = 4, n = 2) and 0.0625 bpp (m = 4, n = 4). The attacking compression approaches are JPEG2000 compression (CR = 2/4) and JPEG compression (Q = 95/90). Table 1 shows the statistics for 0.125 bpp and Table 2 those for 0.0625 bpp. For both capacities the PSNR of the recovered image is approximately 35–43 dB, which is 1–4 dB better than the original reconstructed stego image. Meanwhile, the BER of the extracted secret data increases with the intensity of compression, yet even in the worst case it stays below 9% and can be corrected by error correcting codes.


Table 1. Resulting PSNR and BER for embedding capacity of 0.125 bpp

JPEG2000 compression
Image     CR = 2: PSNR stego/recovered (dB)  BER (%)    CR = 4: PSNR stego/recovered (dB)  BER (%)
Lena      35.77 / 41.23                      0          35.39 / 39.12                      0.18
Peppers   35.94 / 40.40                      1.72       35.46 / 38.41                      6.20
Boat      35.49 / 38.01                      1.05       35.05 / 36.60                      4.07
Jet       35.55 / 37.60                      0.4        35.08 / 36.18                      3.02
Tiffany   34.64 / 35.67                      4.39       33.76 / 34.90                      6.53

JPEG compression
Image     Q = 95: PSNR stego/recovered (dB)  BER (%)    Q = 90: PSNR stego/recovered (dB)  BER (%)
Lena      35.06 / 39.27                      0          34.87 / 37.23                      3.41
Peppers   35.22 / 38.70                      3.21       34.58 / 36.58                      7.82
Boat      34.83 / 36.92                      1.94       34.55 / 35.55                      5.73
Jet       34.92 / 35.87                      0.61       34.72 / 34.95                      4.06
Tiffany   33.80 / 35.37                      4.39       33.22 / 34.23                      7.16

Table 2. Resulting PSNR and BER for embedding capacity of 0.0625 bpp

JPEG2000 compression
Image     CR = 2: PSNR stego/recovered (dB)  BER (%)    CR = 4: PSNR stego/recovered (dB)  BER (%)
Lena      38.28 / 43.70                      0          37.56 / 40.84                      0
Peppers   38.90 / 43.76                      1.42       37.98 / 40.62                      5.75
Boat      38.43 / 40.66                      0.99       37.55 / 38.75                      3.56
Jet       38.54 / 39.99                      0.35       37.57 / 38.18                      1.83
Tiffany   37.45 / 38.56                      4.19       35.93 / 37.14                      5.98

JPEG compression
Image     Q = 95: PSNR stego/recovered (dB)  BER (%)    Q = 90: PSNR stego/recovered (dB)  BER (%)
Lena      37.20 / 40.61                      0          36.78 / 38.34                      2.37
Peppers   37.58 / 40.59                      2.79       36.61 / 38.01                      8.75
Boat      37.35 / 38.90                      1.83       36.94 / 37.25                      4.99
Jet       37.49 / 38.45                      0.58       36.87 / 37.55                      3.94
Tiffany   36.49 / 37.80                      4.10       35.10 / 36.16                      6.77

4 Conclusion

In this paper, a spatial domain based steganography scheme robust against JPEG2000/JPEG compression is proposed, which quantizes one randomly chosen pixel in each non-overlapping block with a given quantization level.


Besides its robustness against image compression, the proposed method retains the characteristics of other spatial data hiding schemes, such as relatively high embedding capacity, simple implementation and low computational complexity. In addition, inspired by reversible data hiding (RDH), a post-process is proposed to help the stego image recover. Experimental results show that, for different intensities of JPEG2000/JPEG compression, the bit error rate of the extracted secret data is around 0–9%, and the PSNR of the recovered stego image is above 35 dB, which is fairly acceptable to the human visual system.

Acknowledgements. This work is supported by the National Natural Science Foundation of China (No. 61372175) and the National Key Laboratory Foundation (No. 2018SSFNKLSMT-13, No. HTKJ2019KL504006, No. HTKJ2019KL504007).

References
1. Hussain M, Wahab AWA, Idris YIB, Ho ATS, Jung K-H (2018) Image steganography in spatial domain: a survey. Sig Process Image Commun 65:46–66
2. Muhammad K, Sajjad M, Mehmood I, Rho S, Baik SW (2016) A novel magic LSB substitution method (M-LSB-SM) using multi-level encryption and achromatic component of an image. Multimedia Tools Appl 75:14867–14893
3. Swain G (2015) Adaptive pixel value differencing steganography using both vertical and horizontal edges. Multimedia Tools Appl. https://doi.org/10.1007/s11042-015-2937-2
4. Kieu TD, Chang C-C (2011) A steganographic scheme by fully exploiting modification directions. Expert Syst Appl 38:10648–10657
5. Chen H, Ni J, Hong W, Chen T-S (2016) Reversible data hiding with contrast enhancement using adaptive histogram shifting and pixel value ordering. Sig Process Image Commun 46:1–16
6. Kumar V, Natarajan V (2016) Hybrid local prediction error-based difference expansion reversible watermarking for medical images. Comput Electr Eng 53:333–345
7. Atawneh S, Almomani A, Al Bazar H, Sumari P, Gupta B (2017) Secure and imperceptible digital image steganographic algorithm based on diamond encoding in DWT domain. Multimedia Tools Appl 76:18451–18472
8. Saidi M, Hermassi H, Rhouma R, Belghith S (2017) A new adaptive image steganography scheme based on DCT and chaotic map. Multimedia Tools Appl 76:13493–13510
9. Arunkumar S, Subramaniyaswamy V, Vijayakumar V, Chilamkurti N, Logesh R (2019) SVD-based robust image steganographic scheme using RIWT and DCT for secure transmission of medical images. Measurement 139:426–437
10. Subhedar MS, Mankar VH (2014) Current status and key issues in image steganography: a survey. Comput Sci Rev 13–14:95–113

Losen: An Accurate Indoor Localization System by Integrating CSI of Wireless Signal and MEMS Sensors

Zengshan Tian, Linxiao Xie, Ze Li, and Mu Zhou

Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
{tianzs,zhoumu}@cqupt.edu.cn, linxiaoxie [email protected], [email protected]

Abstract. With the development of the Internet of Things (IoT), high-accuracy positioning is of significant importance in indoor environments. In this paper we propose Losen, a real-time indoor localization system that integrates the MEMS sensors of a smartphone and PHY-layer information of the WiFi signal. Losen consists of an AOA-based localization module, a MEMS sensor module and a fusion localization module, and it solves the initial localization problem of existing integrated systems. To reduce the time cost of computing the spatial spectrum, Losen employs synthetic aperture radar (SAR), a technique from the radar field, to estimate the AOA of the direct path between the smartphone and the AP (access point). Losen then uses the MEMS sensors to estimate the relative location and fuses it with the location estimated by SAR to mitigate location errors caused by the person's movement and severe multipath. Losen is a real-time indoor localization system that fuses PHY-layer information and MEMS sensors. Extensive experiments show that the proposed integrated system achieves a 67th-percentile localization accuracy of 1.3 m.

Keywords: Indoor localization · Angle of arrival · MEMS · Data fusion

1 Introduction

The location of people is highly desirable with the development of the Internet of Things (IoT), so Location Based Services (LBS) have become one of the most important services in people's daily lives. Recently, localization technology using WiFi signals has advanced, since TOF and AOA can now be estimated accurately. In [1], the authors propose a frequency combining algorithm for increasing bandwidth; the proposed method improves TOF estimation accuracy and then locates the target accurately using multiple line-of-sight (LOS) APs. In [2], the authors propose a high-accuracy TOF estimation method by hopping across multiple frequency bands for the same purpose.


However, those systems need specialized hardware or protocol modifications, which limits their deployment. In [3], the authors propose ArrayTrack, an AOA-based indoor localization system using an antenna array; ArrayTrack achieves centimeter accuracy but requires a bulky antenna array. In [4], the authors propose a centimeter-level localization system using only three antennas on a commodity WiFi device; however, the proposed two-dimensional Multiple Signal Classification (MUSIC) [5] algorithm has a heavy computational burden, which makes real-time localization difficult. In summary, achieving high-accuracy localization while locating the target with commodity WiFi devices is challenging. To address this problem, we propose a fusion system that obtains accurate and real-time locations, inspired by prior fusion systems. The literature [6–8] uses filtering algorithms such as the Extended Kalman Filter (EKF) to build fusion localization systems, which reduce, to a certain extent, the cumulative error of Pedestrian Dead Reckoning (PDR) based on MEMS sensors. The work in [7] proposes a ToF/MEMS fusion algorithm, but ToF-based localization requires precise time synchronization of the transceivers. In this paper, we propose a real-time localization system using AOA and MEMS sensors. Firstly, to address the high computational load of the two-dimensional MUSIC algorithm, we propose the Kalman filtering SAR (KFS) algorithm, which applies Kalman filtering to AOAs estimated with synthetic aperture radar (SAR) and has a low computational load. Secondly, a gait detection method is used to estimate the walking speed of the person and a quaternion method is used to calculate the heading angle. Finally, a robust EKF is designed to suppress the accumulated error of the MEMS sensors. The rest of this paper is organized as follows. Section 2 reviews related work on AOA estimation of multipath signals, Sect. 3 presents the system design, Sect. 4 describes the experiments and results, and Sect. 5 concludes the paper.

2 Related Work

The aim of this paper is to provide real-time locations using existing commodity smartphones and WiFi APs. Thus, both the computational load of the localization algorithm and the time and labor cost of deploying the system should be minimized. However, the most advanced recent AOA-based localization system [4] suffers from a heavy computational load. So, how can we reduce the time cost of the AOA-based method while achieving comparable accuracy? The intuition is that determining the target's location relies on the direct path rather than on knowing all the multipath components. However, separating the AOA of the direct path from the multipath using only three antennas may introduce large errors. In fact, the direct path experiences less attenuation than reflected signals, which travel an extra propagation distance. Thus, when estimating the AOAs of multipath signals with a super-resolution algorithm, the weak reflected signals can be ignored due to their severe attenuation. Moreover, there are many subcarriers that can be used to improve AOA estimation accuracy, while existing WiFi devices have a limited number of antennas. Orthogonal Frequency Division Multiplexing (OFDM) is widely used in WiFi and LTE networks due to its higher data rate compared with single-subcarrier modulation schemes. The amplitude and phase of the received signal can be obtained by measuring the channel state information (CSI) of the subcarriers. Compared with RSS, CSI is a fine-grained measurement that captures the multipath propagation process.

3 System Description

In this paper, we fuse the WiFi signal and MEMS sensors to construct an integrated indoor localization system; Fig. 1 plots the schematic of the proposed system. Specifically, we focus on data fusion in the proposed integrated platform. The data fusion is a loosely coupled integration of the WiFi signal and MEMS sensors that mitigates the limitations of WiFi-based indoor localization systems. The integrated platform includes several modules: AOA localization, heading and speed estimation, and data fusion based on a robust EKF.

Fig. 1. The schematic of the proposed integrated system; the system is a loose-coupled integration of WiFi signal and MEMS sensors

3.1 Locating the Target Based on AoA

SAR estimates the signal strength along every spatial direction. Several indoor localization systems have emerged that employ SAR to estimate the AoAs of multipath signals in WiFi or LTE networks. Next, we mathematically describe the SAR model for WiFi signals. Assume that an AP with $N$ antennas is located at the origin and the smartphone sends a WiFi signal along the azimuthal angle. According to the SAR principle [9], the measured power of the received signal at the AP can be written as

$$P(\theta) = |h(\theta)|^2 \qquad (1)$$

where $h(\theta) = \sum_{i=1}^{N} a_i(\theta) h_i$ and $h_i$ is the CSI measured on the $i$-th antenna, assuming the frequency offset between the AP and the target is zero. Considering the linear antenna array used in this paper, $a_i(\theta) = e^{-j2\pi (i-1) d \sin\theta / \lambda}$, where $\lambda$ is the wavelength of the signal and $d$ is the equal spacing between two adjacent antennas. Since the WiFi signal uses multiple subcarriers, $h(\theta)$ can be rewritten as

$$h(\theta) = \sum_{j=1}^{M} \sum_{i=1}^{N} a_{ji}(\theta)\, h_{ji} \qquad (2)$$

where $a_{ji}(\theta) = e^{-j2\pi (j\Delta f + f_0)(i-1) d \sin\theta / c}$ and $h_{ji}$ is the CSI of the $j$-th subcarrier at the $i$-th antenna, $f_0$ is the central frequency of the WiFi signal, $\Delta f$ is the subcarrier frequency interval, $j$ is the subcarrier index, $c$ is the speed of light and $M$ is the number of subcarriers. Note that the carrier and sampling frequency offsets between the AP and the target do not affect the AOA estimation in Eq. (2), since those phase offsets are constant within each CSI packet. Finally, we simply select the AOA with the largest power as the AOA of the direct path.
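As an illustration of Eqs. (1)–(2), the following Python sketch computes the spatial spectrum $P(\theta)$ over a grid of candidate angles from a CSI matrix; the array geometry, carrier parameters and the random CSI used in the toy example are placeholders, not values from the paper.

```python
import numpy as np

def spatial_spectrum(csi, f0, delta_f, d, thetas):
    """Compute P(theta) = |sum_j sum_i a_ji(theta) * h_ji|^2 (Eqs. (1)-(2)).

    csi    : (M, N) complex array, CSI of M subcarriers on N antennas
    f0     : central frequency in Hz
    delta_f: subcarrier spacing in Hz
    d      : antenna spacing in meters
    thetas : candidate azimuth angles in radians
    """
    c = 3e8
    M, N = csi.shape
    j_idx = np.arange(M)[:, None]            # subcarrier index j
    i_idx = np.arange(N)[None, :]            # antenna index (i-1 in the formula)
    P = np.empty(len(thetas))
    for k, theta in enumerate(thetas):
        # steering term a_ji(theta) = exp(-j*2*pi*(j*df + f0)*(i-1)*d*sin(theta)/c)
        a = np.exp(-1j * 2 * np.pi * (j_idx * delta_f + f0) * i_idx * d * np.sin(theta) / c)
        P[k] = np.abs(np.sum(a * csi)) ** 2
    return P

# toy usage with placeholder values (3 antennas, 30 subcarriers)
rng = np.random.default_rng(0)
csi = rng.standard_normal((30, 3)) + 1j * rng.standard_normal((30, 3))
thetas = np.deg2rad(np.arange(-90, 91))
P = spatial_spectrum(csi, f0=5.32e9, delta_f=312.5e3, d=0.028, thetas=thetas)
aoa_direct = np.rad2deg(thetas[np.argmax(P)])  # pick the AOA with the largest power
```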

3.2 Step Detection and Velocity Estimation

In PDR, the position is computed every second, so the distance walked between two adjacent seconds is exactly the velocity. We use the tri-axis accelerometer to detect human gait and estimate step length. Since the human step frequency is approximately 1–3 Hz, we apply a low-pass filter with a 5 Hz cut-off frequency to the acceleration data to remove high-frequency noise. Let the sensor sampling frequency be $f$ and the number of acceleration samples during one step be $\Delta N_t$. After gait detection, the step frequency is $S_t = f / \Delta N_t$. Valid steps are determined by peak detection, which treats the curve between two adjacent valid peaks as one step. By analyzing the relationship between acceleration variation and step length, the step length $l_t$ can be estimated with a non-linear model. Assuming that the step length is constant within each second, the velocity at time $t$ can be calculated as

$$V_t = S_t \cdot l_t = \frac{f \cdot l_t}{\Delta N_t} \qquad (3)$$
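A minimal sketch of the gait-detection and velocity step described above, using a Butterworth low-pass filter and peak detection; the 5 Hz cut-off and the 1–3 Hz step-frequency range come from the text, while the step-length model (a Weinberg-style estimate) and all numeric constants are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def pdr_velocity(acc_norm, fs, k_step=0.5):
    """Estimate per-step walking velocity from the accelerometer magnitude.

    acc_norm: 1-D array of acceleration magnitudes (m/s^2)
    fs      : sensor sampling frequency f (Hz)
    k_step  : coefficient of the (assumed) non-linear step-length model
    """
    # 5 Hz low-pass filter to suppress high-frequency noise (step rate is ~1-3 Hz)
    b, a = butter(4, 5.0 / (fs / 2.0), btype="low")
    acc_f = filtfilt(b, a, acc_norm)

    # valid peaks: one step spans the curve between two adjacent valid peaks
    peaks, _ = find_peaks(acc_f, distance=int(fs / 3.0))   # at most ~3 steps/s

    velocities = []
    for p0, p1 in zip(peaks[:-1], peaks[1:]):
        dN = p1 - p0                                        # samples in one step (ΔN_t)
        seg = acc_f[p0:p1]
        # assumed non-linear step-length model: l_t = k * (a_max - a_min)^(1/4)
        l_t = k_step * (seg.max() - seg.min()) ** 0.25
        velocities.append(fs * l_t / dN)                    # V_t = f * l_t / ΔN_t, Eq. (3)
    return np.array(velocities)

# toy usage: simulate ~2 Hz walking for 10 s sampled at fs = 100 Hz
fs = 100
t = np.arange(0, 10, 1 / fs)
acc = 9.8 + 1.5 * np.sin(2 * np.pi * 2.0 * t) + 0.2 * np.random.randn(t.size)
print(pdr_velocity(acc, fs))
```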

3.3 Heading Estimation

The rotation matrix between the geographic coordinate system and the sensor coordinate system can be described in quaternion form $H_b^n$ or in Euler-angle form $C_b^n$. We use an Extended Kalman Filter to fuse the multi-sensor measurements. According to the rotational relation between the true gravity and magnetic field vectors and the sensor measurements (accelerometer and magnetometer) in the different coordinate systems, the observation equation is

$$\begin{cases} a = H_b^n(q) \cdot G + v_a \\ m = H_b^n(q) \cdot M + v_m \end{cases} \qquad (4)$$

where $G = [0, 0, g]^T$ is the gravity acceleration vector and $M$ is the adjusted magnetic vector in the geographic coordinate system obtained from the distorted magnetic field. $v_a$ and $v_m$ denote the noise vectors of the accelerometer and magnetometer, respectively, and $a = [a_x, a_y, a_z]$ and $m = [m_x, m_y, m_z]$ are the accelerometer and magnetometer readings. In addition, the quaternion differential equation driven by the gyroscope is used to propagate the quaternion. Hence, the optimal quaternion can be estimated by the recursive process of the Extended Kalman Filter, and the corresponding heading is computed from the relationship between $H_b^n$ and $C_b^n$.
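For illustration, a short sketch of the last step: converting an estimated quaternion into the heading (yaw) angle through the standard quaternion-to-Euler relation. The quaternion value below is a placeholder, and the full EKF that produces it is not reproduced here.

```python
import numpy as np

def heading_from_quaternion(q):
    """Yaw (heading) angle from a unit quaternion q = [w, x, y, z].

    Uses the standard quaternion-to-Euler (Z-Y-X) relation, i.e. the
    relationship between the quaternion form H_b^n and the Euler form C_b^n.
    """
    w, x, y, z = q / np.linalg.norm(q)
    return np.arctan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

# placeholder quaternion corresponding to about 30 degrees of yaw
q = np.array([np.cos(np.deg2rad(15)), 0.0, 0.0, np.sin(np.deg2rad(15))])
print(np.rad2deg(heading_from_quaternion(q)))   # ~30.0
```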

3.4 AOA/PDR Integration

As mentioned above, the performance of AOA-based localization is sensitive to the accuracy of the AOA estimate of the direct path. Consequently, Non-Line-of-Sight (NLOS) propagation caused by the blockage of the human body degrades the localization performance. Fortunately, the PDR approach provides continuous tracking, which can be used to bridge temporary failures of AOA-based localization. However, the cumulative error of PDR makes accurate long-term localization difficult. To address these problems, we use a Robust Extended Kalman Filter to fuse the AOA-based and PDR-based localization approaches. Based on the trajectory model, the predicted location $\hat{L}_t$, given the velocity $V_t$ and heading $\psi_t$ estimated by the PDR-based method, can be written as

$$\hat{L}_t = F_t \cdot L_{t-1} = \begin{bmatrix} I_2 & \Phi \\ 0 & I_2 \end{bmatrix} \cdot L_{t-1} \qquad (5)$$

where $L_{t-1} = \left[E_{t-1}, N_{t-1}, \hat{V}_{t-1}, \hat{\psi}_{t-1}\right]^T$ is the corrected state at time $t-1$, in which $E_{t-1}$ and $N_{t-1}$ are the east and north coordinates in the geographic frame, and $\hat{V}_{t-1}$ and $\hat{\psi}_{t-1}$ are the corrected speed and heading at time $t-1$, respectively. $I_2$ denotes the second-order identity matrix, and $\Phi$ is the corresponding block of the discretized state-transition matrix obtained from the continuous state equation. To acquire the corrected location at time $t$, the following quantities are computed in the EKF recursion:

$$P_t^- = F_t P_{t-1} F_t^T + Q_t \qquad (6)$$

$$K_t = P_t^- H^T \left( H P_t^- H^T + R_t \right)^{-1} \qquad (7)$$

where $R_t$ and $Q_t$ represent the measurement noise covariance matrix of the AOA technique and the state noise covariance matrix of the fusion system, respectively, and $H$ denotes the observation matrix, here a fourth-order identity matrix. The AOA-based localization result $Z_t = \left[x_t^{AOA}, y_t^{AOA}\right]$ is then used to adjust the predicted result $\hat{L}_t$ and obtain the corrected location $L_t$ in Eq. (8), and the covariance $P_t$ is updated in Eq. (9):

$$L_t = \hat{L}_t + K_t \left( Z_t - H \hat{L}_t \right) \qquad (8)$$

$$P_t = \left( I_4 - K_t H \right) P_t^- \qquad (9)$$
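A compact numerical sketch of the prediction/update cycle in Eqs. (5)–(9); the state layout $[E, N, V, \psi]$, the transition block $\Phi$, and all covariance values are illustrative assumptions rather than the paper's tuned parameters.

```python
import numpy as np

def ekf_fuse(L_prev, P_prev, z_aoa, dt=1.0, q=0.01, r=0.5):
    """One AOA/PDR fusion step following Eqs. (5)-(9).

    L_prev: previous corrected state [E, N, V, psi]
    P_prev: previous 4x4 state covariance
    z_aoa : AOA-side measurement of the state (assumed 4-vector here, since H
            is taken as the fourth-order identity in the text)
    """
    psi = L_prev[3]
    # assumed transition block Phi: position advances by V*dt along heading psi
    F = np.eye(4)
    F[0, 2] = dt * np.sin(psi)   # east  += V*dt*sin(psi)
    F[1, 2] = dt * np.cos(psi)   # north += V*dt*cos(psi)

    Q = q * np.eye(4)            # state noise covariance
    R = r * np.eye(4)            # AOA measurement noise covariance
    H = np.eye(4)                # observation matrix (fourth-order identity)

    L_pred = F @ L_prev                                       # Eq. (5)
    P_pred = F @ P_prev @ F.T + Q                             # Eq. (6)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)    # Eq. (7)
    L_corr = L_pred + K @ (z_aoa - H @ L_pred)                # Eq. (8)
    P_corr = (np.eye(4) - K @ H) @ P_pred                     # Eq. (9)
    return L_corr, P_corr

# toy usage
L, P = np.array([0.0, 0.0, 1.2, np.deg2rad(45)]), np.eye(4)
z = np.array([0.9, 0.8, 1.2, np.deg2rad(45)])   # placeholder AOA-side measurement
L, P = ekf_fuse(L, P, z)
```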

4 Experimental Results

We conduct experiments in our meeting room with a size of 9 × 7.7 m², shown in Fig. 2. We deploy three APs in total to evaluate the fusion localization system. Meanwhile, the person holds the smartphone, which collects MEMS data and sends WiFi packets. The true locations of the APs are measured accurately with a laser range finder and stored in the configuration file of the server. Figure 3 plots the Cumulative Density Function (CDF) of the localization errors. From the figure we find that the 67% error of the integrated localization system is 1.3 m, while the location errors of the methods using only AOA or only MEMS are much larger due to the movement of the person and heading errors; thus the proposed fusion algorithm achieves higher accuracy. The main reason is that NLOS propagation caused by body blockage and multipath effects produces spurious AOAs and thus introduces large location errors. To demonstrate the performance of the proposed system, the person holds the smartphone and walks along a pre-defined trajectory several times.

Fig. 2. The experiment testbed; the APs are marked with red boxes

Fig. 3. CDF of errors of different localization methods

Figure 4 compares the trajectories produced by the different localization algorithms; the test track is the square marked in black. We can see that the trajectory calculated by PDR gradually drifts away from the true positions and that the AOA-only localization result has large errors, while the fusion localization results are clearly superior to both. In Fig. 5 the test track is the straight line marked in black and the person walks back and forth along it; the same conclusion can be drawn from the figure.

Fig. 4. Localization errors under a circular path of different localization methods

Fig. 5. Localization errors under a straight path of different localization methods

5 Conclusion

In this paper we propose Losen, a real-time indoor localization system that fuses PHY-layer information and MEMS sensors. Losen requires neither a high density of access points (APs) nor extensive labor and time costs to construct fingerprint databases. To achieve this goal, Losen first uses SAR to compute the AoAs of the direct paths between the smartphone and the APs, instead of resolving all multipath signals, and then determines the target's position using a trilateral positioning method. Then, to reduce the location errors caused by body blockage and multipath effects, Losen uses MEMS sensors to calculate the relative position and fuses it with the position estimated from the AoAs. Finally, Losen provides real-time positions and tracks the target through the application developed on the smartphone and the localization server.

References

1. Xie Y, Li Z, Li M (2015) Precise power delay profiling with commodity WiFi. In: MobiCom, pp 53–64
2. Vasisht D, Kumar S, Katabi D (2016) Decimeter-level localization with a single WiFi access point. In: USENIX NSDI, pp 165–178
3. Xiong J, Jamieson K (2013) ArrayTrack: a fine-grained indoor location system. In: USENIX NSDI, pp 71–84
4. Kotaru M, Joshi K, Bharadia D, Katti S (2015) SpotFi: decimeter level localization using WiFi. In: ACM SIGCOMM, pp 269–282
5. Schmidt R (1986) Multiple emitter location and signal parameter estimation. IEEE Trans Antennas Propag 34(3):276–280
6. Zhou M, Wang B, Tian Z, Wang J, Zhang Q (2017) A case study of cross-floor localization system using hybrid wireless sensing. In: IEEE global communications conference, pp 1–6
7. Mariakakis A, Sen S, Lee J, Kim K (2014) SAIL: single access point-based indoor localization. In: MobiSys, pp 315–328
8. Huang C, Liao Z, Zhao L (2010) Synergism of INS and PDR in self-contained pedestrian tracking with a miniature sensor module. IEEE Sens J 10(8):1349–1359
9. Kumar S, Hamed E, Katabi D, Li E (2014) LTE radio analytics made easy and accessible. In: ACM SIGCOMM, pp 211–222

A Direct Target Recognition Algorithm for Low-Resolution Radar with Unbalanced Samples

Kefan Zhu(&), Jiegui Wang, and Miao Wang

Electronic Countermeasure Institute, National University of Defense Technology, Hefei, Anhui, China
[email protected]

Abstract. Existing methods for low-resolution radar target recognition rely on handcrafted feature extraction, which makes it difficult to improve the recognition rate and limits generalization. In this paper, a direct target recognition algorithm for low-resolution radar based on focal loss is proposed. The algorithm uses a Convolutional Neural Network (CNN) to automatically learn deep, essential characteristics of the sample data, so target recognition is performed directly without feature extraction. To further improve recognition under the condition of unbalanced samples, the focal loss function is used to compute the training error. With focal loss, the CNN focuses on the difficult samples during training, improving its ability to recognize them. Experimental results show that the proposed algorithm improves the recognition rate by 7.95% compared with the traditional weighted Support Vector Machine (WSVM) based algorithm and by 5.17% compared with the CNN algorithm based on the cross-entropy loss function. These results demonstrate the effectiveness of the proposed algorithm and its superiority over traditional feature-based recognition methods.

Keywords: Low-resolution radar target · Direct recognition · Convolutional neural network · Focal loss function

1 Introduction

Radar Target Recognition (RTR) is an important direction of radar research. Because high-resolution radar is expensive to develop and has a long development cycle, most radars in service are low-resolution radars. Moreover, with the popularity of pulse compression technology, traditional low-resolution radar can also achieve very high radial resolution and extract fine-grained target characteristics. Therefore, target recognition technology based on low-resolution radar remains an important research topic [1–3].

Traditional low-resolution radar target recognition classifies targets after feature extraction. Such algorithms first extract target features based on characteristics such as echo fluctuation, pole distribution and modulation spectrum, and then classify the target with Bayes classifiers, Support Vector Machines (SVM), nearest-neighbor classification or hidden Markov models. These feature-based methods can realize target classification and recognition. However, most of the features are designed manually; they are shallow and incomplete, which hinders further improvement of the recognition rate. Moreover, the features are often designed only for specific targets, so the generalization of these methods is also insufficient.

Since Hinton proposed the theory of deep learning, Convolutional Neural Networks (CNN), as an important model in the field, have been widely used for target classification and recognition because they learn the deep, essential characteristics of data automatically [4–7]. However, deep-learning-based low-resolution radar target recognition usually needs sufficient training samples with a balanced number of samples per class. In modern warfare, when the radar target is an advanced non-cooperative or stealth target, it is difficult to obtain enough training samples, resulting in an extreme imbalance in the number of samples across classes and a low recognition rate. To address sample imbalance, traditional low-resolution radar target recognition adopts improved Support Vector Machine (SVM) algorithms, such as weighted SVM (WSVM) [8] and cost-sensitive SVM (CS-SVM) [9]. Paper [10] uses an improved Synthetic Minority Oversampling Technique (SMOTE) to expand the minority classes, balance the sample set and improve the recognition rate. However, SMOTE may blur the boundary between majority and minority classes and may synthesize new samples from noise, so the data-set expansion effect is not ideal. Paper [11] uses a class-balanced cross-entropy loss function to automatically balance the loss contributed by positive and negative samples. However, adding class-balancing weights to the cross-entropy loss is equivalent to changing the original data distribution, so the distribution fitted by the CNN deviates from the original data distribution and the recognition improvement is limited. Paper [12] proposed the focal loss function, which greatly reduces the weight of simple samples so that the network focuses on difficult samples during training, effectively improving target detection.

In this paper, under the condition of unbalanced samples, a direct target recognition algorithm for low-resolution radar based on focal loss is proposed. The algorithm first uses a CNN to automatically extract the deep, essential features of the data. It then uses the focal loss function to greatly reduce the loss contributed by simple samples. Finally, the loss is back-propagated to optimize the network weights and improve the recognition performance.

2 Direct Recognition Algorithm of Low-Resolution Radar Target Based on Focal Loss Function

2.1 CNN

A CNN is a deep feedforward artificial neural network consisting of convolutional layers, pooling layers and fully connected layers. The structure of a CNN is shown in Fig. 1.

Fig. 1. The structure of CNN

2.2 Focal Loss Function

Because the cross-entropy loss function accumulates the loss of every sample equally, when one class has many samples it dominates the error back-propagation, which is not conducive to training the network. The focal loss function was proposed in Paper [12] to solve the problem of sample imbalance and is defined as

$$L = -\frac{1}{N} \sum_{n} \sum_{i} y_n^i \left(1 - p_n^i\right)^{\gamma} \ln p_n^i \qquad (1)$$

where $\gamma$ is the focusing parameter and $\gamma \ge 0$. Since the focal loss function adaptively reduces the weight of simple samples according to the current recognition result, it strengthens the network's ability to identify difficult samples and effectively improves the target recognition rate. In this paper, the focal loss function is used to calculate the training error.
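A minimal NumPy sketch of Eq. (1); the one-hot labels and predicted probabilities below are placeholder data, and the sign convention assumes the usual negative-log-likelihood form of the focal loss.

```python
import numpy as np

def focal_loss(y_onehot, p_pred, gamma=2.0, eps=1e-12):
    """Focal loss of Eq. (1): L = -1/N * sum_n sum_i y_n^i (1 - p_n^i)^gamma * ln(p_n^i).

    y_onehot: (N, C) one-hot ground-truth labels
    p_pred  : (N, C) predicted class probabilities (e.g. softmax outputs)
    gamma   : focusing parameter (gamma >= 0); gamma = 0 recovers cross-entropy
    """
    p = np.clip(p_pred, eps, 1.0)
    return -np.mean(np.sum(y_onehot * (1.0 - p) ** gamma * np.log(p), axis=1))

# toy usage: an easy sample (p=0.9) contributes far less than a hard one (p=0.2)
y = np.array([[1, 0, 0], [1, 0, 0]], dtype=float)
p = np.array([[0.9, 0.05, 0.05], [0.2, 0.4, 0.4]])
print(focal_loss(y, p, gamma=2.0))   # dominated by the hard sample
print(focal_loss(y, p, gamma=0.0))   # plain cross-entropy for comparison
```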

2.3 Direct Recognition Algorithm of Low-Resolution Radar Target Based on Focal Loss Function

2.3.1 The Structure of CNN in This Paper

In order to adapt to the one-dimensional characteristics of the sampled data, a one-dimensional CNN is constructed in this paper. Table 1 shows the one-dimensional CNN used for radar target recognition.


Table 1. One-dimensional CNN

Input: 1×500 signal
Convolution layers: 1×13, 6, ReLU
Pooling layers: 1×4
Dropout = 0.2
Convolution layers: 1×11, 12, ReLU
Pooling layers: 1×4
Dropout = 0.2
Convolution layers: 1×5, 30, ReLU
Pooling layers: 1×2
Dropout = 0.2
Full connection layer: 360, 3
Softmax classifier
Output: 1×3 probability vector
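The following PyTorch sketch is one plausible reading of the architecture in Table 1 (three Conv1d–MaxPool1d–Dropout stages followed by a 360→3 fully connected layer and a softmax); with valid (no-padding) convolutions on a 1×500 input the flattened size works out to 30 × 12 = 360, matching the table, but layer options such as padding and activation placement are assumptions.

```python
import torch
import torch.nn as nn

class RadarCNN1D(nn.Module):
    """One-dimensional CNN following Table 1 (input 1x500, output 3 classes)."""

    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 6, kernel_size=13), nn.ReLU(),    # 500 -> 488, 6 channels
            nn.MaxPool1d(4), nn.Dropout(0.2),              # 488 -> 122
            nn.Conv1d(6, 12, kernel_size=11), nn.ReLU(),   # 122 -> 112, 12 channels
            nn.MaxPool1d(4), nn.Dropout(0.2),              # 112 -> 28
            nn.Conv1d(12, 30, kernel_size=5), nn.ReLU(),   # 28 -> 24, 30 channels
            nn.MaxPool1d(2), nn.Dropout(0.2),              # 24 -> 12, so 30*12 = 360
        )
        self.classifier = nn.Linear(360, num_classes)      # full connection layer: 360 -> 3

    def forward(self, x):
        x = self.features(x)                   # (batch, 30, 12)
        x = x.flatten(1)                       # (batch, 360)
        logits = self.classifier(x)
        return torch.softmax(logits, dim=1)    # 1x3 probability vector per sample

# toy usage: a batch of 4 signals of length 500
model = RadarCNN1D()
probs = model(torch.randn(4, 1, 500))
print(probs.shape)   # torch.Size([4, 3])
```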

2.3.2 Algorithm Steps

We propose a radar target recognition algorithm, which is summarized in Algorithm 1.

Algorithm 1 Direct target recognition algorithm for low-resolution radar based on focal loss
for number of training iterations do
    Sample a mini-batch of m samples (x_1, x_2, ..., x_m) from the augmented data set.
    Perform gradient descent on the network parameters by computing the gradient
        ∇_θ (1/m) Σ_{i=1}^{m} L   (objective function in Eq. (1))
end for

3 Experimental Results and Analysis

3.1 Experimental Data Set

The experimental data are generated by MATLAB simulation. The frequency-modulation period of the simulated signal is 0.1 ms, the frequency-modulation bandwidth is 100 MHz, and the sampling frequency is 5 MHz. The simulated signal with additive Gaussian white noise is taken as the radar target echo, and the data sampled after mixing are taken as the experimental data. Numerical simulation experiments were carried out to identify three types of targets: humans, motorcycles and trucks. The sampled data within one frequency-modulation cycle form one sample, so each sample has size 1 × 500. The training and testing data sets are generated independently. In the training data set, the number of truck samples is 30, the number of motorcycle samples is 300, and the number of human samples is 3000. In the testing data set, the number of each type is 200.

3.2 Focal Loss Function Parameter

There is a tunable focusing parameter γ in the focal loss function, which is selected as follows. The role of γ is to greatly reduce the weight of simple samples so that the CNN focuses on training with difficult samples. The focal loss values under different values of γ are shown in Fig. 2, where p denotes the probability that the CNN assigns to the true class of the target.

Fig. 2. The focal loss values under different γ

Paper [12] regards samples with a prediction probability higher than 0.6 as simple samples. It can be seen from Fig. 2 that introducing the focusing parameter effectively reduces the loss of simple samples and makes the CNN focus on the recognition of difficult samples during training. This paper chooses γ = 2 as the focusing parameter value.

3.3 The Influence of Different Loss Functions on the Recognition Effect

In order to verify the effectiveness of the focal loss function, CNNs trained with the cross-entropy loss and with the focal loss are compared. Gaussian white noise is added to the data. In the experiment, the simulated target echo sampled at SNR = −4 dB is used as the network input, and the designed one-dimensional CNN is used as the recognition network. Figure 3 shows the recognition rate on the test set of the CNN with the two loss functions under different numbers of training iterations.

Fig. 3. CNN recognition rate of different loss functions

As can be seen from Fig. 3, the recognition rate using the focal loss function is significantly higher than that using the cross-entropy loss function. To show the recognition effect of the different loss functions more intuitively, the confusion matrices of the two loss functions are shown in Fig. 4, and the ROC curves and AUC values are shown in Fig. 5. In the confusion matrix, each column represents the true category of the target, and each row represents the recognition result of the one-dimensional CNN. The labels are trucks, motorcycles and people from top to bottom and from left to right.

Fig. 4. Confusion matrix: (a) cross-entropy loss; (b) focal loss

Fig. 5. ROC curve and AUC value: (a) cross-entropy loss; (b) focal loss

It can be seen from the confusion matrices in Fig. 4 and the ROC curves and AUC values in Fig. 5 that the CNN recognition algorithm based on the focal loss function is superior to the one based on the cross-entropy loss function in both the recognition rate and the AUC value for all target classes.

3.4 Recognition Effect of the Low-Resolution Radar Target Direct Recognition Algorithm Based on the Focal Loss Function

In order to further illustrate the effectiveness of the proposed algorithm and its superiority over traditional feature-based recognition methods, the proposed algorithm is compared with four other methods: (1) the WSVM proposed in Paper [8]; (2) a CNN based on the cross-entropy (CE) loss; (3) a CNN based on the class-balanced cross-entropy (CB-CE) loss; and (4) the algorithm proposed in Paper [10], which uses a CNN with SMOTE. In addition, to verify the robustness of the proposed method, recognition experiments were carried out under different SNR conditions by adding Gaussian white noise. The recognition effects of the different methods are shown in Table 2.

Table 2. Recognition effects of different methods (recognition rate, %)

Methods            SNR −10 dB   −8 dB   −6 dB   −4 dB
Paper [8]          58.07        57.29   61.46   59.90
CE + CNN           57.64        75.83   79.83   73.69
CB-CE + CNN        65.67        79.25   83.23   84.34
Paper [10]         58.33        62.00   59.33   67.66
Focal loss + CNN   66.02        87.07   85.00   84.45

As can be seen from Table 2, the recognition rate of the proposed method is the highest under all SNR conditions: it is at least 7.95% higher than that of the traditional WSVM-based recognition algorithm and at least 5.17% higher than that of the CNN recognition algorithm with cross-entropy loss. This demonstrates the effectiveness of the proposed method and its superiority over the traditional methods.

4 Conclusion

Aiming at the low recognition rate of traditional low-resolution radar target recognition methods under the condition of unbalanced samples, this paper proposes a direct target recognition algorithm for low-resolution radar based on focal loss. First, the deep, essential characteristics of the data are learned by a CNN; then the focal loss function makes the CNN focus on the difficult samples during training; finally, the loss is back-propagated to improve the recognition performance of the network. Simulation results show that the proposed algorithm has obvious advantages over traditional recognition methods, and that the focal loss function effectively improves the recognition of low-resolution radar targets by the CNN under unbalanced sample conditions.

References

1. Chen F, Liu HW, Du L et al (2010) Target classification with low-resolution radar based on dispersion situations of eigenvalue spectra. Sci China Ser F (Inf Sci) 53(7):1446–1460
2. Du L, Wang B, Li Y et al (2013) Robust classification scheme for airplane targets with low resolution radar based on EMD-CLEAN feature extraction method. IEEE Sens J 13(12):4648–4662
3. Junping S, Yi D (2007) Research on ship target auto-recognition technique for low resolution radar. In: International conference on radar. IEEE
4. Ding J, Liu H, Chen B, Feng B, Wang Y (2016) Application of similarity constrained deep confidence network in SAR image target recognition. J Electron Inf 38(01):97–103
5. Hu G, Wang KJ, Peng Y et al (2018) Deep learning methods for underwater target feature extraction and recognition. Comput Intell Neurosci (3):1–10
6. Tian Z, Zhan R, Hu J et al (2016) SAR image target recognition research based on convolutional neural network. J Radar 5(3):320–325
7. Fang C, Xue Z (2018) Signal classification method based on full bispectrum and convolutional neural network. Comput Appl Res 12:1–2
8. Rosten E, Porter R, Drummond T (2008) Faster and better: a machine learning approach to corner detection. IEEE Trans Pattern Anal Mach Intell 32(1):105–119
9. Kjeldsen TH (2000) A contextualized historical analysis of the Kuhn-Tucker theorem in nonlinear programming: the impact of World War II. Historia Math 27(4):331–361
10. White C, Ismail HD, Saigo H et al (2017) CNN-BLPred: a convolutional neural network based predictor for b-Lactamases (BL) and their classes. BMC Bioinf 18(S16)
11. Xie S, Tu Z (2015) Holistically-nested edge detection. Int J Comput Vision 125(1–3):3–18
12. Lin T-Y, Goyal P, Girshick R et al (2017) Focal loss for dense object detection. In: Proceedings of the 2017 IEEE international conference on computer vision. IEEE Computer Society, Washington, DC, pp 2999–3007

DFT-Spread Based PAPR Reduction of OFDM for Short Reach Communication Systems

Yupeng Li1,2(&), Yaqi Wang1,2, and Longwei Wang3

1 Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin 300387, China
[email protected]
2 College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin 300387, China
3 Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX 76019, USA

Abstract. In this paper, the discrete Fourier transform (DFT)-spread OFDM signal designed for an intensity-modulation/direct-detection (IM/DD) system is studied and its performance is investigated. The results show that the peak-to-average power ratio (PAPR) of OFDM can be reduced effectively by applying DFT-spread; as a result, strong nonlinearity tolerance is obtained. The measured BER meets the forward-error correction (FEC) limit requirement well.

Keywords: OFDM · DFT-spread · PAPR · Nonlinearity tolerance

1 Introduction

The capacity of short-reach optical communication systems urgently needs to be improved to meet the rapidly growing information traffic generated by applications such as high-definition television (HDTV), the internet of things (IOT), and fifth generation (5G) communication systems [1, 2]. Benefiting from low cost and good performance, the passive optical network (PON) [3, 4] has been widely adopted as a main solution for access networks. The time-/wavelength-division multiplexing (TWDM) scheme has been adopted in the standard of next generation PON stage 2 (NG-PON2), which has an expected 40 Gb/s capacity [5]. However, conventional OOK modulation, whose transmission speed remains at 10 Gb/s per wavelength, is still used in NG-PON2, so the spectral efficiency (SE) is relatively low. It is commonly agreed that the next-generation PON must offer an obvious speed improvement, which will certainly exceed 10 Gb/s per wavelength and is out of the reach of conventional OOK. High-SE modulation formats are therefore preferred for PON systems beyond NG-PON2, such as multi-level pulse amplitude modulation (PAM), high-order QAM and orthogonal frequency-division multiplexing (OFDM). By virtue of impressive advantages such as strong robustness to dispersion, high SE and simple equalization, OFDM has attracted a lot of interest from both industry and academia [6–9]. In addition, the capacity of OFDM is easy to improve by combining different advanced modulation formats, thanks to its transparency to modulation formats. OFDM-PON has therefore been regarded as a promising solution for future broadband access networks.

Several different OFDM schemes have been proposed. Generally, optical OFDM can be classified into coherent optical OFDM (CO-OFDM) and direct-detection optical OFDM (DDO-OFDM). CO-OFDM gains higher receiver sensitivity at the cost of a complex system configuration, hence it is more suitable for long-distance optical transmission. On the contrary, DDO-OFDM has a simple system configuration and low deployment cost and is therefore preferred for short-reach transmission.

One of the factors that restrict OFDM is its inherently high peak-to-average power ratio (PAPR). A high PAPR causes severe nonlinear distortion and degrades performance, especially for high-order QAM encoded signals. Moreover, high-resolution analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) are required. Several techniques for PAPR reduction have been proposed, such as coding, predistortion and probabilistic methods [10–12]. However, the existing schemes have drawbacks such as reduced spectral efficiency, degraded performance and increased complexity. In wireless communications, the uplink of the 4G mobile standard has adopted discrete Fourier transform (DFT)-spread OFDM [13]. In addition, DFT-spread has also been introduced into intensity-modulation/direct-detection (IM/DD) systems for the purpose of PAPR reduction [14, 15].

In this paper, the performance of conventional OFDM and DFT-spread OFDM is studied in an IM/DD system over 40-km standard single-mode fiber (SSMF). DFT-spread OFDM shows lower PAPR and stronger nonlinearity tolerance. In addition, DFT-spread OFDM performs well with low-bit-resolution DACs/ADCs. The transmission performance of 11.89 Gbit/s DFT-spread 16-QAM OFDM is demonstrated.

2 Theoretical Analysis

The DSP processes used to modulate and demodulate DFT-spread OFDM are shown in Fig. 1. Figure 1a shows the modulation process. First, the serial original data are rearranged into M parallel channels and mapped to complex QAM symbols. Second, all the mapped symbols are spread over M subcarriers by applying an M-point DFT. Third, a Hermitian conjugate symmetric structure is formed to obtain real-valued outputs from the N-point IFFT. A cyclic prefix (CP) is appended to avoid inter-symbol interference (ISI). Finally, the serial data are recovered from the parallel IFFT outputs and sent out.

Fig. 1. The DSP processes at a transmitter and b receiver side.

Figure 1b depicts the demodulation process. The DSP steps of CP removal, N-point FFT, channel equalization, M-point IDFT and QAM demapping are performed for data recovery. It is worth noting that all other processes are the same as in conventional OFDM except for the DFT and IDFT parts. Hence, it is easy to upgrade an existing conventional OFDM system without much modification.
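To make the transmitter-side processing concrete, the following NumPy sketch builds one DFT-spread OFDM symbol for an IM/DD system (16-QAM mapping, M-point DFT spreading, Hermitian symmetry, N-point IFFT, CP) and compares its PAPR with that of conventional OFDM; the M = 99 / N = 256 / CP = 32 parameters follow the simulation setup described later, but the code itself is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def qam16_map(bits):
    """Gray-mapped 16-QAM: 4 bits -> one complex symbol (unit average power)."""
    levels = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
    b = bits.reshape(-1, 4)
    sym = np.array([levels[(i, q)] + 1j * levels[(k, l)] for i, q, k, l in b])
    return sym / np.sqrt(10)

def ofdm_symbol(data_syms, n_fft=256, n_cp=32, dft_spread=True):
    """Build one real-valued (Hermitian-symmetric) OFDM symbol with cyclic prefix."""
    m = len(data_syms)                          # M data subcarriers (99 in the paper)
    x = np.fft.fft(data_syms) if dft_spread else data_syms   # M-point DFT spreading
    spec = np.zeros(n_fft, dtype=complex)
    spec[1:m + 1] = x                           # positive-frequency bins
    spec[-m:] = np.conj(x[::-1])                # Hermitian symmetry -> real IFFT output
    t = np.fft.ifft(spec).real
    return np.concatenate([t[-n_cp:], t])       # prepend cyclic prefix

def papr_db(x):
    """PAPR = peak power / average power, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 99 * 4)
syms = qam16_map(bits)
print("conventional OFDM PAPR: %.1f dB" % papr_db(ofdm_symbol(syms, dft_spread=False)))
print("DFT-spread  OFDM PAPR: %.1f dB" % papr_db(ofdm_symbol(syms, dft_spread=True)))
```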

3 Simulation Setup

Figure 2 shows the simulation setup of DFT-spread and conventional OFDM. Digital signal generation and demodulation are realized with MATLAB programs. The optical and electrical modules, as well as the SSMF links, are simulated with VPItransmissionMaker. The original data is a Pseudo-Random Binary Sequence (PRBS). The system sampling rate is Fs = 10 GSa/s. One OFDM symbol contains 128 subcarriers. However, because the DAC generates the signal with a zero-order-hold characteristic and creates an image above the Nyquist frequency (Fs/2), oversampling is needed and not all of the subcarriers can be used to carry data. In the simulation, data are carried on 99 subcarriers and mapped into 16-QAM. A 99-point DFT operation is implemented and the outputs are assigned to the corresponding subcarriers. The 27 subcarriers in the high-frequency band are filled with zeros, resulting in an oversampling rate of 1.28. A Hermitian conjugate symmetric structure is formed and the IFFT size is 256. To avoid ISI, a CP of 32 samples is added to each OFDM symbol. A group of 10 training symbols (TSs) is inserted every 64 OFDM frames for channel equalization. Therefore, an OFDM signal with a net data rate of 11.89 Gb/s is obtained. An 8-bit resolution DAC is used to convert the digital signal to an analog signal.

Fig. 2. Setup for simulation of the two schemes.

The OFDM band image generated by the DAC is filtered out with a low-pass filter (LPF). The LPF output drives a Mach-Zehnder modulator (MZM) for optical waveform generation. A 40 km SSMF without dispersion compensation is used as the transmission link. The fiber parameters are set as follows: the attenuation, dispersion, nonlinear index and effective core area of the fiber are 0.2 dB/km, 16 ps/nm/km, 2.6 × 10−20 m2/W and 80 × 10−12 m2, respectively. The launch power into the fiber is controlled by a variable optical attenuator (VOA) and an Erbium-doped fiber amplifier (EDFA). The received optical power (ROP) at the receiver side is controlled by another VOA. The received optical signal is converted to an electrical signal by a photodiode (PD) with direct detection. An 8-bit resolution ADC samples the PD output, and the sampled signal is processed with the designed MATLAB demodulation algorithm.

4 Results and Discussions

For a fair comparison, both DFT-spread and conventional OFDM modulation use the same original data. The PAPR is evaluated first. For the conventional OFDM signal the PAPR is 12.9 dB, while for DFT-spread OFDM the PAPR is reduced to 10.3 dB, which confirms that DFT-spread is an effective way to reduce the PAPR. For the receiver sensitivity investigation, the launch power into the fiber is fixed at 8 dBm, and the ROP is swept from −10 dBm to 0 dBm by adjusting the VOA at the receiver side.

Fig. 3. Simulation results with different ROP.

The Q value and BER performance at different ROP are shown in Fig. 3. The blue line represents conventional OFDM and the red line represents DFT-spread OFDM. Figure 3 shows that DFT-spread OFDM achieves better receiver sensitivity than conventional OFDM. Error-free operation is obtained when the ROP of the DFT-spread OFDM system is larger than −5 dBm, while for conventional OFDM the ROP must be larger than −2 dBm to be error free. At the FEC limit, about 2.5 dB of receiver sensitivity improvement is obtained by employing DFT-spread, which means that DFT-spread OFDM is more suitable for long-distance transmission. The fiber nonlinearity effect appears and degrades the system performance when the launch power into the fiber is very large, and DFT-spread OFDM is expected to have better nonlinearity tolerance because of its lower PAPR. For the nonlinearity tolerance investigation, the ROP is fixed at −2 dBm and the launch power is swept from 6 to 22 dBm by adjusting the VOA and EDFA at the transmitter side.

Fig. 4. Simulation results with different launch power.


The Q value and BER performance at different launch powers are shown in Fig. 4. The results show that DFT-spread OFDM always outperforms conventional OFDM. At low launch power, a 3 dB Q-value gain is obtained. At a launch power of 14 dBm, the system performance begins to deteriorate obviously. Errors appear for the conventional OFDM system when the launch power exceeds 17 dBm, while for DFT-spread OFDM errors only begin to appear when the launch power exceeds 19 dBm. DFT-spread OFDM therefore shows better nonlinearity tolerance. At the FEC limit, a 1 dB nonlinearity tolerance improvement is obtained for DFT-spread OFDM. The results prove that DFT-spread provides stronger nonlinearity tolerance and that larger launch power is acceptable, which is helpful for systems with massive numbers of users. The limited bit resolution of the DAC and ADC also influences the system performance: low bit resolution causes nonlinear quantization distortion and severely degrades the system, whereas extra-high bit resolution increases the system cost as well as the computational complexity. As a result, it is necessary to find a reasonable bit resolution by considering both factors. The system performance with various DAC/ADC bit resolutions is therefore investigated.

Fig. 5. Simulation results with different bit-resolution.

The Q value and BER performance with different bit resolutions are shown in Fig. 5. The launch power is fixed at 19 dBm and the ROP at −7 dBm, and the bit resolution varies from 4-bit to 12-bit. As shown in Fig. 5, when the bit resolution is higher than 5-bit, the FEC limit requirement is met. This proves that DFT-spread OFDM is able to resist the nonlinear quantization distortion caused by the DAC/ADC, which helps to reduce the system cost. The system performs quite well with an 8-bit resolution, and the performance does not improve significantly with further increases in bit resolution. Therefore, 8-bit resolution is the most reasonable choice for a practical system.


5 Conclusion

In this paper, the performance of the DFT-spread 16-QAM OFDM signal is investigated. DFT-spread OFDM performs well in reducing the PAPR, and the nonlinearity tolerance is improved. Furthermore, the receiver sensitivity of DFT-spread OFDM is improved. Moreover, DFT-spread OFDM shows strong robustness to the low-resolution quantization distortion of the DAC/ADC. The results show that DFT-spread OFDM is suitable for short-reach optical communication systems and can be regarded as a promising candidate for systems beyond NG-PON2.

Acknowledgements. This work is supported by the Natural Science Foundation of China under Grant 61901301 and the Natural Science Foundation of Tianjin under Grant 18JCQNJC70900.

References

1. Chagnon M (2019) Optical communications for short reach. J Lightwave Technol 37(8):1779–1797
2. Wey JS, Zhang J et al (2019) Passive optical networks for 5G transport: technology and standards. J Lightwave Technol 37(12):2830–2837
3. Altabas JA, Valdecasa GS et al (2019) Real-time 10 Gbps polarization independent quasicoherent receiver for NG-PON2 access networks. J Lightwave Technol 37(2):651–656
4. Yu B, Guo C et al (2018) 150-Gb/s SEFDM IM/DD transmission using log-MAP viterbi decoding for short reach optical links. Opt Express 26(24):31075–31084
5. Luo Y, Zhou X et al (2013) Time- and wavelength-division multiplexed passive optical network (TWDM-PON) for next-generation PON stage 2 (NG-PON2). J Lightwave Technol 31(4):587–593
6. Jansen SL, Morita I, Tanaka H (2007) 10-Gb/s OFDM with conventional DFB lasers. In: ECOC
7. Li Y, Ding D (2018) Spectrum efficiency improvement for quasi-constant envelope OFDM. Photonics Technol Lett 30(15):1392–1395
8. Du X, Zhang J et al (2016) Efficient joint timing and frequency synchronization algorithm for coherent optical OFDM systems. Opt Express 24(17):19969–19977
9. Giddings R (2014) Real-time digital signal processing for optical OFDM-based future optical access networks. J Lightwave Technol 32(4):553–569
10. Armstrong J (2002) Peak-to-average power reduction for OFDM by repeated clipping and frequency domain filtering. Electron Lett 38(5):246–247
11. Davis JA, Jedwab J (1997) Peak-to-mean power control and error correction for OFDM transmission using Golay sequences and Reed-Muller codes. Electron Lett 33(4):267–268
12. Wang J, Guo Y et al (2009) PTS-clipping method to reduce the PAPR in ROF-OFDM system. IEEE Trans Consum Electron 55(2):356–359
13. Myung HG, Lim J, Goodman DJ (2006) Peak-to-average power ratio of single carrier FDMA signals with pulse shaping. In: IEEE 17th international symposium on personal, indoor and mobile radio communications, Sep 11–14, pp 1–5
14. Li F, Li X et al (2015) Transmission of 100-Gb/s VSB DFT-spread DMT signal in short-reach optical communication systems. IEEE Photonics J 7(5):7904307
15. Tang Y, Shieh W et al (2010) DFT-spread OFDM for fiber nonlinearity mitigation. IEEE Photonics Technol Lett 22(16):1250–1253

Underdetermined Mixed Matrix Estimation of Single Source Point Detection Based on Noise Threshold Eigenvalue Decomposition

Miao Wang(&), Xiao-xia Cai, and Ke-fan Zhu

Electronic Countermeasure Institute of National University of Defense Technology, Hefei 230037, China
[email protected]

Abstract. To address the low signal recovery accuracy of underdetermined blind source separation, the mixing matrix estimation algorithm is improved by exploiting the sparsity of signals in the time-frequency domain. A single-source-point detection algorithm based on eigenvalue decomposition with a noise threshold is applied to matrix estimation in place of the traditional detection algorithm based on the real and imaginary parts in the time-frequency domain; this links the signal with the noise and improves the anti-noise performance of the algorithm. The k-means algorithm is then used to estimate the mixing matrix. Experiments show that, under the same conditions, the improved algorithm is more accurate than the traditional algorithm, which is beneficial for subsequent signal separation.

Keywords: Underdetermined blind source separation · Noise threshold detection · Eigenvalue decomposition · k-means clustering

1 Introduction

Blind Source Separation (BSS) addresses the problem of separating source signals according to the characteristics of the received signal, without knowing the source signals and without making any a priori assumptions about the parameters of the unknown receiving system. Blind source separation has attracted experts and scholars in many fields: on the one hand, they explore the basic theory and fundamental problems of blind separation algorithms; on the other hand, owing to its distinctive mathematical model, blind source separation is widely used as an important technology in digital communication, robot navigation, biomedical engineering, ship engineering, speech processing and image processing [1].

Underdetermined Blind Source Separation (UBSS) is a difficult point in current signal separation research. To solve the underdetermined problem, sparse component analysis (SCA) algorithms based on the "two-step method" [2–4] have been proposed. The first step estimates the mixing matrix by clustering [5], potential-function methods and so on; the second step recovers the source signals with the shortest path method [6], greedy reconstruction [7] and other algorithms. The first step, the estimation of the mixing matrix, determines the accuracy of the recovered signals. However, for signals that are not very sparse, the traditional "two-step method" performs poorly. Subsequently, Abrard [8] proposed a time-frequency ratio method for underdetermined blind separation, and Kim [9] proposed a single-source-signal detection method to improve source sparsity and the accuracy of mixing matrix estimation.

Aiming at the low accuracy of traditional mixing matrix estimation, this paper proposes an improved algorithm that uses an eigenvalue decomposition algorithm based on noise threshold detection instead of the traditional time-frequency single-source-point detection algorithm, improving the accuracy of mixing matrix estimation. After the time-frequency single source points are detected, the k-means clustering method is used to obtain the estimate of the mixing matrix.

2 Underdetermined Mixed Signal Model

The underdetermined mixing system model is shown in Fig. 1.

Fig. 1. Underdetermined mixing system

Here, s_n(t) (n = 1, ..., N) are the N original signals, r_m(t) (m = 1, ..., M) are the M mixed signals (M < N), and the mixing steering vectors a_{mn} form the M × N matrix A, which is the mixing matrix to be estimated. When N signals (M < N) are received with a uniform array of M antennas, and the incident angle of the n-th (n = 1, 2, ..., N) signal on the array is θ_n, the signal sensed by the m-th (m = 0, 1, ..., M−1) array element is

$$x_m(t) = \sum_{n=1}^{N} s_n(t) \exp(jm\theta_n), \quad m = 0, 1, \ldots, M-1 \qquad (1)$$

Considering noise, the total observed signal model can be expressed as

$$r_m(t) = x_m(t) + n_m(t) = \sum_{n=1}^{N} s_n(t) \exp(jm\theta_n) + n_m(t) \qquad (2)$$

Written in matrix form:

$$\begin{bmatrix} r_1(t) \\ r_2(t) \\ \vdots \\ r_M(t) \end{bmatrix} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ \exp(j\theta_1) & \exp(j\theta_2) & \cdots & \exp(j\theta_N) \\ \vdots & \vdots & & \vdots \\ \exp(j(M-1)\theta_1) & \exp(j(M-1)\theta_2) & \cdots & \exp(j(M-1)\theta_N) \end{bmatrix} \begin{bmatrix} s_1(t) \\ s_2(t) \\ \vdots \\ s_N(t) \end{bmatrix} + \begin{bmatrix} n_1(t) \\ n_2(t) \\ \vdots \\ n_M(t) \end{bmatrix} = A \cdot s + N \qquad (3)$$

Underdetermined mixing matrix estimation, that is, estimating the mixing structure of the signals using only the received data and without any knowledge of the sources, is the core of blind source separation, and its accuracy determines the similarity between the separated signals and the original signals.
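A small NumPy sketch of the model in Eqs. (1)–(3): it builds the steering-vector mixing matrix A for a uniform array and generates noisy observations r = A s + n. The five incident angles and the 3-element array match the simulation settings reported in Sect. 4, while the source waveforms and the complex Gaussian noise model are placeholders.

```python
import numpy as np

def steering_matrix(thetas, M):
    """Mixing matrix A of Eq. (3): A[m, n] = exp(j * m * theta_n), m = 0..M-1."""
    m = np.arange(M)[:, None]
    return np.exp(1j * m * np.asarray(thetas)[None, :])

def mix(sources, thetas, M, snr_db=10.0, rng=None):
    """Generate observations r = A s + n of Eq. (3) with complex Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    A = steering_matrix(thetas, M)
    x = A @ sources                                   # noiseless mixtures (Eq. (2))
    p_sig = np.mean(np.abs(x) ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)
    n = np.sqrt(p_noise / 2) * (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))
    return x + n, A

# placeholder sources: N = 5 narrowband signals, M = 3 antennas (as in Sect. 4)
rng = np.random.default_rng(0)
T = 1500
s = rng.standard_normal((5, T)) * np.exp(1j * 2 * np.pi * rng.random((5, 1)) * np.arange(T) / T)
thetas = np.deg2rad([5, 31, 55, 66, 87])
r, A_true = mix(s, thetas, M=3, snr_db=10.0, rng=rng)
print(r.shape)   # (3, 1500)
```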

3 Algorithm Principle

3.1 Traditional Algorithm Principle

Since the signal is sparse in the time-frequency domain, single-source data pairs (t, f) are extracted from the signal r(t) in the time-frequency domain. Equation (2) can then be written as

$$R_m(t, f) = a_{mn} s_n(t, f) + n_m(t, f) \qquad (4)$$

The clustering algorithm can only operate on real numbers, while both the observed signal and the mixing matrix are complex. Therefore, taking the real and imaginary parts and letting $a_{mn} = R_{m,n} + jI_{m,n}$:

$$\mathrm{Re}(R_m(t,f)) + j\,\mathrm{Im}(R_m(t,f)) = (R_{m,n} + jI_{m,n})\left[\mathrm{Re}(S_n(t,f)) + j\,\mathrm{Im}(S_n(t,f))\right] + N_m(t,f) \qquad (5)$$

Equating the real and imaginary parts gives

$$\begin{cases} \mathrm{Re}(R_m(t,f)) = R_{m,n}\,\mathrm{Re}(S_n(t,f)) - I_{m,n}\,\mathrm{Im}(S_n(t,f)) + N_m(t,f) \\ \mathrm{Im}(R_m(t,f)) = R_{m,n}\,\mathrm{Im}(S_n(t,f)) + I_{m,n}\,\mathrm{Re}(S_n(t,f)) + N_m(t,f) \end{cases} \qquad (6)$$

Letting m = 1, and knowing from Eq. (6) that $R_{1,n} = 1$ and $I_{1,n} = 0$, we get

$$\begin{cases} \mathrm{Re}(R_1(t,f)) = \mathrm{Re}(S_n(t,f)) + N_1(t,f) \\ \mathrm{Im}(R_1(t,f)) = \mathrm{Im}(S_n(t,f)) + N_1(t,f) \end{cases} \qquad (7)$$

Since $R_1(t,f)$ is known, substituting Eq. (7) yields

$$\begin{cases} \mathrm{Re}(R_m(t,f)) = R_{m,n}\,\mathrm{Re}(R_1(t,f)) - I_{m,n}\,\mathrm{Im}(R_1(t,f)) + N_m(t,f) \\ \mathrm{Im}(R_m(t,f)) = R_{m,n}\,\mathrm{Im}(R_1(t,f)) + I_{m,n}\,\mathrm{Re}(R_1(t,f)) + N_m(t,f) \end{cases} \qquad (8)$$

Obviously, in the above formula $R_m(t,f)$ and $N_m(t,f)$ are known, so the mixing-matrix entries $R_{m,n}$ and $I_{m,n}$ can be obtained by writing Eq. (8) in matrix form:

$$\begin{bmatrix} R_{m,n} \\ I_{m,n} \end{bmatrix} = \begin{bmatrix} \mathrm{Re}(R_1(t,f)) & -\mathrm{Im}(R_1(t,f)) \\ \mathrm{Im}(R_1(t,f)) & \mathrm{Re}(R_1(t,f)) \end{bmatrix}^{-1} \begin{bmatrix} \mathrm{Re}(R_m(t,f)) \\ \mathrm{Im}(R_m(t,f)) \end{bmatrix} \qquad (9)$$

Taking $[R_{m,n}, I_{m,n}]$ as the clustering object, all extracted data pairs are clustered to obtain the best cluster-center pairs $[\hat{R}_{m,n}, \hat{I}_{m,n}]$, which give the estimate of the mixing matrix. In this way, the mixing matrix is obtained by "single source point detection + clustering".

3.2 Improved Algorithm in This Paper

The traditional single-source-point detection algorithm has poor anti-noise performance and high computational complexity, which leads to low accuracy of the mixing matrix estimate and makes subsequent signal separation difficult. This paper therefore proposes an improved mixing matrix estimation algorithm based on "noise-threshold eigenvalue decomposition + clustering". Equation (4) becomes

$$R_m(t, f) = A_{mn} s_n(t, f) + n(t, f) \qquad (10)$$

First, each single-source point (t, f) is de-meaned:

$$\bar{R}_m(t, f) = R_m(t, f) - E\left[R_m(t, f)\right] \qquad (11)$$

Then, the covariance matrix of the observation $\bar{R}_m(t, f)$ is computed:

$$R_{\bar{R}_m(t,f)} = E\left[\bar{R}_m(t,f)\,\bar{R}_m(t,f)^H\right] = E\left[\left(A_{mn} s_n(t,f) + n(t,f)\right)\left(A_{mn} s_n(t,f) + n(t,f)\right)^H\right] = A_{mn}\, R_s\, A_{mn}^H + R_n \qquad (12)$$

where $A_{mn} = [a_{m1}, a_{m2}, \ldots, a_{mn}]$, $R_s$ is the source covariance and $R_n$ the noise covariance.

Here $(\cdot)^H$ denotes the conjugate transpose; the covariance matrix of $\bar{R}_m(t,f)$ is an $M \times M$ matrix, where $M$ is the number of receiving antennas. Eigenvalue decomposition is then performed to obtain a unitary matrix $U$ and a diagonal matrix $\Lambda$ whose eigenvalues are arranged in descending order:

$$R_{\bar{R}_m(t,f)} = U \Lambda U^H = [u_1, u_2, \ldots, u_M] \begin{bmatrix} \sigma_1^2 & & & \\ & \sigma_2^2 & & \\ & & \ddots & \\ & & & \sigma_M^2 \end{bmatrix} \begin{bmatrix} u_1^H \\ u_2^H \\ \vdots \\ u_M^H \end{bmatrix} \qquad (13)$$

If $s_n(t, f)$ is a time-frequency single source point, then in the noise-free case $\sigma_1^2 \neq 0$ and $\sigma_2^2 = \cdots = \sigma_M^2 = 0$. Comparing the covariance expressions at such a point:

$$R_{\bar{R}_m(t,f)} = a_{mn}\, s_n^2\, a_{mn}^H, \qquad R_{\bar{R}_m(t,f)} = u_1\, \sigma_1^2\, u_1^H \qquad (14)$$

In the presence of noise, a noise threshold is introduced to relax the detection condition:

$$\frac{\sigma_1^2}{\sum_{j=1}^{M} \sigma_j^2} \ge \beta \qquad (15)$$

Here β is the noise threshold [10], which can take values between 0 and 1. Setting the noise threshold links the observed signal with the noise, improving the anti-noise performance of the algorithm, and β reflects the signal-to-noise ratio characteristics well. Combined with the clustering algorithm, the single-source-point detection accuracy can be improved.
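An illustrative NumPy sketch of the improved detection step: for each time-frequency point it forms the sample covariance over a small neighborhood, performs eigenvalue decomposition, and keeps the point if the dominant-eigenvalue ratio exceeds the threshold β; the retained dominant eigenvectors (which point along the mixing columns) can then be clustered, e.g. with k-means, to estimate the mixing matrix. The neighborhood size, β value and random stand-in data are assumptions, and the k-means step is only indicated.

```python
import numpy as np

def detect_single_source_points(R_tf, beta=0.8, win=3):
    """Single-source-point detection via noise-threshold eigenvalue decomposition.

    R_tf: (M, T, F) complex STFTs of the M observed channels
    Returns the dominant eigenvectors u1 of all (t, f) points whose largest
    eigenvalue carries at least a fraction beta of the total power (Eq. (15)).
    """
    M, T, F = R_tf.shape
    keep = []
    for t in range(0, T - win):
        for f in range(0, F - win):
            block = R_tf[:, t:t + win, f:f + win].reshape(M, -1)
            block = block - block.mean(axis=1, keepdims=True)      # de-mean, Eq. (11)
            cov = block @ block.conj().T / block.shape[1]          # covariance, Eq. (12)
            w, v = np.linalg.eigh(cov)                             # ascending eigenvalues
            if w[-1] / w.sum() >= beta:                            # threshold test, Eq. (15)
                u1 = v[:, -1]
                keep.append(u1 / u1[0])                            # normalize to first sensor
    return np.array(keep)

# toy usage with random data standing in for real STFTs (3 channels)
rng = np.random.default_rng(0)
R_tf = rng.standard_normal((3, 64, 64)) + 1j * rng.standard_normal((3, 64, 64))
ssp = detect_single_source_points(R_tf, beta=0.8)
print(ssp.shape)   # candidate mixing-column estimates, to be clustered by k-means
```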

4 Simulation

4.1 Evaluation Criteria

The Normalized Mean Square Error (NMSE) is used to measure the accuracy of the mixing matrix estimate:

$$\mathrm{NMSE} = 10 \log_{10}\left( \frac{\sum_{m=1}^{M}\sum_{n=1}^{N} (\hat{a}_{mn} - a_{mn})^2}{\sum_{m=1}^{M}\sum_{n=1}^{N} a_{mn}^2} \right) \qquad (16)$$

In the formula, M is the number of rows of A, N is the number of columns of A, and $\hat{a}_{mn}$ is the estimated mixing-matrix element. The smaller the NMSE, the more accurate the mixing matrix estimate.
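A short sketch of the NMSE metric; since the mixing matrix is complex, the squared magnitude of the element-wise error is used here, which is an assumption about how Eq. (16) is applied to complex entries.

```python
import numpy as np

def nmse_db(A_hat, A):
    """NMSE (dB) between an estimated and a true mixing matrix (Eq. (16))."""
    num = np.sum(np.abs(A_hat - A) ** 2)
    den = np.sum(np.abs(A) ** 2)
    return 10 * np.log10(num / den)

# toy usage: a small perturbation of a 3x5 complex matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
A_hat = A + 0.02 * (rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5)))
print(nmse_db(A_hat, A))   # strongly negative values mean an accurate estimate
```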

4.2 Experimental Verification

Experiment 1: Mixing matrix estimation. A signal mixing model is set up with five independent signal sources received by a uniform antenna array with three elements. The incident angles are [5°, 31°, 55°, 66°, 87°], the sampling rate is fs = 1500 Hz, and the signal-to-noise ratio is 10 dB. The theoretical value of the mixing matrix is:

1 A ¼ 4 0:9627  0:2704i 0:8538  0:5207i

1 0:0472  0:9989i 0:9955 þ 0:0943i

1 0:8429  0:5381i 0:4210 þ 0:9071i

1 1  0:0043i 1 þ 0:0086i

3 1 0:9633  0:2683i 5 0:8561 þ 0:5196i

The time domain diagram of the original signal and the mixed signal is shown in Fig. 2.

Fig. 2. Signal time domain diagram

Using the algorithm of this paper, the estimated value of the mixed matrix is: 2

1 1 ^ ¼ 4 0:8035  0:2312i 0:9829  0:0996i A 0:5921  0:3715i 0:9561 þ 0:1958i

1 0:9799  0:0588i 0:9568  0:1153i

3 1 1 0:9829  0:1514i 00:8035 þ 0:2312i 5 0:9431 þ 0:2977i 0:5921 þ 0:3715i

The normalized mean square error of the calculated real matrix and the estimated matrix is: NMSE = −33.0045 dB. The analysis results show that under the condition of 10 dB SNR, the normalized mean square error of the proposed algorithm can reach −33 dB, and the hybrid matrix estimation accuracy is higher. Experiment 2 Algorithm performance varies with signal to noise ratio. In order to compare the anti-noise performance of the improved algorithm and the traditional algorithm, it is assumed that the signal-to-noise ratio varies from −10 dB to 20. The results of the normalized mean square error are as shown in Fig. 3.

710

M. Wang et al.

Fig. 3. Algorithm performance comparison

The analysis results show that under the same SNR, the proposed algorithm has lower normalized mean square error than the traditional algorithm. Under the condition of low SNR, the improved algorithm outperforms the traditional algorithm; and with the SNR The larger the normalized error gap between the two algorithms, the better the anti-noise performance of the proposed algorithm.

5 Conclusion Compared with the traditional algorithm, the paper uses SNR eigenvalue decomposition instead of the traditional single source point detection algorithm. The experimental results show that the accuracy of the hybrid matrix estimation is improved under the same SNR, especially at low SNR. In the case, the algorithm still has better performance, which solves the under-determined hybrid matrix estimation problem well and lays a foundation for the subsequent source signal recovery.

References 1. Comon P, Jutten C (2010) Handbook of blind source separation: independent component analysis and applications. Academic Press 2. Fadili JM, Starck JL, Bobin J et al (2010) Image decomposition and separation using sparse representations. Process IEEE 98(6):983–994 3. Xu T, Wang WA (2009) compressed sensing approach for underdetermined blind audio source separation with sparse representation. Stat Signal Process 493–496

Underdetermined Mixed Matrix Estimation of Single Source

711

4. Sun J, Li Y, Wen J et al (2016) Novel mixing matrix estimation approach in underdetermined blind source separation. Neurocomputing 173(P3):623–632 5. Bai L, Cheng X, Liang J et al (2017) Fast density clustering strategies based on the k-means algorithm. Pattern Recognit 71(11):375–386 6. Zhang W (2016) Study on hybrid matrix estimation and source signal recovery method for underdetermined blind source separation. Yanshan University 7. Yang M, de Hoog F (2015) Orthogonal Matching Pursuit With Thresholding and its Application in Compressive Sensing. IEEE Trans Signal Process 63(20):5479–5486 8. Abrard F, Deville YA (2005) time-frequency blind signal separation method applicable to underdetermined mixtures of dependent sources. Signal Process 85(7):1389–1403 9. Kim SG, Yoo CD (2009) Underdetermined blind source separation based on subspace representation. IEEE Trans Signal Process 57(7):2604–2614 10. Ruan G, Guo Q, Gao J (2018) Novel underdetermined blind source separation algorithm based on compressed sensing and K-SVD. Trans Emerg Tel Tech 29(9):3427–3440

Optimization of APTEEN Routing Protocol for Wireless Sensor Networks Based on Genetic Algorithm

Minghao Wang, Shubin Wang(✉), and Bowen Zhang

College of Electronic Information Engineering, Inner Mongolia University, Hohhot, China
[email protected]

Abstract. The APTEEN routing protocol has been widely used because of its practicality, but it suffers from uneven network energy consumption, premature death of some nodes and low effective coverage of the whole network. To solve these problems, this paper uses a genetic algorithm to optimize the APTEEN routing protocol. Residual energy, the distance from the node to the base station, the distance from the node to the geometric center of the whole network, node degree and other selection factors are added to cluster head selection; the genetic algorithm performs the first cluster head selection, and a second selection is then carried out with a density adaptive algorithm. Some nodes are put to sleep according to their position and degree. The residual energy of the cluster head, the distance between node and cluster head, and the number of cluster members are taken into account when nodes join clusters. The GA-APTEEN routing protocol is obtained through the above optimizations. Simulation results show that GA-APTEEN improves the lifetime, coverage and robustness of the network, reduces the energy consumption of the overall network and avoids the phenomenon of energy hot zones.

Keywords: Genetic algorithm · Density adaptive algorithm · Optimal selection of cluster heads · Lifetime · Coverage

1 Introduction

Wireless sensor networks (WSNs) are composed of many sensor nodes. WSNs have been widely used in military, marine and other fields because of their excellent monitoring performance, and routing protocols are used for WSN system control. Currently there are many well-known routing protocols, such as the SPEED, GEAR, GAF and LEACH protocols [1]. However, sensor nodes cannot replenish their energy, so an efficient and energy-saving routing protocol that improves network lifetime and system robustness is a major goal for WSNs. A popular clustering routing protocol is the Adaptive Threshold-sensitive Energy Efficient Sensor Network Protocol (APTEEN).

The APTEEN routing protocol chooses cluster head nodes randomly in rounds. Each node generates a random number between 0 and 1 in each round; if the generated number is less than the preset threshold T(n) of the protocol, the node is selected as a cluster head for data transmission. On this basis, APTEEN defines hard and soft thresholds to reduce unnecessary data transmission, so it can not only collect data regularly but also respond quickly to sudden events. Since the distribution of sensor nodes is largely irregular, APTEEN has the following disadvantages: the protocol cannot choose the best cluster heads, some high-energy nodes are not fully utilized, and energy consumption between clusters is uneven; when large amounts of data must be transmitted in an emergency, energy hot zones easily form, which leads to premature node death; there is no good sleep mechanism for densely distributed nodes; and the coverage of the cluster heads is too low.

In view of the above shortcomings, APTEEN has been improved considerably at home and abroad. Many protocols optimize the cluster head selection method of APTEEN, mainly by constraining cluster head selection in terms of energy and location [1–3], so as to select optimal cluster heads, improve the performance of the whole network and reduce its energy consumption [2]. However, because few factors are considered in the selection, excellent nodes may not be chosen as cluster heads and the energy consumption between clusters cannot be balanced, leading to premature node death and low coverage. To solve these problems, this paper introduces a genetic algorithm that selects cluster heads according to multiple node factors, and introduces node dormancy and multi-factor clustering mechanisms to prolong the lifetime and reduce the phenomenon of energy hot zones.

2 Related Work

2.1 Energy Consumption Model

The energy consumption of the APTEEN routing protocol mainly comes from transmitting and receiving data. The energy consumption model of the system is shown in formula (1) [4]. The energy consumption is mainly determined by the data transmission distance d and the amount of data l; when the transmission distance is within the range $d_o$, a large amount of energy can be saved. In the formula, $d_o = \sqrt{n_{fs}/n_{mp}}$, where $n_{fs}$ and $n_{mp}$ are the power amplification coefficients of the free-space channel and the multipath fading channel, respectively.

$$E_{tx}(l,d) = E_{tx\text{-}elec}(l) + E_{tx\text{-}mp}(l,d) = \begin{cases} E_{elec} \cdot l + n_{fs} \cdot l \cdot d^{2}, & d \le d_o \\ E_{elec} \cdot l + n_{mp} \cdot l \cdot d^{4}, & d > d_o \end{cases} \quad (1)$$
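A minimal sketch of the energy model in formula (1) is given below; the coefficient values are illustrative placeholders for a first-order radio model rather than values taken from this paper, and the function names are hypothetical:

```python
import math

E_ELEC = 50e-9       # J/bit, electronics energy per bit (illustrative)
N_FS = 10e-12        # J/(bit*m^2), free-space amplifier coefficient (illustrative)
N_MP = 0.0013e-12    # J/(bit*m^4), multipath amplifier coefficient (illustrative)
D_O = math.sqrt(N_FS / N_MP)   # distance break-point d_o

def tx_energy(l_bits, d):
    """Energy to transmit l bits over distance d, following formula (1)."""
    if d <= D_O:
        return E_ELEC * l_bits + N_FS * l_bits * d ** 2
    return E_ELEC * l_bits + N_MP * l_bits * d ** 4

def rx_energy(l_bits):
    """Energy to receive l bits (electronics only)."""
    return E_ELEC * l_bits

print(tx_energy(4000, 30.0), rx_energy(4000))
```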

2.2 Genetic Algorithm

The genetic algorithm is an optimization method inspired by biological evolution and natural selection. The population in the algorithm is a set of candidate solutions to the problem to be solved, and each chromosome is a member of the population. A chromosome is represented by a binary code, and each binary digit represents a gene segment. Selection, crossover, mutation and other operations are used to optimize the problem, and roulette-wheel selection is used to find the optimal solution [5].

2.3 Density Adaptive Algorithm

Because nodes in WSNs are randomly distributed, the density adaptive algorithm chooses regions of high node degree as clusters according to the node degrees in the network structure [6, 7]. This ensures effective coverage of the whole network, shortens transmission distances, and reduces the number of disconnected nodes and the energy consumption.

3 GA-APTEEN Optimization Protocol

Aiming at the deficiencies of the current research, the system mainly optimizes the selection of cluster heads, the overall coverage of the system, the dormancy of nodes and the clustering of common nodes. The Application of Genetic Algorithms to the Adaptive Threshold-sensitive Energy Efficient Sensor Network Protocol (GA-APTEEN) is obtained by these methods. The specific idea is shown in Fig. 1.

Fig. 1. Overall flow chart of the optimization protocol

3.1 Cluster Heads Optimization

Firstly, the optimization scheme selects the first cluster heads by four factors. The purpose of the first cluster head selection is to determine the approximate location of all clusters in the system. Since the energy consumption of a cluster head is much higher than that of a common node, the more residual energy a node has, the more likely it is to become a cluster head. Secondly, the nearer a cluster head is to the base station, the shorter the distance over which the data in the cluster must be forwarded, the lower the energy consumed for transmitting information, and the lower the probability of data interference. Cluster heads near the base station are also responsible for forwarding the data of cluster heads far away from the base station, so increasing the number of clusters near the base station shares the forwarding burden of the remote cluster heads. The cluster head should be close to the geometric center in order to reduce the probability of edge clustering and improve the effective coverage area of each cluster. The higher the degree of a node, the more neighbors it has; after clustering, the coverage rate increases, the number of disconnected nodes decreases, and the average distance of data transmission within the cluster also decreases. According to formula (1), the shorter the transmission distance, the less energy the nodes consume. The improved formulas (2) and (3) are obtained from the above considerations.

$$T(n) = \begin{cases} \dfrac{p}{1 - p\left(r \bmod \frac{1}{p}\right)} \cdot temp_1, & n \in G \\ 0, & n \notin G \end{cases} \quad (2)$$

$$temp_1 = a \cdot \dfrac{E_{ini} - E_{res}}{E_{ini}} + b \cdot \dfrac{D1_i}{D1_{max}} + c \cdot \dfrac{D2_i}{D2_{max}} + d \cdot \dfrac{n - N_{nei}}{n} \quad (3)$$

Here temp_1 is given by formula (3), which consists of the ratio of the consumed energy to the initial energy of the node, the ratio of the distance from the node to the base station to the distance from the farthest node to the base station, the ratio of the distance from the node to the geometric center of the model to the distance from the farthest node to the geometric center, and the ratio of the number of nodes not covered by the current node to the total number of nodes. E_ini represents the initial energy of the node and E_res its residual energy; all distances in the formula are Euclidean distances; N_nei is the number of nodes that the node can cover as a cluster head. The sum of a, b, c and d should be 1, and the specific proportions can be adjusted according to actual needs.
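The sketch below illustrates how the selection factor of formula (3) could be evaluated for one candidate node; the node representation and weight values are hypothetical, chosen only for the example:

```python
def temp1(node, d1_max, d2_max, total_nodes, a=0.25, b=0.25, c=0.25, d=0.25):
    """Selection factor of formula (3) for a candidate cluster head.
    `node` is a dict with keys e_ini, e_res, d_to_bs, d_to_center, n_cover;
    the weights a, b, c, d must sum to 1."""
    energy_term   = (node["e_ini"] - node["e_res"]) / node["e_ini"]
    bs_term       = node["d_to_bs"] / d1_max
    center_term   = node["d_to_center"] / d2_max
    coverage_term = (total_nodes - node["n_cover"]) / total_nodes
    return a * energy_term + b * bs_term + c * center_term + d * coverage_term

# Example: a node with plenty of residual energy, fairly close to the base station.
node = {"e_ini": 0.125, "e_res": 0.10, "d_to_bs": 40.0, "d_to_center": 20.0, "n_cover": 12}
print(temp1(node, d1_max=110.0, d2_max=70.0, total_nodes=100))
```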

3.2 Genetic Optimization Algorithm

The system uses the genetic algorithm by treating each node as a chromosome. The fitness of each chromosome is the multi-factor value temp_1. Using crossover, mutation and roulette-wheel selection, the node with the best factor value is selected and used as the cluster head. The choice of crossover probability and mutation probability is the key parameter affecting the performance of the genetic algorithm, since it directly affects the convergence of the algorithm and the accuracy of the results. If inappropriate crossover and mutation probabilities are used, the result will be inaccurate or the algorithm will converge slowly, and it may even degenerate into a random search. To mitigate these problems, the system puts the best result directly into the next generation before each generation is crossed and mutated, so that the optimal solution cannot be eliminated. In the process of generating new individuals, the crossover and mutation probabilities are adapted as in formulas (4) and (5), where f is the individual fitness, f_max the maximum fitness in the round, and f_avg the average fitness in the round.

$$P_{cc} = \begin{cases} P_{crossover} \cdot \dfrac{f_{max} - f}{f_{max} - f_{avg}}, & f > f_{avg} \\ P_{crossover}, & f \le f_{avg} \end{cases} \quad (4)$$

$$P_{mm} = \begin{cases} P_{mutation} \cdot \dfrac{f_{max} - f}{f_{max} - f_{avg}}, & f > f_{avg} \\ P_{mutation}, & f \le f_{avg} \end{cases} \quad (5)$$


Through the above optimization, the crossover and mutation probabilities change automatically with the fitness. An individual whose fitness is higher than the group average has lower crossover and mutation probabilities, so it is very likely to be preserved into the next generation, while an individual whose fitness is below the group average has higher crossover and mutation probabilities and is therefore more likely to be eliminated. Consequently, the adaptive genetic algorithm can reach the optimal solution faster and more accurately.
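A small sketch of the adaptive rates in formulas (4) and (5); the base probabilities used in the example are hypothetical:

```python
def adaptive_rate(base_rate, f, f_max, f_avg):
    """Adaptive crossover/mutation probability of formulas (4) and (5):
    individuals fitter than the population average get a proportionally
    reduced rate, the others keep the base rate."""
    if f > f_avg:
        return base_rate * (f_max - f) / (f_max - f_avg)
    return base_rate

# Example: an above-average chromosome is protected (lower crossover rate),
# a below-average one keeps the full mutation rate.
p_cross = adaptive_rate(0.8, f=0.9, f_max=1.0, f_avg=0.6)    # -> 0.2
p_mut   = adaptive_rate(0.05, f=0.5, f_max=1.0, f_avg=0.6)   # -> 0.05
```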

3.3 Select the Cluster Heads for the Second Time

The second cluster head selection slightly adjusts the result of the first selection to obtain a more suitable clustering. The density adaptive optimization algorithm is introduced when the cluster heads are reselected: it removes from the current cluster the common nodes that are already covered by other clusters, leaving only the nodes covered by the current cluster head, thus eliminating the influence of nodes covered by other clusters on the secondary cluster head selection. After removing the repeatedly covered nodes, formula (6) is evaluated; its three terms are the ratio of the distance from the node to the centroid of the cluster to the covering radius R of the cluster, the ratio of the consumed energy to the initial energy, and the ratio of the number of uncovered nodes to the total number of nodes, with a + b + c = 1. The distance from a node to the centroid of the cluster is given by formula (7), where X_j and Y_j are the horizontal and vertical coordinates of the nodes in the cluster that are not covered by other clusters, and m is the number of such nodes. The coverage radius of a cluster is $R = \sqrt{S^2/(\pi k)}$, where $S^2$ is the model area and k the number of clusters. Formula (1) shows that the energy consumption is mainly determined by the data transmission distance between nodes; therefore, the closer the cluster head is to the centroid and the higher its node degree, the shorter the data transmission distance in the whole cluster and the lower the energy consumption in the cluster. Because the influence of repeatedly covered nodes is removed in the second selection, the nodes that are not covered by other clusters can be covered as much as possible. If, however, the coverage decreases after the change, the result of the first selection is kept as the cluster head. Through these improvements, the number of effectively covered nodes of the whole cluster is increased, while the repeatedly covered area, the number of disconnected nodes and their data transmission distances are reduced. At the same time, the average distance between nodes and cluster heads is shortened, which reduces the energy consumption.

$$temp_2 = a \cdot \dfrac{D_{cen}}{R} + b \cdot \dfrac{E_{ini} - E_{res}}{E_{ini}} + c \cdot \dfrac{n - N_{nei}}{n} \quad (6)$$

$$D_{cen} = \sqrt{\left(X_i - \dfrac{\sum_{j=1}^{m} X_j}{m}\right)^2 + \left(Y_i - \dfrac{\sum_{j=1}^{m} Y_j}{m}\right)^2} \quad (7)$$
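The centroid distance of formula (7) can be computed directly from the coordinates of the member nodes, as in the sketch below (the coordinates are hypothetical):

```python
import math

def centroid_distance(head_xy, member_xys):
    """D_cen of formula (7): distance from a candidate cluster head to the
    centroid of the member nodes that are not covered by any other cluster."""
    cx = sum(x for x, _ in member_xys) / len(member_xys)
    cy = sum(y for _, y in member_xys) / len(member_xys)
    return math.hypot(head_xy[0] - cx, head_xy[1] - cy)

members = [(10.0, 12.0), (14.0, 9.0), (11.0, 15.0)]
print(centroid_distance((12.0, 11.0), members))
```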

3.4 Node Sleep and Clustering Mechanism

3.4.1 Node Sleep Mechanism

Because of the uneven distribution of nodes, the nodes in some areas are too dense, and the areas monitored by some nodes are also monitored by their neighbor nodes at the same time, which leads to the transmission of too much redundant data. To avoid this and reduce unnecessary energy consumption, the system proposes a node dormancy mechanism. Regarding how many neighbors are needed before a node may sleep, it can be concluded from [8] that when a node is not at the system boundary and there are 11 nodes in its monitoring area, or when it is at the system boundary and there are 7 nodes in its monitoring area, its monitoring area is almost completely covered. Therefore, when the degree of an interior node is greater than or equal to 11, or the degree of a boundary node is greater than or equal to 7, the system makes it dormant and reduces the node degree of its neighboring nodes by 1, so as to prevent excessive dormancy. In this way the energy of the dormant nodes is saved, the transmission and reception of redundant data are reduced, and the pressure on the cluster heads for data receiving, processing and forwarding is relieved.
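A minimal sketch of this dormancy rule follows; the node data structure is a hypothetical one assumed only for illustration:

```python
def apply_sleep(nodes):
    """Put over-covered nodes to sleep: an interior node sleeps when its degree
    is at least 11, a boundary node when its degree is at least 7; the degrees
    of its neighbours are then reduced by 1 to prevent excessive dormancy.
    `nodes` maps node id -> {"degree": int, "boundary": bool, "neighbors": list}."""
    for nid, info in nodes.items():
        threshold = 7 if info["boundary"] else 11
        if not info.get("asleep") and info["degree"] >= threshold:
            info["asleep"] = True
            for other in info["neighbors"]:
                nodes[other]["degree"] = max(0, nodes[other]["degree"] - 1)
    return nodes
```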

3.4.2 Node Clustering Optimization Mechanism

Fig. 2. Node clustering

Formula (1) shows that energy consumption is closely related to the amount of information transmitted. Because of the uneven distribution of nodes, the numbers of members in different clusters can differ greatly, resulting in energy hot zones. To solve this problem, the system changes the rule by which common nodes join clusters, from joining according to the distance factor alone to joining according to multiple factors. As shown in Fig. 2, because cluster A has too many members, node i chooses to join cluster B, which balances the energy consumption between clusters and avoids energy hot zones. As shown in formula (8), the three terms are the ratio of the distance between the node and the cluster head to the break-point d_o of the energy consumption model, the ratio of the number of working nodes covered by the cluster head to the total number of nodes, and the ratio of the energy already consumed by the cluster head to its initial energy, where a + b + c = 1. By adding these three factors, nodes covered by several clusters can join a cluster selectively, so that the energy consumption between clusters is well balanced.

$$temp_3 = a \cdot \dfrac{D_i}{d_o} + b \cdot \dfrac{M_c}{n} + c \cdot \dfrac{E_{ini} - E_{res}}{E_{ini}} \quad (8)$$
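As an illustration of this clustering rule, the sketch below lets a node covered by several cluster heads join the head with the smallest temp_3 value; the data layout and equal weights are assumptions made for the example:

```python
def choose_cluster(candidate_heads, d_o, total_nodes, a=1/3, b=1/3, c=1/3):
    """Pick the cluster head with the smallest temp_3 (formula (8)).
    Each candidate is a dict with keys: dist (node-to-head distance),
    members (current member count), e_ini, e_res; a + b + c = 1."""
    def temp3(head):
        return (a * head["dist"] / d_o
                + b * head["members"] / total_nodes
                + c * (head["e_ini"] - head["e_res"]) / head["e_ini"])
    return min(range(len(candidate_heads)), key=lambda i: temp3(candidate_heads[i]))

heads = [{"dist": 20.0, "members": 18, "e_ini": 0.125, "e_res": 0.06},   # crowded cluster A
         {"dist": 28.0, "members": 6,  "e_ini": 0.125, "e_res": 0.09}]   # lighter cluster B
print(choose_cluster(heads, d_o=87.7, total_nodes=100))   # joins the less loaded head
```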

4 Simulation Analysis

The GA-APTEEN protocol is tested on the MATLAB platform to verify its performance. In the experimental scenario, 100 nodes are randomly distributed in a 100 m × 100 m region, the initial energy of a node is 0.125 J, and 10% high-energy nodes are introduced to create a non-uniform energy environment; the energy of a high-energy node is twice that of a common node. The base station is located at (50, 100), the data fusion degree is 60%, the coverage of each cluster and the monitoring area of every node are the same, and a node whose monitoring area is crossed by the model boundary is regarded as located on the boundary. Only the energy consumed for transmitting and receiving data is considered; energy for cluster head selection, algorithm execution and other internal system consumption is neglected. The parameters are $n_{fs} = 0.0013\ \text{pJ/(bit·m}^2)$, $n_{mp} = 10\ \text{pJ/(bit·m}^4)$, and the energy consumed to receive or transmit 1 bit of data is $E_{elec} = 50\ \text{nJ/bit}$.

Fig. 3. Remaining nodes diagram

Fig. 4. Residual energy diagram

Fig. 5. Number of nodes covered


Fig. 6. Cumulative number of overlay nodes

To highlight the improvement in network lifetime and energy consumption, the simulation compares the APTEEN protocol, the EDD-APTEEN protocol (whose cluster head selection is improved based on energy and location factors) and the GA-APTEEN optimization protocol. As shown in Figs. 3 and 4, as the number of network rounds increases, the GA-APTEEN protocol improves greatly over the other routing protocols in prolonging the lifetime, reducing the energy consumption and improving the network robustness. Since the protocols cannot cover all nodes, the simulation also compares the coverage rates of APTEEN, EDD-APTEEN and GA-APTEEN under the same node-survival condition. As shown in Figs. 5 and 6, in the stage before dead nodes appear, the GA-APTEEN protocol significantly increases coverage, which reduces the number of disconnected nodes and improves the communication quality of the whole network.

5 Conclusion

Through the above optimization methods, the APTEEN protocol can prolong the network lifetime while ensuring the quality of work, enhance the coverage of the system, balance the energy consumption within and between clusters, avoid energy hot zones, and reduce the data transmission distance, the energy consumption and the volume of redundant data.


Acknowledgements. Shubin Wang ([email protected]) is the corresponding author. This work was supported by the National Natural Science Foundation of China (61761034) and the Natural Science Foundation of Inner Mongolia, China (2016MS0616).

References

1. Choi J, Jung S, Han Y, Chung T (2008) Advanced concentric-clustering routing scheme adapted to large-scale sensor networks. In: 2008 second international conference on sensor technologies and applications (SENSORCOMM 2008), Cap Esterel, pp 366–371
2. Ma J, Wang S, Meng C, Ge Y, Du J (2018) Hybrid energy-efficient APTEEN protocol based on ant colony algorithm in wireless sensor network. EURASIP J Wireless Commun Networking 102:1–13
3. Suhas More S, Nighot MK (2017) Optimization of wireless sensor networks using artificial intelligence and ant colony optimization for minimizing energy of network and increasing network lifetime. In: 2017 international conference on computing, communication, control and automation (ICCUBEA), Pune, pp 1–6
4. Omari M, Laroui S (2015) Simulation, comparison and analysis of wireless sensor networks protocols: LEACH, LEACH-C, LEACH-1R, and HEED. In: 2015 4th international conference on electrical engineering (ICEE), Boumerdes, pp 1–5
5. Guo P, Wang X, Han Y (2010) The enhanced genetic algorithms for the optimization design. In: 2010 3rd international conference on biomedical engineering and informatics, Yantai, pp 2990–2994
6. Mao X (2018) Study and application of adaptive density peak clustering. Jilin University (in Chinese)
7. Li T, Ge H-W, Su S-Z (2017) Density peaks clustering based on density adaptive distance. J Chin Comput Syst 38(6):1347–1352
8. Wen T, Zhang D, Guo Q, Song X (2014) Sleeping scheduling algorithm for redundant nodes in wireless sensor networks. J Commun 35(10):67–80 (in Chinese)

Optimization of APTEEN Routing Protocol in Wireless Sensor Networks Based on Particle Swarm Optimization

Bowen Zhang, Shubin Wang(✉), and Minghao Wang

College of Electronic Information Engineering, Inner Mongolia University, Hohhot, China
[email protected]

Abstract. APTEEN is a typical routing protocol for wireless sensor networks, but during clustering the cluster heads are selected randomly, which makes it easy to choose nodes with low residual energy as cluster heads and thus to form network holes. To solve this problem, this paper uses particle swarm optimization (PSO) to optimize the APTEEN routing protocol. During APTEEN network formation, the residual energy of a node, its location and the energy distribution around it are considered, and particle swarm optimization is used to select the cluster heads. The simulation results show that the optimized APTEEN routing protocol significantly prolongs the network lifetime and reduces the network energy consumption rate.

Keywords: Wireless sensor network · APTEEN routing protocol · Particle swarm optimization · Network lifetime

1 Introduction

As a distributed sensor network, a wireless sensor network can be widely used in military, intelligent transportation, environmental monitoring, health care and other fields. Clustering routing protocols have the advantages of convenient topology management and high energy utilization, and are beneficial to data fusion and transmission processing. The Adaptive Threshold-sensitive Energy Efficient Sensor Network Protocol (APTEEN) is a typical clustering routing protocol. It defines two thresholds, a hard threshold and a soft threshold, which specify the range of measured values and the required change between two successive measurements, respectively; using the two thresholds reduces both unnecessary and repeated data transfers. APTEEN, which adopts the ideas of clustering and data fusion, has high routing efficiency: it can collect data periodically, react quickly to emergencies, and meet the needs of many of today's application scenarios [1, 2].



The APTEEN routing protocol is derived from the LEACH routing protocol and uses LEACH's original clustering mode and cluster head selection. In the cluster formation stage, all nodes are divided into clusters, and each node randomly generates a number in [0, 1]; if the number is less than the threshold T(n), the node is selected as a cluster head. However, the APTEEN protocol does not consider the energy and position of the candidate nodes when selecting the cluster heads within a cluster. It may choose nodes with low energy as cluster heads, causing those nodes to die too fast, generating energy holes and directly shortening the lifetime of the entire network. Current improvements to the APTEEN protocol are designed around the formation mechanism of the cluster structure, the election of cluster heads, and data transmission. Reference [3] shows that a proper choice of the cluster head probability can improve the effectiveness of clustering and optimize network performance. Reference [4] proposes dynamically selecting cluster heads by setting a threshold energy so as to extend the network lifetime. Reference [5] proposes selecting cluster heads with node energy and the distance between node and base station as factors. In this paper, a cluster head optimization algorithm is introduced into the APTEEN clustering protocol to address its unreasonable cluster head selection: particle swarm optimization is used to elect the optimal node in each cluster as the cluster head according to three conditions, namely energy, position and energy balance, which slows down the node death rate and prolongs the network lifetime.

2 Particle Swarm Optimization and Wireless Communication Model

2.1 Particle Swarm Optimization

PSO is a swarm intelligence algorithm that simulates the flight and foraging behavior of bird flocks. Its basic idea is to seek the optimal solution through cooperation and information sharing among the individuals of a group, and its advantage is that it is simple and easy to implement [6]. It is currently widely used in function optimization and many other fields. In the particle swarm algorithm, a solution of the optimization problem corresponds to the position of a particle in the search space. Each particle evaluates a fitness function at its current position in order to judge how good that position is. The flight velocity and position of a particle are represented by vectors, and the particle updates its motion using two extreme values: the individual optimal value pbest, which is the best position found by the particle itself, and the global optimal value gbest, which is the best position found by the entire particle population. The update formulas for particle velocity and position are shown in Eqs. (1) and (2).


$$v_{ij}^{k+1} = v_{ij}^{k} + c_1 r_1 \left(p_{ij}^{k} - x_{ij}^{k}\right) + c_2 r_2 \left(p_{gj}^{k} - x_{ij}^{k}\right) \quad (1)$$

$$x_{ij}^{k+1} = x_{ij}^{k} + v_{ij}^{k+1} \quad (2)$$

where $v_{ij}$ is the velocity of particle i in the j-th dimension, with value range $[v_{min}, v_{max}]$; $x_{ij}$ is the position of particle i in the j-th dimension; $p_{ij}$ is the individual optimal value of particle i; $p_{gj}$ is the global optimal value of all particles; $r_1$ and $r_2$ are random numbers in [0, 1]; and $c_1$ and $c_2$ are learning factors.
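A minimal sketch of one PSO update step per Eqs. (1) and (2) for a single dimension; the learning factors and velocity bound are illustrative values, not parameters reported in this paper:

```python
import random

def pso_step(x, v, p_best, g_best, c1=2.0, c2=2.0, v_max=5.0):
    """One velocity/position update following Eqs. (1)-(2)."""
    r1, r2 = random.random(), random.random()
    v_new = v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    v_new = max(-v_max, min(v_max, v_new))   # keep the velocity in [v_min, v_max]
    return x + v_new, v_new

x, v = 10.0, 0.0
x, v = pso_step(x, v, p_best=12.0, g_best=20.0)
```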

2.2 Wireless Communication Model

The wireless communication model commonly used in wireless sensor networks is shown in Fig. 1 [7]. The energy consumption of the transmitting node consists of two parts: the transmitting circuit and the power amplifier. The energy required to transmit k bits of data over distance d is

$$E_{Tx}(k,d) = \begin{cases} E_{elec} \cdot k + \varepsilon_{fs} \cdot k \cdot d^{2}, & d < d_0 \\ E_{elec} \cdot k + \varepsilon_{amp} \cdot k \cdot d^{4}, & d \ge d_0 \end{cases} \quad (3)$$

$$d_0 = \sqrt{\dfrac{\varepsilon_{fs}}{\varepsilon_{amp}}} \quad (4)$$

The energy consumption at the receiving end of a sensor node only includes the energy consumed by the receiving circuit:

$$E_{Rx}(k) = E_{elec} \cdot k \quad (5)$$

where $E_{elec}$ is the energy consumed by the sensor node for each bit received or transmitted, and $\varepsilon_{fs}$ and $\varepsilon_{amp}$ are the power amplification factors under the two distance conditions.

Fig. 1. Wireless communication model (a k-bit packet passes from the transmitting circuit and power amplifier, over distance d, to the receiving circuit)


3 Particle Swarm Optimization Cluster Head Selection Algorithm

3.1 Network Model

It is assumed that N static nodes with initial energy E_0 are randomly distributed in an M × M region, and that the wireless sensor network they form has the following characteristics: (1) the base station has an unlimited energy supply and is placed at the center of the network area; (2) each node can determine its own coordinates; (3) all nodes have the same capabilities and equal status and can work independently.

3.2 Energy Position Equalization-Adaptive Threshold-Sensitive Energy Efficient Sensor Network Protocol (EPE-APTEEN)

3.2.1 Pre-clustering

In a wireless sensor network, an optimal number of cluster heads makes the network life cycle longer. In the initial stage of the network, the base station collects the energy and coordinate information of all nodes and calculates the optimal number of cluster heads k_opt according to formula (6).

$$k_{opt} = M\sqrt{\dfrac{N\varepsilon_{fs}}{2\pi\left(\varepsilon_{amp} d_{toBS}^{4} - E_{elec}\right)}} \quad (6)$$

Then, based on the collected energy information, the base station calculates the average remaining energy E_avg in the network according to formula (7), selects k_opt nodes as pre-selected cluster heads according to formula (8), and divides the network into clusters.

$$E_{avg} = \dfrac{\sum_{i=1}^{n} E(i)}{n} \quad (7)$$

$$T(n) = \begin{cases} \dfrac{p}{1 - p\left(r \bmod \frac{1}{p}\right)} \cdot \dfrac{E(i)}{E_{avg}}, & n \in G \\ 0, & n \notin G \end{cases} \quad (8)$$

3.2.2 Optimize Cluster Head

After the pre-clustering phase, the cluster head is adjusted by considering the energy, position and energy balance of each node. The particle swarm optimization algorithm is introduced to reduce the computational complexity of this step.


In order to make the particle swarm algorithm suitable for the cluster head selection problem, formulas (1) and (2) need to be adjusted. Since the distribution of nodes can be regarded as a plane, the velocity and position vectors are split into components along the x-axis and the y-axis, so formulas (1) and (2) become

$$\begin{cases} v_{ix}^{k+1} = v_{ix}^{k} + c_1 r_1 \left(p_{ix}^{k} - x_{ix}^{k}\right) + c_2 r_2 \left(p_{gx}^{k} - x_{ix}^{k}\right) \\ v_{iy}^{k+1} = v_{iy}^{k} + c_1 r_1 \left(p_{iy}^{k} - x_{iy}^{k}\right) + c_2 r_2 \left(p_{gy}^{k} - x_{iy}^{k}\right) \end{cases} \quad (9)$$

$$\begin{cases} x_{ix}^{k+1} = x_{ix}^{k} + v_{ix}^{k+1} \\ x_{iy}^{k+1} = x_{iy}^{k} + v_{iy}^{k+1} \end{cases} \quad (10)$$

Since the nodes of a wireless sensor network are discrete while the particle swarm optimization algorithm is designed for continuous problems, the particle positions obtained from formulas (9) and (10) cannot be mapped one-to-one onto network nodes, so the position produced by the algorithm must be adjusted such that $x_{ix} \in \{P_{1x}, P_{2x}, \ldots, P_{nx}\}$ and $x_{iy} \in \{P_{1y}, P_{2y}, \ldots, P_{ny}\}$, where $P_{ix}$ and $P_{iy}$ are the x- and y-components of the i-th node in the cluster. Let $\Delta P_{jx} = |x_{ix} - P_{jx}|$, $\Delta P_{jy} = |x_{iy} - P_{jy}|$ and $\Delta P_j = \sqrt{(\Delta P_{jx})^2 + (\Delta P_{jy})^2}$, where $\Delta P_{jx}$ and $\Delta P_{jy}$ are the absolute differences between the particle position and the x- and y-components of node j in the cluster. Let $\Delta P_k = \min\{\Delta P_1, \Delta P_2, \ldots, \Delta P_n\}$; then the k-th node is the one closest to $x_i$, so the position is adjusted to $x_{ix} \leftarrow P_{kx}$, $x_{iy} \leftarrow P_{ky}$, i.e., the searched point is relocated to node k.
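The nearest-node mapping described above can be sketched as follows (the helper name and the example coordinates are hypothetical):

```python
import math

def snap_to_node(particle_xy, node_positions):
    """Relocate a continuous particle position to the nearest sensor node,
    following the ΔP_j / ΔP_k rule; node_positions is a list of (x, y) pairs."""
    px, py = particle_xy
    k = min(range(len(node_positions)),
            key=lambda j: math.hypot(px - node_positions[j][0],
                                     py - node_positions[j][1]))
    return node_positions[k], k   # the searched point becomes node k

position, k = snap_to_node((37.2, 58.9), [(30.0, 60.0), (40.0, 55.0), (36.0, 61.0)])
```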

In the network, a cluster head consumes much energy because it must forward the information of every node in its cluster; therefore, the amount of remaining energy is an important factor in cluster head selection:

$$f_1 = \dfrac{E(i)}{E_{avg}} \quad (11)$$

The energy consumption of sending messages is proportional to the transmission distance. In order to reduce the energy consumption of the entire network, the position of the cluster head should be close to the geometric center of the cluster. R(j) is the Euclidean distance from node j to node i.

$$f_2 = \dfrac{\sum_{j=1}^{n} R(j)}{n} \quad (12)$$


Node energy balance measures the energy distribution of the selected cluster head and its surrounding nodes. The higher the energy balance of the selected cluster head, the more balanced the energy distribution in the network, and the longer the life cycle of the network can be.

$$f_3 = \dfrac{1}{n}\sum_{j=1}^{n} \dfrac{E(j)R(j)}{R(j)+1} \quad (13)$$

Based on the above three influencing factors, the fitness function modeled in this paper is

$$f(i) = a f_1 + b\,\dfrac{1}{f_2} + c f_3 \quad (14)$$

where a, b and c are the weights of the fitness impact factors, each in [0, 1] with a + b + c = 1. By adjusting the weights, different aspects of the algorithm's performance can be emphasized. The larger the value of f(i), the longer the overall life of the network. The specific process of PSO-based cluster head selection is shown in Fig. 2.
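The fitness evaluation of Eqs. (11)–(14) can be sketched as below; the weights and the input layout are assumptions for the example only:

```python
def fitness(i, energies, distances, e_avg, a=0.4, b=0.3, c=0.3):
    """Fitness of candidate cluster head i per Eqs. (11)-(14).
    energies[j] is E(j); distances[j] is R(j), the Euclidean distance from
    node j to node i (distances[i] == 0); the weights satisfy a + b + c = 1."""
    n = len(energies)
    f1 = energies[i] / e_avg                                    # Eq. (11)
    f2 = sum(distances) / n                                     # Eq. (12)
    f3 = sum(energies[j] * distances[j] / (distances[j] + 1.0)  # Eq. (13)
             for j in range(n)) / n
    return a * f1 + b * (1.0 / f2) + c * f3                     # Eq. (14)

energies = [0.09, 0.10, 0.07, 0.08]
distances = [12.0, 0.0, 9.0, 15.0]          # distances to candidate node i = 1
print(fitness(1, energies, distances, e_avg=sum(energies) / len(energies)))
```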

4 Simulation and Analysis

The network size is M = 100 and the number of network nodes is N = 100; the base station is placed at (50, 50), $\varepsilon_{fs} = 10\ \text{pJ/(bit·m}^2)$ and $\varepsilon_{amp} = 0.0013\ \text{pJ/(bit·m}^4)$, and each node has the same initial energy E_0 = 0.1 J. Figure 3 shows the remaining surviving nodes of the network, which reflects the network lifetime of the two algorithms under the same energy. As can be seen from the figure, with the clustering method of the original APTEEN protocol the first node dies in the 87th round and all nodes are dead by round 328, whereas with the EPE-APTEEN algorithm the first node dies in round 151 and all nodes die at about round 600. This shows that the algorithm extends the lifetime of the network. Figure 4 shows the energy consumption in the network. Because the EPE-APTEEN algorithm considers the location and the energy distribution balance when selecting cluster heads, the transmission energy consumption is significantly reduced and the energy consumption rate is clearly slowed down. It can be seen that the EPE-APTEEN algorithm reduces the energy consumption in the network and makes it more balanced.


[Flowchart: randomly initialize the particle swarm, compute the fitness values and initialize the individual and global extrema; iteratively update particle velocities and positions; compute the fitness of the current particle; update the individual extremum if the fitness exceeds it, and the global extremum if the individual extremum exceeds it; repeat until the iteration condition or iteration count is reached; output the optimal solution as the cluster head.]

Fig. 2. Particle swarm optimization algorithm flowchart


Fig. 3. Number of surviving nodes

Fig. 4. Energy consumption


5 Conclusion

Aiming at the problem that the APTEEN routing protocol easily selects low-energy, distant nodes as cluster heads, this paper proposes the EPE-APTEEN algorithm, which controls the proportion of selected cluster head nodes in the cluster head selection stage and introduces particle swarm optimization so as to select high-energy nodes that are close to the other nodes and whose surrounding energy distribution is balanced. Network simulation shows that the algorithm can significantly prolong the survival time of the entire network, slow down the network energy consumption rate and improve the network energy utilization efficiency.

Acknowledgements. Shubin Wang ([email protected]) is the corresponding author. This work was supported by the National Natural Science Foundation of China (61761034) and the Natural Science Foundation of Inner Mongolia, China (2016MS0616).

References

1. Anjali, Garg A, Suhali (2015) Distance adaptive threshold sensitive energy efficient sensor network (DAPTEEN) protocol in WSN. In: Proceedings of international conference on signal processing, computing and control, pp 114–119
2. Bhagyashree S, Prashanthi S, Anandkumar KM (2015) Enhancing network lifetime in precision agriculture using APTEEN protocol. In: Proceedings of IEEE technological innovation in ICT for agriculture and rural development, pp 44–48
3. Ma J, Wang S, Meng C, Ge Y, Du J (2018) Hybrid energy-efficient APTEEN protocol based on ant colony algorithm in wireless sensor network. EURASIP J Wireless Commun Networking 102:1–13
4. Suhas More S, Nighot MK (2017) Optimization of wireless sensor networks using artificial intelligence and ant colony optimization for minimizing energy of network and increasing network lifetime. In: 2017 international conference on computing, communication, control and automation (ICCUBEA), Pune, pp 1–6
5. Choi J, Jung S, Han Y, Chung T (2008) Advanced concentric-clustering routing scheme adapted to large-scale sensor networks. In: 2008 second international conference on sensor technologies and applications (SENSORCOMM 2008), Cap Esterel, pp 366–371
6. Parsopoulos KE, Vrahatis MN (2010) Particle swarm optimization and intelligence: advances and applications. Information Science Reference, Hershey
7. Yuan YE, Mei W-B, Wu S-L et al (2008) Hop period estimation for frequency hopping signals based on Hilbert-Huang transform. In: CISP '08: proceedings of the 2008 congress on image and signal processing, vol 5. IEEE Computer Society, Washington, DC, pp 452–455

Research Status of Wireless Power Transmission Technology

Xudong Wang1(✉), Changbo Lu1, Feng Wang2, Wanli Xu1, and Shizhan Li1

1 Institute of Military New Energy Technology, Beijing 102300, China
[email protected]
2 Equipment Department of Army, Beijing Martial Delegate Agency, Beijing 10012, China

Abstract. This paper introduces the classification and research status of wireless power transmission technology, analyzes its basic principles, key technologies and application scenarios, and forecasts its future development trend. As a revolutionary advance in energy transmission, the technology will have a profound impact on future energy interconnection.

Keywords: Wireless power transmission · Research status · Application requirements

1 Introduction

Wireless power transmission technology refers to technology that transfers power from a source to a load without relying on a power transmission line. According to the transmission mode, it can be classified into electromagnetic induction, magnetically coupled resonance, microwave, laser, ultrasonic and other types. With breakthroughs in transmitted power, transmission distance and transmission efficiency, research on and application of this technology in aerospace, rail transit, electric vehicles, household appliances, implantable medical devices, and weapons and equipment have developed rapidly in recent years [1].

Wireless energy transmission is not a new concept. As early as the end of the 19th century, the Serbian-American inventor Nikola Tesla proposed the idea of wireless transmission and worked toward it [2]. He used a Tesla coil whose toroidal coil was connected to a mast about 60 m high topped by a ball about 90 cm in diameter, producing a resonance at about 150 kHz to supply electrical energy. Later he proposed the grand concept of wireless energy transmission on a global scale and built the Wardenclyffe Tower, which astonished the world at that time. Tesla's scheme used the earth as the inner conductor and the earth's ionosphere as the outer conductor: by driving a magnifying transmitter in a radial electromagnetic-wave oscillation mode, a low-frequency resonance of about 8 Hz is established between the earth and the ionosphere, and the surface electromagnetic waves surrounding the earth are used to transmit energy. Although the feasibility of this scheme has been fully argued in theory, Tesla's bold idea was never realized because of insufficient funding. Figure 1 shows Tesla's laboratory on Long Island, New York, USA.


Fig. 1. Tesla wireless energy transmission test

2 Magnetically Coupled Wireless Power Transfer Technology

2.1 Fundamentals

Magnetically coupled resonant wireless energy transmission uses the principle of resonance: the parameters of the transmitting and receiving devices are set so that the transmitting coil, the receiving coil and the whole system share the same resonant frequency, and the system is driven by power at that frequency. An "electrical resonance" state can then be reached, enabling efficient transfer of energy between the transmitting and receiving ends. When energy is applied to the transmitting coil, a strong coupling is established between the two coils and a large proportion of the energy is exchanged; when the receiving coil is connected to a load, the load absorbs part of the energy, thereby realizing wireless transmission of electric energy. The working principle is shown in Fig. 2.
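As a quick numerical illustration of the matched-resonance condition, the sketch below computes the resonant frequency of a coil–capacitor pair; the inductance and capacitance values are hypothetical and are not measurements from the systems discussed here:

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency f0 = 1 / (2*pi*sqrt(L*C)) of an LC resonator."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Hypothetical transmitter and receiver coils tuned to the same frequency.
f_tx = resonant_frequency(L=25e-6, C=10e-9)    # 25 uH, 10 nF
f_rx = resonant_frequency(L=25e-6, C=10e-9)
print(f"f0 = {f_tx / 1e3:.1f} kHz, matched: {abs(f_tx - f_rx) < 1.0}")
```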

Fig. 2. Magnetically coupled wireless energy transfer schematic

In November 2006, at the American Physical Society's Industrial Physics Forum, the Marin Soljacic research team at the Massachusetts Institute of Technology first proposed the theory of magnetic resonance wireless transmission, and then published an article in the journal Science reporting 60 W of wireless transmission over 2 m through magnetic resonance, with a whole-system efficiency of 40%. The experiment used a pair of coil antennas with distributed inductance and capacitance, each wound from 5 mm copper wire with 5.25 turns and a diameter of 600 mm; the LC resonant frequency was 9.90 MHz.

2.2 Key Technology

(1) High-frequency power supply technology. Whether magnetically coupled resonant wireless energy transmission can transfer energy efficiently depends mainly on whether the system can operate in the resonant state. Especially for high-power energy transmission, the power supply must provide both sufficient driving capability and the required output frequency; the high-power supply modes that can be realized at such frequencies are mainly oscillator-, inverter- and power-amplifier-based.

(2) Resonant coil design technology. The parameters of the resonator coil are optimized mainly through the number of turns, the winding method, the pitch spacing and the material selection, in combination with the output performance requirements of the system. The efficiency of the system is improved by optimizing the number of turns at a given frequency and by varying the radius of the coil.

(3) Frequency tracking optimization. Frequency splitting is a common phenomenon in magnetically coupled resonant wireless energy transmission and a major factor affecting the transmission efficiency of the system. At present, the optimization and control of resonant wireless energy transmission are mostly realized around the resonant frequency. The high quality factor of the resonator coil often leads to poor stability during operation; to improve stability, closed-loop tracking control with a phase-locked loop can be used to solve the stability problem of the resonant wireless energy transmission system.

2.3 Application Status

Domestic research on magnetically coupled wireless energy transmission started relatively late but has developed rapidly in recent years, with research hot spots concentrated mainly in electric vehicle charging. In December 2017, the China Electric Power Research Institute built a 150 m wireless charging test section for electric vehicles, the longest and highest-power mobile wireless charging test in China. Magnetically coupled wireless energy transmission can achieve good transmission efficiency within a distance of less than 2 m and can pass through obstacles such as wood, plastic and walls, so it is the development direction for short- and medium-range wireless energy transmission. However, as the transmission distance increases its efficiency drops rapidly, and energy loss becomes serious at longer distances.

3 Laser Wireless Energy Transfer Technology

3.1 Fundamentals

Laser wireless energy transmission uses a high-energy laser beam as the energy carrier. The beam is transmitted by a collimating optical system, and at the remote end a laser cell array converts the light energy into electrical energy, realizing long-distance wireless energy transmission. The working principle is shown in Fig. 3.

Fig. 3. Laser wireless energy transmission schematic

3.2 Key Technology

In laser wireless energy transmission, the laser converts electrical energy into a laser beam of a specified wavelength meeting certain technical requirements; the beam is collimated by the optical system and transmitted over the spatial link; the receiving end receives the laser and converts the incoming optical energy into electrical energy through a high-efficiency photoelectric converter, which then supplies the equipment. The key technologies mainly include:

(1) High-efficiency, high-beam-quality laser emitters. The transmitting end converts electrical energy into a laser beam of the required wavelength and quality, collimates it with the optical system and transmits it over the space link. It is necessary to analyze the far-field distribution according to the characteristics of the laser, and to study beam-combining technology for high-power laser arrays and the microlens structures used for laser array collimation.

(2) High-efficiency laser cell photoelectric conversion technology. Ordinary solar cells are designed for the solar spectrum; in practice they reflect much of the laser light and cannot be applied directly to laser-to-electric conversion modules. Developing laser energy conversion devices with dedicated surface light-trapping structures is the key to realizing laser wireless energy transmission, including spectrally matched laser cell materials, laser cell process loss control, radiation-resistant laser cell structures, and composite cooling of laser cells.

(3) Laser energy control and management technology. The output characteristics of a laser cell are nonlinear, and the output power is maximal only at a certain operating voltage. Maximum power point tracking, energy storage and energy management technologies for the laser cell are studied to improve the effective utilization of laser wireless energy transmission.

3.3 Application Status

Japan has been conducting ground-based experiments on laser wireless energy transmission since 1997. A typical example is the laser-powered kite aircraft experiment completed by Gyeonggi University in Japan in 2006: two fiber-coupled semiconductor lasers driven by a power supply of about 600 W generated about 200 W of laser power, which irradiated a GaAs cell array and produced 46 W of electric power, keeping a kite aircraft at a height of about 50 m flying continuously and stably for more than one hour. The laser conversion efficiency was 34.2%, the conversion efficiency of the GaAs photovoltaic cells was 21%, and the overall photoelectric conversion efficiency was 7.2%. China began to pay attention to laser wireless energy transmission around 2012. The 811 Institute of the Aerospace Science and Technology Group carried out photoelectric tests on laser cells of various material systems, such as monocrystalline silicon, indium gallium phosphide and gallium arsenide, under 532 nm, 633 nm, 808 nm and 1064 nm laser sources, and explored the relationship between the photoelectric conversion efficiency of the cells and the laser wavelength. Laser wireless energy transmission has the advantages of high energy density, good energy convergence and small transmitting and receiving apertures; the beam remains concentrated after long-distance transmission, is easy to focus and has good directivity, so the technology has good prospects for long-distance energy transmission. At the same time, however, its penetration is poor and it is easily affected by the intervening medium: dust and fog seriously degrade laser transmission.
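The quoted end-to-end figure is consistent with the product of the stage efficiencies, as the small check below shows (residual differences with the quoted percentages presumably come from rounding in the original report):

```python
# Efficiency chain of the kite-aircraft experiment described above.
p_in = 600.0      # W, electrical power driving the semiconductor lasers
p_laser = 200.0   # W, optical power produced by the lasers
p_out = 46.0      # W, electrical power delivered by the GaAs cell array

eta_laser = p_laser / p_in      # ~0.33, compared with the quoted 34.2%
eta_pv = p_out / p_laser        # ~0.23, compared with the quoted 21%
eta_total = p_out / p_in        # ~0.077, compared with the quoted 7.2%
print(f"laser: {eta_laser:.1%}, PV: {eta_pv:.1%}, end-to-end: {eta_total:.1%}")
```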


4 Microwave Wireless Energy Transmission Technology

4.1 Fundamentals

Microwave wireless energy transmission uses microwave devices to convert electrical energy into electromagnetic energy [3]. The microwave electromagnetic energy is radiated into space through a transmitting antenna, and a receiving antenna converts the electromagnetic energy back into electrical energy, which supplies the load after suitable power conversion. The working principle is shown in Fig. 4.

Fig. 4. Schematic diagram of power transmission using microwave as carrier

In 1975, the US Jet Propulsion Laboratory built the world's highest-power microwave power transmission test facility at Goldstone, successfully transmitting 30 kW of energy at 2.5 GHz through a 26 m diameter parabolic antenna to a silicon rectifier diode antenna 1.6 km away [4].

4.2 Key Technology

Microwave long-distance wireless transmission converts electrical energy into microwave electromagnetic energy with a high-power microwave source for transmission through space, and then uses high-power, high-efficiency microwave rectification at the receiving end. The key technologies include:

(1) High-current-density microwave source technology. Field emission of electrons generally requires the surface electric field strength of the material to reach the order of GV/m; such strong on-chip fields can cause chip breakdown, and nanometer-scale structures can effectively reduce the required field strength. At the same time, the output power of the radiation source is proportional to the current density of the electron source, so the on-chip electron source must combine a low threshold with a large current density. It is therefore necessary to study, from a quantum perspective, the electron emission characteristics of different nanostructures and nanomaterials under different pulsed electromagnetic fields, investigate the emission mechanism of nanomaterials under electromagnetic fields of different frequencies, and develop a miniaturized free-electron microwave emission source with low turn-on voltage and high current density.


(2) Spatial microwave non-diffraction focusing transmission technology. Because of the wave nature of electromagnetic waves, microwaves spread as they propagate, so the energy density at a target decreases with distance. Non-diffracting beams are a set of ideal solutions of the electromagnetic scalar equation, such as plane waves, Bessel beams and Cosine beams. Based on structured non-diffracting beam design, long-distance focused transmission can be realized; with new microwave control techniques that precisely control the phase and timing of the radiating elements, non-diffracting energy transmission to targets at hundreds of meters or even kilometers becomes possible. Since the depth of focus directly affects the wireless energy transmission performance for unmanned devices, an ultra-long focal depth must also be achieved through structured design.

(3) Microwave rectenna design and high-efficiency energy harvesting technology. The design of the receiving antenna and the rectifier circuit is the key to the electromagnetic-to-electric conversion efficiency at the receiving end of a wireless energy transmission architecture. Combined with electromagnetic metasurface methods for controlling electromagnetic waves, array patch microstrip antenna structures are studied to ensure wide-band, wide-beam performance with small size and low profile, meeting the needs of arraying and integration and realizing efficient capture and collection of electromagnetic wave energy. High-efficiency microwave rectifier circuit design, RF or DC power combining and power conversion technologies are also studied, including RF control, impedance matching, band-pass filtering and energy storage structures.

4.3 Application Status

Mitsubishi researchers in Japan converted 10 kW of electricity into microwaves and transmitted it wirelessly; part of the power successfully lit the LEDs on a receiving device 500 m away, which is so far the longest-distance and highest-power microwave energy transmission in Japan [5]. Domestic research on microwave wireless energy transmission began in the 1990s, when Lin Wei Qian first introduced the technology to China. The University of Electronic Science and Technology of China, Shanghai University and other universities have carried out research on microwave wireless energy transmission, mainly involving medium-range, low-power systems; the overall conversion efficiency is low, and most of the work is still at the theoretical or initial stage [6]. Microwave wireless energy transmission has high conversion and transmission efficiency, very good atmospheric and cloud penetration in specific frequency bands, and low beam power density; with high-precision beam pointing control it offers high safety. However, because of the beam width, the transmitting and receiving antennas must be very large, so antenna miniaturization has become the problem that restricts the engineering application of microwave wireless energy transmission [7].


5 Summary

Compared with traditional wired transmission, wireless energy transmission technology has very broad application prospects in transportation, medical electronics, consumer electronics and space solar power, and represents a revolutionary advance in energy transmission. With the development of electronic devices, power conversion, and measurement and control technologies, wireless energy transmission has gradually achieved breakthroughs in power, distance and efficiency, and will provide a safer, more reliable and more flexible transmission method for global energy interconnection.

References

1. Oliveri G, Poli L, Massa A (2013) Maximum efficiency beam synthesis of radiating planar array for wireless power transmission. IEEE Trans Antennas Propag 61(5):2490–2499
2. Brown WC (1996) The history of wireless power transmission. Sol Energy 56(1):3–21
3. Glaser PE (1968) Power from the sun: its future. Science 162(3856):857–861
4. Nansen RH (1996) Wireless power transmission: the key to solar power satellites. IEEE Trans Aerosp Electron Syst Mag 11(1):33–39
5. Kurs A, Moffatt R, Soljacic M (2010) Simultaneous mid-range power transfer to multiple devices. Appl Phys Lett 96(4):23–30
6. Yang Z, Guo S, Yang J (2008) Transmitting electric energy through a closed elastic wall by acoustic waves and piezoelectric transducers. IEEE Trans Ultrason Ferroelectr Freq Control 55(6):1380–1386
7. Takehiro I, Yoichi H (2011) Maximizing air gap and efficiency of magnetic resonant coupling for wireless power transfer using equivalent circuit and Neumann formula. IEEE Trans Industr Electron 58(10):4746–4752

Flexible Sparse Representation Based Inverse Synthetic Aperture Radar Imaging Lu Wang1 , Guoan Bi1(B) , and Xianpeng Wang2 1 School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore {wanglu,egbi}@ntu.edu.sg 2 College of information and communication engineering, Hainan University, Haikou, China [email protected]

Abstract. The rotational motion of a target gives scatterers located in the cross-range domain different Doppler frequencies. Due to this fact, an un-scaled target image can be produced by the technique of inverse synthetic aperture radar (ISAR) imaging. However, the rotation also smears the target image, since it easily causes unwanted range migration and Doppler migration. This paper presents a new ISAR imaging algorithm based on sparse Bayesian learning using sparse probing frequency signals, which readily solves the problem of range migration caused by target rotation. The source of the range migration is modeled explicitly in the mathematical signal model under sparse representation. Sparse Bayesian learning is then applied to automatically learn the sparse coefficients from the original radar data to form a focused, high-resolution target image. Keywords: Inverse synthetic aperture radar (ISAR) · Range migration · Sparse representation · Sparse Bayesian learning

1 Introduction

Inverse synthetic aperture radar (ISAR) has been used as an important and efficient tool in remote sensing, providing high-resolution images of airborne and maritime moving targets such as aircraft, satellites and ships. The resolution in the range and cross-range domains depends, respectively, on transmitting wide-band signals and on the rotation angle between the radar and the observed target. The longer the coherent processing interval (CPI) or the larger the effective rotation angle, the higher the resolution of the target image. However, the rotation during the CPI tends to introduce two-dimensional (2-D) migration through resolution


cells (MTRC) in the ISAR image, which smears the target image and must be corrected for high-quality imaging. In this paper, we focus on the problem of migration through range cells due to the rotation. To remove the migration, range alignment using the amplitude correlation method (ACM) and its derivatives is very common, but such methods suffer from drift and jump errors. Furthermore, a low signal-to-noise ratio (SNR) always degrades the correlation level, which can cause the method to fail. For a relatively short CPI, the range migration can be regarded as linear, and the Keystone transform is a more robust way to deal with it. Another important issue in ISAR imaging is resolution. The traditional range-Doppler algorithm (RDA) always suffers from a low-resolution problem. Sparse representation provides a possible way to achieve higher-resolution radar imaging [1–5] than the conventional range-Doppler algorithm [6] with a limited number of pulses, by casting the ISAR imaging problem as an under-determined inverse problem that exploits the sparsity of ISAR images. However, conventional sparse-representation-based ISAR imaging approaches are commonly based on a constant Fourier basis and implicitly assume that the range migration caused by rotational motion can be neglected [1,2]. Pre-processing, such as ACM or the Keystone transform, can also be performed to correct the range migration. In [7], the range migration is modeled in the sparse-representation signal model by constructing a Keystone-transform-like matrix; the involved matrix is equivalent to the Keystone transformation. Investigations that combine the range migration due to rotational motion with sparse-representation-based algorithms have not been sufficiently conducted. In contrast to most sparse-representation-based ISAR work in the recent literature, where the radar data samples are collected in the compressed range and slow-time domain, we use data samples in the range-frequency and slow-time domain under the assumption that a set of sparse probing frequencies is transmitted. Owing to the sparsity, it has been demonstrated in [8–10] that a radar system transmitting merely the sparse probing frequency (SPF) signal is sufficient to achieve a high-resolution ISAR image. Under such a system, the data samples can easily be modeled mathematically under sparse representation. In this paper, we further show that the range migration due to rotation can also be incorporated in the signal model through a coupling term of frequency and slow time. Sparse Bayesian learning is then applied to automatically learn the sparse coefficients from the original radar data to form the high-resolution target image. Extensive simulations are conducted to show the effectiveness of the proposed method.

2 Sparse Representation Based ISAR Imaging

In this section, we briefly introduce the basics of ISAR and construct the basic mathematical signal model for sparse representation using the sparse probing frequency signal.

2.1 Basic Geometry of ISAR and the Received Radar Signal

The basic geometry used in ISAR imaging is given in [11] and shown in Fig. 1. Three coordinate systems X, Y and Z are used to describe the relative motions between the radar and the target, where Z is the radar coordinate system with the axis $z_2$ defined along the line of sight (LOS). X is the reference coordinate system, which keeps the axis $x_2$ along the LOS and $x_3$ along the effective rotation vector $\Omega_{eff}$. The rotation center is fixed at point O. Y is the target coordinate system and coincides with X at the very beginning. $\Omega_T$ and $\Omega_{eff}$ are the total rotation vector and the effective rotation vector, respectively. Suppose scatterer P is at position $\mathbf{y}_k = [\,y_{1,k}\ y_{2,k}\ y_{3,k}\,]^T$. The projection $x_2$ of scatterer P along the LOS due to the rotation, which determines its Doppler, can be calculated by [11]

$$x_2(\mathbf{y}_k, t) = a_2(\mathbf{y}_k) + b_2(\mathbf{y}_k)\cos(|\Omega_T|\, t) + \frac{c_2(\mathbf{y}_k)}{|\Omega_T|}\sin(|\Omega_T|\, t), \qquad (1)$$

where

$$a_2(\mathbf{y}_k) = \frac{(\Omega_T \cdot \mathbf{y}_k)\,\Omega_{T2}}{|\Omega_T|^2}, \quad b_2(\mathbf{y}_k) = y_{2,k} - \frac{(\Omega_T \cdot \mathbf{y}_k)\,\Omega_{T2}}{|\Omega_T|^2}, \quad c_2(\mathbf{y}_k) = |\Omega_{eff}|\, y_{1,k},$$

$|\Omega|$ denotes the modulus of a vector $\Omega$, and $\Omega_{T2}$ is the projection of $\Omega_T$ along the $x_2$ axis. If a small constant angular velocity is considered, the first-order polynomial approximation of (1) is

$$x_2(\mathbf{y}_k, t) = y_{2,k} + |\Omega_{eff}|\, y_{1,k}\, t. \qquad (2)$$

Fig. 1. Geometry of ISAR


Therefore, the radar data sample in the slant-range frequency (i.e., f) and slow-time domain can be expressed as

$$r(f, t) = \iint_{y_1, y_2 \in \text{target}} \sigma_{y_1, y_2} \exp\left(-\frac{j4\pi f\,(y_2 + |\Omega_{eff}|\, y_1 t)}{c}\right) dy_1\, dy_2. \qquad (3)$$

2.2 Sparse Probing Frequencies

It is demonstrated in [8] that the transmission of a wide-band signal is unnecessary and that a few probings with random frequencies can be sufficient for ISAR imaging via sparse representation. As a byproduct, the radar system can be simplified by transmitting merely the sparse probing frequency signal. As shown in [8], when a probing frequency signal $\exp(j2\pi f_l t_l)$ with frequency $f_l$ is transmitted at time $t_l$, the echo received by the radar at range frequency $f_l$ and time $t_l$, $l \in [1, L]$, can be expressed as

$$r(l) = \sum_{k=1}^{K} a_k \exp\left(-j4\pi f_l\,\frac{y_{2,k} + |\Omega_{eff}|\, y_{1,k}\, t_l}{c}\right), \qquad (4)$$

where $a_k$ is the amplitude of scatterer $k$. According to the sparse representation, the target scene of interest is first discretized into an $M \times N$ grid, with the scatterer located at $(m, n)$ represented by $(y_{2,m}, y_{1,n})$ with amplitude $a_{m,n}$. Here $y_{1,n}$ is the coordinate of the scatterer in the cross-range domain and $y_{2,m}$ is its coordinate in the range domain. Accordingly, a sparse representation dictionary $\mathbf{D}$ of size $L \times MN$ can be constructed with atoms

$$d_{m,n} = \exp\left\{-j\frac{4\pi f_l}{c}\,(y_{2,m} + y_{1,n}\, t_l)\right\}, \qquad (5)$$

where $y_{1,n}$ is actually a scaled version of $y_{1,k}$ in (4) obtained by absorbing $|\Omega_{eff}|$ into $y_{1,n}$. Therefore, it should be noted that the image given by the proposed method in the cross-range domain no longer shows the true coordinate, but one re-scaled by the effective rotation velocity. Denoting the coefficient vector $\mathbf{a}^T = [\,a_{1,1} \dots a_{M,1}\ a_{1,2} \dots a_{1,N} \dots a_{M,N}\,]$, we have

$$\mathbf{r} = \mathbf{D}\mathbf{a}. \qquad (6)$$

Since $L < MN$, $\mathbf{D}$ is over-complete and Eq. (6) is under-determined. However, since $K \ll MN$, the vector $\mathbf{a}$ is sparse, and sparse representation methods such as basis pursuit (BP) [8], greedy pursuit and sparse Bayesian learning (SBL) can be used to solve the problem.
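As an illustration of how the SPF model in Eqs. (4)–(6) can be assembled numerically, the following minimal Python sketch builds the dictionary D and a synthetic measurement vector r. The grid size, pulse count, probing-frequency band and scatterer placement are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

# Minimal sketch of the sparse-probing-frequency (SPF) model in Eqs. (4)-(6).
# All numerical values below are illustrative assumptions.
c = 3e8                                  # propagation speed (m/s)
L = 64                                   # number of probing pulses
rng = np.random.default_rng(0)

t_l = np.arange(L) * 0.01                # slow-time instants t_l (s), assumed PRI = 10 ms
f_l = 10e9 + rng.uniform(0, 1.5e9, L)    # random probing frequencies f_l within the band

# Discretized imaging grid: M range cells (y2) by N scaled cross-range cells (y1)
M, N = 32, 32
y2 = np.linspace(-1.6, 1.6, M)           # range coordinates y2_m (m)
y1 = np.linspace(-1.6, 1.6, N)           # scaled cross-range coordinates y1_n

# Dictionary D of size L x MN with atoms exp{-j*4*pi*f_l/c * (y2_m + y1_n*t_l)};
# the f_l * y1_n * t_l coupling term carries the range migration (see Sect. 2.3).
phase = (y2[None, :, None] + y1[None, None, :] * t_l[:, None, None]) \
        * (4.0 * np.pi * f_l[:, None, None] / c)
D = np.exp(-1j * phase).reshape(L, M * N)

# Sparse scene: a few point scatterers on the grid, then r = D a + noise (Eq. (6))
a = np.zeros(M * N, dtype=complex)
idx = rng.choice(M * N, size=11, replace=False)
a[idx] = rng.standard_normal(11) + 1j * rng.standard_normal(11)
r = D @ a + 0.01 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
print(D.shape, r.shape)                  # (64, 1024) (64,) -> under-determined since L < MN
```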

2.3 Discussions

The range migration caused by the rotation can be seen from the term $f\,|\Omega_{eff}|\, y_1 t$ in the phase of Eq. (4). It is the coupling of the frequency $f$ with the slow time $t$ that introduces the range migration. The dictionary constructed in (5) contains the analogous coupling term $f\, y_{1,n}\, t$, so the effect of range migration is included. Therefore, the signal model in Eq. (6) already takes the range migration into account inherently, and no pre-processing or correction step is needed in the formulated sparse recovery algorithm.

3 Sparse Bayesian Learning

In this section, we use the sparse Bayesian learning method to solve the problem in (6). Following [12], under the framework of Bayesian compressed sensing, the sparse coefficient vector $\mathbf{a}$ can be statistically modeled. The detailed probabilistic models and the updating rules for the unknowns are given below.

3.1 Probabilistic Models

In the Bayesian treatment, proper probabilistic models are first constructed for the hidden variables $\mathbf{n}$ and $\mathbf{a}$. To model the additive noise, we assume $\mathbf{n}$ follows a complex normal distribution with precision $\alpha_0$. Therefore, the observation $\mathbf{r}$ in (6) has the likelihood function

$$\mathbf{r}\,|\,\mathbf{a},\alpha_0 \sim \mathcal{CN}\!\left(\mathbf{D}\mathbf{a},\ \alpha_0^{-1}\mathbf{I}_L\right), \qquad (7)$$

where $\mathbf{I}_L$ is an identity matrix of size $L$ and $\mathcal{CN}$ denotes the complex normal distribution. A gamma prior with hyperparameters $e$ and $f$ is further imposed on $\alpha_0$ for easy estimation of the noise precision:

$$p(\alpha_0\,|\,e, f) = \Gamma(\alpha_0\,|\,e, f). \qquad (8)$$

To impose sparsity on $\mathbf{a}$, it is modeled hierarchically by a Gaussian distribution with precision vector $\boldsymbol{\alpha}$:

$$\mathbf{a}\,|\,\boldsymbol{\alpha} \sim \mathcal{CN}\!\left(\mathbf{0},\ \boldsymbol{\Lambda}^{-1}\mathbf{I}\right), \qquad (9)$$

with $\boldsymbol{\Lambda} = \mathrm{diag}(\boldsymbol{\alpha})$ defined as the inverse of the variance, $\boldsymbol{\alpha}^T = [\alpha_1 \dots \alpha_{NM}]$, and

$$\boldsymbol{\alpha}\,|\,g, h \sim \prod_{n=1}^{N \times M} \Gamma(\alpha_n\,|\,g, h), \qquad (10)$$

where $\mathrm{diag}(\boldsymbol{\alpha})$ is a diagonal matrix with vector $\boldsymbol{\alpha}$ on its diagonal. The parameters $e$, $f$, $g$ and $h$ in (8) and (10) are pre-fixed to small values, say $10^{-6}$, to impose a non-informative prior according to conventional sparse Bayesian learning [14].

3.2 Unknown Variable Inference

According to the probabilistic models for each kind of hidden variable, the posterior of the hidden variables in the set $\Theta = \{\mathbf{a}, \boldsymbol{\alpha}, \alpha_0\}$ given the measurements $\mathbf{r}$ is

$$p(\Theta\,|\,\mathbf{r}) = \frac{p(\mathbf{r}, \mathbf{a}, \boldsymbol{\alpha}, \alpha_0)}{p(\mathbf{r})} = \frac{p(\mathbf{r}\,|\,\mathbf{a}, \alpha_0)\, p(\mathbf{a}\,|\,\boldsymbol{\alpha})\, p(\boldsymbol{\alpha})\, p(\alpha_0)}{p(\mathbf{r})}, \qquad (11)$$

where the numerator is obtained by substituting the likelihood in (7) and the priors of $\mathbf{a}$, $\boldsymbol{\alpha}$ and $\alpha_0$ into the above equation. The denominator is given by

$$p(\mathbf{r}) = \int p(\mathbf{r}, \mathbf{a}, \boldsymbol{\alpha}, \alpha_0)\, d\mathbf{a}\, d\boldsymbol{\alpha}\, d\alpha_0. \qquad (12)$$

Since the computation of the marginal distribution $p(\mathbf{r})$ involves a multidimensional integral, a closed-form expression of $p(\Theta\,|\,\mathbf{r})$ is computationally intractable.

Variational Bayesian Inference. To avoid this problem, we use variational Bayesian inference (VBI) [13] to find an approximate posterior. In VBI, the approximation $q(\Theta)$ to the true posterior is found by minimizing the Kullback-Leibler (KL) divergence between $q(\Theta)$ and $p(\Theta\,|\,\mathbf{r})$ under the constraint that $q(\Theta)$ factorizes as

$$q(\Theta) = q(\{\mathbf{a}, \boldsymbol{\alpha}, \alpha_0\}) = q(\alpha_0)\, q(\boldsymbol{\alpha})\, q(\mathbf{a}). \qquad (13)$$

The optimal distribution that minimizes the KL divergence can be expressed as

$$\ln q^{*}(\Theta_k) = \left\langle \ln p(\mathbf{r}, \Theta) \right\rangle_{q(\Theta \setminus \Theta_k)}, \qquad (14)$$

where $\langle \cdot \rangle_{q(\Theta \setminus \Theta_k)}$ denotes the expectation with respect to $q(\Theta \setminus \Theta_k)$ and $\Theta \setminus \Theta_k$ represents the set $\Theta$ without $\Theta_k$.

Hidden Variables Updating. The proposed algorithm employs VBI to iteratively update the hidden variables in $\Theta = \{\mathbf{a}, \boldsymbol{\alpha}, \alpha_0\}$. The updating rule of each variable is derived according to Eq. (14). The updated estimates of $\mathbf{a}$, $\boldsymbol{\alpha}$ and $\alpha_0$ are denoted by $\tilde{\mathbf{a}}$, $\tilde{\boldsymbol{\alpha}}$ and $\tilde{\alpha}_0$. The derived updating formulas are summarized below:

$$\tilde{\mathbf{a}} = \boldsymbol{\mu}, \quad \boldsymbol{\mu} = \tilde{\alpha}_0 \boldsymbol{\Sigma} \mathbf{D}^H \mathbf{r}, \quad \boldsymbol{\Sigma} = \left(\tilde{\boldsymbol{\Lambda}} + \tilde{\alpha}_0 \mathbf{D}^H \mathbf{D}\right)^{-1}, \quad \tilde{\boldsymbol{\Lambda}} = \mathrm{diag}(\tilde{\boldsymbol{\alpha}}), \qquad (15)$$

$$\tilde{\boldsymbol{\alpha}} = (g + 1) \big/ \left(h + \mathrm{diag}(\boldsymbol{\mu}\boldsymbol{\mu}^H + \boldsymbol{\Sigma})\right), \qquad (16)$$

$$\tilde{\alpha}_0 = \frac{e + L}{f + \|\mathbf{r} - \mathbf{D}\boldsymbol{\mu}\|^2 + \mathrm{Tr}\!\left(\mathbf{D}\boldsymbol{\Sigma}\mathbf{D}^H\right)}, \qquad (17)$$

where $\mathrm{Tr}$ denotes the trace operation. The algorithm successively updates $\mathbf{a}$, $\boldsymbol{\alpha}$ and $\alpha_0$ by (15), (16) and (17), respectively, until convergence.
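The updating rules (15)–(17) can be turned into a compact iterative loop. The sketch below is a minimal Python implementation of these three updates; the hyperparameters default to 10^-6 as suggested above, and the dictionary D and measurement r are assumed to come from the model of Sect. 2 (e.g., the earlier dictionary sketch).

```python
import numpy as np

def sbl_vbi(D, r, e=1e-6, f=1e-6, g=1e-6, h=1e-6, n_iter=100):
    """Minimal sketch of the variational updates (15)-(17) for r = D a + n.
    D: (L, MN) complex dictionary, r: (L,) measurements. Returns posterior mean of a."""
    L, MN = D.shape
    alpha = np.ones(MN)          # precision of each coefficient
    alpha0 = 1.0                 # noise precision
    DhD = D.conj().T @ D
    Dhr = D.conj().T @ r
    for _ in range(n_iter):
        # Eq. (15): posterior covariance and mean of a
        Sigma = np.linalg.inv(np.diag(alpha) + alpha0 * DhD)
        mu = alpha0 * Sigma @ Dhr
        # Eq. (16): update the coefficient precisions
        alpha = (g + 1.0) / (h + np.abs(mu) ** 2 + np.real(np.diag(Sigma)))
        # Eq. (17): update the noise precision
        resid = r - D @ mu
        alpha0 = (e + L) / (f + np.real(resid.conj() @ resid)
                            + np.real(np.trace(D @ Sigma @ D.conj().T)))
    return mu

# Example usage with the D, r generated in the previous sketch:
# a_hat = sbl_vbi(D, r); image = np.abs(a_hat).reshape(M, N)
```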

4 Simulation Results

Synthetic data experiments are conducted to verify the proposed ISAR imaging method and compare it with the traditional RDA. We assume that the scatterers are on-grid and that the target is R0 = 500 km away from the radar. Translational motion is assumed to be compensated. A constant rotation velocity of ω = 0.15 rad/s is assumed and 80 pulses are collected. Note that the range migration caused by the rotation is assumed to be obvious and must be considered. The radar system parameters used in the simulation are given in Table 1.

Table 1. Radar system parameters for simulation

Carrier frequency Fc: 10 GHz
Bandwidth B: 1.5 GHz
Range resolution Rr = C/(2B): 0.1 m
Pulse repetition frequency (PRF): 100 Hz


Fig. 2. Radar signal generation. a Target scene; b radar signal in range frequency and slow time; c signal in range and slow time

Signals within a dwell time of 0.8 s are used, during which 80 pulses are collected. The locations of the scatterers are shown in Fig. 2a, where 11 scatterers are present. The amplitudes of the scatterers follow a complex Gaussian distribution. The signal-to-noise ratio used in our simulations is 20 dB. Measurements are obtained by random sampling in the slant-range frequency and cross-range domains, and the full data as well as 0.8 and 0.64 of the full data are used in the following synthetic experiments. The results obtained by the RDA [6] and by the proposed method are provided for comparison. Figure 2b shows the received radar signal in the range-frequency and slow-time domain. After performing the Fourier transform over range frequency, the range-compressed radar signal is shown in Fig. 2c. It is obvious that scatterers far away from the rotation center undergo severe range migration due to the rotational motion.


Fig. 3. ISAR images. a RD algorithm with full measurements; b RD algorithm with undersampling; c proposed algorithms with undersampling; the first row shows the results with undersampling ratio of 0.8 in range frequency domain; the second row shows the results with undersampling ratio of 0.8 in slow time domain; the third row shows the results with undersampling ratio of 0.64 in range frequency and slow time domain

The ISAR results of different algorithms under different sampling ratios in both range frequency and slow time are given in Fig. 3. Figure 3 also shows the undersampling effect on the ISAR imaging algorithm in different domains. As shown in the first row of Fig. 3, undersampling in range frequency domain using the RD algorithm causes random artifacts in the whole image plane; while


undersampling in the slow-time domain mainly causes energy spreading of the scatterers in the cross-range domain. Undersampling in both domains shows an additive combination of the effects in each domain. As shown in column (c) of the first row, the proposed method recovers a highly accurate target image, which shows that the random artifacts caused by undersampling in range frequency, if not too heavy, do not affect the proposed method. As shown in column (c) of the second row, the scatterer energy-spreading effect deteriorates the performance of the proposed method. As shown in column (c) of the third row, although the scatterer located at the very bottom of the image is lost by the proposed method, the obtained image has a much higher contrast, with a clean background and high resolution.

5 Conclusion

In this paper, we present a unified framework for sparse-representation-based ISAR imaging that considers the range migration caused by target rotation. The proposed algorithm does not require an additional correction procedure for the range migration and achieves a high-resolution ISAR image with high contrast. Since only simple sparse probing frequency signals need to be transmitted, the radar system can be greatly simplified. The proposed algorithm allows different undersampling patterns in different domains, or undersampling in both domains, and the undersampling in each domain affects the ISAR image differently. Simulation results have demonstrated the effectiveness of the proposed algorithm.

References

1. Patel VM, Easley GR, Healy DM Jr, Chellappa R (2010) Compressed synthetic aperture radar. IEEE J Sel Topics Signal Process 4:244–254
2. Zhang L, Xing M, Qiu C, Li J, Sheng J, Li Y, Bao Z (2010) Resolution enhancement for inversed synthetic aperture radar imaging under low SNR via improved compressive sensing. IEEE Trans Geosci Remote Sens 48:3824–3838
3. Li G, Zhang H, Wang X, Xia X (2012) ISAR 2-D imaging of uniformly rotating targets via matching pursuit. IEEE Trans Aerosp Electron Syst 48:1838–1846
4. Zhao L, Bi G, Wang L, Zhang H (2013) An improved auto-calibration algorithm based on sparse Bayesian learning framework. IEEE Signal Process Lett 20:889–892
5. Xu J, Pi Y, Cao Z (2012) Bayesian compressive sensing in synthetic aperture radar imaging. IET Radar Sonar Navig 6:2–8
6. Walker JL (1980) Range-Doppler imaging of rotating objects. IEEE Trans Aerosp Electron Syst 16:23–52
7. Xu G, Xing M, Xia X, Chen Q, Zhang L, Bao Z (2015) High-resolution inverse synthetic aperture radar imaging and scaling with sparse aperture. IEEE J Sel Topics Appl Earth Obs Remote Sens 8:4010–4027
8. Wang H, Quan Y, Xing M, Zhang S (2011) ISAR imaging via sparse probing frequencies. IEEE Geosci Remote Sens Lett 8:451–455
9. Wang L, Zhao L, Bi G, Wan C, Yang L (2014) Enhanced ISAR imaging by exploiting the continuity of the target scene. IEEE Trans Geosci Remote Sens 52:5736–5750
10. Wang L, Zhao L, Bi G, Wan C (2015) Sparse representation based ISAR imaging using Markov random fields. IEEE J Sel Topics Appl Earth Obs 8:3941–3953
11. Berizzi F, Dalle Mese E, Diani M, Martorella M (2001) High-resolution ISAR imaging of maneuvering targets by means of the range instantaneous Doppler technique: modeling and performance analysis. IEEE Trans Image Process 10:1880–1890
12. Ji S, Xue Y, Carin L (2008) Bayesian compressive sensing. IEEE Trans Signal Process 56:2346–2356
13. Beal MJ (2004) Variational algorithms for approximate Bayesian inference. PhD thesis, Gatsby Unit, University College London
14. Tipping M (2001) Sparse Bayesian learning and the relevance vector machine. J Mach Learn Res 1:211–244

Localization Schemes for 2-D Molecular Communication via Diffusion Shenghan Liu(B) , Shijian Bao, and Chenglin Zhao Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Beijing, China night [email protected]

Abstract. Recently, with the development of nano-technology, molecular communication has become a promising communication paradigm. In molecular communication via diffusion systems, nano-machines utilize the physical (concentration) or chemical (composition and structure) characteristics of message molecules to represent information. In order to effectively restore the transmitted information at the receiver, channel state information is essential, one component of which is the relative position between the transmitter and the receiver. Previous studies focused mainly on one-dimensional distance estimation without considering the exact location of the nano-machines. In this paper, four novel localization schemes based on the trilateration method in a 2-D environment are proposed. The simulation results show that the proposed schemes can effectively locate the target nano-machine even under a low signal-to-noise ratio (SNR), and each scheme makes its own trade-off between accuracy and system complexity. Keywords: Molecular communication via diffusion · Nano-machines · Localization schemes · Peak concentration time · Trilateration

1 Introduction

Traditional electromagnetic wave (EMW) based wireless communication performs well in air, with numerous mature technologies applied to various communication scenarios [2]. However, in liquid media, because of obvious limitations such as severe attenuation and multipath loss, EMW does not work as well [6]. In addition, in intra-biological communication, exogenous electromagnetic devices may lead to enormous and inestimable consequences, such as neurological disturbance. Therefore, as a new communication paradigm that overcomes these weaknesses, molecular communication has attracted more and more attention [1]. In diffusive molecular communication, message molecules are utilized to convey information between transmitter (Tx) and receiver (Rx) nano-machines. The


information to be transmitted is encoded into a series of molecular pulses by Tx; the message molecules are then received by Rx and restored to the original information. During this period, the molecules propagate in the channel via Brownian motion without additional facilities or energy budgets. At the receiver side, the information is decoded from the time-varying molecule concentration, for which the diffusion coefficient and the distance between the transceivers are critical factors. Since the diffusion coefficient depends mainly on the transmission medium and temperature, which are inherent to a communication system, distance estimation becomes the emphasis of study. To date, existing works have proposed several methods for distance estimation between transceivers, most of them in a one-dimensional environment. In [5], a distance estimation method based on a synchronized feedback signal was proposed; this scheme used the round-trip time of the message molecules and the variation of molecular concentration to calculate the distance. Huang et al. [3] studied the effect of noise on the accuracy of distance estimation, and established a smoothing method based on weighted concentration to reduce the adverse influence of noise. The authors in [7] derived the Cramer-Rao lower bound (CRLB) for the variance of the estimation error as a guideline for designing estimation schemes. Nevertheless, the complexity of the maximum likelihood (ML) estimator used to reach the CRLB is relatively high, and the result is purely theoretical and cannot be directly applied to an actual channel. Wang et al. [8] analyzed the molecular pulse at Tx in distance estimation, and replaced the single pulse by a rectangular pulse, which is more in line with the actual situation. The above studies mainly focus on the distance estimation between transceivers. However, because of the finite geometry of Rx, the orientation or angle between Tx and Rx affects the quality of communication as well. Therefore, for the first time, this paper studies the two-dimensional (2-D) localization problem in molecular communication via diffusion, including both distance and angle estimation. Inspired by traditional wireless scenarios, a trilateration localization method is proposed to locate the target. Furthermore, four schemes with different time delays and system complexities are proposed to solve the localization problem in specific scenarios. The estimation accuracy of each scheme under various SNRs is evaluated to validate our method.

2 System Model

In this section, a typical 2-D molecular communication system with a pair of nano-machines (i.e., Tx and Rx) is elaborated. As sketched in Fig. 1, Tx releases message molecules into the environment, and Rx counts the concentration of molecules within its sensing area after the propagation process. Synchronization between Tx and Rx is assumed to be perfect, so that the start time of communication is common to both transceivers. To obtain a theoretical expression for the channel impulse response (CIR), Tx is approximately modeled as a point source and Rx as a circle with radius $R$. The coordinate origin is selected at Tx, so the coordinates of the center of Rx are denoted by $(x_0, y_0)$, and the distance between Tx and Rx is $d = \sqrt{x_0^2 + y_0^2}$.


Fig. 1. 2-D system model for molecular communication via diffusion

The information transmission begins with Tx emitting a number of message molecules ($N$) at time $t = 0$. These molecules propagate in the channel as Brownian motion, which obeys Fick's second law with an initial condition as follows [4]:

$$\frac{1}{D}\cdot\frac{\partial C(x, y, t)}{\partial t} = \nabla^2 C(x, y, t), \quad t > 0, \qquad C(x, y, t = 0) = \delta(x, y), \qquad (1)$$

where $\nabla^2$, $\delta(\cdot)$ and $C(x, y, t)$ are the Laplacian operator, the Dirac delta function, and the molecule concentration at time $t$ and location $(x, y)$, respectively. $D$ denotes the diffusion coefficient, which is determined by the temperature, the viscosity of the medium, and the Stokes radius of the message molecule. Therefore, the 2-D channel impulse response is derived as

$$C(x, y, t) = \frac{N}{4\pi D t}\exp\!\left(-\frac{x^2 + y^2}{4Dt}\right). \qquad (2)$$

Similarly, if Rx is located at $(x_0, y_0)$ with $d = \sqrt{x_0^2 + y_0^2}$, the channel response becomes

$$C(d, t) = \frac{N}{4\pi D t}\exp\!\left(-\frac{d^2}{4Dt}\right), \qquad (3)$$

which means that the concentration of molecules at the center of Rx is $C(d, t)$. To investigate the property of the CIR further, the first derivative of $C(d, t)$ with respect to $t$ is

$$\frac{\partial C(d, t)}{\partial t} = \frac{N}{4\pi D t^2}\exp\!\left(-\frac{d^2}{4Dt}\right)\left(\frac{d^2}{4Dt} - 1\right), \qquad (4)$$

so the solution of $\partial C(d,t)/\partial t = 0$ is $t_p = d^2/(4D)$. Furthermore, it is easy to observe that $\partial C(d,t)/\partial t > 0$ when $0 < t < t_p$ and $\partial C(d,t)/\partial t < 0$ when $t > t_p$, so $C(d, t)$ has a global maximum at $t_p$. For Rx, once the peak time of the message molecule concentration is observed, the distance between Tx and Rx can be simply derived as

$$d = \sqrt{4D t_p}. \qquad (5)$$
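The peak-time relation in Eqs. (3)–(5) leads to a very simple distance estimator. The sketch below evaluates the noiseless CIR of Eq. (3), locates its peak, and recovers the distance via Eq. (5); the 5 µm distance is an illustrative assumption, while N and D follow the values used later in the simulations.

```python
import numpy as np

# Sketch of distance estimation from the peak concentration time, Eqs. (3)-(5).
N_mol = 10000            # number of released molecules
D = 1000.0               # diffusion coefficient (um^2/s)
d_true = 5.0             # true Tx-Rx distance (um), assumed for this example

t = np.arange(1e-4, 0.2, 1e-4)                                        # time axis (s)
C = N_mol / (4 * np.pi * D * t) * np.exp(-d_true**2 / (4 * D * t))    # Eq. (3)

t_p = t[np.argmax(C)]            # observed peak concentration time
d_hat = np.sqrt(4 * D * t_p)     # Eq. (5): d = sqrt(4*D*t_p)
print(f"t_p = {t_p:.4f} s, estimated d = {d_hat:.2f} um (true {d_true} um)")
```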

3 Localization Schemes

Similar to traditional wireless localization problems, at least three points with known coordinates are needed to locate an unknown target. In this section, four scenarios with different time delays and system complexity costs are discussed to obtain the estimated distances between the transceivers, and the actual position of the target is then derived from these distances. To avoid confusion, the receiver to be located is called the target nano-machine (TN) and a transmitter with prior knowledge of its own location is called a sensor nano-machine (SN), since under some circumstances both transmitting and receiving functions are performed by Tx (or Rx) with a feedback link. TN is assumed to be fixed during the entire localization process.

• Scenario 1: A set of three fixed SNs with different coordinates is considered. At the beginning of the localization process, a number of molecules are released by TN. Each SN_i then records its own peak concentration time and derives its distance from TN as

$$\hat{d}_i = \sqrt{4 D t_{p_i}}, \quad i = 1, 2, 3. \qquad (6)$$

• Scenario 2: Similar to Scenario 1, three fixed SNs work together to locate the target. At the beginning of the localization process, SN_i emits a number of molecules of a specific type A_i into the channel; TN then emits corresponding molecules of type B_i for each SN after sensing the peak concentration of A_i. Each SN_i records the peak concentration time of B_i and derives its distance from TN as

$$\hat{d}_i = \sqrt{\frac{4\, D_{A_i} D_{B_i}\, t_{p_i}}{D_{A_i} + D_{B_i}}}, \quad i = 1, 2, 3, \qquad (7)$$

where $D_{A_i}$ and $D_{B_i}$ are the diffusion coefficients of the specific molecule types.

• Scenario 3: In this case, only a single SN is available to perform the localization task. To satisfy the minimum number of points needed for localization, the SN moves to two new positions. The time interval of the SN's movement equals the molecule release interval $T_s$ to remain synchronized, and the remaining steps are the same as in Scenario 1. For fairness in complexity, it is assumed that TN utilizes a single type of molecules; otherwise the performance would be exactly the same as in Scenario 1.

• Scenario 4: Similar to Scenario 3, a single movable SN is considered. Since molecules originating from previous pulses stay in the channel, they cause adverse inter-symbol interference (ISI) on the current CIR. To mitigate the influence of ISI, the SN can revise the measured concentration using the estimated distances at the previous positions as

$$\tilde{C}_1(t) = C_1(t), \quad \tilde{C}_2(t) = C_2(t) - C_1(\hat{d}_1, t + T_s), \quad \tilde{C}_3(t) = C_3(t) - C_2(\hat{d}_2, t + T_s) - C_1(\hat{d}_1, t + 2T_s), \qquad (8)$$

where $\tilde{C}_i(t)$, $C_i(\hat{d}_i, t)$ and $C_i(t)$ represent the concentration used to perform the distance estimation, the analytical concentration with the estimated distance, and the actual received concentration over time, respectively.

Fig. 2. An illustration for the three-point based localization scheme (anchor points P1(0 μm, 0 μm), P2(5 μm, 0 μm), P3(5 μm, 5 μm) and target P0(2 μm, 2 μm))

After the distances between the three auxiliary points and the target are derived by one of the above methods, we adopt a simple yet effective trilateration algorithm to locate the target $P_0(x_0, y_0)$. As shown in Fig. 2, three points $P_1(x_1, y_1)$, $P_2(x_2, y_2)$ and $P_3(x_3, y_3)$ are distributed in a two-dimensional coordinate system. The positive direction of the x-axis is set along the vector $\overrightarrow{P_1 P_2}$ and the point $P_1$ is located at the origin (i.e., $P_1(x_1, y_1) = P_1(0, 0)$), therefore $P_2(x_2, y_2) = P_2(x_2, 0)$. The radii of the circles are $\hat{d}_1$, $\hat{d}_2$ and $\hat{d}_3$, and the intersection of the three circles is the target to be measured; thus we have

$$\hat{d}_1^2 = x_0^2 + y_0^2, \quad \hat{d}_2^2 = (x_0 - x_2)^2 + y_0^2, \quad \hat{d}_3^2 = (x_3 - x_0)^2 + (y_3 - y_0)^2. \qquad (9)$$

The solutions to these equations are

$$\hat{x}_0 = \frac{\hat{d}_1^2 - \hat{d}_2^2 + x_2^2}{2x_2}, \qquad \hat{y}_0 = \frac{\hat{d}_1^2 - \hat{d}_3^2 - \hat{x}_0^2 + (\hat{x}_0 - x_3)^2 + y_3^2}{2y_3}, \qquad (10)$$

and the angle between the positive half of the x-axis and the line connecting the target and the origin is measured as

$$\hat{\theta} = \arctan\frac{\hat{y}_0}{\hat{x}_0}. \qquad (11)$$

It is noteworthy that the angle estimation builds on the distance estimation, so the localization performance mainly depends on the accuracy of the distance estimation.
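The closed-form solution (9)–(11) is straightforward to implement. The following sketch assumes the coordinate frame of Fig. 2 (P1 at the origin, P2 on the positive x-axis) and uses the anchor and target coordinates shown there as a worked example.

```python
import numpy as np

def trilaterate(d1, d2, d3, x2, x3, y3):
    """Solve Eqs. (9)-(11) with P1 = (0, 0), P2 = (x2, 0), P3 = (x3, y3)."""
    x0 = (d1**2 - d2**2 + x2**2) / (2 * x2)                          # Eq. (10), first line
    y0 = (d1**2 - d3**2 - x0**2 + (x0 - x3)**2 + y3**2) / (2 * y3)   # Eq. (10), second line
    theta = np.degrees(np.arctan2(y0, x0))                           # Eq. (11), angle in degrees
    return x0, y0, theta

# Example with the anchor geometry of Fig. 2 and the true target at (2 um, 2 um):
x2, x3, y3 = 5.0, 5.0, 5.0
target = np.array([2.0, 2.0])
d1 = np.linalg.norm(target - np.array([0.0, 0.0]))
d2 = np.linalg.norm(target - np.array([x2, 0.0]))
d3 = np.linalg.norm(target - np.array([x3, y3]))
print(trilaterate(d1, d2, d3, x2, x3, y3))   # -> (2.0, 2.0, 45.0)
```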

4 Numerical Results

In this section, simulation results assessing the performance of the localization schemes described in this paper are presented in terms of SNR. To model the noise reasonably, an additive signal-dependent noise $n(t)$ with zero mean and standard deviation $\sigma = \sqrt{C(t)/(\pi R^2)}$ is considered, i.e., $n(t) \sim \mathcal{N}(0, \sigma^2)$. The simulation is performed by a particle-based Monte Carlo method, in which the molecules released by Tx move independently of each other as Brownian motion. After every time step $\Delta t$, the location of each molecule is updated and Rx counts the number of molecules inside its sensing area to obtain the real-time concentration. The parameter values are set as $N = 10000$, $T_s = 0.15$ s, $R = 2\ \mu\mathrm{m}$, $D = 1000\ \mu\mathrm{m}^2\,\mathrm{s}^{-1}$ and $\Delta t = 10^{-4}$ s, which are common in the molecular communication literature. Note that the diffusion coefficient is unified to a single value $D$ to maintain fairness among all the schemes. The results are obtained by averaging over $10^4$ realizations.
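For reference, a minimal version of the particle-based Monte Carlo procedure described above could look as follows; the Brownian step uses Gaussian increments with per-axis variance 2DΔt, and the Rx position at (5 µm, 0) is an assumption made only for this illustration.

```python
import numpy as np

# Particle-based Monte Carlo sketch of the 2-D diffusion channel.
rng = np.random.default_rng(1)
N_mol, D, R, dt = 10000, 1000.0, 2.0, 1e-4      # molecules, um^2/s, um, s
rx_center = np.array([5.0, 0.0])                # assumed Rx position (um)
steps = int(0.15 / dt)                          # simulate one symbol interval Ts = 0.15 s

pos = np.zeros((N_mol, 2))                      # all molecules released at Tx (origin) at t = 0
counts = np.empty(steps)
for k in range(steps):
    # independent Brownian increments: each coordinate ~ N(0, 2*D*dt)
    pos += rng.normal(0.0, np.sqrt(2 * D * dt), size=pos.shape)
    inside = np.linalg.norm(pos - rx_center, axis=1) <= R
    counts[k] = inside.sum()                    # molecules inside the sensing area

concentration = counts / (np.pi * R**2)         # observed concentration over time
t_p = (np.argmax(concentration) + 1) * dt       # empirical peak time
print(f"empirical t_p = {t_p:.4f} s, theoretical d^2/(4D) = {5.0**2/(4*D):.4f} s")
```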

Localization Schemes for 2-D Molecular Communication


Fig. 3. Absolute distance error for four scenarios under various levels of SNR


Fig. 4. Absolute angle error for four scenarios under various levels of SNR

This ISI makes the peak concentration time inaccurate, resulting in poor distance estimation performance. To solve this problem, the scheme proposed in Scenario 4 utilizes the estimated distances at the former positions to cancel out the ISI. The results show that the ISI mitigation improves the localization accuracy, making it comparable to the performance of Scenario 1 without additional expenditure. The absolute errors of both distance and angle converge to relatively small values when the SNR exceeds 0 dB, which is related to the measurement precision determined by system parameters such as the length of the time step. It can be seen that the accuracy remains high even at the relatively low SNRs that are easy to attain in a molecular communication system.

5 Conclusions

In this paper, four localization schemes for different scenarios based on a trilateration location algorithm are proposed. Instead of a set of nano-sensors, we


further consider a single mobile nano-machine to locate a target. Moreover, angle estimation in localization for molecular communication is achieved for the first time, and it could also be extended from a 2-D to a 3-D environment. The schemes are compared in terms of accuracy, i.e., the absolute errors of distance and angle. Simulation results show that the estimation scheme with a feedback link enjoys better accuracy while suffering a cost in delay. In addition, the ISI mitigation process can significantly reduce the interference among pulses when only a single type of molecules is available. To select a localization scheme properly, all the factors should be considered, such as the complexity of the system, the required precision and the tolerance of delay.

References

1. Alberts B (2015) Essential cell biology. Mol Reprod Develop 51(4):477–477
2. Farsad N, Yilmaz HB, Eckford A, Chae CB, Guo W (2017) A comprehensive survey of recent advancements in molecular communication. IEEE Commun Surv Tutor 18(3):1887–1919
3. Huang J-T, Lai H-Y, Lee Y, Lee C, Yeh P (2013) Distance estimation in concentration-based molecular communications. In: 2013 IEEE global communications conference (GLOBECOM), pp 2587–2591
4. Llatser I, Cabellos-Aparicio A, Pierobon M, Alarcon E (2013) Detection techniques for diffusion-based molecular communication. IEEE J Sel Areas Commun 31(12):726–734
5. Moore M, Nakano J, Enomoto S (2012) Measuring distance from single spike feedback signals in molecular communication. IEEE Trans Signal Process 60(7):3576–3587
6. Nakano T, Eckford AW, Haraguchi T (2013) Molecular communication—application areas of molecular communication
7. Noel A, Cheung KC, Schober R (2014) Bounds on distance estimation via diffusive molecular communication. In: Global communications conference
8. Wang X, Higgins MD, Leeson MS (2015) Distance estimation schemes for diffusion based molecular communication systems. IEEE Commun Lett 19(3):399–402

Research on Support Vector Machine in Estimating Source Number Xiaoli Zhang, Jiaqi Zhen(&), and Baoyu Guo College of Electronic Engineering, Heilongjiang University, No. 74 Xuefu Road, Harbin, China [email protected]

Abstract. In order to reduce the computational load of the Gerschgorin-radii-based source number estimation algorithm, and to improve the accuracy of source number estimation under low signal-to-noise ratio, small snapshot numbers and white noise, feature extraction of the signal and noise received by the antenna array is performed using the property that the noise vectors are orthogonal to the array manifold. A classifier based on the Support Vector Machine is designed. The structure of the classifier and the parameters giving the optimal classification accuracy are determined by theoretical analysis and tests on real data. The validity and feasibility of the proposed method are verified by simulation data and real data tests. Keywords: Gerschgorin radii · Estimation of source number · Orthogonal · Feature extraction · Support vector machines

1 Introduction

Spatial spectrum estimation is an important branch of array signal processing [1]. The highest-resolution DOA estimation algorithms are derived under the assumption that the number of sources is known. When the number of sources is unknown or does not match the assumption, the performance of these algorithms degrades seriously or they fail altogether [2]. Estimation of the source number has therefore always been a research hotspot and a difficulty in the fields of blind source separation and spatial spectrum estimation [3]. At present, source number estimation methods can be grouped into two categories. The first is the traditional discriminant-function approach, including the hypothesis testing (HYP) algorithm [4], the Akaike information criterion (AIC), the Gerschgorin radii (GDE) method, the minimum description length (MDL) principle, and so on. The other is a pattern-recognition approach based on machine learning. The work in this paper studies source number estimation based on SVM. Experiments show that the algorithm estimates the number of sources with high accuracy in low-SNR, small-snapshot and white noise environments.



2 Signal Number Estimation Based on Support Vector Machine

For two-class nonlinear classification problems, the samples are mapped from the original space to a higher-dimensional feature space so that they become linearly separable there. SVM realizes this by introducing a kernel function:

$$k(\mathbf{x}_i, \mathbf{x}_j) = \langle \phi(\mathbf{x}_i), \phi(\mathbf{x}_j) \rangle = \phi(\mathbf{x}_i)^T \phi(\mathbf{x}_j). \qquad (2.1)$$

The optimal decision function is obtained by solving the problem:

$$f(\mathbf{x}) = \mathrm{sgn}\left\{ \sum_{i=1}^{m} \alpha_i y_i\, k(\mathbf{x}_i, \mathbf{x}_j) + b \right\}. \qquad (2.2)$$

The kernel function in Eq. (2.1) is chosen as the RBF kernel:

$$k(\mathbf{x}_i, \mathbf{x}_j) = \exp\!\left(-\frac{\|\mathbf{x}_i - \mathbf{x}_j\|^2}{2\sigma^2}\right). \qquad (2.3)$$

2.1 Feature Extraction of Source Number Estimation Based on SVM

The classification features of signal and noise are extracted by GDE, and a two-class support vector machine is constructed and trained to divide the data received by the antenna array into the signal subspace and the noise subspace [5]. According to the unitarily transformed matrix $R_T$, $\rho_i = q_i^H A_{M-1} R_{ss} b_M$. If $\rho_i$ corresponds to the noise subspace, $|\rho_i|$ is close to zero; conversely, $|\rho_i|$ is relatively large. Therefore, the $(M-1) \times 1$ column vector formed by the $\rho_i$ of the $R_T$ matrix is taken as the first feature vector of the SVM classifier input. The covariance matrix $R$ is used to construct an $(M-1) \times 1$ column vector from the second to the $M$-th element of its first column; this column vector is used as the second feature vector of the SVM classifier input. The eigenvalues corresponding to the eigenvector space $U$ are sorted from small to large and taken as the third feature vector. These three feature vectors form an $(M-1) \times 3$ matrix as the input to the SVM classifier.
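A hedged sketch of this feature construction is given below. The Gerschgorin-radii computation follows a common variant of the GDE (eigen-decomposition of the leading (M−1)×(M−1) block of the sample covariance followed by a unitary transform of its last column) and may differ in detail from the authors' exact formulation; the source angles in the usage example are those of Test 1 later in the paper, while M = 8 sensors and 200 snapshots are assumptions.

```python
import numpy as np

def gde_features(X):
    """Sketch of the (M-1) x 3 feature matrix for the SVM classifier.
    X: (M, n_snapshots) complex array snapshots."""
    M, n = X.shape
    R = X @ X.conj().T / n                     # sample covariance matrix
    R1, r = R[:M-1, :M-1], R[:M-1, M-1]        # leading block and last column
    w, U1 = np.linalg.eigh(R1)                 # eigen-decomposition of the leading block
    feat1 = np.abs(U1.conj().T @ r)            # feature 1: (M-1) Gerschgorin radii
    feat2 = np.abs(R[1:, 0])                   # feature 2: first column of R, rows 2..M
    feat3 = np.sort(w)                         # feature 3: eigenvalues sorted ascending
    return np.column_stack([feat1, feat2, feat3])   # (M-1) x 3 input to the SVM

# Example with M = 8 half-wavelength-spaced sensors, 4 sources at {10, 35, 50, 65} deg:
rng = np.random.default_rng(2)
M, K, n = 8, 4, 200
A = np.exp(-1j * np.pi * np.outer(np.arange(M), np.sin(np.deg2rad([10, 35, 50, 65]))))
S = (rng.standard_normal((K, n)) + 1j * rng.standard_normal((K, n))) / np.sqrt(2)
noise = (rng.standard_normal((M, n)) + 1j * rng.standard_normal((M, n))) / np.sqrt(2)
print(gde_features(A @ S + 0.3 * noise).shape)   # -> (7, 3)
```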

3 Implementation of the Source Number Estimation Algorithm

In the implementation, the support vector machine is realized with the LIBSVM software package, which can be used to solve classification, regression and distribution estimation problems.

3.1 Establishment and Optimization of Classifier Parameters

Using the RBF kernel, the kernel parameter (g) and the penalty factor (c) have an important influence on the classifier. We use a grid search followed by cross-validation to optimize the two classifier parameters g and c, and draw a 3-D view. The parameter optimization result is shown in Fig. 1.

Fig. 1. Classifier parameter optimization
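The grid search with cross-validation described above can be sketched as follows using scikit-learn's SVC, which wraps LIBSVM; the parameter grids, the 5-fold cross-validation and the random stand-in data are illustrative assumptions rather than the authors' exact setup.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Sketch of the grid search + cross-validation over (c, g) for the RBF-kernel SVM.
# X (features) and y (labels) are assumed to be built from the (M-1) x 3 feature
# matrices of Sect. 2.1, flattened per sample.
def tune_svm(X, y):
    grid = {
        "C": 2.0 ** np.arange(-5, 6),       # penalty factor c
        "gamma": 2.0 ** np.arange(-5, 6),   # RBF kernel parameter g
    }
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
    search.fit(X, y)
    return search.best_params_, search.best_score_

# Example usage (illustrative random data standing in for the real feature set):
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 21))          # 200 samples of flattened (M-1) x 3 features, M = 8
y = rng.integers(0, 2, 200)                 # binary labels: signal vs. noise subspace
print(tune_svm(X, y))
```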

The optimal parameters c = 1 and g = 4 are obtained at the highest classification accuracy.

3.2 Simulation Experiment

Test 1: Suppose that the array element spacing is one half of the signal wavelength and the number of sources is four, with incidence angles {10°, 35°, 50°, 65°}. The signal-to-noise ratio is varied over [−30:10:30] dB and the experiment is repeated 100 times. As shown in Fig. 2, the accuracy is much higher than that of the GDE algorithm in the range of −30 to 20 dB, and the detection accuracy reaches 100% from −10 to 30 dB. Test 2: Assuming an SNR of 10 dB, the number of snapshots ranges over [100, 1200], and the test is repeated 100 times to obtain the detection accuracy. Figure 3 shows that the accuracy of the algorithm remains high and stable as the number of snapshots increases.


Fig. 2. Relationship between different SNR and detection rate

Fig. 3. Relationship between different snapshots and detection rate

4 Conclusions

This paper exploits the good classification ability of the SVM algorithm and uses the open-source toolbox LIBSVM to design an SVM classifier. Combined with the basic principle of Gerschgorin radii for source number estimation, both the accuracy of the source number estimation and the computational efficiency are improved effectively under Gaussian white noise, low SNR and small snapshot numbers.


Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grant 61501176, Natural Science Foundation of Heilongjiang Province F2018025, University Nursing Program for Young Scholars with Creative Talents in Heilongjiang Province UNPYSCT-2016017, and the postdoctoral scientific research developmental fund of Heilongjiang Province in 2017 LBH-Q17149.

References

1. Lu Z, Zoubir AM (2013) Generalized Bayesian information criterion for source enumeration in array processing. IEEE Trans Signal Process 61(6):1470–1480
2. Wang YL (2004) Spatial spectrum estimation theory and algorithm. Tsinghua University Press, Beijing
3. Wang R, Zhan Y (2015) A method of dynamic DOA estimation with an unknown number of sources. In: IEEE international conference on mechatronics & automation. IEEE
4. Chen W, Wong KM, Reilly JP (2002) Detection of the number of signals: a predicted eigen-threshold approach. IEEE Trans Signal Process 39(5):1088–1098
5. Zhao HQ, Zhang YP, Zhao B (2011) Source number estimation based on support vector machine. J Hebei Univ Sci Technol 32(4):342–346

Wireless Electricity Transmission Design of Unmanned Aerial Vehicle Charging Systems Yashuo He1,2, Jingjing Wu1,2, Sumeng Shi1,2, Ze Song1,3, Qijing Qiao1,4, and Cheng Wang1,5(&) 1

Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin 300387, China {AsoHe_2018,JingjingWu1998,shisumeng1997,18822077572} @163.com, [email protected], [email protected] 2 Department of Communication Engineering, College of Computer and Information Engineering, Tianjin Normal University, Tianjin 300387, China 3 Department of Electronic Science and Technology, College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin 300387, China 4 Department of Computer Science and Technology, College of Computer and Information Engineering, Tianjin Normal University, Tianjin 300387, China 5 Department of Artificial Intelligence, College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin 300387, China

Abstract. Unmanned aerial vehicles have become widely used tools along with the progress of robotics, control and energy techniques. As convenient carriers of payloads and apparatus for specific missions, battery replenishment is significantly important for their usage reliability and durability. To overcome the restrictions of conventional cable-dependent battery charging, wireless electricity transmission based on electromagnetic coupling is investigated in this work. Non-contact charging may enable more powerful facilities on unmanned aerial vehicles, and may further open up new application fields for drone scenarios.

 Wireless charging  Electromagnetic coupling

1 Introduction Unmanned aerial vehicles (UAVs), as well as drones, are classified by scale under civil aviation regulations, as micro, light, small and large drones [1]. As a kind of novel tools for executing critical mission, the power supply and duration play more and more important roles. Beneficial from the progresses of energy storage and battery investigations, such as the high energy density Lithium battery with excellent charge/discharge characteristics, UAVs have been becoming practically applicable beyond bench-top stage. However, the battery charging operation up to this day still usually relies on electrically cabled implements, which inevitably constrain the application convenience. Due to this issue, a reliable charging scenario free of cables is eagerly desired. Hereby, we reports a wireless charging research in this work for the more convenient application of UAVs. © Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 762–768, 2020 https://doi.org/10.1007/978-981-13-9409-6_90


Wireless power transmission (WPT) techniques exploit energy carriers (e.g., electromagnetic waves, microwaves) in physical space to transfer energy between the charger and the consumer without the need for cables [2]. Among the power transmission principles, electromagnetic coupling offers many advantages, such as low implementation cost and a mature theoretical basis. As shown in Fig. 1, the electromagnetic coupling principle has been widely utilized.

Fig. 1. Schematic and examples of wireless power transmission applications

The electromagnetic coupling principle is usually realized via two approaches. The first is the magnetic coupling resonance technique, also known as Wireless Electricity (WiTricity) technology [3]. In the first reported realization of electromagnetic-resonance wireless energy transmission, a 60 W bulb was lit at a distance of 2 m with a transmission efficiency of about 40%. This charging mode enables a long transmission distance and is suitable for long-distance, high-power charging [4]. However, the charging efficiency of this method is low because of its large power transmission loss: the farther the distance, the more serious the power loss. The other approach is the electromagnetic induction coupling (ICP) technique, which mainly uses electromagnetic induction [5]. Current through a coil generates a magnetic field, which induces a current in an adjacent coil. Inductively coupled methods have realized the transmission of power up to hundreds of kilowatts, and small-scale obstacles do not affect the power transmission. However, the transmission distance of this method is usually shorter than 10 cm. In addition, the accuracy of the placement position is critical, and only one-to-one aligned coils can be used [6].


2 Results and Discussion

2.1 Magnetic Coupling Resonance Method

Fig. 2. Schematic and realization methods of electromagnetic coupling principle. a Schematic of magnetically coupled resonant radio energy transmission. b Energy transmission circuit model. c Equivalent circuit

Wireless Electricity Transmission Design

765

The equivalent circuit model of the electromagnetic transmitting system and the electromagnetic receiving system are given in Fig. 2b. The exciting coil consists of an excitation source (high-frequency power amplifier) and a single-turn coil. The load coil consists of a single-turn coil and a load. The transmitting and receiving coils perform the same resonant frequency [9]. To simplify the analysis, we replace the circuit with an equivalent circuit of the magnetic coupling resonant wireless energy transfer system as shown in Fig. 2c. Here, : : : Us is the excitation voltage. I1 and I2 are the currents of the transmitting and receiving resonator circuits, respectively. R1 and R2 are the internal resistance of the transmitting resonator and the receiving resonator, respectively. Rs is the internal resistance of the power supply. RL is the load resistance. M12 is the mutual inductance value between the resonators. D is the transmission distance. Then the circuit function can be described by Eq. (1). (

:

:

:

:

Us ¼ Rs I1 þ R1 I1 þ jxL1 I1 þ :

:

:

0 ¼ RL I2 þ R2 I2 þ jxL2 I2 þ

: : 1 jxC1 I1  jxM12 I2 : : 1 jxC2 I2  jxM12 I1

ð1Þ

While x ¼ x0 ¼ ðL1 C1 Þ1=2 ¼ ðL2 C2 Þ1=2 , the resonance between the transmitting and receiving coils happens. As a result, we deduce Eq. (2) as below: (

:

:

:

Us ¼ ðRs þ R1 Þ I1  jx0 M12 I2 : : 0 ¼ ðR2 þ RL Þ I2  jx0 M12 I1

ð2Þ

And the output power P0 , and transmission efficiency g are given by Eq. (3) as functions of mutual inductance M12 , resonant frequency x0 , load resistance RL , and the internal resistance of both ends. 8 R x2 M 2 > < g ¼ ðR þ R Þ ðR þ RL Þ0ðR 12þ R Þ þ x2 M 2 2 L ½ s 1 2 L 0 12  ð3Þ 2 x20 M12 2 > : P0 ¼ Us RL 2 2 2 ½ðRS þ R1 ÞðR2 þ RL Þ þ x0 M12  2 While RS þ R1 ¼ x20 M12 =ðR2 þ RL Þ, the load power is maximized, but the efficiency g is lower than 50%. Thereby, the impedance matching method is usually used to maximize the received power of the load. When the transmitting coil and the receiving coil are the same as planar coaxial coils, the mutual inductance is given by Eq. (4):

lpN1 N2 r12 r22 M ¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi  3 2 r12 þ D2

ð4Þ

where l ¼ 4p  107 N=A2 is the vacuum permeability. According to Eq. (4), if x0 and RL are determined, the parameters, such as winding number of coil N, radius r, and coil interval D impact the output power and the transmission efficiency.

766

2.2

Y. He et al.

Electromagnetic Induction Coupling Method

According to the Maxwell equations, the resonant converting circuit converts DC into a high frequency AC and drives the transmitting coil to manipulate magnetic fields, as well as the current within receiving coil [10]. While the receiving coil is connected to the rectifier filter regulator circuit, the AC energy is converted into DC power supply for loads. The electromagnetic induction wireless charging system consist of a transmitting coil and a receiving coil, and the AC in transmitting coil generates a magnetic field, and the coupling coil generates a voltage through the coil coupling [8]. According to BiotSavar’s law, constant current in a closed loop leads to the electromagnetic induction at a point outside the closed loop is described by Eq. (5): U0 Z Id~l ~ ~ r B¼ 4p r3

ð5Þ

As shown in Fig. 3, the center of the circular coil is O, its radius is R, and the current i in the coil generates a magnetic field.

Fig. 3. Magnetic field of current carrying element coil

Take a current element $I\,d\vec{l}$ on the circular coil, and let $r$ be the distance from the current element to the point P. Since $r$ is constant and $I\,d\vec{l}$ is perpendicular to it, the Biot-Savart law gives the magnetic induction at point P contributed by the element as

$$d\vec{B} = \frac{\mu_0}{4\pi}\cdot\frac{I\,d\vec{l}}{r^2}. \qquad (6)$$

$d\vec{B}$ lies in the plane containing $r$ and the coil axis and is perpendicular to $r$. Obviously, the direction of the magnetic induction generated at point P by each current element on the coil is different. Therefore, $d\vec{B}$ must be decomposed into a sub-vector $d\vec{B}_{\perp}$ perpendicular to the axis and a sub-vector $d\vec{B}_{\parallel}$ parallel to the axis. Due to symmetry, the $d\vec{B}_{\perp}$ components cancel each other, and $d\vec{B}_{\parallel}$ is expressed by Eq. (7):

$$d\vec{B}_{\parallel} = d\vec{B}\,\sin\theta = \frac{\mu_0}{4\pi}\cdot\frac{I\,d\vec{l}}{r^2}\cdot\frac{R}{r}. \qquad (7)$$


From the above formula we see that the parallel component of the magnetic induction at point P is related to the distance from the point to the coil and to the coil radius, $d\vec{B}_{\parallel} \propto R/r^3$. Then, according to the magnetic flux and the coupling relation $\Phi = M I = S B$, the coupling coefficient satisfies $K \propto R/r^3$. Similarly, it can be seen from the above formula that the radius of the transmitting coil and the relative distance between the two coils have a great impact on the induced current [5]. In addition, unlike the magnetically coupled resonant wireless charging system, the electromagnetic-induction wireless charging system places stricter requirements on the relative position of the transmitting and receiving coils. To elucidate the influence of the distance between the transmitting and receiving coils under the magnetically coupled resonance condition, the coil parameters are given in Table 1.

Table 1. Parameters of magnetic coupling resonance

Excitation voltage $\dot{U}_s$: 10 V
Internal resistance $R_s$: 50 Ω
Coil turn numbers $N_1$, $N_2$: 20, 20
Coil radii $r_1$, $r_2$: 0.1 m, 0.1 m
Load resistance $R_L$: 100 Ω
Resonant frequency $\omega_0$: 200 kHz

Based on these conditions, we finally obtain the results as shown in Fig. 4.

Fig. 4. Interval impacts in the systems of magnetic resonance wireless charging and electromagnetic induction wireless charging configurations. a Output efficiency and b power performances in the magnetic resonance system. c Output efficiency and d power performances in the electromagnetic induction system
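A short numerical sketch of Eqs. (3)–(4) with the Table 1 parameters is given below; the coil internal resistances R1 = R2 = 0.5 Ω are assumed, since Table 1 only lists the source internal resistance, and the distance range is illustrative.

```python
import numpy as np

# Sketch of the magnetic-resonance link budget, Eqs. (3)-(4), with Table 1 parameters.
Us, Rs, RL = 10.0, 50.0, 100.0          # excitation voltage (V), source / load resistance (Ohm)
N1 = N2 = 20                            # coil turns
r1 = r2 = 0.1                           # coil radii (m)
w0 = 2 * np.pi * 200e3                  # resonant angular frequency (rad/s), f0 = 200 kHz
R1 = R2 = 0.5                           # coil internal resistances (Ohm), assumed values
mu0 = 4 * np.pi * 1e-7                  # vacuum permeability (N/A^2)

Dist = np.linspace(0.05, 0.5, 100)      # transmission distance D (m)
M12 = mu0 * np.pi * N1 * N2 * r1**2 * r2**2 / (2 * (r1**2 + Dist**2) ** 1.5)   # Eq. (4)

den = (Rs + R1) * (R2 + RL) + (w0 * M12) ** 2
eta = RL * (w0 * M12) ** 2 / ((R2 + RL) * den)          # Eq. (3): transmission efficiency
P0 = Us**2 * RL * (w0 * M12) ** 2 / den**2              # Eq. (3): output power

i = np.argmin(np.abs(Dist - 0.1))
print(f"at D = 10 cm: eta = {eta[i]:.3f}, P0 = {P0[i]:.3f} W")
```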


3 Conclusion

In this work, the charging characteristics of wireless charging systems are discussed. The application ranges of the magnetic coupling resonance and electromagnetic induction principles are analyzed, and their application fields, advantages and issues are presented. In addition, a drone-charging concept is proposed, which may indicate a development direction for future practical wireless charging of UAVs.

References

1. Zeng Y, Zhang R, Lim TJ (2016) Wireless communications with unmanned aerial vehicles: opportunities and challenges. IEEE Commun Mag 54(5):36–42
2. Garnica J, Chinga RA, Lin J (2013) Wireless power transmission: from far field to near field. Proc IEEE 101(6):1321–1331
3. Mannoor MS et al (2012) Graphene-based wireless bacteria detection on tooth enamel. Nat Commun 3:763
4. Gozalvez J (2007) WiTricity—the wireless power transfer [mobile radio]. IEEE Veh Technol Mag 2(2):38–44
5. Zierhofer CM, Hochmair ES (1996) Geometric approach for coupling enhancement of magnetically coupled coils. IEEE Trans Biomed Eng 43(7):708–714
6. Karacolak T, Cooper R, Topsakal E (2009) Electrical properties of rat skin and design of implantable antennas for medical wireless telemetry. IEEE Trans Antennas Propag 57(9):2806–2812
7. Kiani M, Jow U-M, Ghovanloo M (2011) Design and optimization of a 3-coil inductive link for efficient wireless power transmission. IEEE Trans Biomed Circuits Syst 5(6):579–591
8. Pankrac V (2011) Generalization of relations for calculating the mutual inductance of coaxial coils in terms of their applicability to non-coaxial coils. IEEE Trans Magn 47(11):4552–4563
9. Reinhold C, Scholz P, John W, Hilleringmann U (2007) Efficient antenna design of inductive coupled RFID-systems with high power demand. J Commun 2(6):14–23
10. RamRakhyani AK, Mirabbasi S, Chiao M (2010) Design and optimization of resonance-based efficient wireless power delivery systems for biomedical implants. IEEE Trans Biomed Circuits Syst 5(1):48–63

An ITD-Based Method for Individual Recognition of Secondary Radar Radiation Source Tianqi Li(&), Yu Zhang, and Xiaojing Yang College of Electronic Countermeasures, National University of Defense Technology, Hefei, China [email protected]

Abstract. In order to study the fine features and individual recognition of radiation source signals, a method for individual recognition of secondary radar radiation sources based on ITD is proposed to solve the problem of poor anti-noise performance in current research work. The method uses intrinsic time-scale decomposition to describe the unintentional modulation characteristics of the radiation source signal, and uses a fast sample entropy algorithm to measure the differences in the subtle characteristics of different radiation source signals. A support vector machine (SVM) is selected as the classifier for classification and recognition. Experiments show that the proposed method can significantly improve the recognition performance and speed.

Keywords: Specific emitter identification · Secondary radar · SVM · Time-frequency analysis

1 Introduction

As the core research content in the field of military reconnaissance, individual identification of radiation sources [1] plays an increasingly important role in the high-speed development of modern warfare. Identifying the subtle features of radiation source signals efficiently and accurately, and then determining the individual targets, is a research focus in the electronic reconnaissance field. At present, scholars mainly focus on conventional intra-pulse characteristics [2] and time-domain characteristics [3] of the signal, which are difficult to apply in complex and changeable noise environments. Literature [3] points out that, due to differences in hardware production processes, radiation source signals contain unique and stable subtle characteristics under the nonlinear effect of transmitter power amplifiers, which can be used as the basis for individual recognition. Time-frequency analysis, as a method to extract the subtle features of signals, has been widely used in communication and in individual recognition of radar radiation sources. The Hilbert-Huang Transform (HHT) [4] can process non-stationary and nonlinear signals without any prior information and, compared with traditional time-frequency transforms, can capture the signal's frequency-domain changes more accurately.


However, the Empirical Mode Decomposition (EMD) algorithm used by HHT requires many iterations during signal decomposition and suffers from a serious "endpoint effect" [5], which degrades the subsequent individual recognition. The intrinsic time-scale decomposition (ITD) method proposed by Frei in 2007 obtains the high-frequency component and the baseline component through linear transformations of the signal, greatly reducing the computational complexity while alleviating the signal distortion caused by the endpoint effect; it has been applied in mechanical fault detection and biomedical signal analysis. Literature [6] used the ITD method to extract the fractal features, bispectral features and instantaneous amplitude spectrum features of the signal components for the individual identification of communication radiation sources, and achieved good results [7]. Literature [8] characterizes the nonlinear effect of the power amplifier by the permutation entropy complexity of the signal; the simulation experiments show that such nonlinear dynamic features are less sensitive to noise and more stable than shape features, and give good recognition performance. Based on the above analysis, this paper uses the ITD method to decompose and reconstruct the radiation source signal in view of the differences in nonlinear characteristics caused by the radiation source amplifier. For the reconstructed signal, Fast Sample Entropy (FSE) is used to convert the unintentional modulation characteristics of the signal into nonlinear dynamic characteristics for identification, providing stronger separability and noise resistance. At the same time, the algorithm complexity is reduced, the real-time performance of identification is improved, and the identification ability and anti-interference performance are enhanced.

2 Secondary Radar Source Signal Model

With the rapid development of information warfare, secondary radar signals play an important role in intelligence acquisition, anti-jamming, decision making and deployment. The working frequencies of the mode S interrogator and transponder are 1030 MHz and 1090 MHz, respectively. This paper takes the mode S query signal as the research object; its structure is shown in Fig. 1 [9].

Fig. 1. Mode S query signal (leading pulses P1 and P2, long pulse P6 with synchronous phase reversal, side lobe suppression pulse P5)


The signal consists of two leading pulses, P1 and P2, each with a pulse width of 0.8 μs and separated by an interval of 2 μs, followed by a long pulse P6 containing the interrogation information, which lasts 16.25 μs or 30.25 μs and carries 56 or 112 data bits of 0.25 μs each, respectively. A phase reversal occurs 1.25 μs after the rise of P6 and is used for clock synchronization of the transponder; P5 is the side lobe suppression pulse. Differential Phase Shift Keying (DPSK) is used to transmit the signal [1].
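As a rough illustration of the timing described above, the following Python sketch generates a simplified baseband mode S interrogation waveform. The gap between P1 and the rise of P6 is an assumed value, the sampling and carrier frequencies are the simulation settings used later in Sect. 5, and the DPSK mapping is simplified; this is not a standards-accurate generator.

```python
import numpy as np

def mode_s_interrogation(bits, fs=200e6, fc=60e6, p6_start=3.5e-6):
    """Illustrative mode S interrogation waveform (timing simplified).
    p6_start, the delay from P1 to the rise of P6, is an assumption."""
    chip = 0.25e-6                                    # duration of one data bit
    p6_len = 1.25e-6 + len(bits) * chip               # sync-reversal offset + data
    t = np.arange(int((p6_start + p6_len + 1e-6) * fs)) / fs

    env = np.zeros_like(t)                            # pulse envelope: P1, P2, P6
    for start, width in [(0.0, 0.8e-6), (2.0e-6, 0.8e-6), (p6_start, p6_len)]:
        env[(t >= start) & (t < start + width)] = 1.0

    # simplified DPSK: a '1' bit toggles the carrier phase at its chip boundary;
    # the earliest reversal is 1.25 us after the rise of P6
    phase = np.zeros_like(t)
    cur = 0.0
    for i, b in enumerate(bits):
        if b:
            cur += np.pi
        phase[t >= p6_start + 1.25e-6 + i * chip] = cur
    return t, env * np.cos(2 * np.pi * fc * t + phase)

t, s = mode_s_interrogation(np.random.randint(0, 2, 56))
```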

3 The Core Idea of the ITD Approach

The key of the EMD algorithm is to find the local extremum points of the signal, obtain the upper and lower envelopes through cubic spline interpolation, and then subtract the mean of the envelopes from the original signal. Over-enveloping and under-enveloping occur in the fitting process, which not only affects the decomposition of the signal but also leads to the "endpoint effect". Like EMD, ITD can also divide a non-stationary nonlinear radiation source signal into several adaptive components [10]. In this method, the baseline signal is obtained directly by a linear transformation of the local extrema, and the high-frequency rotation component is obtained by subtracting the baseline from the original signal. Compared with EMD, the complexity of the algorithm is reduced. The specific implementation process is as follows:

(1) Define operators L and H that extract the low-frequency component and the high-frequency component of an original signal X_t (t ≥ 0), respectively, so that the signal can be expressed in terms of L_t and H_t:

X_t = LX_t + HX_t = L_t + H_t    (1)

where L_t and H_t represent the baseline signal and the proper rotation component (PRC) of X_t, respectively.

(2) After finding each local extremum X_k of the signal X_t and the corresponding time τ_k (k = 0, 1, 2, ..., M), extract each baseline control point L_k:

L_{k+1} = δ [ X_k + ((τ_{k+1} − τ_k)/(τ_{k+2} − τ_k)) (X_{k+2} − X_k) ] + (1 − δ) X_{k+1}    (2)

In the above formula, the decomposition coefficient δ ∈ (0, 1) is usually 0.5, L_k and L_{k+1} denote the k-th and (k+1)-th baseline control points, and the endpoint value of the baseline signal is set as L_0 = (X_0 + X_1)/2.

(3) Define the baseline signal on the adjacent extremum interval (τ_k, τ_{k+1}] as:

LX_t = L_t = L_k + ((L_{k+1} − L_k)/(X_{k+1} − X_k)) (X_t − X_k)    (3)


According to formula (1), after the proper rotation component is separated from the original signal, the remaining baseline is treated as a new original signal and decomposed again following the above steps. When the baseline becomes a monotone trend item, the decomposition is regarded as complete, namely:

X_t = HX_t + LX_t = ( H Σ_{k=0}^{p−1} L^k + L^p ) X_t    (4)

In the actual calculation, the number of decomposition layers can be preset to improve operational efficiency, and the iteration can be stopped when the number of cycles reaches the limit. This method takes the shortcomings of EMD into account: it not only optimizes the algorithm structure but also remedies its defects and improves processing efficiency.
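The single ITD decomposition step of Eqs. (1)–(3) can be sketched as follows. This is only a minimal illustration (endpoint handling and the stopping criterion are simplified), not the exact implementation used in this paper.

```python
import numpy as np

def itd_step(x, delta=0.5):
    """One ITD step: return (baseline L_t, proper rotation component H_t).
    Requires enough local extrema; endpoint conventions are simplified."""
    x = np.asarray(x, dtype=float)
    d = np.diff(x)
    ext = np.where(d[:-1] * d[1:] < 0)[0] + 1          # interior extrema indices
    tau = np.concatenate(([0], ext, [len(x) - 1]))     # extremum times tau_k
    X = x[tau]                                         # extremum values X_k

    L = np.empty_like(X)                               # baseline control points, Eq. (2)
    L[0] = (X[0] + X[1]) / 2.0
    for k in range(len(tau) - 2):
        L[k + 1] = delta * (X[k] + (tau[k + 1] - tau[k]) / (tau[k + 2] - tau[k])
                            * (X[k + 2] - X[k])) + (1 - delta) * X[k + 1]
    L[-1] = (X[-2] + X[-1]) / 2.0

    baseline = np.empty_like(x)                        # piecewise baseline, Eq. (3)
    for k in range(len(tau) - 1):
        seg = slice(tau[k], tau[k + 1] + 1)
        dX = X[k + 1] - X[k]
        baseline[seg] = L[k] if abs(dX) < 1e-12 else \
            L[k] + (L[k + 1] - L[k]) / dX * (x[seg] - X[k])
    return baseline, x - baseline                      # L_t and H_t of Eq. (1)
```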

4 ITD Method for Individual Recognition of Secondary Radar Radiation Source

4.1 The Idea of the Method

In view of the low recognition rate of the secondary radar radiation source at the present stage, this paper adopts the ITD method to carry out an adaptive division of the signal, and then selects the key components that highlight the subtle differences of the signal to extract effective fine features for recognition. The specific implementation process of this method is shown in Fig. 2.

Fig. 2. Algorithm flow chart: sampled signal → ITD decomposition (obtain several PR components) → signal reconstruction → extract the fast sample entropy → SVM training and classification recognition

The method in this paper is based on time-frequency analysis and the fast entropy of the signal, and is mainly divided into three stages. Firstly, through the ITD time-frequency analysis method, several proper rotation components with explicit nonlinear characteristics are screened out from the sampled signal intercepted by the receiver; among them, several key components with high correlation with the signal are selected for signal reconstruction, removing the interference of irrelevant information such as noise. Secondly, the fast sample entropy of the reconstructed signal is extracted as a quantitative parameter representing the nonlinear characteristics of the power amplifier. Support vector machines (SVM) have been widely recognized and applied in radiation source signal processing, pattern recognition and other fields due to their good adaptability to non-linear and non-stationary signals and


high efficiency and accuracy in recognition. In this paper, the support vector machine (SVM) is adopted as the classifier, and fast sample entropy is used as the basis for identification and sorting, so that the reliability and timeliness of IFF radiation source individual identification are significantly improved.

4.2 The Basic Principle of the Fast Sample Entropy Algorithm

Sample entropy [10] is a nonlinear dynamic parameter commonly used to measure the regularity and unpredictability of time series fluctuations. It was developed from the modified approximate entropy algorithm. While retaining the advantages of approximate entropy (suitability for small sample sizes and high noise robustness), it removes the bias caused by comparing each template with itself, improves the calculation accuracy, and is widely used in mechanical fault diagnosis and biomedical signal analysis [8]. For an N-point time series x[n], the sample entropy can be calculated as follows:

(1) Reconstruct the phase space of x[n] with embedding dimension m:

X(i) = [u(i), u(i+1), ..., u(i+m−1)],  i = 1, 2, ..., N−m+1    (5)

(2) Traverse each point in the phase space; the distance d(i, j) between X(i) and X(j) is the maximum absolute difference between their corresponding elements. Define B_i^m(r) as the probability that any X(j) matches the template X(i), that is B_i^m(r) = B_i/(N−m), and take the average over i, denoted B^m(r).

(3) Increase the dimension from m to m+1 and follow the above process to obtain B^{m+1}(r).

(4) For a sequence of length N, the sample entropy of x[n] is:

SampEn(N, m, r) = −ln( B^{m+1}(r) / B^m(r) )    (6)

In this paper, the nonlinear dynamic parameter of sample entropy is used to characterize the nonlinear characteristics of the power amplifier; it reflects the rich internal detail changes of the signal, highlights the individual subtle differences, and can therefore be used as the basis for individual recognition. Following the implementation of the fast approximate sample entropy algorithm [10], the above steps are optimized to reduce the computational complexity and ensure the real-time performance of the identification method in this paper. The specific implementation process is as follows:

(1) For the N-point signal sequence x[n], form the sequence {v(1), v(2), ..., v(N)}, i ∈ [1, N], by ordinal number, and define the N-dimensional distance matrix C_{N×N}, whose elements are calculated as follows:


c_{i,j} = 1, if |v(i) − v(j)| < r;  c_{i,j} = 0, if |v(i) − v(j)| ≥ r    (7)

According to the above formula, the vector distance calculation between X(i) and X(j) is converted into binary variables.

(2) If the values of m and r are set consistently with the above algorithm, B_i^2(r) can be deduced from the elements c_{i,j} of the distance matrix C:

B_i^2(r) = Σ_{j=1}^{N−1} ( c_{i,j} ∧ c_{(i+1),(j+1)} )    (8)

(3) Calculate the fast sample entropy according to Eqs. (6) and (8). Through the above analysis, the fast algorithm replaces real-number calculations with the logical "and" operation to derive the higher-dimensional B_i^m(r), which is significantly more efficient than the traditional sample entropy algorithm. It can describe the subtle differences in the nonlinear characteristics of radiation source signals while ensuring the real-time performance of the whole recognition process. The main parameters in the above steps influence the calculated sample entropy and must be chosen with care. Fast sample entropy reflects slight differences between signal sequences, has a low requirement on sequence length, is insensitive to noise, and therefore has good anti-noise ability.
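A minimal sketch of the fast sample entropy computation described above (binary distance matrix plus logical AND, Eqs. (6)–(8)) might look as follows; it is vectorized with NumPy for illustration rather than matching the paper's exact implementation.

```python
import numpy as np

def fast_sample_entropy(x, m=2, r_factor=0.2):
    """Fast sample entropy via a binary distance matrix and logical AND."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * np.std(x)
    # binary distance matrix c[i, j] = 1 if |x_i - x_j| < r  (Eq. 7)
    C = (np.abs(x[:, None] - x[None, :]) < r).astype(np.uint8)

    def match_count(dim):
        # matches at embedding dimension `dim`: AND of the shifted matrices
        M = C[:N - dim + 1, :N - dim + 1].copy()
        for k in range(1, dim):
            M &= C[k:N - dim + 1 + k, k:N - dim + 1 + k]
        np.fill_diagonal(M, 0)          # exclude self-matches
        return M.sum()

    B_m, B_m1 = match_count(m), match_count(m + 1)
    return -np.log(B_m1 / B_m) if B_m > 0 and B_m1 > 0 else np.inf
```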

5 Performance Simulation Analysis

5.1 Experimental Data of Secondary Radar Radiation Source

This paper tests the recognition performance of the proposed algorithm with a simulated mode S query signal, setting the sampling frequency to 200 MHz and the carrier frequency of the simulated signal to 60 MHz. Due to limitations of the production technology, differences between manufacturers' production batches and differences in component workmanship, each actual power amplifier has its own unique and stable slight input-output differences and presents nonlinear characteristics. In order to characterize the working characteristics of different radiation sources, this paper adopts the Taylor series model to model the power amplifier [2]. The model is expressed as follows:

G(x(n)) = Σ_{n=1}^{N} a_n x^n(n)    (9)

where x(n) represents the power amplifier input signal, G(x(n)) is the power amplifier output signal, n = 1, 2, ..., N is the order of the Taylor model, and a_n is the Taylor coefficient corresponding to the n-th order. According to the above model, different individual radiation sources can be simulated with different Taylor parameters. If the Taylor order N is set to 5, signals from 5 radiation sources can be simulated, with the corresponding Taylor coefficients a1 = [1 0.5 0.3 0.05 0.2], a2 = [1 0.08 0.6 0.4 0.8], a3 = [1 0.01 0.01 0.3 0.15], a4 = [1 0.1 0.8 0.04 0.06] and a5 = [1 0.6 0.04 0.05 0.4], respectively. K denotes the number of radiation sources used for recognition and classification. The SVM toolbox LIBSVM 3.23 is selected as the classifier.
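As an illustration of Eq. (9), the sketch below passes the same input signal through several Taylor-series amplifier models to produce the simulated emitter outputs; the coefficient values simply follow the listing above (as printed), and the carrier/sampling settings are those stated in the text.

```python
import numpy as np

# Taylor coefficients of the five simulated emitters, as listed above
TAYLOR_COEFFS = [
    [1, 0.5, 0.3, 0.05, 0.2],
    [1, 0.08, 0.6, 0.4, 0.8],
    [1, 0.01, 0.01, 0.3, 0.15],
    [1, 0.1, 0.8, 0.04, 0.06],
    [1, 0.6, 0.04, 0.05, 0.4],
]

def amplifier_output(x, coeffs):
    """Eq. (9): G(x(n)) = sum_{n=1}^{N} a_n * x^n(n)."""
    return sum(a * np.power(x, n) for n, a in enumerate(coeffs, start=1))

fs, fc, n_samples = 200e6, 60e6, 4000          # settings given in the text
t = np.arange(n_samples) / fs
x = np.cos(2 * np.pi * fc * t)                 # common input signal
emitter_signals = [amplifier_output(x, c) for c in TAYLOR_COEFFS]
```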

5.2 Classification Recognition Performance Analysis

The method in this paper assumes that signal sorting has been completed, so by default there is no aliasing of signals. The simulated signals in Sect. 5.1 are used for the recognition experiment; 100 training samples and 100 test samples are taken for each source, and the number of sampling points N is 4000. The fast sample entropy parameters are chosen as m = 2 and r = 0.2σ(x). In order to verify the improvement in recognition performance of the ITD method over EMD and other feature extraction methods, two comparison methods are considered.

Fig. 3. Recognition performance versus SNR for different K: (a) K = 2, (b) K = 3, (c) K = 4, (d) K = 5


Comparison method 1 adopts the method in literature [9], and comparison method 2 adopts the method in literature [10]. Figure 3 shows the recognition performance of each algorithm over the SNR range of −5 to 25 dB for different values of K. The experiments show that all three algorithms can identify the radiation source signals accurately under high SNR when the number of radiation sources is small. However, when the number of radiation sources increases, comparison method 1 is not robust enough, and its recognition may even become worse as the signal-to-noise ratio increases. The recognition rate of the EMD-based method is generally low, falling below 50% below 0 dB. The recognition performance of the proposed algorithm is clearly better than the other methods, especially at low SNR. This verifies that the ITD method optimizes the signal decomposition process and retains more of the subtle differences and time-frequency information inside the signal, which makes up for the shortcomings of the EMD method and improves recognition performance.

6 Conclusion

Aiming at the insufficiency of current research on individual recognition of secondary radar emitter signals, this paper characterizes the unintentional modulation characteristics of the radiation source by the nonlinear characteristics of the power amplifier. The intrinsic time-scale decomposition method is used to adaptively divide the sampled data into several instantaneous signal components with clear meanings that contain the subtle characteristics; the signal is reconstructed from the key components, and fast sample entropy is selected as the basis for individual identification. Experimental results show that the proposed method not only guarantees the real-time performance and reliability of the classification and recognition process, but also significantly improves the recognition performance compared with the EMD-based method. In a low-SNR environment, the good decomposition performance of ITD and the excellent stability of sample entropy make the recognition effect of the proposed method even more outstanding; its adaptability to application environments with complex noise interference is higher, giving it higher application value. In future work, more effective and stable features will be extracted and the algorithm structure will be optimized to further improve recognition performance and efficiency in more complex application environments.

References
1. Tan Y, Li S, Wang H (2011) Analysis on data format of mode 5 in western Mark IIA. J Univ Electron Sci Technol China 40(4):532–536
2. Li W (2016) Identification of Mark XIIA IFF signals based on time domain and coding characteristics. J Weaponry Equipment Eng 37(7):153–157
3. Liu M, Doherty JF (2011) Nonlinearity estimation for specific emitter identification in multipath channels. IEEE Trans Inf Forensics Secur 6(3):1076–1085


4. Zhang J, Wang F, Dobre OA et al (2016) Specific emitter identification via Hilbert-Huang transform in single-hop and relaying scenarios. IEEE Trans Inf Forensics Secur 11(6):1192–1205
5. Frei MG, Osorio I (2007) Intrinsic time-scale decomposition: time-frequency-energy analysis and real-time filtering of non-stationary signals. Proc Roy Soc A Math Phys Eng Sci 463(2078):321–342
6. Gui Y, Yang J, Lv J (2017) Feature extraction algorithm based on intrinsic time-scale decomposition model for communication transmitter. Appl Res Comput 34(4):1172–1175
7. Hu A, Yan X, Xiang L (2015) A new wind turbine fault diagnosis method based on ensemble intrinsic time-scale decomposition and WPT-fractal dimension. Renew Energy 83:767–778
8. Ren D, Zhang T, Han J (2018) Approach of specific communication emitter identification combining ITD and nonlinear analysis. J Signal Process 34(3):331–339
9. Hu A, Xiang L, Gao N (2017) Fault diagnosis for the gearbox of wind turbine combining ensemble intrinsic time-scale decomposition with Wigner bi-spectrum entropy. J Vibroengineering 19(3):1759–1770
10. Zhang X, Liang J, Zhang X et al (2013) Combined model for ultra short-term wind power prediction based on sample entropy and extreme learning machine. Proc CSEE 33(25):33–40

Gaussian Mixture Model Based Multi-region Blood Vessel Segmentation Method Yaqing Fu(&), Maolin Wang, and Ting Liu(&) College of Marine Electrical Engineering, Dalian Maritime University, Dalian, China [email protected], [email protected]

Abstract. Vascular segmentation is the basis for medical diagnosis, surgical aid design, etc. The traditional Gaussian mixture model (GMM) can extract the main blood vessels well, but its performance on small blood vessels is poor. Fortunately, the gray intensity of blood vessels differs between regions. Therefore, a Gaussian mixture model based multi-region blood vessel segmentation method is proposed in this paper. Firstly, the Nonsubsampled Contourlet transform (NSCT) is employed to enhance the contrast of the image. Secondly, the problem of optimal threshold selection for each region after the GMM is analyzed in detail by an experimental method. Finally, adaptive filling filtering is performed on the integrated image to achieve noise reduction. The experimental results show that the proposed method effectively reduces the misclassification ratio and improves the recall ratio. The proposed method is more suitable for situations where the color distribution is not uniform, or where small blood vessels need to be segmented but the accuracy requirement is lower. It is of great significance for medical clinical applications.

Keywords: Blood vessel segmentation · Gaussian mixture model · NSCT · Multi-region · Adaptive filling filtering

1 Introduction

Retinal images are increasingly used in the medical field to detect diseases such as diabetes and high blood pressure. Blood vessel segmentation is a prerequisite step for disease diagnosis on retinal images. After extracting the retinal blood vessels, parameters such as their diameter and curvature can be measured and analyzed. It is possible to predict retinopathy to a large extent, so that preventive intervention and drug treatment can be implemented scientifically. Therefore, a method that can extract blood vessels more completely is more useful for practical applications. There are various methods for solving this problem, including vascular tracking [1, 2], matched-filtering vascular enhancement [3, 4], region growing [5], and so on. Jiang [6] proposed a method that obtains the threshold by hypothesis testing; it has superior performance compared with global threshold methods. Zhong et al. [7] used NSCT, which significantly improved GMM segmentation.


However, the intensity of blood vessels in different regions is usually different in practical applications. In order to make the method more practical, a multi-region Gaussian mixture model is proposed for blood vessel segmentation in this paper. The GMM does not consider spatial and edge information, so its ability to segment blood vessels is limited: it can segment the main part of the blood vessels well, while its segmentation of the small vessels is poor, because the intensity of small blood vessels is too close to that of the non-vascular part. Therefore, the NSCT transform is first utilized to enhance the contrast between the vascular and non-vascular parts. Besides, it is found that the intensities of both the blood vessels and the non-vascular part differ between regions, and so do their Gaussian distributions. Thus, the image is divided into smaller regions, within each of which similar characteristics are preserved. Specifically, the performance of the proposed method depends on the selection of the optimal threshold in each region. Finally, adaptive filling filtering is performed on the integrated image to reduce noise. Owing to the idea of division, the performance of the GMM is improved and the integrity of the whole extracted vessel network is improved. In the rest of the paper, the principle and algorithm of each step are introduced in detail; the experimental results are discussed in Sect. 3; and some discussions conclude the paper in Sect. 4.

2 Blood Vessel Segmentation Method

The core idea of multi-region GMM based blood vessel segmentation is to make full use of the different distributions of blood vessels in different regions. The algorithm in this paper is summarized in the following four steps: (1) NSCT based contrast enhancement, (2) multi-region Gaussian mixture model, (3) optimal threshold selection, (4) image integration and filtering. The algorithm flow chart is shown in Fig. 1.

Fig. 1. Algorithm flow chart

2.1 NSCT Transform

The contourlet, a transform that captures edges, is the core of the NSCT. The NSCT helps maintain the edge information and contour structure of the image and can obtain its low-frequency and high-frequency components. The NSCT is based on the Nonsubsampled Pyramid (NSP) and Non-subsampled Directional Filter Banks (NSDFB) [8]. For clarity, an illustration of the NSCT is shown in Fig. 2. The image is first decomposed by the NSP into high-pass and low-pass components. Then, the high-frequency sub-band is decomposed into multiple directional sub-bands by the NSDFB, while the low-frequency part continues to be decomposed by the NSP and NSDFB as above. A large number of experimental results show that better results are obtained when the number of directions is 8 and the decomposition scale is 4 [7]. The inverse transform is the inverse of the above operations.

Fig. 2. The illustration of NSCT transform

2.2 Gaussian Mixture Model Based Multi-region Blood Vessel Segmentation Method

2.2.1 Gaussian Mixture Model
The GMM is a probabilistic model that approximates the gray histogram of an image; it describes regions with slowly varying gray levels well [9]. Suppose X = {x_1, ..., x_i, ..., x_n} is the set of values of n observation samples, and assume the samples are independent of each other and subject to a k-component Gaussian mixture distribution, with components denoted (X_1, ..., X_k). P(x_i | Θ_j) (j = 1, 2, ..., k) is the corresponding component probability density function, so that:

f(x_i | P, Θ) = Σ_{j=1}^{k} p_j P(x_i | Θ_j)    (1)

P(x_i | Θ_j) = 1/√(2π σ_j²) · exp( −(x_i − μ_j)² / (2σ_j²) )    (2)

where Θ_j = (μ_j, σ_j). The joint probability density of X is:

P(X | P, Θ) = Π_{i=1}^{N} f(x_i | P, Θ) = Π_{i=1}^{N} [ Σ_{j=1}^{K} p_j P(x_i | Θ_j) ]    (3)

The key to extracting the blood vessels lies in the estimation of the parameters Θ. The log-likelihood function over all samples of the image is:

L(x | Θ, P) = lg p(x_1, x_2, ..., x_n | Θ, P) = Σ_{j=1}^{n} lg p(x_j | Θ, P) = Σ_{j=1}^{n} lg Σ_{i=1}^{k} P(x_i) p(x_j | x_i, Θ_i)    (4)

When the parameters of the mixture model equal the estimated parameters of the current image, the log-likelihood function in (4) is maximized:

(Θ*, P*) = arg max L(Θ, P)    (5)

Then, the segmentation based on the GMM is transformed into a parameter estimation problem, which we solve with the EM algorithm [10]; we do not elaborate on it here.

2.2.2 Multi-region Segmentation
The GMM classifies pixels on the basis of gray intensity. Because the gray intensity of some blood vessels is similar to that of the non-vascular parts of other regions, they are mistakenly assigned to the same class, as shown in Fig. 3. This problem can be solved well by multi-region segmentation. Through multi-region segmentation, the contrast between blood vessels and background is improved, and the intensities within the same category are kept as close as possible in each region. The number of regions can be adjusted according to the actual situation; this paper divides the whole picture into 30 pieces.


Fig. 3. Gaussian probability map
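As a rough sketch of the multi-region idea (not the exact implementation of this paper), the image can be split into tiles and a two-component GMM fitted per tile with the EM algorithm, for example using scikit-learn. The "darker component is vessel" rule and the 5 × 6 tiling (30 pieces) are illustrative assumptions, and the per-region threshold refinement of Sect. 3.2.2 is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def multi_region_gmm_segment(img, rows=5, cols=6):
    """Fit a 2-component GMM per tile and label the darker component as vessel."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            r0, r1 = i * h // rows, (i + 1) * h // rows
            c0, c1 = j * w // cols, (j + 1) * w // cols
            tile = img[r0:r1, c0:c1].reshape(-1, 1).astype(float)
            gmm = GaussianMixture(n_components=2).fit(tile)   # EM estimation
            labels = gmm.predict(tile).reshape(r1 - r0, c1 - c0)
            vessel = int(np.argmin(gmm.means_.ravel()))       # vessels assumed darker
            mask[r0:r1, c0:c1] = (labels == vessel).astype(np.uint8)
    return mask
```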

2.3 Adaptive Filling Filtering

Most of the blood vessels are segmented by multi-region segmentation, but small blood vessels are missed and noise appears. Referring to the idea of statistical weighting [11], adaptive filling filtering is used for noise reduction and for filling in missed vessel pixels. As shown in Fig. 4, if seven or eight of the surrounding pixels take the same value but differ from the current pixel, the current pixel is considered to be noise and is then corrected. After filtering, the noise is suppressed and the continuity of the blood vessels is enhanced, because spatial information is taken into account.

Fig. 4. The case where the pixel is judged to be noise
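A minimal sketch of the neighbourhood rule illustrated in Fig. 4 is given below; the symmetric treatment of filling (seven or eight vessel neighbours) and removal (isolated foreground pixels) is our reading of the description above, not a verbatim reproduction of the paper's filter.

```python
import numpy as np

def filling_filter(mask):
    """Flip a pixel when at least seven of its eight neighbours disagree with it."""
    out = mask.copy()
    h, w = mask.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nb = mask[y - 1:y + 2, x - 1:x + 2].ravel()
            nb = np.delete(nb, 4)            # drop the centre pixel
            ones = int(nb.sum())             # number of vessel neighbours
            if ones >= 7 and mask[y, x] == 0:
                out[y, x] = 1                # fill a missed vessel pixel
            elif ones <= 1 and mask[y, x] == 1:
                out[y, x] = 0                # remove an isolated noise pixel
    return out
```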


3 Experimental Evaluation

The experimental data we use is the international public color retinal image database DRIVE. The database consists of 40 images taken from a diabetic retinopathy screening program in the Netherlands [12], and there are two sets of manual segmentation results that can be used as a standard for segmentation algorithms. Due to the strong contrast between the circular retinal field of view and the dark region outside it, large misjudgments would occur at the edge. To eliminate this effect, we process the green component instead.

3.1 Results Description

The results of the different segmentation methods are shown in Fig. 5. The segmentation result of the GMM is shown in (c); it can be seen that only the main vessels are segmented and the information loss is serious. For comparison, the result of NSCT-GMM, in which the GMM is applied after the NSCT, is shown in (d). Its segmentation ability is improved compared with the plain GMM, but it is still not ideal for small blood vessels. Finally, the advantage of the proposed method in segmenting small blood vessels can be clearly seen in (e).

Fig. 5. (From left to right) Manual segmentation 1 (a), manual segmentation 2 (b), GMM (c), NSCT-GMM (d), method of this paper (e)

To further analyze the performance of the proposed algorithm, three parameters, MCR (mistake classification ratio), precision and recall, are used for a quantitative comparison, and the results are summarized in Table 1. From Table 1, the proposed method has the smallest MCR, a significant improvement over the GMM method. A higher proportion of the blood vessels is segmented, while the proportion of true vessels among the segmented pixels decreases. In summary, the proposed method reduces the error rate and segments small blood vessels better.

MCR = (misclassified pixels)/(total number of pixels) × 100%    (6)

Precision = (true positive)/(true positive + false positive) × 100%    (7)

Recall = (true positive)/(true positive + false negative) × 100%    (8)

Table 1. Evaluation index

Method                  MCR (%)  Precision (%)  Recall (%)
GMM-EM                  13.12    67.66          32.02
After NSCT transform     8.58    93.73          37.09
Proposed method          7.05    82.66          58.58
Manual segment 2         5.09    81.22          79.58

3.2 Parameter Description

3.2.1 Selection of High-Frequency and Low-Frequency Parameters
For contrast enhancement, two parameters are introduced: the high-frequency parameter is increased to strengthen the contrast between the blood vessels and the background, and the low-frequency parameter is changed to adjust the luminance of the image. The results of different settings of the two parameters are displayed in Fig. 6. From the results, the low-frequency and high-frequency parameters are chosen as 1.5 and 5, respectively; in that case, the contrast between the blood vessels and the background is most obvious.

Fig. 6. Images after NSCT transform with different parameters

3.2.2 Selection of the Optimal Threshold
The performance of the segmentation depends on the optimal threshold in each region. The optimal threshold allows the most vessel pixels and the least noise in each region, which makes the MCR minimal and the recall maximal. The threshold selection used in this paper is divided into two steps: (1) select an initial threshold, and (2) find the best threshold. The probability density curve of the GMM, presented in Fig. 7, is bi-modal, and the part where the two Gaussian curves intersect is the blurred area; the intensity corresponding to the valley between the two peaks could be selected as the threshold. It is hard to determine which class the pixels in this area belong to. In order to ensure the maximum correct rate, the leftmost point of the intersection region is selected as the initial threshold, which keeps the number of noise points smallest. However, the initial threshold can only extract the main part of the blood vessels.


Fig. 7. Gaussian probability density curve for a region

Due to the different distribution characteristics of each region, the optimal threshold points for each region are also different. The optimal threshold is found experimentally near the initial threshold. Figure 8 shows the results with different thresholds. It can be seen that when the threshold is 0.183, the image has the most complete blood vessels and the least noise; thus, the optimal threshold for this region is selected as 0.183. The other regions use the same procedure to find their optimal thresholds.

Fig. 8. Initial threshold (the leftmost one) and experimental thresholds (the others)

4 Conclusion

In this paper, we have proposed a GMM based multi-region blood vessel segmentation method. The proposed method solves the problem of blood vessel segmentation with uneven color distribution well, which makes it more suitable for practical applications. From the results, the accuracy of the GMM and the ability to segment small blood vessels are improved, but this comes at the expense of precision, which means that the non-vascular components in the segmented blood vessels increase. Due to the base noise of the


retinal image and the influence of extremely fine blood vessels, there are still some problems to be solved in the retinal segmentation method. The detection of retinal capillaries and lesion detection can be used as future research directions.

References
1. Lupascu CA, Tegolo D (2012) Graph-based minimal path tracking in the skeleton of the retinal vascular network. In: IEEE international symposium on computer-based medical systems, pp 1–6
2. Zou P, Chan P, Rockett P (2009) A model-based consecutive scanline tracking method for extracting vascular networks from 2-D digital subtraction angiograms. IEEE Trans Med Imaging 28(2):41–249
3. Zhang EH, Bian ZZ, Duan JH (2003) Image enhancement method for retinal vascular images based on morphology and matched filters. Guangxue Jishu/Opt Tech 29(5):523–525
4. Miao YC, Cheng Y (2015) Automatic extraction of retinal blood vessel based on matched filtering and local entropy thresholding. In: 2015 8th international conference on biomedical engineering and informatics (BMEI), Shenyang, China, 14–16 Oct 2015. IEEE
5. Fan Z, Lu J, Li W (2017) Automated blood vessel segmentation of fundus images based on region features and hierarchical growth algorithm. In: IEEE symposium series on computational intelligence (SSCI), Honolulu, Hawaii, USA, 27 Nov–1 Dec 2017
6. Jiang XY, Mojon D (2003) Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images. IEEE Trans Pattern Anal Mach Intell 25(1):131–137
7. Zhong H, Jiao LC, Hou P (2011) Retinal vessel segmentation using nonsubsampled contourlet transform. Chin J Comput 34(3):574–582
8. Zheng YJ, Ren XY, Liu XJ et al (2011) Image fusion based on NSCT and fuzzy logic. Comput Eng Appl 47(11):171–174
9. Ou YJ (2015) Study on image segmentation based on Gaussian mixture model. Beijing Jiaotong University, Beijing
10. Liu HH, Guo JL (2015) Image segmentation of abdominal aorta based on Gaussian mixture model. J South-Central Univ Nationalities (Nat Sci Edition) 34(2):91–94
11. Chai WY, Yang F, Yuan SF, Huang J (2018) Multi-class Gaussian mixture model and neighborhood information based Gaussian mixture model for image segmentation. Comput Sci 45(11):272–287
12. Staal J, Abramoff MD, Niemeijer M, Viergever MA, van Ginneken B (2004) Ridge based vessel segmentation in color images of the retina. IEEE Trans Med Imaging 23(4):501–509
13. Zhang Y, Brady M, Smith S (2001) Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans Med Imaging 20(1):45–57
14. Pham D, Prince JL (1999) Adaptive fuzzy segmentation of magnetic resonance images. IEEE Trans Med Imaging 18(9):737–752

Research on the Enhancement of VANET Coverage Based on UAV Tianci Liu(B) , Lixin Zhao, Bin Li, and Chenglin Zhao Key Laboratory of Universal Wireless Communications, MOE, Beijing University of Post and Telecommunications, Xitucheng Road 10, Beijing, China [email protected]

Abstract. In the infrastructure of Vehicular ad hoc networks (VANETs), Roadside Units (RSUs) can facilitate vehicle-to-vehicle (V2V) communications and enable communications between vehicles and the Internet. However, RSUs are expensive and immovable: once installed on the side of the road, they cannot be moved. Therefore, as traffic flow changes, there will always be a certain number of RSUs that are either wasted or in short supply. Compared with immovable RSUs, Unmanned Aerial Vehicles (UAVs) possess the advantage of convenient deployment and movement, and can move flexibly based on changes in traffic flow. In this paper, we study the network coverage enhancement issue for VANETs by using UAVs, which serve as air base stations (BSs) for improving coverage and boosting connectivity. Specifically, the deployment problem is modeled as minimizing the number of required RSUs and UAVs under a coverage constraint. A novel joint deployment scheme is proposed for RSUs and UAVs. The simulation results show that our deployment scheme can dynamically adapt to changing traffic flow while guaranteeing coverage and cutting costs.

Keywords: VANET · Roadside unit (RSU) · Unmanned aerial vehicle (UAV) · Deployment · Coverage

1 Introduction

Research on Vehicular ad hoc networks (VANETs) has attracted much attention in recent years. A VANET is composed of vehicles with On-Board Units (OBUs) and Roadside Units (RSUs) deployed on the roadside. RSUs and OBUs can communicate with each other wirelessly. Therefore, the VANET contains two basic modes of communication: vehicle-to-vehicle (V2V) communication and vehicle-to-roadside-unit (V2R) communication. It is obvious that the vehicle distribution in a road network is highly dynamic and usually shows certain regularities. Since a fixed RSU cannot move, the distribution of vehicles changes dynamically within the range covered by the RSU. Therefore, the resource


allocation of the RSU also changes dynamically: as traffic flow changes, there will always be RSUs that are either wasted or in short supply. In current wireless communication systems, using unmanned aerial vehicles (UAVs) as base stations (BSs) to serve ground user equipment (UE) has attracted significant attention due to their agility and mobility [1]. A UAV is easy to deploy as an air base station and its flight position is easy to control, so the communication coverage area of the UAVs can be changed effectively depending on communication needs. Based on the above, we studied the existing related research on VANET coverage from different viewpoints. In [2–8], the authors proposed heuristic algorithms to determine the deployment intervals of roadside units; some of them aim to maximize network throughput on highways and in city areas. According to the distribution of traffic flow, they discussed different coverage settings for dissemination point (DP) deployment, including the maximum coverage problem (MCP) and the maximum coverage with time threshold problem (MCTTP). Furthermore, multi-UAV networking is a hot research topic and has not yet been widely applied. To address the problem of UAV deployment in VANETs, the authors of [9–11] use UAVs to deliver OBU messages and improve the communication performance of VANETs, deploying UAVs and dynamically adjusting their locations to improve the performance of UAV-assisted terrestrial communication systems. This paper proposes a novel deployment model which uses the minimum number of RSUs and UAVs to ensure the coverage capacity of the VANET. By using RSUs and UAVs as ground or air BSs to serve vehicles, we can dynamically adjust the positions of the UAVs based on real-time changes of traffic flow, thus reducing the number of RSUs on the ground, saving costs and making the most of resources. To be specific, our contributions can be summarized as follows: (1) A non-congested-state road traffic model is proposed based on the Urban Traffic Management Evaluation Index System (UTMEIS), in which we only deploy RSUs fixed on the ground to provide service for road vehicles under the condition of meeting the coverage threshold. (2) According to the characteristics of road traffic, regular traffic congestion occurs in certain time periods and at certain locations; in this case we deploy UAVs as air BSs to serve vehicles, and the positions and number of drones are dynamically adjusted according to changes of the congestion state. Thus, UAVs and RSUs form a network to provide real-time dynamic services to vehicles. The rest of the paper is organized as follows. In Sect. 2 we describe the deployment model under non-congested and congested conditions. In Sect. 3 we propose our heuristic algorithm and solve the deployment problem. In Sect. 4, we simulate the solutions and discuss the results. Finally, Sect. 5 concludes the paper.

2 System Model

The road network of an urban area can be represented as a collection of road segments and intersections, with a certain number of vehicles distributed in the network. We assume that all RSUs are connected by wire, the dissemination range of each RSU or UAV equals R, and all vehicles have GPS in their OBUs so that we can access the location of any vehicle on the road. Therefore, in this paper we abstract the urban road network as an undirected graph G(N, E, V), where N is the set of nodes representing road intersections, E is the set of road segments between intersections and V is the set of vehicles in the graph. If there is an E_i ∈ E whose length L(E_i) > 2R, we divide E_i into several small road segments of length 2R, so that E_i = {E_ij, j = 1, 2, 3, ..., A(E_i)}. A(E_i) represents the number of segments into which E_i can be divided, calculated by (1). All the intersections and the centers of the small segments constitute the set of candidate locations for deploying RSUs or UAVs. The RSUs or UAVs are placed by selecting a certain number of candidate locations so as to serve the vehicles within coverage and meet the coverage threshold.

A(E_i) = ⌈(L(E_i) − 2R)/(2R)⌉    (1)

2.1 RSU Deployment in Non-congested State

Based on the UTMEIS, we establish a road network model in the non-congested state. All the intersections and the centers of the small segments constitute the set of candidate locations for deploying RSUs, denoted by P = {P_1, P_2, P_3, ..., P_N}. Let the matrix B = [B_i]_{1×N}, where B_i ∈ {0, 1}, denote the state of the RSUs: B_i = 1 indicates that an RSU is installed at P_i, otherwise B_i = 0. The vehicles are denoted by V = {V_1, V_2, V_3, ..., V_M}. Let the matrix H = [H_{i,k}]_{N×M} denote the links between RSUs and vehicles: H_{i,k} = 1 means V_k is served by the RSU located at P_i, otherwise H_{i,k} = 0. We use D_R to denote the maximum number of vehicles that can access one RSU. When the number of vehicles in the coverage of an RSU exceeds D_R, we assume that the RSU is connected only to the nearest vehicles. Meanwhile, from (1) we know that any small road segment can be covered by at most one RSU, which makes the most of the RSU resources.

2.2 RSU and UAV Joint Deployment in Congested State

As mentioned before, in the non-congested state we can serve the vehicles and meet the coverage threshold by deploying only RSUs in an intelligent way. In the congested state, the coverage threshold cannot be met by the fixed RSUs alone, so we consider deploying UAVs, acting as air RSUs, to serve vehicles in the congested area. By analyzing the occurrence of regular congestion in an area, we choose certain positions to deploy the UAVs. Finally, the UAVs and RSUs form a network


to provide real-time dynamic services to vehicles. For safety reasons, a safe distance is kept among the UAVs, and to focus on the core problem we assume that all UAVs are homogeneous. When congestion occurs, a certain number of UAVs are deployed to enhance the coverage of the VANET. Meanwhile, because the vehicles move in real time, the locations of the UAVs need to be adjusted according to the change of congestion status. Thus, we construct a Markov model to predict traffic flow. In order to simplify the problem, we divide a day into multiple time frames, analyze the traffic flow pattern of each time frame, and predict the congested road segments. We model the UAV deployment in the congested state similarly. The set of candidate locations for deploying UAVs is denoted by Q = {Q_1, Q_2, Q_3, ..., Q_N}. Let the matrix U = [U_i]_{1×N}, where U_i ∈ {0, 1}: U_i = 1 indicates that a UAV is placed at Q_i, otherwise U_i = 0. Similarly, each UAV has the same maximum accessibility limit, set to D_U. The matrix T = [T_{i,k}]_{N×M} denotes the links between UAVs and vehicles: T_{i,k} = 1 indicates V_k is served by the UAV located at Q_i, otherwise T_{i,k} = 0. Define the matrix C as the congestion transfer matrix, where C_{ij} represents the probability that road segment j will be congested in the next time frame given that road segment i is congested now. To predict the congested segments in the next time frame, we multiply the congestion matrix of the current time frame by the corresponding congestion transfer matrix, as expressed in (2), where F_{t+1} is the predicted matrix of the next time frame, F_t is the congestion matrix of the current time frame and C_t is the congestion transfer matrix of the current time frame. Based on the prediction of the congestion state in the next time frame, we can determine whether the deployed UAVs can fly to the newly required locations at a certain speed, or whether we should deploy new UAVs to meet the coverage threshold and recycle idle UAVs.

F_{t+1} = F_t · C_t    (2)

3 Proposed Approach

According to the model we built, the deployment of RSUs and UAVs can be abstracted as a coverage problem (CP). Since the CP is NP-hard, we propose a greedy algorithm to solve it: by stepwise selection of locally optimal solutions from a sufficiently large candidate set, we finally obtain the necessary locations where RSUs and UAVs should be placed, while guaranteeing the coverage threshold and minimizing cost.

3.1 Greedy Algorithm Applied to RSU Deployment in Non-congested State

Given the above definitions, in the actual deployment process the RSU deployment problem can be formulated as:

min Σ_{k=1}^{N} B_k

subject to:
B_k ∈ {0, 1}, ∀k ∈ {1, 2, 3, ..., N}    (3)
Σ_{k=1}^{M} H_{i,k} ≤ D_R, ∀i ∈ {1, 2, 3, ..., N}    (4)
Σ_{i=1}^{N} Σ_{k=1}^{M} H_{i,k} ≥ ηM, 0 ≤ η ≤ 1    (5)

where constraint (3) indicates that there is at most one RSU at each candidate location, constraint (4) limits the maximum number of vehicles that an RSU can serve, and constraint (5) guarantees that the percentage of served vehicles is at least η. Based on the greedy algorithm, we can deploy a certain number of RSUs to provide reliable service for the vehicles. By calculating the number of vehicles covered by an RSU at each candidate deployment location, and checking that the access limit is not exceeded in each coverage area, the RSU location serving the largest number of vehicles is selected step by step; the accessible vehicles in its coverage area are then marked as served and an RSU is deployed at that location. These RSUs can meet the network coverage threshold requirements under non-congested road conditions.
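A compact sketch of this greedy selection, under the assumption that vehicle-to-candidate distances are available as a matrix, is shown below; it illustrates the idea rather than reproducing the exact implementation used in the simulations.

```python
import numpy as np

def greedy_rsu_deployment(dist, R, D_R, eta):
    """dist: N x M matrix of distances between candidate locations and vehicles."""
    N, M = dist.shape
    covered = np.zeros(M, dtype=bool)          # vehicles already served
    deployed = []
    while covered.sum() < eta * M:
        best, best_gain, best_serve = None, 0, None
        for i in range(N):
            if i in deployed:
                continue
            in_range = np.where((dist[i] <= R) & ~covered)[0]
            # an RSU can serve at most D_R vehicles; take the nearest ones
            serve = in_range[np.argsort(dist[i, in_range])][:D_R]
            if len(serve) > best_gain:
                best, best_gain, best_serve = i, len(serve), serve
        if best is None or best_gain == 0:     # no candidate adds coverage
            break
        deployed.append(best)                  # place an RSU at the best location
        covered[best_serve] = True             # mark its vehicles as served
    return deployed
```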

3.2 Joint Deployment in Congested State Based on Markov

In a congestion situation, we enhance the coverage of the VANET by deploying UAVs. Given the real-time changes of road conditions, the change of road congestion status can be predicted with the Markov model we built, and the positions and number of UAVs adjusted accordingly. With the above definitions, we can formulate the RSU and UAV joint deployment problem as follows:

min Σ_{k=1}^{N} (B_k + U_k)

subject to:
B_k ∈ {0, 1}, ∀k ∈ {1, 2, 3, ..., N}    (6)
U_k ∈ {0, 1}, ∀k ∈ {1, 2, 3, ..., N}    (7)
Σ_{k=1}^{M} H_{i,k} ≤ D_R, ∀i ∈ {1, 2, 3, ..., N}    (8)
Σ_{k=1}^{M} T_{i,k} ≤ D_U, ∀i ∈ {1, 2, 3, ..., N}    (9)
Σ_{i=1}^{N} Σ_{k=1}^{M} (H_{i,k} + T_{i,k}) ≥ ηM, 0 ≤ η ≤ 1    (10)


Constraint (6) indicates that there is at most one RSU at each candidate location, and similarly constraint (7) indicates that there is at most one UAV at each candidate location. Constraints (8) and (9) limit the maximum number of vehicles that an RSU or a UAV can serve. Constraint (10) guarantees that the percentage of served vehicles is at least η. In this problem, the UAVs are first deployed on the congested segments through the heuristic algorithm, forming a joint service network with the already deployed RSUs, enhancing the VANET coverage and re-reaching the coverage threshold. Based on the Markov prediction model, we can predict the segments where congestion will occur in the next time frame, so the UAVs can be moved to the desired locations in time; moreover, the number of UAVs is increased, or redundant UAVs are recycled, according to the changes of the congestion status. Thereby, a dynamic RSU and UAV joint deployment network is built to provide services to vehicles.
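The prediction step of Eq. (2) and the resulting UAV placement decision can be sketched as follows; the congestion threshold of 0.5 and the toy matrices are illustrative assumptions only.

```python
import numpy as np

def predict_congestion(F_t, C_t, threshold=0.5):
    """Eq. (2): predict the next-frame congestion vector and return the road
    segments where UAVs should be placed (threshold is an illustrative choice)."""
    F_next = F_t @ C_t                     # F_{t+1} = F_t * C_t
    return F_next, np.where(F_next >= threshold)[0]

# toy example with three road segments
F_t = np.array([1.0, 0.0, 0.0])            # segment 0 congested now
C_t = np.array([[0.6, 0.3, 0.1],
                [0.2, 0.7, 0.1],
                [0.1, 0.2, 0.7]])
F_next, uav_segments = predict_congestion(F_t, C_t)
```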

4 Simulation Results and Analysis

This section presents simulations in the MATLAB environment and analyzes the results of the RSU and UAV deployment problem. The simulation is based on a 16 km × 16 km city map. In the road network, RSUs and UAVs are deployed to meet the coverage requirements over one day.

Fig. 1. RSU deployment in the non-congested state (number of deployed RSUs versus coverage threshold, greedy algorithm versus random deployment)

4.1 RSU Deployment in Non-congested State

We first generate a road network and simulate traffic flow under non-congested conditions. In the generated traffic flow, the RSUs are deployed at the relatively dense traffic sections and intersections. According to the algorithm we have established, Fig. 1 shows that our algorithm can greatly reduce the number of deployed RSUs compared with random deployment.

Fig. 2. RSU and UAV deployment versus coverage threshold (RSU number in the joint deployment model, UAV number in the joint deployment model, and RSU number when only RSUs are deployed)

Fig. 3. RSU and UAV deployment in the congested state (numbers of active and recycled UAVs in each time frame)

4.2 RSU and UAV Joint Deployment in Congested State

In the joint deployment model, we deploy a certain number of UAVs to enhance the coverage of the VANET on congested segments and intersections. Figure 2 shows that in the joint deployment model, as the coverage threshold increases, the numbers of UAVs and RSUs increase and the threshold can be met. In contrast, if only RSUs are deployed on the ground, the number of accesses to each RSU is limited as the coverage threshold increases; even if RSUs are deployed at all candidate locations, the coverage requirements cannot be met. In addition, the deployment cost of RSUs is much higher than that of the flexibly deployed and recoverable UAVs, so we greatly reduce the construction cost of the VANET by deploying a small number of UAVs. For a selected coverage threshold, the road congestion state over one day can be predicted with the Markov prediction model we established, and we can judge whether the UAVs at their current positions can reach the next required positions or need to be recycled. As shown in Fig. 3, we chose to deploy and recycle the UAVs with a coverage threshold of 0.7. As we can see, there are only minor changes in the number of active UAVs in each time frame, and the number of UAVs that need to be recycled can also be calculated. In this way, over the whole road network, we only need a certain number of UAVs and adjust their positions, or recycle them, according to the changes of congestion status to enhance the coverage of the VANET.

5 Conclusion

In this paper, we have presented a new model for VANET coverage enhancement. In view of the dynamic characteristics of road traffic, ground RSUs and UAVs are jointly deployed: fixed ground RSUs are deployed to meet network coverage under non-congested conditions, and when congestion occurs in the road network, vehicles are additionally served by deployed UAVs. Considering the dynamics of vehicle movement in the road network, we model the movement of the vehicles and thereby predict and adjust the positions of the UAVs, so that we can provide reliable service to road vehicles in real time with fewer RSUs and UAVs. In future work, we intend to establish a better traffic prediction model and consider the impact of the propagation channel on QoS during communication among RSUs, UAVs and vehicles.

References
1. Mozaffari M, Saad W, Bennis M, Debbah M (2016) Unmanned aerial vehicle with underlaid device-to-device communications: performance and tradeoffs. IEEE Trans Wirel Commun 15(6):3949–3963
2. Rashidi M, Batros I, Madsen TK et al (2012) Placement of road side units for floating car data collection in highway scenario. In: Ultra modern telecommunications and control systems and workshops, Petersburg, pp 114–118
3. Wu TJ et al (2012) A cost-effective strategy for road-side unit placement in vehicular networks. IEEE Trans Commun 60(8):2295–2303
4. Chi J, Jo Y, Park H et al (2013) Intersection-priority based optimal RSU allocation for VANET. In: International conference on ubiquitous and future networks, Da Nang, pp 350–355
5. Yang WD et al (2014) A RSU deployment scheme based on hot spot in vehicular ad hoc networks. In: Unifying electrical engineering and electronics engineering. Springer, New York, pp 1631–1638
6. Trullols O, Fiore M, Casetti C et al (2010) Planning roadside infrastructure for information dissemination in intelligent transportation systems. Comput Commun 33(4):432–442
7. Cavalcante E, Aquino A, Pappa G et al (2012) Roadside unit deployment for information dissemination in a VANET: an evolutionary approach. In: The 14th annual conference companion on genetic and evolutionary computation, pp 27–34
8. Lin YY, Rubin I (2015) Throughput maximization under guaranteed dissemination coverage for VANET systems. In: Information theory and applications workshop, pp 313–318
9. Xiao L, Lu X, Xu D, Tang Y, Wang L, Zhuang W (2018) UAV relay in VANETs against smart jamming with reinforcement learning. IEEE Trans Veh Technol 67:4087–4097


10. Chen Y, Feng W, Zheng G (2017) Optimum placement of UAV as relay. IEEE Commun Lett (99):1–1
11. Charlesworth PB (2014) Using non-cooperative games to coordinate communications UAVs. In: Proceedings of IEEE Globecom workshops (GC Wkshps). IEEE Press, pp 463–1468

Research on Image Encryption Algorithm Based on Wavelet Transform and Qi Hyperchaos Zhiyuan Li, Aiping Jiang(&), and Yuying Mu Heilongjiang University, Harbin, China [email protected], [email protected], [email protected]

Abstract. In this paper, the wavelet transform is used to decompose the image into low-frequency, horizontal, vertical, and diagonal components. The low-frequency components are then scrambled by constructing index sequences, and the scrambled image is obtained by wavelet reconstruction. The chaotic sequence generated by the logistic chaotic system is used to encrypt the scrambled image, and then multiple sequences generated by Qi hyperchaos are used for a second encryption of the image. The decryption process is the inverse of encryption. The encryption scheme is simulated in MATLAB, and a security analysis is carried out in terms of key space, key sensitivity, histogram, information entropy, pixel correlation, etc. Compared with other algorithms, it is concluded that the algorithm is suitable for image encryption.

Keywords: Wavelet transform · Chaotic encryption · Image encryption

1 Introduction

Current research directions in image encryption include: (1) Image encryption based on the transform domain, whose purpose is to reduce the amount of data and improve resistance to attacks; examples include the DCT and the wavelet transform. (2) Image encryption based on chaos. Chaotic systems are characterized by ergodicity, randomness and sensitivity to initial values, which makes them suitable for secure communication; chaos-based image encryption schemes, such as those built on the logistic chaotic system and the Qi hyperchaotic system, are currently widely used.

In this paper, the wavelet transform is used to decompose the image into low-frequency, horizontal, vertical and diagonal components; the low-frequency component is scrambled, and the scrambled image is obtained through wavelet reconstruction. The chaotic sequence generated by the logistic chaotic system is then used to encrypt the scrambled image, and the encrypted image is encrypted a second time using multiple sequences generated by Qi hyperchaos.



2 Wavelet Transform

The wavelet transform can analyze and process real-time signals well, and it is widely used in transform-domain image processing such as image encryption, segmentation and compression. In this paper, the discrete wavelet transform (DWT) is applied to images. The DWT can be defined as:

$$W_\varphi(j_0, k) = \frac{1}{\sqrt{M}} \sum_x f(x)\,\varphi_{j_0,k}(x) \qquad (1)$$

$$W_\psi(j, k) = \frac{1}{\sqrt{M}} \sum_x f(x)\,\psi_{j,k}(x) \qquad (2)$$

Its inverse transform is:

$$f(x) = \frac{1}{\sqrt{M}} \sum_k W_\varphi(j_0, k)\,\varphi_{j_0,k}(x) + \frac{1}{\sqrt{M}} \sum_{j=j_0}^{\infty} \sum_k W_\psi(j, k)\,\psi_{j,k}(x) \qquad (3)$$

$W_\varphi(j_0, k)$ and $W_\psi(j, k)$ are called the "approximation coefficients" and "detail coefficients", respectively. The inverse discrete wavelet transform is equivalent to signal synthesis, which is the inverse process of signal decomposition. The one-dimensional wavelet transform can be extended to the two-dimensional wavelet transform, which involves one two-dimensional scaling function and three two-dimensional wavelet functions. Applying the two-dimensional wavelet transform to the approximation at level j yields one approximation and three directional (horizontal, vertical and diagonal) components at the next resolution level (j + 1). The two-dimensional wavelet transform is mainly used in two-dimensional image processing, image decomposition and reconstruction.
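As a concrete illustration of Eqs. (1)-(3) applied to an image, the following is a minimal Python sketch using the PyWavelets package (an assumption; the paper's own experiments were run in MATLAB). It performs a single-level two-dimensional decomposition and reconstruction with the 'bior3.7' wavelet used later in this paper.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

# A toy 8-bit grayscale "image"; the paper uses the 256 x 256 Lena test image.
img = np.random.randint(0, 256, size=(256, 256)).astype(float)

# Single-level 2-D DWT: one approximation and three detail sub-bands
# (horizontal, vertical, diagonal), as described in Sect. 2.
ca, (ch, cv, cd) = pywt.dwt2(img, 'bior3.7')

# Inverse 2-D DWT (signal synthesis) recovers the image.
rec = pywt.idwt2((ca, (ch, cv, cd)), 'bior3.7')
print(np.allclose(rec[:img.shape[0], :img.shape[1]], img))  # near-lossless reconstruction
```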

3 Chaotic System

The logistic chaotic system is one of the most widely used discrete chaotic maps in nonlinear dynamics. Its mapping equation is:

$$x_{n+1} = \mu x_n (1 - x_n) \qquad (4)$$

where the parameter $\mu \in (0, 4]$, $x_n \in (0, 1)$, $n = 1, 2, 3, \ldots$ When $3.5699\ldots < \mu \le 4$ the system is in a chaotic state, and as the parameter $\mu$ changes, the dynamic behaviour of the system changes accordingly. Low-dimensional chaotic encryption is fast, but methods for cracking it keep emerging. Especially in the era of big data and powerful computers, low-dimensional chaotic encryption systems can be attacked by exhaustive search and statistical analysis, so it is necessary to apply a further layer of encryption on this basis.

In 2005, Qi Guoyuan and co-workers proposed a new chaotic system, named Qi chaos [2]. Building on the experience of deriving four-dimensional chaos from the earlier three-dimensional chaos, they further proposed the Qi hyperchaotic system, whose equations are:

$$\begin{cases} x' = a(y - x) + yzw \\ y' = b(x + y) - xzw \\ z' = -cz + g\,xyw \\ w' = -dw + xyz \end{cases} \qquad (5)$$

This is a four-dimensional system in which x, y, z and w are the trajectories of the chaotic motion. With the parameters a = 50, b = 4, c = 13, d = 20, g = 4, the system has two positive Lyapunov exponents, i.e., it is in a hyperchaotic state. The initial values and parameters of the system state variables can be used as keys for the encryption chaotic sequences. In this paper, the initial values $x_0$, $y_0$, $z_0$ and $w_0$ of the system are all set to 1.01, and the values of the parameters a, b, c, d and g are kept unchanged. The chaotic sequences are obtained with the fourth-order Runge-Kutta method.
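To make the two key streams concrete, here is a brief Python sketch (not the authors' MATLAB code) that iterates the logistic map of Eq. (4) and integrates the Qi hyperchaotic system of Eq. (5) with a classical fourth-order Runge-Kutta step. The signs of the z and w equations follow the standard Qi hyperchaotic form assumed in the reconstruction above, and the step size h and sequence lengths are illustrative choices.

```python
import numpy as np

def logistic_sequence(mu, x0, n):
    """Iterate the logistic map x_{k+1} = mu*x_k*(1 - x_k) of Eq. (4)."""
    x, out = x0, np.empty(n)
    for k in range(n):
        x = mu * x * (1.0 - x)
        out[k] = x
    return out

def qi_rhs(s, a=50.0, b=4.0, c=13.0, d=20.0, g=4.0):
    """Right-hand side of the Qi hyperchaotic system of Eq. (5).
    The -c*z and -d*w signs follow the standard Qi form (an assumption here);
    the parameter values are those quoted in the text."""
    x, y, z, w = s
    return np.array([a * (y - x) + y * z * w,
                     b * (x + y) - x * z * w,
                     -c * z + g * x * y * w,
                     -d * w + x * y * z])

def qi_sequence(s0=(1.01, 1.01, 1.01, 1.01), n=2000, h=0.001):
    """Classical 4th-order Runge-Kutta integration (step size h is illustrative)."""
    s, out = np.array(s0, dtype=float), np.empty((n, 4))
    for i in range(n):
        k1 = qi_rhs(s)
        k2 = qi_rhs(s + 0.5 * h * k1)
        k3 = qi_rhs(s + 0.5 * h * k2)
        k4 = qi_rhs(s + h * k3)
        s = s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = s
    return out  # columns are the x, y, z, w trajectories

S = logistic_sequence(3.62, 0.7, 1000)   # logistic key stream (parameters from Sect. 4)
traj = qi_sequence()
z_seq, w_seq = traj[:, 2], traj[:, 3]    # z and w sequences used for the second key
```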

4 Encryption Process

Image encryption can be divided into pixel scrambling and pixel diffusion. Pixel scrambling changes pixel positions to break the spatial correlation of adjacent pixels and hide the original visual information of the image [1]. In this paper, the standard test image Lena (256 × 256) is selected. First, the image is decomposed by the two-dimensional wavelet transform with the wavelet basis function 'bior3.7', which yields the low-frequency (approximation) coefficient of the image together with the horizontal, vertical and diagonal coefficients, denoted ca1, ch1, cv1 and cd1, as shown in Fig. 1.


Fig. 1. Wavelet decomposition

Fig. 2. Image scrambling effect


Fig. 3. Encryption rendering

It can be seen from Fig. 1 that the approximation coefficient ca1 is close to the original image and contains a large amount of its detail information, so it is the part that is pixel-scrambled. The method is as follows:

(1) The ca1 matrix (m × n) is arranged into a column vector, top to bottom and left to right.
(2) The parameter of the logistic chaotic system is set to 3.62 and the initial value to 0.7, and the map is iterated m × n times to obtain a chaotic sequence S.
(3) The sequence S is sorted in ascending order to construct the index scrambling sequence L, which is used to scramble the image pixels. The scrambled sequence is recorded as P(i), i = 1, 2, …, m × n, and converted back into an m × n matrix.
(4) Wavelet reconstruction of the coefficients yields the scrambled image, shown in Fig. 2 and denoted B(i).

Although scrambling destroys the correlation of adjacent pixels, it cannot resist known-plaintext or chosen-plaintext attacks. To improve the security of the encrypted image, the scrambled image needs to be diffused. To prevent the same key stream from being generated, the parameter and initial value of the logistic chaotic system are set to different values, μ = 3.97 and x0 = 0.23; iterating M × N times gives the chaotic sequence X, which is then preprocessed:


When $\mathrm{round}(100\,x(i)) < 100\,x(i)$:

$$x(i) = \mathrm{mod}\big(\mathrm{round}\big(1000\,(100\,x(i) - \mathrm{round}(100\,x(i)))\big),\ 256\big)$$

otherwise:

$$x(i) = \mathrm{mod}\big(\mathrm{round}\big(1000\,(1 - (100\,x(i) - \mathrm{round}(100\,x(i))))\big),\ 256\big)$$

where the round function denotes rounding to the nearest integer. Next, the typical encryption formula gives the ciphertext pixel value:

$$C(i) = x(i) \oplus B(i) \qquad (6)$$

where C(i) is the pixel value of the encrypted image and $\oplus$ denotes the XOR operation. The result is converted into an M × N matrix, i.e., the encrypted image, recorded as B1(i). Next, the parameters and initial values of the Qi hyperchaotic system are set, and the chaotic sequences z and w obtained as described above are used to construct the key: the sequences are first cyclically shifted, then XORed, and shifted again:

$$k_0 = z \oplus w \qquad (7)$$

$$k = \mathrm{circshift}(k_0, 500, 2)$$

where the function circshift denotes a cyclic shift. Finally, the encrypted image C1 is obtained with the encryption formula:

$$C1(i) = k(i) \oplus B1(i) \qquad (8)$$

In this experiment, the scrambled image obtained after wavelet reconstruction is shown in Fig. 2, and the encrypted image is shown in Fig. 3. Clearly, almost no feature information of the original image, such as its outline, can be seen in the scrambled image after wavelet reconstruction. After encryption, the whole image visually appears as disordered, snow-like noise, so to a naked-eye observer without specialized attack tools the encryption effect is ideal.
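To make the scramble-then-diffuse chain of this section concrete, the following is a minimal Python sketch (the paper's experiments were done in MATLAB; PyWavelets is assumed here). The preprocessing rule before Eq. (6) and the XOR steps of Eqs. (6)-(8) follow the text, while the way the Qi sequences z and w are quantised to bytes, and the use of a one-dimensional roll in place of circshift(k0, 500, 2), are illustrative assumptions.

```python
import numpy as np
import pywt

def logistic(mu, x0, n):
    x, out = x0, np.empty(n)
    for k in range(n):
        x = mu * x * (1.0 - x)
        out[k] = x
    return out

def scramble(img, mu=3.62, x0=0.7, wavelet='bior3.7'):
    """Wavelet-domain scrambling of the approximation sub-band (steps 1-4)."""
    ca, details = pywt.dwt2(img.astype(float), wavelet)
    flat = ca.flatten(order='F')                      # column-wise, top to bottom
    perm = np.argsort(logistic(mu, x0, flat.size))    # index scrambling sequence L
    ca_scr = flat[perm].reshape(ca.shape, order='F')
    return pywt.idwt2((ca_scr, details), wavelet)     # scrambled image B

def keystream_from_logistic(mu, x0, n):
    """Preprocess the logistic sequence into bytes (rule before Eq. (6))."""
    x = logistic(mu, x0, n)
    frac = 100.0 * x - np.round(100.0 * x)
    k = np.where(np.round(100.0 * x) < 100.0 * x,
                 np.mod(np.round(1000.0 * frac), 256),
                 np.mod(np.round(1000.0 * (1.0 - frac)), 256))
    return k.astype(np.uint8)

def diffuse(scrambled, z, w):
    """Eqs. (6)-(8): XOR with the logistic key stream, then with the Qi key."""
    b = np.clip(np.round(scrambled), 0, 255).astype(np.uint8)
    m, n = b.shape
    k1 = keystream_from_logistic(3.97, 0.23, m * n).reshape(m, n)
    c = np.bitwise_xor(k1, b)                          # Eq. (6)
    # Quantise z, w to bytes (illustrative; the text only says z and w are used).
    zq = np.mod(np.round(np.abs(z[:m * n]) * 1e3), 256).astype(np.uint8)
    wq = np.mod(np.round(np.abs(w[:m * n]) * 1e3), 256).astype(np.uint8)
    k0 = np.bitwise_xor(zq, wq)                        # Eq. (7)
    k = np.roll(k0, 500).reshape(m, n)                 # simplified stand-in for circshift
    return np.bitwise_xor(k, c)                        # Eq. (8): ciphertext C1
```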

5 Simulation Results and Analysis

5.1 Histogram Analysis

The histogram is a statistic of the frequency with which each gray level occurs in an image, and the gray-level distribution of the image can be read directly from it [2]. After the image is encrypted, the gray-level distribution should be uniform, so that an attacker cannot obtain information about the plaintext image from the histogram. Figures 4, 5 and 6 are the gray histograms of the plaintext image, the scrambled image, and the final encrypted image, respectively. It can be seen that the gray histogram of the chaotically encrypted image is completely different from that of the plaintext and is essentially uniformly distributed, so the scheme achieves the desired encryption effect and resists statistical attacks.

Fig. 4. Original image histogram

Fig. 5. Scrambled image histogram


Fig. 6. Encrypted image histogram

5.2 Information Entropy Analysis

Information entropy measures the distribution of gray values in an image [3]. The more uniform the gray distribution, the larger the information entropy and the stronger the image's ability to resist statistical attacks. The ideal information entropy of a 256-level grayscale image is 8; a value well below 8 indicates a potential security risk, while a value very close to the theoretical value 8 means that the information leakage is negligible [4]. The information entropy of an information source m is calculated as:

$$H(m) = \sum_{i=0}^{n} p(m_i)\,\log_2\frac{1}{p(m_i)} \qquad (9)$$

where $p(m_i)$ denotes the probability of occurrence of the symbol $m_i$ and n is the gray level of the pixels. With the algorithm of this paper, the entropy of the obtained ciphertext image is 7.9903, which is closer to the ideal value 8 than that of the literature [5], indicating that the algorithm has good randomness, high uncertainty and good security.
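For reference, Eq. (9) applied to an 8-bit grayscale image can be computed as in this short Python sketch (illustrative, not the authors' code):

```python
import numpy as np

def image_entropy(img_u8):
    """Shannon entropy of an 8-bit image, Eq. (9); the ideal value is 8."""
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

# Example: a uniformly random image has entropy close to 8.
print(image_entropy(np.random.randint(0, 256, (256, 256), dtype=np.uint8)))
```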

5.3 Correlation Analysis

Adjacent pixels of an ordinary image are highly correlated, and an encryption algorithm should reduce the correlation between adjacent pixels as much as possible. The correlation of adjacent pixels is calculated as follows [1]:

$$E(x) = \frac{1}{N} \sum_{i=1}^{N} x_i \qquad (10)$$

$$D(x) = \frac{1}{N} \sum_{i=1}^{N} \big(x_i - E(x)\big)^2 \qquad (11)$$

$$\mathrm{cov}(x, y) = \frac{1}{N} \sum_{i=1}^{N} \big(x_i - E(x)\big)\big(y_i - E(y)\big) \qquad (12)$$

$$R_{x,y} = \frac{\mathrm{cov}(x, y)}{\sqrt{D(x)\,D(y)}} \qquad (13)$$
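Eqs. (10)-(13) amount to the sample correlation coefficient of randomly chosen adjacent pixel pairs; a minimal Python sketch follows (the direction encoding, pair count and seed are illustrative assumptions):

```python
import numpy as np

def adjacent_correlation(img, n_pairs=3000, direction='horizontal', seed=0):
    """Correlation coefficient R_{x,y} of Eqs. (10)-(13) for adjacent pixels."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    dy, dx = {'horizontal': (0, 1), 'vertical': (1, 0), 'diagonal': (1, 1)}[direction]
    rows = rng.integers(0, h - dy, n_pairs)
    cols = rng.integers(0, w - dx, n_pairs)
    x = img[rows, cols].astype(float)
    y = img[rows + dy, cols + dx].astype(float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / np.sqrt(x.var() * y.var())
```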

where x and y are the gray values of two adjacent pixels, $R_{x,y}$ is the correlation coefficient of adjacent pixels, and N is the number of adjacent pixel pairs. Pairs of adjacent pixels are randomly selected from the standard test image Lena and from the encrypted image, and the correlation analysis is performed with the gray-value point method and the formulas above: the selected pixels are grouped into pairs, with the gray value of the first point as the abscissa and the gray value of the next point as the ordinate. The resulting plots are shown in Fig. 7, where the first column corresponds to the original image and the second column to the encrypted image, for the horizontal, vertical and diagonal directions. The quantitative analysis is shown in Table 1.

Table 1. Adjacent pixel correlation coefficients

                   Horizontal   Vertical   Diagonal
Original image     0.9731       0.9845     0.9586
Encrypted image    −0.0014      0.0129     0.0030
Reference [6]      0.00253      0.00046    0.01753

Fig. 7. Pixel correlation


From the table it can be seen that the correlation coefficients of adjacent pixels of the original image are close to 1, while those of the ciphertext image are close to 0, which indicates that the algorithm reduces the correlation between adjacent pixels and achieves the desired encryption effect. Compared with the literature [6], the algorithm of this paper weakens the correlation of adjacent pixels more effectively.

5.4 Key Sensitivity Analysis

Key sensitivity is a basic property of an effective cryptosystem [1]. It is demonstrated in two respects: first, encrypting the same plaintext image with different keys should produce two completely different encrypted images; second, a slight change in the key should make it impossible to decrypt the encrypted image correctly. In the decryption process, the system parameter of the logistic chaotic system, μ = 3.97, is changed slightly to μ = 3.97000000000001; the resulting decryption is shown in Fig. 8.

Fig. 8. Decrypted image


Fig. 9. Encrypted image

Fig. 10. Encrypted image


Fig. 11. Differential image

Figures 9 and 10 show the encrypted images obtained after a slight change in the initial value of the Qi hyperchaotic system, and Fig. 11 shows the difference image between them. The difference image shows that the two ciphertext images differ greatly. It follows that the algorithm has a good encryption effect and strong key sensitivity.

5.5 Key Space Analysis

In the algorithm of this paper, the key space consists of six parameters: the four initial values of the Qi hyperchaotic system and the initial value and control parameter of the logistic chaotic system. Using double precision with 15 digits after the decimal point, the total size of the key space is:

$$K = 10^{15} \times 10^{15} \times 10^{15} \times 10^{15} \times 10^{15} \times 10^{15} = 10^{90} \approx 2^{299} \qquad (14)$$

since $10^{90} = 2^{90 \log_2 10} \approx 2^{299}$. The key space of the literature [1] is $2^{249}$, so Eq. (14) shows that the key space of this paper is larger than that of [1]. If the number of iterations is also regarded as part of the key, the key space of the algorithm is theoretically unbounded. The algorithm can therefore effectively resist exhaustive (brute-force) attacks.

5.6 Noise Attack Analysis

Since images are easily disturbed by noise during transmission, which degrades the quality of the decrypted image, the quality of an image encryption method should also be judged by its resistance to noise [7]. Figure 12 shows the original image with 2% salt-and-pepper noise added, Fig. 13 the ciphertext image obtained by encrypting the noisy image, and Fig. 14 the final image decrypted at the receiving end and denoised with a 5 × 5 median filter.
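The noise test of this subsection can be reproduced along the following lines (a sketch assuming NumPy and SciPy; the 2% density and the 5 × 5 window follow the text, while the helper names and the `decrypted_u8` variable in the usage comment are illustrative):

```python
import numpy as np
from scipy.ndimage import median_filter

def add_salt_pepper(img_u8, density=0.02, seed=0):
    """Flip a fraction `density` of pixels to 0 or 255 (salt-and-pepper noise)."""
    rng = np.random.default_rng(seed)
    noisy = img_u8.copy()
    mask = rng.random(img_u8.shape) < density
    noisy[mask] = np.where(rng.random(img_u8.shape) < 0.5, 0, 255)[mask]
    return noisy

# After decryption at the receiver, a 5 x 5 median filter suppresses the noise:
# denoised = median_filter(decrypted_u8, size=5)
```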

Fig. 12. Noise-added image


Fig. 13. Encrypted image

Fig. 14. Filtered image

It can be seen that the final image is slightly blurred, but it is very close to the original image, so this algorithm can resist noise attacks.


6 Conclusion

In this paper, the experimental results and the security analysis in terms of histogram, information entropy, pixel correlation, key sensitivity, key space and noise attacks show that after encryption the histogram is uniformly distributed, the information entropy is close to the ideal value, the correlation between adjacent pixels is far lower than that of the plaintext, the ciphertext is evenly distributed, the key sensitivity is strong and the key space is large. The algorithm can therefore effectively resist statistical attacks, noise attacks and exhaustive attacks, and it achieves the intended encryption effect.

Acknowledgements. National Natural Science Foundation of China (No. 51607059); Natural Science Foundation of Heilongjiang Province (QC2017059); Postdoctoral Fund in Heilongjiang Province (LBH-Z16169); Cultivation of Scientific and Technological Achievements of Heilongjiang Education Department (TSTAU-C2018016); Heilongjiang University Youth Science Fund Project (HDJMRH201912).

References
1. Qing L, Yanjiang W, Wei W (2016) Image encryption algorithm based on hyperchaotic system. China Sci Mag 46(9):910–918
2. Jian Z, Da H (2018) Quantum image encryption algorithm based on chaotic system and DNA coding. J Southwest Jiaotong Univ 53(6):1143–1149
3. Dongsheng C, Tan X, Zhiliang X et al (2018) Research on image encryption algorithm combined with four-dimensional hyperchaotic system and bit decomposition. J Univ Electron Sci Technol 47(6):907–912
4. Xiuli C, Zhihua G (2016) A new bit-level adaptive color image encryption algorithm based on hyperchaotic system. Comput Sci 43(4):134–139
5. Wang Y, Tu L (2017) A new image encryption algorithm based on improved Lorenz chaotic system. J Central South Univ (Natural Science Edition) 48(10):2679–2685
6. Guobo X, Tian W (2016) Chaotic image encryption algorithm based on pixel scrambling and bit replacement. Microelectron Comput 33(3):81–85
7. Bin X, Wenming S, Weisheng L et al (2018) Image encryption method based on multi-order fractional discrete Chebyshev transformation and generating sequence. J Commun 39(5):2–10

A Design of Satellite Telemetry Acquisition System

Meishan Chen, Qiang Mu, Jinyuan Ma, and Xin Li

Beijing Institute of Spacecraft System Engineering, Beijing, China
[email protected]

Abstract. With the development of space technology, satellite functions are becoming more and more powerful. The growing number of intra-satellite telemetry parameters and the requirement to relay inter-satellite telemetry increase the complexity of telemetry acquisition and scheduling, and the traditional acquisition mode and PCM telemetry format can no longer meet these requirements. In this paper, a design scheme for a satellite telemetry acquisition system is proposed that takes the CMU as its core, uses a distributed network architecture, and follows the TM Space Data Link Protocol. It realizes telemetry acquisition and scheduled download for complex satellite systems, is scalable, and can meet the telemetry download needs of most satellites and satellite systems.

Keywords: TM space data link protocol · CMU · ISU · SPP · VCDU · CADU · CCSDS

1 Introduction

In recent years, China has made remarkable achievements in the development of spacecraft, and satellite systems are becoming more and more powerful. The telemetry information downloaded over satellite-to-ground links may include not only local satellite telemetry but also telemetry of other satellites transmitted over inter-satellite links [1, 2]. The acquisition, framing, scheduling and downloading of all telemetry information must be considered for the whole network and deployed uniformly [3, 4].

A design of a satellite telemetry acquisition system is presented in this paper. The system adopts a centralized telemetry acquisition mode. The ISUs (Integrated Services Units) of the telemetry acquisition system are responsible for collecting the telemetry information of all other equipment on the satellite and transmit the collected telemetry to the CMU (Central Management Unit) in the required format. At the same time, the CMU receives the telemetry information of other satellites from the inter-satellite link and processes it together with the telemetry of the local satellite. Following the TM Space Data Link Protocol [5–7], it organizes packets and frames and schedules downloads. The design satisfies the increasing telemetry demand, is expandable, and can meet the telemetry download needs of most satellites and satellite systems.


2 Design and Implementation

2.1 Composition of the Satellite Telemetry Acquisition System

The satellite telemetry acquisition system takes the CMU as its core and a distributed network architecture as its system architecture. It acquires the telemetry of all other equipment on the satellite, receives the telemetry information of other satellites, and carries out unified scheduling and download. The telemetry acquisition system consists of the CMU and the ISUs.

CMU: receives the local telemetry collected by the ISUs and the inter-satellite telemetry transmitted by other satellites, and packs, frames and schedules downloads according to the TM Space Data Link Protocol.

ISU: several units collect the telemetry data of the whole satellite and of individual computers, and transmit the collected results directly to the CMU over the 1553B bus.

The system is expandable: if the first-level bus cannot meet all telemetry acquisition requirements, it can be extended with a second- or even third-level bus. The product topology of the satellite telemetry acquisition system is shown in Fig. 1.


Fig. 1. Topology diagram of satellite telemetry acquisition system

In Fig. 1, CMU1 serves as the BC (bus controller) of the first-level bus and receives the telemetry information of the other devices on the satellite collected by the ISUs acting as RTs (remote terminals) on the first-level bus. CMU2 serves as the BC of the secondary bus and receives the telemetry information of the local equipment collected by the ISUs acting as RTs on the secondary bus. CMU2 is then used as an RT of the first-level bus to transmit its data to CMU1 over the first-level bus.

2.2 TM Space Data Link Protocol Telemetry Format

With the continuous development of space science and technology, many space missions require bidirectional transmission of multi-source, multi-user data with different service requirements between spacecraft and ground stations and between spacecraft. To this end, the Consultative Committee for Space Data Systems (CCSDS) developed the TM Space Data Link Protocol. To improve the utilization of the space data channel as much as possible when transmitting these different kinds of data, the CCSDS TM Space Data Link Protocol uses a two-stage multiplexing mechanism of packet-channel multiplexing and virtual-channel multiplexing, so that multiple users dynamically share the same physical channel. The telemetry format of the TM Space Data Link Protocol, which the CMU uses to frame telemetry data and schedule downloads, is introduced below.

2.2.1 Space Packet

A space packet is an SPP (packet protocol data unit) formed by the packet service, which generates data in a defined format for each application process on board the satellite. The data structure is shown in Fig. 2.

Fig. 2. Spatial package format (6-byte packet primary header with version, packet identification, packet sequence control and packet data length; 32-byte packet secondary header with routing, signalling and encryption fields; packet data field of up to 236 bytes)

2.2.2 Virtual Channel Data Unit (VCDU)

The VCDU data format consists of a header, an insertion field (optional), a data field and a trailer (optional). In this application, the insertion field is selected so that TT&C digital information can be transmitted at the same time; because there is no RS coding on the telemetry channel, the trailer is used for CRC error control over the whole VCDU frame. The length of the VCDU is 252 bytes. Different VCDU formats can be defined for different service types. The VCDU format of the satellite-to-ground telemetry data frame is shown in Fig. 3, and the VCDU format of the inter-satellite telemetry data frame is shown in Fig. 4.


Fig. 3. VCDU data structure of the satellite-to-ground telemetry data frame


Fig. 4. VCDU data structure diagram of inter-satellite telemetry data frame

The data field of the VCDU can carry one or more space packets, depending on their lengths. When the remaining length of the data field is less than 40 bytes (i.e., the remaining space is not enough to hold the primary header + secondary header + 1 byte of data + 1 byte of check), it is filled with 0x5A. If packets are combined, this can be identified in the packet sequence control of the space packet primary header. The last two bytes of the VCDU carry the CRC computed over the first 250 bytes of the VCDU. The CRC generator polynomial is $g(x) = x^{16} + x^{12} + x^{5} + 1$, and the encoder and decoder registers are initialized to all '1'.

2.2.3 Channel Access Data Unit (CADU)

The CADU is composed of a synchronization header and a VCDU, and is the final data format transmitted on the physical channel. The CADU data format is shown in Fig. 5. The length of the CADU is 256 bytes.
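As a worked example of the framing described above, the following Python sketch computes the 16-bit CRC with generator g(x) = x^16 + x^12 + x^5 + 1 (registers initialised to all '1') over the first 250 bytes of a VCDU and wraps the result into a 256-byte CADU. The dummy payload and the simplified header layout are illustrative assumptions, not the flight data format.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC with g(x) = x^16 + x^12 + x^5 + 1 (poly 0x1021), initial value all '1'."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

SYNC = bytes.fromhex("1ACFFC1D")          # 4-byte CADU synchronization code

def build_cadu(vcdu_250: bytes) -> bytes:
    """Append the 2-byte CRC to a 250-byte VCDU body and prepend the sync code."""
    assert len(vcdu_250) == 250
    crc = crc16_ccitt(vcdu_250)
    cadu = SYNC + vcdu_250 + crc.to_bytes(2, "big")
    assert len(cadu) == 256               # 4-byte sync + 252-byte VCDU
    return cadu

# Dummy VCDU body: 6-byte header placeholder + data field filled with 0x5A.
cadu = build_cadu(bytes(6) + bytes([0x5A]) * 244)
```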

Fig. 5. CADU data format (4-byte synchronization code 1ACFFC1D followed by a 252-byte VCDU)

3 Design Example

A design example of a medium-to-high orbit satellite system is given to verify the design scheme proposed in this paper. The telemetry information of the system includes local satellite telemetry, other-satellite telemetry transmitted over inter-satellite links, and the telemetry of the surveillance camera equipment carried on board. According to the different service requirements of each subsystem of the whole satellite, the satellite telemetry acquisition system defines 11 virtual channels (VCs) and 70 kinds of space packets, covering all telemetry parameters of the whole satellite.

VC     SPP                                           Remarks              Rate
VC1    SP0, SP1, SP8                                 Synchronized frame   2 kbps, 32 kbps
VC2    SP2 ~ SP7, SP9 ~ SP12                         Asynchronous frame   2 kbps, 32 kbps
VC3    SP16 ~ SP35                                   Asynchronous frame   2 kbps, 32 kbps
VC4    SP48 ~ SP64                                   Asynchronous frame   Only 32 kbps
VC5    SP80 ~ SP89                                   Asynchronous frame   Only 32 kbps
VC6    SP96                                          Asynchronous frame   Only 32 kbps
VC7    SP112 ~ SP120, SP127 ~ SP134                  Asynchronous frame   Only 32 kbps
VC8    Other satellites' SPP                         Asynchronous frame   Only 32 kbps
VC9    Surveillance camera data frame, without SPP   Asynchronous frame   Only 32 kbps
VC10   Inter-satellite frame                         Asynchronous frame   Only 32 kbps
VC11   SP255                                         Empty frame          2 kbps, 32 kbps

3.1 Space Package Organization

In terms of download type, the space packets of the satellite telemetry acquisition system are divided into cyclically downloaded data source packets and event-triggered data source packets. That is, according to the application scenario of the telemetry parameters, the parameters of occasional events are defined as trigger-downloaded, which saves routine telemetry resources and increases the download frequency of the remaining telemetry. At the same time, each telemetry source packet has an enable/disable switch, so that individual source packets can be disabled according to the application requirements; this optimizes the telemetry resources of the whole satellite and ensures that urgently needed telemetry can be downloaded to the ground in time.

3.2 Spatial Packet Scheduling

Spatial packet scheduling refers to forming space packets on the basis of subsystems or functions. Each space packet is designed with a download enable/disable switch. During ground operations, some telemetry source packets can be dynamically disabled according to the needs of the current task, so that urgently needed telemetry source packets can be downloaded to the ground in time and the delay is reduced. For example, when the navigation subsystem is not switched on in the working orbit, the telemetry source packets related to the navigation subsystem can be disabled to reduce the total amount of telemetry downloaded from the whole satellite, so that at the same telemetry rate the other telemetry can be downloaded more frequently and with lower delay. Space packets in the same VC frame have the same priority and are downloaded in rotation.
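A toy Python sketch of the per-packet enable/disable switches and the rotation of equal-priority packets inside one virtual channel (the class name, packet list and disabled packet are illustrative, not the flight software):

```python
from itertools import cycle

class VirtualChannel:
    """Round-robin over the enabled source packets of one VC."""
    def __init__(self, packet_ids):
        self.enabled = {pid: True for pid in packet_ids}
        self._ring = cycle(packet_ids)

    def set_enabled(self, pid, flag):
        self.enabled[pid] = flag          # ground command: enable/disable a packet

    def next_packet(self):
        for _ in range(len(self.enabled)):
            pid = next(self._ring)
            if self.enabled[pid]:
                return pid
        return None                       # every packet of this VC is disabled

vc3 = VirtualChannel([f"SP{i}" for i in range(16, 36)])   # VC3 carries SP16-SP35
vc3.set_enabled("SP20", False)            # e.g. a navigation packet disabled off-duty
print([vc3.next_packet() for _ in range(5)])
```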

3.3 Virtual Channel Data Unit Organization

The CMU is responsible for the framing of virtual channel frames, which are divided into synchronous and asynchronous frames. An example of the virtual channel organization is shown in Fig. 6.

A synchronized frame carries the telemetry data transmitted periodically, triggered by time. Its telemetry parameters are the important safety-state parameters of the satellite platform, covering attitude and orbit control, energy, and the measurement and control (TT&C) subsystem. According to the operation stage of the satellite, the variable part of the frame is divided into transfer-orbit data and working-orbit data. The transfer-orbit data include the parameters related to the pyrotechnics of TT&C and solar wing deployment, so that the key parameters of the transfer orbit can be downloaded quickly enough. Once in the normal working orbit these solar-wing deployment parameters no longer need attention, so a working-orbit data set is defined from which they are removed, making maximum use of the limited satellite-to-ground bandwidth.

An asynchronous frame is a telemetry frame that is queued and downloaded to the ground at the current rate. The data update period of this download mode is affected by the current telemetry rate, the bandwidth and the total length of the generated telemetry data, and the download delay changes dynamically. The telemetry parameters downloaded in this way are therefore generally slowly varying quantities, or parameters that do not affect satellite safety.


Fig. 6. An example of virtual channel organization

3.4 Virtual Channel Scheduling

Virtual channel scheduling refers to the dynamic adjustment, during ground operations, of the proportion of the downlink given to each virtual channel according to the current satellite status and the urgency of observing particular telemetry parameters. It schedules the different download modes of the virtual channels within the master channel at the VCDU layer. The specific scheduling mechanisms are enable/disable control and download proportion control.

Enable/disable control means that ground operators can disable individual virtual channels according to their needs and thus ensure fast download of the enabled virtual channel data. After download of a virtual channel is disabled, none of the telemetry source packets under that virtual channel are downloaded to the ground. This mechanism therefore works like source packet scheduling, but is faster and more convenient to operate: a single instruction can typically disable the download of several data source packets. Download proportion control means that different asynchronous frames are downloaded in proportion, i.e., they occupy proportional shares of the download bandwidth. This design greatly improves the flexibility of satellite telemetry download during ground operations, reduces the conflict between urgent telemetry download demands and the available telemetry download resources, and improves the availability of the satellite.

The downlink code rate of the satellite telemetry is 2 kbps or 32 kbps. The 2 kbps rate is mainly used in the active (ascent) phase, because the telemetry rate is limited by the rocket upper stage during this phase; the payload and the inter-satellite link are not yet switched on and mainly platform parameters are transmitted, so at this rate all bandwidth is allocated to local satellite telemetry. The 32 kbps rate is the main rate of the satellite in orbit. At this rate the bandwidth can meet the download requirements of both local satellite telemetry and other-satellite telemetry transmitted over inter-satellite links. According to the design of the inter-satellite network, the 32 kbps bandwidth is by default split equally: local satellite telemetry download occupies 16 kbps and other-satellite telemetry download occupies 16 kbps. The bandwidth allocation can be adjusted by instruction. An example of virtual channel scheduling is shown in Fig. 7.

Fig. 7. An example of virtual channel scheduling

The length of a CADU unit is 256 bytes, i.e., 2 kbit, so at a download rate of 2 kbps one frame is transmitted per second. The download pattern VC1-VC2-VC1-VC3-VC1-VC2-… ensures a transmission rate of 1 frame/2 s for VC1 and 1 frame/4 s for VC2 and VC3. At a download rate of 32 kbps, local satellite telemetry takes up 16 kbps, i.e., 8 frames per second, and the download pattern is VC1-VC2-VC3-VC1-(VC4 ~ VC10 while non-empty, 4 frames in total)-VC1-VC2-VC3-VC1-…, which guarantees a transmission rate of 2 frames/s for VC1 and no less than 1 frame/s for VC2 and VC3.

When the secondary-bus structure is used, the CMU2 of the secondary bus is responsible for packing, framing and scheduling the telemetry information collected by the interface data units on the secondary bus, and CMU1 schedules the telemetry frames assembled by CMU2 according to the asynchronous-frame strategy. At the same time, CMU1 receives the VC10 of other satellites through the inter-satellite link interface and re-organizes the useful information into the VC8 of its own satellite, which is downloaded according to the asynchronous-frame scheduling strategy.
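The fixed download patterns quoted above can be expressed as a small Python sketch (a simplification under the text's convention that one 256-byte CADU counts as 2 kbit; real scheduling also depends on which asynchronous VCs currently have data, which the single placeholder slot below glosses over):

```python
from itertools import cycle, islice

# 1 frame/s at 2 kbps; 8 frames/s for the 16 kbps local-satellite share at 32 kbps.
PATTERN_2K = ["VC1", "VC2", "VC1", "VC3"]          # VC1: 1 frame/2 s, VC2/VC3: 1 frame/4 s
PATTERN_32K = ["VC1", "VC2", "VC3", "VC1",
               "VC4..VC10", "VC4..VC10", "VC4..VC10", "VC4..VC10"]  # 8 frames/s

def schedule(rate_kbps, seconds):
    """Frame sequence downloaded in `seconds` at the given telemetry rate."""
    if rate_kbps == 2:
        pattern, fps = PATTERN_2K, 1
    else:                                          # 32 kbps, local-satellite share only
        pattern, fps = PATTERN_32K, 8
    return list(islice(cycle(pattern), fps * seconds))

print(schedule(2, 4))    # VC1 appears every 2 s, VC2 and VC3 every 4 s
print(schedule(32, 1))   # VC1 twice, VC2/VC3 at least once within the second
```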

4 Conclusion

In this paper, a design scheme for a satellite telemetry acquisition system is proposed, which takes the CMU as its core, uses a distributed network architecture, and follows the TM Space Data Link Protocol. It realizes telemetry acquisition and scheduled download for complex satellite systems and is scalable, so it can meet the telemetry data download needs of most satellites and satellite systems.

References
1. Sun H, Chen X, Bai Y et al (2003) Applications of CCSDS AOS on spacecraft of China. Spacecraft Eng 12(1)
2. Ye X, Xiao F, Sun L et al (2009) Research and performance analysis on SCPS/CCSDS protocol suite. Comput Eng Appl 45(4)
3. Wang X, Wang T, Li N et al (2011) An efficient scheduling algorithm of multiplexing TM service based on the AOS. Spacecraft 20(5):83–87 (in Chinese)
4. Zhang M, Xiong H, Zhao H, Xia Y. Packet time delay of polling-based multiplexing in advanced orbiting systems. J Beijing Univ Aeronaut Astronaut 40(10)
5. Li F, Fu Q, Hou X (2009) Implementation of real time and non real time data mixed transmission using advanced orbiting systems. Netw Secur Technol Appl (4)
6. Gong X, Bai Y (2006) Implementation of synchronous and asynchronous high rate multiplexer using advanced orbiting systems. Comput Eng Des 27(19)
7. CCSDS (2015) CCSDS 132.0-B-2 TM space data link protocol. CCSDS, Washington, D.C.

Fingerprint Feature Recognition of Frequency Hopping Radio with FCBF-NMI Feature Selection

Hongguang Li, Ying Guo, Zisen Qi, Ping Sui, and Linghua Su

Institute of Information and Navigation, Air Force Engineering University, Xi'an, Shaanxi 710077, China
[email protected]

Abstract. High-dimensional features have great advantages for the analysis and identification of radio fingerprint features. To enhance the classification and recognition capability for frequency hopping stations, it is usually necessary to increase the feature types and dimensions to further improve the classification accuracy of the classifier. However, as the feature types and dimensions grow, a large number of irrelevant and redundant features are introduced, which increases the classification time and lowers the classification accuracy. To reduce the feature dimension and remove redundant features, an FCBF feature selection algorithm based on normalized mutual information, named FCBF-NMI, is proposed. The algorithm uses normalized mutual information instead of symmetric uncertainty as the correlation measure of the FCBF algorithm, analyzes the correlation between features and categories, deletes irrelevant and redundant features, and finally obtains an optimal feature subset. Experimental results show that FCBF-NMI obtains a reasonable optimal feature set; while guaranteeing the correct classification rate, the computing time is reduced, and the effectiveness of feature recognition and the generalization ability of the classification algorithm are improved.

Keywords: Frequency hopping radio · Feature selection · Fingerprint features · Support vector machines

1 Introduction

Frequency hopping (FH) communication has the advantages of a low probability of interception, good confidentiality, strong anti-jamming capability and powerful networking capability. It has become an important technical means for anti-reconnaissance and anti-interference in military communications worldwide. Sorting the stations of an FH communication network is the prerequisite for intercepting enemy communications, generating the best interference signal and breaking down the enemy's normal communications.

Existing FH network-station sorting mainly uses the duration, azimuth information, power and time correlation of the FH signals to sort and identify the network stations. Xu et al. [1] use the maximum correlation method and time correlation statistics to classify and recognize FH signals.


Eric et al. [2] use the MUSIC method to sort frequency hopping signals based on signal direction information. Wang et al. [3] combine an improved KHM algorithm with a cluster-number estimation method to classify frequency hopping signals using signal duration, position information and power. However, with the diversification of FH modes and the increasing number of configurations, it has become very difficult to sort FH signals using only the above features. Owing to the random spread of component characteristics and production tolerances of each FH radio, the radiated FH signal carries individual characteristics that differ from those of other FH radios, so the unique characteristics of different FH radio signals can be exploited to sort FH network stations. Li et al. [4] proposed a radio-frequency fingerprint identification method that uses the frequency-shifting transient signal characteristics of FH communication equipment for classification and recognition. Luo et al. [5] proposed an FH radio sorting method based on the transient response of the transmitting power amplifier. Tang et al. [6] proposed a radio transmitter identification method based on collaborative representation.

Because the fingerprint features of a radio are irregular, non-stationary, nonlinear and non-Gaussian, traditional first-order and second-order analysis methods cannot reveal their essence deeply enough. As a tool for non-stationary signal analysis, higher-order statistics can extract a variety of signal features and provide a large amount of rich information that first-order and second-order statistics do not contain, while also suppressing various kinds of noise well [7]. However, to enhance the classification and recognition capability for FH stations, it is necessary to increase the dimension of the higher-order statistics and to add different feature types. Using high-dimensional feature vectors directly causes the "curse of dimensionality": when the number of feature types grows beyond a certain point, the classification accuracy no longer increases with the number of features but slowly decreases. The root cause of this phenomenon is that a large set of features contains many irrelevant and redundant features, which not only increase the complexity of the classification algorithm but also introduce a lot of noisy data and reduce its accuracy. Dimension reduction of high-dimensional feature sets [8] is therefore an important part of data preprocessing; its main purpose is to let the learning algorithm ignore the interference of irrelevant and redundant features when constructing the classification model. Feature dimension reduction not only reduces the complexity of the classification algorithm but also shortens its training time and improves the classification accuracy.

Chandrashekar et al. [9] propose the simplest and fastest filtering feature selection algorithms for high-dimensional data, but they do not consider the correlation among the selected features, which leads to greater redundancy. Hoque et al. [10, 11] propose a mutual-information feature selection algorithm in which, each time a feature is selected, the mutual information between each candidate feature and all selected features must be calculated; when the feature dimension is high, the computation is heavy and the practicality is poor. Estevez et al. [12] propose a normalized mutual-information feature selection algorithm, but the number of selected features must be specified in advance. Yu et al. [13] propose the fast correlation-based filter (FCBF) feature selection algorithm; because it uses symmetric uncertainty as the correlation criterion, it can lead to unreasonable situations in which the dimension of the optimal subset is too large or too small.

The support vector machine (SVM) [14, 15] classification algorithm transforms low-dimensional nonlinear problems into high-dimensional linear classification problems by introducing kernel functions [16]; it avoids explicit computations in the high-dimensional space, does not increase the complexity of the algorithm, and relieves the curse of dimensionality to some extent. However, some features in high-dimensional data contribute little to classification performance and only increase the time and space overhead of the SVM classifier, and mutual interference between different features can also cause a sharp decline in classification performance. It is therefore necessary to determine appropriate kernel function parameters to control the generalization ability and empirical risk of the SVM.

Based on the above analysis, this paper proposes a fast correlation-based filter feature selection algorithm based on normalized mutual information (FCBF-NMI). The algorithm uses normalized mutual information as the correlation criterion, analyzes the correlation between features and categories and between features, and removes irrelevant and redundant features to obtain the optimal feature subset. At the same time, a quadratic grid search algorithm [17] is used to dynamically adjust the search region and determine the optimal combination of SVM classifier parameters, improving the generalization ability of the SVM.

2 Fingerprint Feature Recognition Algorithm

The fingerprint identification process of an FH radio based on feature selection is shown in Fig. 1.


Fig. 1. FH radio fingerprint feature recognition algorithm flow.

Firstly, the high-dimensional feature set of the FH radio signal is extracted, and the feature selection is performed on the high-dimensional feature set to obtain the optimal feature set. Then the quadratic grid search algorithm selects the appropriate initialization parameters to control the generalization ability of the SVM classifier. Finally, the optimized SVM classifier is used to identify the optimal feature set.

2.1 Feature Selection Algorithm Based on Mutual Information

Mutual information [18] measures the amount of information that two random variables share. Assume that X and Y are two discrete random variables and that $p(x)$, $p(y)$ and $p(x, y)$ are the probabilities of X, Y and (X, Y) respectively. The entropy [19] H(X) of the random variable X is defined as:

$$H(X) = -\sum_{i=1}^{m} p(x_i)\,\log_2 p(x_i) \qquad (1)$$

The joint entropy H(X, Y) of the random variables X and Y is defined as:

$$H(X, Y) = -\sum_{i=1}^{m} \sum_{j=1}^{n} p(x_i, y_j)\,\log_2 p(x_i, y_j) \qquad (2)$$

In Eq. (2), $x_i$ and $y_j$ are the possible values of X and Y respectively. The joint entropy H(X, Y) is a measure of the total uncertainty of X and Y, and its range is:

$$\max\{H(X), H(Y)\} \le H(X, Y) \le H(X) + H(Y) \qquad (3)$$

H(X, Y) attains its minimum when X is determined by Y, and its maximum when X and Y are independent of each other. The mutual information I(X, Y) of the random variables X and Y is defined as:

$$I(X, Y) = \sum_{i=1}^{m} \sum_{j=1}^{n} p(x_i, y_j)\,\log_2 \frac{p(x_i, y_j)}{p(x_i)\,p(y_j)} \qquad (4)$$

Obviously, mutual information is symmetric and non-negative. It is a dimensionless measure that does not change under a change of coordinate system, and its value indicates the strength of the dependency between the two random variables. The relationship between mutual information and entropy is:

$$I(X, Y) = H(X) + H(Y) - H(X, Y) \qquad (5)$$

From Eqs. (3) and (5) it follows that the mutual information of two random variables is bounded above by the smaller of their two entropies, so the range of I(X, Y) is:

$$0 \le I(X, Y) \le \min\{H(X), H(Y)\} \qquad (6)$$

In many practical problems, the entropies of different random variables may differ greatly. When mutual information is applied to a set of variables, in order to compensate for its bias toward multi-valued variables, the mutual information should first be normalized and strictly limited to [0, 1]. This is called normalized mutual information.


The FCBF algorithm uses symmetric uncertainty (SU) as its correlation metric: it computes the SU to find the subset of features related to the category, and then removes redundant features by heuristic sequential elimination. Assume that the original dataset is $X = (x_{ij})_{N \times D}$, with N D-dimensional sample vectors $(x_1, x_2, \ldots, x_N)^T$, D feature vectors $(f_1, f_2, \ldots, f_D)$, and the sample category vector $C = (c_i)_{N \times 1}$. The symmetric uncertainty SU(X, Y) of two random variables X and Y is defined as:

$$SU(X, Y) = 2\left[\frac{H(X) - H(X \mid Y)}{H(X) + H(Y)}\right] \qquad (7)$$

Definition (7) of the symmetric uncertainty can be rewritten as:

$$SU(X, Y) = \frac{I(X, Y)}{[H(X) + H(Y)]/2} \qquad (8)$$

so the symmetric uncertainty SU(X, Y) is also a normalized form of the mutual information I(X, Y). Obviously,

$$\min\{H(X), H(Y)\} \le [H(X) + H(Y)]/2 \qquad (9)$$

i.e., the denominator in Eq. (8) is the larger quantity, which makes this normalization of the mutual information unreasonable. Applying Jensen's inequality to the entropy in Eq. (1), we get:

$$H(X) \le \log_2 \sum_{i=1}^{m} p(x_i)\,\frac{1}{p(x_i)} = \log_2 m \qquad (10)$$

so the entropy of the random variable X satisfies:

$$0 \le H(X) \le \log_2 m \qquad (11)$$

From Eqs. (6) and (11) it can be deduced that:

$$0 \le I(X, Y) \le \min\{\log_2 m, \log_2 n\} \qquad (12)$$

where m and n denote the numbers of possible values of the discrete random variables X and Y. Clearly, the resulting normalized mutual information (NMI) always lies within the range [0, 1]. Therefore, to achieve a balance between relevance and redundancy, we divide the class-feature mutual information by $\min\{\log_2 m, \log_2 n\}$, and the normalized class-feature NMI(X, Y) is defined as:

$$NMI(X, Y) = \frac{I(X, Y)}{\min\{\log_2 m, \log_2 n\}} \qquad (13)$$

The FCBF-NMI algorithm flow is shown in Table 1.

Table 1. The FCBF-NMI algorithm
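A minimal Python sketch of the selection loop summarised in Table 1, using the NMI of Eq. (13) as the relevance and redundancy measure. The relevance threshold `delta`, the helper names and the discretisation of continuous features are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def entropy(v):
    _, counts = np.unique(v, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def nmi(a, b):
    """Normalized mutual information of two discrete variables, Eq. (13)."""
    ab = np.stack([a, b], axis=1)
    _, counts = np.unique(ab, axis=0, return_counts=True)
    p = counts / counts.sum()
    h_ab = -np.sum(p * np.log2(p))
    i_ab = entropy(a) + entropy(b) - h_ab            # Eq. (5)
    m, n = len(np.unique(a)), len(np.unique(b))
    denom = min(np.log2(m), np.log2(n))
    return i_ab / denom if denom > 0 else 0.0

def fcbf_nmi(X_disc, y, delta=0.1):
    """FCBF-style selection with NMI: keep features relevant to the class,
    then drop any feature more strongly related to an already selected
    feature than to the class (redundancy removal)."""
    d = X_disc.shape[1]
    relevance = np.array([nmi(X_disc[:, j], y) for j in range(d)])
    order = [j for j in np.argsort(-relevance) if relevance[j] > delta]
    selected = []
    for j in order:
        if all(nmi(X_disc[:, j], X_disc[:, s]) < relevance[j] for s in selected):
            selected.append(j)
    return selected

# Continuous features must be discretised for the histogram-based NMI, e.g.
# X_disc = np.digitize(X, np.quantile(X, [0.25, 0.5, 0.75]))  (illustrative choice)
```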

2.2 SVM Parameter Optimization Based on a Quadratic Grid Search Algorithm

In the parameter optimization of the SVM classification model, the two most important parameters controlling the SVM's classification and generalization ability are the penalty parameter and the kernel function parameter. The penalty parameter is used by the SVM to adjust the tolerance for erroneous samples when constructing the optimal classification plane, while the kernel function parameter determines the scale of the sample spacing. Since the radial basis function (RBF) kernel is not limited by the number of samples or the feature dimension, the SVM classifier in this paper uses the RBF kernel.

The principle of the grid search algorithm is simple, and in theory, when the search space is large enough and the search step small enough, it guarantees that the global optimum is found. In practice, however, as the number of search parameters grows, the search space expands and the search step shrinks, the grid becomes very dense and the computational cost of the algorithm increases sharply.

The classification algorithm of this paper allows erroneous samples by introducing the slack variable ξ, which balances the empirical risk and generalization ability of the SVM. In this case, the classification plane satisfies:

$$y_i\left(w^T \varphi(x_i) + b\right) \ge 1 - \xi_i \qquad (14)$$

When $0 < \xi_i < 1$, sample $x_i$ is correctly classified; when $\xi_i > 1$, sample $x_i$ is misclassified. With the penalty term $C \sum_{i=1}^{l} \xi_i$ added, the objective function to be minimized becomes:

$$\min_{w, b, \xi}\ \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i \qquad (15)$$

where C is the penalty factor. At the same time, the RBF kernel parameter $\gamma = \sigma^{-1}$ determines the radial range of the kernel function:

$$K(x_i, x_j) = \exp\left(-\gamma \left\|x_i - x_j\right\|^2\right) \qquad (16)$$

When the scale parameter σ is small, i.e., smaller than the spacing of the actual training samples, the value of the RBF kernel $K(x_i, x_j)$ is small and the kernel only affects samples within the small range consistent with σ. Setting an appropriate scale parameter σ therefore also has a great impact on the SVM's classification and generalization ability. This paper proposes a quadratic grid search parameter optimization algorithm that optimizes the penalty parameter C and the RBF kernel parameter γ. The specific algorithm flow is shown in Table 2.

Table 2. The quadratic grid search parameter optimization algorithm


The quadratic grid search algorithm enlarges the initial step size to k × l on the initial parameter space $(C_i, \gamma_i)$ and performs an initial traversal search, which yields the optimal parameter combination $(C_{1i}, \gamma_{1i})$ of the initial SVM and a contour map of the evaluation results. Increasing the initial step to k times the default step l reduces the time and grid density of the initial search. When refining around the initial penalty parameter C, the step size is reduced to half of the default step l for a fine search. When C exceeds the critical value δ, the algorithm searches in the opposite direction, toward the region with lower evaluation values, in order to avoid over-fitting while optimizing the parameters.
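The coarse-then-fine idea can be sketched as follows in Python (assuming scikit-learn; the grid ranges, the refinement neighbourhood and the 10-fold cross-validation scoring are illustrative choices, not the authors' MATLAB implementation):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def grid_best(X, y, C_values, gamma_values, cv=10):
    """Return the (C, gamma, score) with the best cross-validated accuracy."""
    best = (None, None, -np.inf)
    for C in C_values:
        for gamma in gamma_values:
            score = cross_val_score(SVC(kernel="rbf", C=C, gamma=gamma),
                                    X, y, cv=cv).mean()
            if score > best[2]:
                best = (C, gamma, score)
    return best

def quadratic_grid_search(X, y):
    # Coarse pass: wide logarithmic grid with an enlarged step.
    coarse = np.logspace(-5, 5, 11, base=2.0)
    C1, g1, _ = grid_best(X, y, coarse, coarse)
    # Fine pass: halved step size in a small neighbourhood of the coarse optimum.
    fine_C = C1 * np.logspace(-1, 1, 9, base=2.0)
    fine_g = g1 * np.logspace(-1, 1, 9, base=2.0)
    return grid_best(X, y, fine_C, fine_g)
```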

3 Simulation Experiment and Analysis

3.1 Experimental Data

The experimental data were collected from 4 FH radios of the same model; 500 samples were collected from each radio, with 8192 points per sample. The experimental environment is an Intel(R) Core i7-4510U CPU @ 2.00 GHz processor with 8.00 GB of memory running 64-bit Windows 7, and the simulation software is MATLAB R2014a.

The data sets were preprocessed to compute the experimental feature set of each sample. There are seven conventional features: spectral symmetry mean, spectral symmetry variance, waveform characteristics, box dimension, Rayleigh entropy, information dimension, and Lempel-Ziv complexity (LZC). Compared with the radial integral bispectrum (RIB), the axial integral bispectrum (AIB) and the circumferential integral bispectrum (CIB), the square integral bispectrum (SIB) has better time-invariance, scale-change and phase-preserving properties, so the SIB is selected as the higher-order spectral feature of this experiment; its feature dimension is 256. The conventional feature set can then be expressed as $F_1 = \{f_1, f_2, f_3, f_4, f_5, f_6, f_7\}$, where $f_1, \ldots, f_7$ are the spectral symmetry mean, spectral symmetry variance, waveform characteristics, box dimension, Rayleigh entropy, information dimension and LZC respectively. The higher-order spectral feature set SIB can be expressed as $F_2 = \{f_8\}$, with $f_8 = (f_8^1, f_8^2, \ldots, f_8^{256})$.

3.2 Feature Selection Experiment

To test the classification performance of the optimal feature set selected by the feature selection algorithm of this paper, the original data samples were randomly divided into 10 groups for 10-fold cross-validation, with 9 groups used as training samples and 1 group as test samples. The FCBF and FCBF-NMI feature selection algorithms are applied to the conventional feature set and to the higher-order spectral SIB feature set, and the SVM classifier with quadratic grid search parameter optimization proposed in this paper is used for classification. The feature selection performance of the FCBF-NMI algorithm is then compared and analyzed.

3.2.1 Experiment 1: Analysis of Feature Selection Algorithms Under Conventional Features

First, the optimal feature sets $[f_3, f_4, f_6, f_7]$ and $[f_4, f_5, f_7]$ are selected from the feature set $F_1$ by FCBF and FCBF-NMI respectively. Then each single conventional feature, all conventional features, $[f_3, f_4, f_6, f_7]$ and $[f_4, f_5, f_7]$ are used in turn as the feature set to classify the four FH stations. To visualize the classification effect, the features selected by FCBF and FCBF-NMI are clustered separately; the clustering results for the 4 FH radios are shown in Fig. 2.

Fig. 2. Category rendering: (a) clustering diagram of features selected by FCBF; (b) clustering diagram of features selected by FCBF-NMI (radio sources 1–4)


It can be seen from the figure that the inter-class divergence in Fig. 2b is larger than that in Fig. 2a, which shows that the FCBF-NMI algorithm reduces the redundancy of the feature set; the clustering effect of the optimal feature set selected by FCBF-NMI is therefore relatively better. Since the original sample data are randomly grouped in the 10-fold cross-validation experiment, the test samples and training samples differ in each run. To obtain more scientific and reasonable results, the results of 50 runs of 10-fold cross-validation were averaged. The experimental statistics are shown in Table 3.

Table 3. Conventional feature selection and classification experiment

Classification features | Feature number | Best subset | Correct rate/% | Time/s
Spectral symmetry mean f1 | 1 | [f1] | 43.51 | 0.008
Spectral symmetry variance f2 | 1 | [f2] | 43.63 | 0.008
Waveform characteristics f3 | 1 | [f3] | 51.72 | 0.013
Box dimension f4 | 1 | [f4] | 64.55 | 0.013
Rayleigh entropy f5 | 1 | [f5] | 65.13 | 0.013
Information dimension f6 | 1 | [f6] | 63.52 | 0.015
LZC f7 | 1 | [f7] | 63.43 | 0.013
Conventional features | 7 | [f1, f2, f3, f4, f5, f6, f7] | 72.32 | 0.082
FCBF selection result | 4 | [f3, f4, f6, f7] | 72.15 | 0.057
FCBF-NMI selection result | 3 | [f4, f5, f7] | 72.24 | 0.029

It can be seen from Table 3 that, under the same classification algorithm, the classification accuracy of multi-dimensional features is significantly better than that of single features. Compared with using all conventional features, the classification accuracy of the feature set after feature selection decreases slightly, but the computation time is significantly reduced. After FCBF-NMI deletes the highly redundant features in the conventional feature set, the number of selected features is smaller than that of FCBF, yet the classification accuracy is improved and the operation time is relatively short.

3.2.2 Experiment 2: Analysis of Feature Selection Algorithms Under the Higher-Order Spectral Feature SIB

The 256-dimensional SIB feature {f8^1, f8^2, ..., f8^256} is processed by FCBF and FCBF-NMI to obtain the 193-dimensional feature {f8^2, f8^6, ..., f8^253} and the 148-dimensional feature {f8^3, f8^11, ..., f8^246}, respectively. The original SIB features and the features selected by FCBF and FCBF-NMI were classified separately, and the results of 50 runs of 10-fold cross-validation were again averaged. The experimental results are shown in Table 4.


Table 4. Higher-order feature set selection and classification experiment

Classification features | Feature number | Best subset | Correct rate/% | Time/s
Higher-order spectral features SIB f8 | 256 | {f8^1, f8^2, ..., f8^256} | 80.60 | 1.715
FCBF selection result | 193 | {f8^2, f8^6, ..., f8^253} | 82.41 | 0.774
FCBF-NMI selection result | 148 | {f8^3, f8^11, ..., f8^246} | 84.23 | 0.459

It can be seen from Tables 3 and 4 that, under the same classification algorithm, the classification accuracy of high-order features is higher than that of conventional features; however, due to their high dimensionality, the classification time is longer. After feature selection is applied to the high-order SIB, the selected feature dimension is reduced; compared with the original SIB features, the classification accuracy is higher and the classification time is shorter. The number of features selected by the FCBF-NMI algorithm in this paper is smaller than that of the FCBF algorithm, while its classification accuracy is higher and its operation time is shorter. In summary, the FCBF-NMI algorithm removes some highly redundant features and selects a more reasonable optimal feature set, and therefore achieves a higher average classification accuracy.

3.3 SVM Parameter Optimization Experiment

In this experiment, the original grid search algorithm and the quadratic grid search algorithm proposed in this paper are used to optimize the parameters of the SVM. The 148-dimensional feature set {f8^3, f8^11, ..., f8^246} obtained in Experiment 2 is taken as the optimal feature set. The optimization range of the penalty parameter and the RBF kernel parameter is set to [2^-10, 2^10], and the search step is set to 1.5. The 10-fold cross-validation method is used for the classification test. The contour maps of the classification accuracy after parameter optimization are shown in Fig. 3. It can be seen from Fig. 3a that after the quadratic grid search parameter optimization, the SVM penalty parameter equals 1.3142, the RBF kernel parameter equals 0.022097, and the average classification accuracy is 84.3%. It can be seen from Fig. 3b that after the original grid search parameter optimization, the SVM penalty parameter equals 2.1328, the RBF kernel parameter equals 0.015625, and the average classification accuracy is 69.7%. The average classification accuracy of the SVM after quadratic grid search parameter optimization is 14.6% higher than that of the original SVM classifier. At the same time, the parameter range threshold is constrained to avoid over-fitting.

Fig. 3. Classification accuracy contour maps: (a) after quadratic grid search parameter optimization; (b) after original grid search parameter optimization

4 Conclusion

This paper proposes a subtle feature recognition algorithm for FH stations based on feature selection. Normalized mutual information is used as the evaluation criterion of the feature selection algorithm: the redundant features in the original high-dimensional feature set are deleted and an optimal feature set of lower dimension is obtained. This effectively overcomes the dimensionality disaster caused by the high dimensionality of the feature set, reduces the computational complexity, and improves the timeliness of feature recognition. At the same time, the quadratic grid search algorithm is used to optimize the initial parameters of the SVM classifier, which avoids over-fitting and improves the accuracy of classification recognition.


Integrated Design of High Speed Uplink and Emergency Telemetry and Control for LEO Satellite

Qiang Mu(&), Hongwei Shi, Jinyuan Ma, and Meishan Chen

Beijing Institute of Spacecraft System Engineering, Beijing, China
[email protected]

Abstract. An integrated high-speed data injection and emergency measurement and control system based on the forward link of a relay satellite is presented in this paper. In this system, the relay high-speed forward link is used as the ground-to-satellite transmission channel. The satellite receives the data and records them in the storage module, and the data are then transmitted to each device through the intra-satellite bus network as required. Through this system, high-speed data injection from the ground can be completed, the injection efficiency can be greatly improved, and requirements for functional maintenance and new functions can be responded to quickly. The system also provides a backup for the TT&C uplink: when multiple faults of the transponder and the remote control module cannot be recovered, emergency TT&C of the satellite can still be carried out through the relay forward channel, which improves the survivability of the satellite in a fault state.

Keywords: Relay · Forward link · High speed uplink · Emergency telemetry and control

1 Preface

Many devices on LEO remote sensing satellites have on-orbit software reconfiguration capabilities. Considering the large scale of the software, and in order to improve the feasibility of on-orbit implementation, the demand for a high-speed uplink is increasingly urgent [1–3]. In addition, when designing a satellite system, the means of dealing with single and multiple faults usually need to be further improved. For example, when multiple faults of the transponder and the remote control module cannot be recovered, the uplink injection function of the satellite will be lost, which will cause the whole satellite to fail. If another emergency uplink channel is available at this time, the survivability of the satellite can be greatly improved. This paper presents a design scheme for an integrated high-speed uplink and emergency measurement and control system, which is based on the 6 Mbps forward link provided by relay satellites for LEO satellites. Fast on-orbit maintenance of the large-scale software and FPGA programs of the individual on-board units is realized by this scheme, and the occupation of the conventional TT&C channel by large data injection is avoided. The scheme effectively improves the data uplink efficiency and allows quick response to new requirements and software error corrections. At the same time, it can be used as a backup of the conventional TT&C link to deal with multiple failures of that link.

2 System Design

2.1 System Composition

The integrated high-speed data injection and emergency telemetry and control system uses the forward inter-satellite link of the relay satellite. The whole link is composed of the relay satellite ground control and management system, the relay satellite, the on-board relay and data-management related equipment, and the satellite network, which jointly complete data transmission, processing and distribution from the ground to the satellite. The ground control and management system of the relay satellite is responsible for data preparation, convolutional coding, SQPSK modulation, power amplification and directional transmission to the data relay satellite through a large ground antenna. The relay satellite receives the upstream signal from the ground and, after frequency conversion, routing and power amplification, transmits it to the user satellite through its antenna. The user satellite completes the pointing and tracking of the relay satellite, receives, demodulates and decodes the incoming forward signal, sends the decoded signal to the user satellite data management equipment, and carries out the subsequent data synchronization, transmission protocol processing, data storage and data scheduling. The composition of the user-satellite side of the system is shown in Fig. 1.

2.2 Data Stream Design

Upstream data frames are classified into two categories: command frames and data frames. The system management unit processes the data according to the data type. A command frame is forwarded to the CPU module in real time; a data frame is stored and then forwarded. The data frame is sent to the storage module and stored in Flash memory. The stored data can be output frame by frame at a specified output frequency under ground control as required; they can be processed by the CPU module, distributed over the 1553B bus, or routed and forwarded directly through the high-speed bus network. The data flow design is shown in Fig. 2.

Fig. 1. Block diagram of the integrated high-speed data injection and emergency telemetry and control system


Fig. 2. Data flow diagram

2.3 Link Design

2.3.1 Synchronization and Channel Coding

The high-speed upstream link is established after the tracking antenna of the relay satellite completes the pointing toward the user satellite and the relay antenna of the user satellite completes the tracking of the relay satellite. The upstream link adopts scrambling, data differencing, (2, 1, 7) convolutional coding and SQPSK modulation; the structure is shown in Fig. 3. All data except the frame synchronization code are scrambled by a short-sequence scrambling code. The pseudo-random sequence is generated by the following polynomial, in which the initial state of the shift register is all '1':

h(x) = x^8 + x^7 + x^5 + x^3 + 1

The scrambled data are first code-converted (NRZ-L to NRZ-M) and then convolutionally encoded. The logic of the convolutional coding is shown in Fig. 4, and its parameters are defined as follows:

• Coding rate: 1/2;
• Constraint length: 7 bits;
• Connection vectors: G1 = 1111001, G2 = 1011011;
• Phase relation: G1 is associated with the first output symbol;
• Symbol inversion: on the output branch of G2.
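For illustration only, a minimal bit-level sketch of an encoder with these parameters is given below (in Python rather than on-board software); the convention of shifting the newest bit into the left of the register is an assumption of this illustration.

```python
# Minimal sketch of the rate-1/2, constraint-length-7 encoder with
# G1 = 1111001 and G2 = 1011011 and symbol inversion on the G2 branch.
G1, G2 = 0b1111001, 0b1011011

def parity(x: int) -> int:
    return bin(x).count("1") & 1

def conv_encode(bits):
    """Return the interleaved C1/C2 symbol stream for an input bit sequence."""
    reg, out = 0, []
    for b in bits:
        reg = ((reg >> 1) | (b << 6)) & 0x7F   # shift the newest bit in
        c1 = parity(reg & G1)                  # G1 branch
        c2 = parity(reg & G2) ^ 1              # G2 branch, symbol inverted
        out.extend((c1, c2))
    return out

print(conv_encode([1, 0, 1, 1, 0, 0, 1]))      # 7 input bits -> 14 symbols
```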

Fig. 3. Ground physical structure of the uplink (scrambling, NRZ-L to NRZ-M data differencing, convolutional coding, I/Q branches with a half-symbol delay, QPSK modulation)

Fig. 4. Convolutional coding logic diagram


2.3.2 Data Link Protocol

The relay forward link data link layer protocol adopts the VCDU format recommended by CCSDS, as defined in Fig. 5 [4].

Fig. 5. VCDU data format definition (4-byte synchronization header; 6-byte VCDU primary header containing the version number, SCID, VCID, VCDU counter and mark field; 32-byte VCDU insertion field; 852-byte VCDU data field; 2-byte VCDU error control field)

Explanation:

(1) Transfer frame version number: 2 bits;
(2) Spacecraft identifier (SCID): 8 bits;
(3) Virtual channel identifier (VCID): 6 bits (used to distinguish data types);
(4) Virtual channel frame counter: 3 bytes, modulo 2^24;
(5) Mark field: 1 byte (reserved);
(6) VCDU insertion domain: 32 bytes;
(7) VCDU data domain: valid data;
(8) VCDU error control domain: a CRC16 check computed over the full frame from the beginning of the VCDU primary header to the end of the VCDU data domain:

\mathrm{CRC} = \left[ X^{16} M(X) + X^{\,n-16} \sum_{i=0}^{15} X^{i} \right] \bmod G(X)
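For illustration, the check value can be computed with the conventional CRC-16-CCITT register implementation (generator G(X) = X^16 + X^12 + X^5 + 1, register preset to all ones, which plays the role of the sum term above); treating the field this way is an assumption of the sketch rather than a statement of the mission's actual implementation.

```python
# Bit-level sketch of a CRC-16-CCITT computation over a frame body.
def crc16_ccitt(data: bytes, init: int = 0xFFFF, poly: int = 0x1021) -> int:
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

# Example: checksum over a hypothetical byte string standing in for the span
# from the primary header to the end of the data field.
frame_body = bytes.fromhex("40ab" + "00" * 10)
print(hex(crc16_ccitt(frame_body)))
```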

Injected frames are divided into command frames, data frames and idle frames, identified by the VCID. VC1 is a low-speed channel for transmitting control commands that require immediate execution; it occupies no more than 1. VC2 is a high-speed channel transmitting high-speed upstream data. The idle channel transmits idle frames for data synchronization and channel maintenance. The specific allocation is shown in Table 1.

Table 1. Virtual channel assignment table for the high-speed uplink

Type | Data | Virtual channel | VCID
Command frame | Configuration command | VC1 | 100100
Data frame | High-speed data injection | VC2 | 101001
Idle frame | Idle data | – | 111111
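As an illustration of how an on-board unit could classify an injected frame, the sketch below parses the VCDU primary header fields defined above and looks up the VCID against Table 1; the function names and example header bytes are hypothetical.

```python
# Sketch: parse the 6-byte VCDU primary header and classify the frame by VCID.
VCID_TABLE = {0b100100: "command frame (VC1)",
              0b101001: "data frame (VC2)",
              0b111111: "idle frame"}

def parse_vcdu_primary_header(header: bytes):
    """Split the 6-byte VCDU primary header into its fields (widths as defined above)."""
    first16 = (header[0] << 8) | header[1]
    version = (first16 >> 14) & 0x3                # 2 bits
    scid    = (first16 >> 6) & 0xFF                # 8 bits
    vcid    = first16 & 0x3F                       # 6 bits
    counter = int.from_bytes(header[2:5], "big")   # 3 bytes, modulo 2**24
    mark    = header[5]                            # 1 byte, reserved
    return version, scid, vcid, counter, mark

def classify_frame(header: bytes) -> str:
    _, _, vcid, _, _ = parse_vcdu_primary_header(header)
    return VCID_TABLE.get(vcid, "unknown VCID")

# Hypothetical header: version 0, SCID 0x12, VCID 0b101001, counter 1
hdr = bytes([0x04, 0xA9, 0x00, 0x00, 0x01, 0x00])
print(classify_frame(hdr))   # -> "data frame (VC2)"
```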

The command frames transmitted on VC1 can insert configuration commands during high-speed uplink to control the uplink process. Because a command frame is forwarded immediately and enters the CPU module directly for software processing, rather than passing through the TT&C channel, it can also be used as an emergency injection channel when the TT&C channel fails. The design of the command frame is compatible with the TT&C link protocol to some extent: the remote control frame of the remote control link protocol is encapsulated in the VCDU data domain [5], and the remote control frame carries remote control packets that conform to the definition of the CCSDS Space Packet Protocol [6]. To simplify the on-board software design, only one frame is encapsulated at a time, as shown in Fig. 6. After receiving such a frame, the CPU module obtains the content of the data domain and extracts the remote control packets for subsequent processing, including execution or distribution to the subsystems through the 1553B bus.

Fig. 6. Configuration command data (VC1) format description (a single remote control frame, not exceeding 852 bytes and carrying CCSDS remote control packets, is encapsulated in the VCDU data field)

VC2 transmits the high-speed injection data. After entering the system management unit, the data are first stored by the storage module and processed later. Data integrity must therefore be guaranteed during transmission, and lost or out-of-sequence frames must be retransmitted in time to prevent erroneous data from being stored and affecting subsequent applications. For this purpose, the system management unit checks the VCDU counter and the error control domain of each injected frame to ensure correct transmission and storage. If the check fails, processing of the erroneous frame and of its subsequent injection frames is stopped, and the error information together with the expected count of the next frame is telemetered to the ground; the ground can then retransmit the data from the erroneous count position, while data that have already been received and stored correctly need not be re-injected. According to the characteristics of the on-board users, the high-speed injection data can be divided into user data for 1553B bus RT terminals (type 1) and user data for the high-speed bus (type 2). Type 1 data users are the CPU module software of the system management unit or the RT terminal users of the on-board 1553B bus. After an injection frame is stored by the storage module, it is output to the CPU module frame by frame at the specified output frequency under ground control as required. The format of type 1 high-speed injection data frames is the same as that of the configuration command frame.


Through VCID identification, the software processing mode of the CPU module is the same as that for the configuration command frame. Type 2 data users are the on-board high-speed bus node users, and the content of the data domain is a program to be maintained. Based on the definition in Fig. 5, the VCDU data domain here is a user-defined format, see Fig. 7. Type 2 high-speed injection data frames are stored by the storage module and then distributed frame by frame through the high-speed bus network under the control of ground instructions. The distribution frame interval can be set so as to be compatible with the processing capacity of each node.

2.3.3 Emergency Measurement and Control Link Establishment

The system designs an on-orbit autonomous forward-link opening strategy as an emergency plan. The satellite autonomously enters the minimum-energy safe mode if no ground remote control information is received within the prescribed time. The relay terminal subsystem is then powered up and operates independently: the relay antenna points to the relay satellite autonomously and receives the remote control commands and data from the relay satellite, so that the satellite fault can be eliminated and its health restored.

Fig. 7. High-speed data injection (VC2) format description (the 852-byte VCDU data field carries a custom, user-defined data format)

Fig. 8. A design of the high-speed data injection and emergency measurement and control system (relay antenna with sum/difference channels, tracking receiver, terminal controller and Ka forward receiver feeding the system management unit, which connects to the 1553B bus terminals and the SpaceWire router and terminals)

3 Design Examples

3.1 System Design

An example of a LEO satellite design is given to verify the design scheme proposed in this paper. The on-board relay and data-management related equipment and the satellite network are implemented by the relay terminal subsystem and the integrated electronic subsystem, respectively; together they accomplish data transmission from the ground to the satellite, on-board processing, and distribution to the designated equipment. The relay terminal subsystem is mainly composed of the Ka/S antenna (reflector and feed), the tracking receiver, the terminal controller and the Ka forward receiver. The Ka forward receiver receives the Ka-band forward signal transmitted by the data relay satellite, demodulates and decodes it, and sends the decoded signal to the system management unit of the satellite integrated electronic subsystem through an LVDS interface. The system management unit completes data synchronization, transmission protocol processing, data storage and data flow scheduling. The satellite network consists of the 1553B bus network and the SpaceWire network, with the system management unit responsible for network management and configuration. The 1553B bus connects the data interface unit, the control computer and other low-speed data users. The SpaceWire network takes the SpaceWire router as its core, and the SpaceWire nodes communicate through routing, as shown in Fig. 8. The forward link data format is shown in Fig. 5, with a 4-byte synchronization code of 0x1ACFFC1D. After the link is established, data can be transmitted to the satellite at a sustained rate of 6 Mbps.

3.2 System Design

The software injection amount is about 512 KB. When it is injected through the traditional TT&C channel, the uplink code rate of the satellite is 4 kbps, the frame length does not exceed 1024 bytes, the injection frame interval is 2 s, and the minimum injection time is about 30 min. Given the satellite's effective uplink arcs, and even assuming the TT&C arc segments are occupied continuously, it takes at least 7–8 h to complete the data transmission. When the data are injected through the high-speed channel designed in this paper, the data transmission time does not exceed 2 s, and the actual implementation time in orbit does not exceed 1 min (including the channel establishment time).

The autonomous forward-link strategy designed for the satellite is to open the forward link independently after 48–72 h (adjustable) without receiving any daily maintenance or remote control information from the ground. At this time the forward link is opened autonomously and the satellite is controlled to enter the minimum-energy safe mode. The relay terminal subsystem is powered up in parallel for autonomous tracking, and the relay antenna points to the relay satellite independently. When the forward link is opened, the direction of the relay satellite is calculated from the real-time time information on board, and the relay antenna is program-tracked toward the selected relay satellite under the condition that the safety of the equipment is ensured (the pointing angle limit is not exceeded). After the relay antenna has rotated into place, the beacon signal sent by the relay satellite is received to judge the beacon reception quality (via the digital AGC of the tracking receiver). Once the beacon signal is received normally, the satellite is in a condition to receive data on the forward link. The relay antenna is designed with sum and difference feeds, whose signals are input to the tracking receiver through waveguides; the angles by which the relay satellite deviates from the beam center in the pitch and azimuth directions are calculated from the beacon signals received by the sum and difference branches. If the beacon reception quality is good enough (the digital AGC of the tracking receiver exceeds the safety threshold), the terminal controller autonomously steers the relay antenna so that the beam center points to the relay satellite and enters the automatic tracking state; if the beacon reception quality is not ideal (the digital AGC does not reach the threshold), the system remains in the programmed tracking state. Simulation results show that, for the selected user satellite, the longest arc during which the relay satellite is invisible is 128.3 min. Therefore, keeping the relay link tracking on the satellite for 200 min can ensure the establishment of the link with the relay satellite, and the relay forward link can then be used for emergency TT&C command uplink.
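As a rough consistency check of the timing figures quoted at the beginning of this subsection (assuming the 2 s frame interval is added to the frame transmission time, an interpretation not stated explicitly in the text):

```latex
\[
  t_{\mathrm{frame}} = \frac{1024 \times 8\,\mathrm{bit}}{4\,\mathrm{kbps}} \approx 2.05\,\mathrm{s},\qquad
  N = \frac{512\,\mathrm{KB}}{1024\,\mathrm{B}} = 512,
\]
\[
  T_{\mathrm{TT\&C}} \approx 512 \times (2.05 + 2)\,\mathrm{s} \approx 34\,\mathrm{min}
  \;(\text{``about 30 min''}),\qquad
  T_{\mathrm{relay}} = \frac{512 \times 1024 \times 8\,\mathrm{bit}}{6\,\mathrm{Mbps}} \approx 0.7\,\mathrm{s} < 2\,\mathrm{s}.
\]
```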

4 Conclusion

The integrated high-speed data injection and emergency measurement and control system proposed in this paper provides satellites with an efficient 6 Mbps high-speed forward transmission channel based on the high-speed injection channel of the relay Ka-band forward link. It realizes fast on-orbit maintenance of the large-scale software and FPGA programs of the satellite subsystems, avoids the occupation of the measurement and control channel by large-scale data injection, effectively improves the data uplink efficiency, and can respond quickly to new demands and software error corrections. At the same time, it can back up the TT&C uplink capability: when multiple failures of the TT&C integrated transponder and the remote control module cannot be recovered, emergency TT&C of the satellite can still be implemented through the relay forward channel. The system improves the two-way communication capability between the satellite and the ground and has passed verification in actual satellite development, which takes the engineering application of the space data system model a step forward, accumulates important engineering experience, and has strong practical significance.

References

1. Pang B, Hao W, Zhang W et al (2017) Scheme of SRAM-FPGA on-orbit reconfiguration. Spacecraft Eng 26(5):51–56 (in Chinese)
2. Liang G, Gong W, Liu H et al (2010) The design of reconfigurable LEO satellite communication system. J Astronaut 31(1):186–191 (in Chinese)
3. Wu H, Luan J, Zhang L et al (2016) Research of on-orbit reconfigurable fault-tolerant technology for spacecraft avionics system. Spacecraft Eng 25(2):120–126 (in Chinese)
4. CCSDS (2015) CCSDS 132.0-B-2 TM space data link protocol. CCSDS, Washington, DC
5. CCSDS (2010) CCSDS 232.0-B-2 TC space data link protocol. CCSDS, Washington, DC
6. CCSDS (2003) CCSDS 133.0-B-1 space packet protocol. CCSDS, Washington, DC

Imaging Correction Based on AIS for Moving Vessels in Spaceborne SAR Images

Yuting Gao1(&), Guangjun He2, Tao Zhang2, Dongqiang Zhou1, Dong Yang1, and Jindong Li1

1 China Academy of Space Technology, Beijing, China
[email protected]
2 Beijing Institute of Satellite Information Engineering, Beijing, China

Abstract. Moving vessels exhibit defocusing and other artifacts in SAR images, which brings great challenges to subsequent target detection. Combining spaceborne SAR images and AIS information to perform imaging correction analysis of moving vessels is the basis for efficient and accurate detection and identification of moving targets. Firstly, the behaviour of a moving target in a SAR image is analyzed theoretically, and the correctness of the theoretical model is verified by simulation experiments. Then a simulation and correlation analysis of a scene with moving targets is conducted in combination with AIS information. In a scenario with multiple vessels and multiple motion states, the AIS information is introduced together with the SAR results to correct the moving targets and to label missed vessels in the image. The results show that, by combining AIS information and the SAR image, the refocused positions of the moving vessels can be obtained without offset according to the motion state of the target, and vessels hidden from the AIS can be efficiently screened.

Keywords: SAR · AIS · Moving targets · Imaging correction

1 Introduction

Thanks to the advantage of wide-area observation throughout the day, spaceborne SAR has become an important means of monitoring vessel targets in the ocean. With the advancement of SAR technology and the launch of SAR satellites with high resolution and long-term stable operation, spaceborne SAR is more and more widely used in vessel target monitoring [1, 2]. However, due to the inherent speckle noise of SAR, the limited imaging resolution of the conventional strip mode, and the difficulty of constructing a vessel sample library, the detection and recognition of vessel targets in spaceborne SAR images still need to be improved. In addition, the positional deviation, defocusing and other phenomena produced by moving vessel targets in the SAR imaging process further increase the difficulty of extracting target information [3]. Therefore, combining information other than SAR images is of great significance for the detection and recognition of moving targets.


The Automatic Identification System (AIS) is a new type of auxiliary navigation system with functions such as identifying and tracking ships, collision avoidance, environmental protection, and search and rescue. With the rapid development of small-satellite technology, space-based AIS carried on small satellites has become a hotspot of large-scale and even global ocean surveillance research. Typical on-orbit AIS satellite platforms include the VesselSat series of ORBCOMM and the AprizeSat series of SpaceQuest from the United States, the Rubin series of OHB from Germany, and the NTS of COMDEV from Canada. The "Tiantuo No. 1" technical experimental satellite was independently designed and developed by the National University of Defense Technology, and one of its main tasks is to carry out on-orbit experiments with spaceborne AIS. The AIS system on Tiantuo No. 1 covers more than 3,000 km in diameter and can accurately provide data such as vessel name, position, speed and heading within its coverage. However, on-board AIS generally suffers from serious information congestion, and key communication technologies still need to be resolved. When detecting AIS signals from a large number of vessels in the coverage area, a significant number of vessels are missed, so AIS is generally used for monitoring ocean ship targets far from the coast. In consideration of the respective advantages of spaceborne SAR and AIS, vessel detection and identification technology combining SAR and AIS information has attracted attention, and satellites equipped with both SAR and AIS have been launched [4, 5]. In detection and identification tasks that combine SAR images and AIS information, establishing the correspondence between the same ship observed by the different information sources is the premise of subsequent processing. This paper mainly analyzes the vessel positioning and detection problems caused by the migration and defocus generated by the motion of targets in SAR images, and uses the known AIS information to perform position correction and refocusing of vessels in SAR images. From the quality of the correction and refocusing it can be judged whether the correct AIS information has been applied, which is one basis for deciding whether the vessels in the SAR image and the AIS information correspond correctly. On the other hand, extracting the vessels without corresponding AIS information from the corrected SAR images can provide clues to missed targets, and thus help screen unconventional vessels that intentionally keep their AIS silent [6].

2 Performance of Moving Vessels in SAR Images

Figure 1 shows the SAR operating in the right side-looking mode. The satellite moves in the azimuth direction with velocity v; the target is on the right side of the radar; the initial distance between the radar and the moving target is R_r; the velocity of the target is v_r in the range direction and v_a in the azimuth direction; the acceleration of the target is neglected. With the information above, the distance between the radar and the moving target can be written as a function of time:

R(t) = \sqrt{(v t - v_a t)^2 + (R_r + v_r t)^2}    (1)


Fig. 1. Geometric relationship between SAR and moving target

Carrying out the Taylor expansion of Eq. (1) at the moment t = 0 and ignoring high-order terms, the approximation is

R(t) \approx R_r + v_r t + \frac{(v - v_a)^2 t^2}{2 R_r}    (2)

When the radar's transmitted signal is a linear frequency modulated (chirp) signal

s(\tau) = a(\tau) \exp\left( j 2\pi \omega \tau - j \pi K_r \tau^2 \right)    (3)

where a(\tau) is the rectangular envelope of the transmitted pulse, \omega is the carrier frequency and K_r is the linear modulation (chirp) rate, the echo obtained at slow time t is

s(\tau, t) = \rho\, W(t)\, a\!\left(\tau - \frac{2R(t)}{c}\right) \exp\!\left[ j 2\pi \omega \left(\tau - \frac{2R(t)}{c}\right) - j \pi K_r \left(\tau - \frac{2R(t)}{c}\right)^{2} \right]    (4)

Compared with the transmitted signal, the backscattering coefficient of the target \rho and the azimuth antenna pattern W(t) are introduced, and the signal is delayed by the two-way propagation time 2R(t)/c. Substituting Eq. (2) into Eq. (4) and keeping only the baseband signal gives the echo of the moving point target. Since the echo of the moving target contains the motion parameters of the target, it differs from the echo of a stationary target. The target velocity components in both the range and azimuth directions affect the imaging process, which is manifested in the following three aspects [7]:


• The velocity component in the range direction causes a position offset of the target in the azimuth direction;
• The velocity component in the range direction causes a position offset of the target in the range direction;
• The velocity component in the azimuth direction causes defocusing of the image.
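For illustration, the sketch below evaluates the slant-range history of Eq. (2) for a stationary and a moving target and separates the linear residual (which shifts the focused response in azimuth) from the quadratic residual (whose mismatch causes defocus); all numerical values are assumptions and are not the simulation settings of Sect. 3.

```python
# Compare the slant-range history of a stationary and a moving point target.
import numpy as np

v_sat = 7600.0    # platform velocity (m/s), assumed
R_r   = 577e3     # slant range at t = 0 (m), assumed
v_r   = 5.1       # target range velocity (m/s), roughly 10 knots
v_a   = 5.1       # target azimuth velocity (m/s)
t = np.linspace(-1.0, 1.0, 2001)          # slow time (s)

def slant_range(t, v_r, v_a):
    """Eq. (2): R(t) ~ R_r + v_r*t + (v - v_a)^2 * t^2 / (2 R_r)."""
    return R_r + v_r * t + (v_sat - v_a) ** 2 * t ** 2 / (2.0 * R_r)

residual = slant_range(t, v_r, v_a) - slant_range(t, 0.0, 0.0)
quad, lin, _ = np.polyfit(t, residual, 2)    # highest order first
print(f"linear residual   : {lin:.3f} m/s   (range velocity -> azimuth shift)")
print(f"quadratic residual: {quad:.4f} m/s^2 (azimuth velocity -> defocus)")
```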

3 Parameter Settings of the Simulation and Results Analysis

According to the theoretical analysis in Sect. 2, the targets are simulated and analyzed under a set of parameters, which are listed in Table 1. The simulation results are shown in Table 2.

Table 1. Parameter settings of the simulation

Height of the satellite | 500 km
Bandwidth | 65 MHz
Wavelength | 0.1 m
Incident angle | 30°
Antenna length | 10 m
Azimuth resolution | 5 m
Range resolution | 5 m

Table 2. Results of the moving vessels

Magnitude of speed (×10 knots) | Direction of speed | Peak sidelobe ratio in range direction (dB) | Peak sidelobe ratio in azimuth direction (dB) | Offset in range direction (m) | Offset in azimuth direction (m)
0 | 0 | 13.52 | 19.32 | 0 | 0
1 | Azimuth direction | 13.52 | 14.66 | 0 | 0
2 | Azimuth direction | 13.5 | 8.64 | 0 | 0
3 | Azimuth direction | 13.46 | 6.18 | 0 | 0
1 | Range direction | 13.44 | 18.8 | 0 | 212
2 | Range direction | 13.34 | 18.3 | 0 | 423
3 | Range direction | 13.74 | 17.88 | −0.92 | 635
1 | 45° | 13.46 | 16.06 | 0 | 150
2 | 45° | 13.38 | 12.84 | 0 | 300
3 | 45° | 13.32 | 9.36 | 0 | 450


According to the offset of the target in the azimuth direction caused by the range velocity, when the velocity is 10 knots the theoretical azimuth deviation is 211.68 m while the simulation result is 212 m, which confirms the correctness of the azimuth migration model described in Sect. 2. Since the position of the vessels at the moment T_a/2 is used as the offset reference point during the simulation, the offset in the range direction is almost 0 m, which is in accordance with the range-offset model described in Sect. 2. During the imaging process no window is applied in the range direction, while a Hamming window is applied in the azimuth direction. It can be seen from the peak sidelobe ratio results that the value in the range direction is almost the ideal 13 dB, while in the azimuth direction it decreases with increasing azimuth velocity, owing to the defocus caused by the azimuth velocity.

4 Moving Targets Scene Simulation and Association Analysis Combined with AIS Information

In this section, a comprehensive scene simulation with two or more vessels in multiple motion states is used to verify the guidance that AIS information provides for imaging correction, and to exploit the AIS to recognize targets and judge whether they are silent. Firstly, a scene with multiple vessels in multiple motion states is simulated for a total of 60 s; at the 30th second, SAR imaging of the scene is carried out. It is also assumed that the AIS information of the scene at both second 0 and second 60 is obtained. With the AIS information from before and after imaging, the imaging correction of the moving targets in the SAR image, the recognition of targets, and the judgment of silent targets are performed. The settings and the observed AIS information are shown in Table 3.

Table 3. The observed AIS information

AIS time (s) | Vessel number | Magnitude of velocity (knot) | Direction of velocity (°) | Position (100 m)
0 | 1 | 20 | 0 | (−43.09, −40)
0 | 2 | 0 | 0 | (−23, 0)
0 | 3 | 20 | −90 | (0, 26.91)
0 | 4 | 30 | 0 | (6.91, 30)
0 | 5 | 20 | 0 | (21.91, 10)
0 | 6 | 10 | 45 | (8.2422, −18.2422)
0 | 7 | 20 | −45 | (8.2422, −41.755)
60 | 1 | 20 | 0 | (−36.91, −40)
60 | 2 | 0 | 0 | (−23, 0)
60 | 3 | 20 | −90 | (0, 33.09)
60 | 4 | 30 | 0 | (16.18, 30)
60 | 5 | 20 | 0 | (28.09, 10)
60 | 6 | 10 | 45 | (10, −20)
60 | 7 | 20 | −45 | (11.755, −38.245)


The radar image obtained at second 30 is shown in Fig. 2.

Fig. 2. SAR image

It can be seen from the enlarged parts of the SAR image that the defocusing of the vessel moving at 20 knots in the azimuth direction is obvious and its brightness is clearly lower than that of the vessel moving at 20 knots in the range direction, while the vessel moving at 20 knots in the range direction has almost the same brightness as the stationary vessel. In addition, the two vessels in the lower enlarged image overlap in the SAR image, which is due to the offset caused by their motion in the range direction. Based on the position and velocity information of the vessels in the AIS, interpolation yields the predicted positions at second 30, and the predicted positions of the vessels in the SAR image are then calculated from these positions and the velocity information. The nearest neighbour algorithm is used to associate them with the targets in the simulated SAR image, after which the imaging of the moving targets is corrected [8] and accurately focused targets at their real positions in the SAR image are obtained. It can be seen from Fig. 3 that the coincident vessels in the original SAR image are separated and placed at their correct positions.
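For illustration, the interpolation and nearest-neighbour association step can be sketched as follows; the SAR detection coordinates and the gating distance are hypothetical, and only the first two AIS tracks of Table 3 are used.

```python
# Interpolate AIS tracks to the imaging time and match them to SAR detections.
import numpy as np

def predict_positions(p0, p1, t0=0.0, t1=60.0, t_img=30.0):
    """Linear interpolation of AIS positions (N x 2 arrays) to t_img."""
    w = (t_img - t0) / (t1 - t0)
    return (1.0 - w) * p0 + w * p1

def nearest_neighbour_match(predicted, detected, gate=500.0):
    """Index of the closest SAR detection for each AIS prediction, or -1 when
    nothing lies within the gating distance (metres)."""
    matches = []
    for p in predicted:
        d = np.linalg.norm(detected - p, axis=1)
        j = int(np.argmin(d))
        matches.append(j if d[j] <= gate else -1)
    return matches

# Positions in metres: AIS tracks 1-2 of Table 3 (x100 m), SAR detections invented.
ais_t0  = np.array([[-4309.0, -4000.0], [-2300.0, 0.0]])
ais_t60 = np.array([[-3691.0, -4000.0], [-2300.0, 0.0]])
sar_det = np.array([[-4000.0, -3800.0], [-2310.0, 5.0], [900.0, 120.0]])
pred = predict_positions(ais_t0, ais_t60)
print(nearest_neighbour_match(pred, sar_det))
# SAR detections left without any AIS match are candidates for "AIS-silent" vessels.
```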


Fig. 3. Corrected SAR image

In addition, 9 vessels can be seen in the SAR image, while only 7 appear in the AIS information. Further analysis of the two vessels without AIS information can increase the efficiency of obtaining sensitive targets.

5 Conclusion

Based on the analysis of the offset and defocus generated in SAR images by the motion of targets, this paper uses the known AIS information to perform position correction and refocusing of vessels in SAR images. The correctness of the offset and focus models is verified, and the importance of AIS information in SAR imaging correction is demonstrated. On the other hand, extracting vessels without AIS information from the corrected SAR images can provide clues for finding vessels that do not carry AIS or are missed in AIS detection, and thus certain military targets that intentionally keep their AIS silent can be screened. SAR images and AIS information can therefore assist each other to identify moving vessels more accurately and efficiently.

References

1. Ji K, Xing X, Chen W et al (2014) Ship classification in TerraSAR-X SAR images based on classifier combination. In: IEEE international geoscience and remote sensing symposium (IGARSS), pp 2589–2592
2. Chen W, Xing X, Ji K (2012) A survey of ship target recognition in SAR images. Modern Radar 34(11):53–58 (in Chinese)
3. Vachon PW (2006) Ship detection in synthetic aperture radar imagery
4. Hoellisch D, Bach K, Janoth J et al (2011) On the second generation of TerraSAR-X. In: European conference on synthetic aperture radar. VDE, pp 1–4
5. Northrop Grumman Corporation: JIB antennas from Northrop Grumman Astro Aerospace will support ship identification capability being added to Canada's RADARSAT constellation mission. https://www.marketscreener.com/NORTHROP-GRUMMAN-CORPORAT-13763/news/Northrop-Grumman-Corporation-JIB-Antennas-From-Northrop-Grumman-AstroAerospace-Will-Support-Ship-17219649/
6. Vachon PW, Dragosevic MV (2009) Estimation of ship velocity by adaptive processing of single aperture RADARSAT-2 data: validation with AIS data from the Strait of Gibraltar. Technical Memorandum DRDC Ottawa 109:2–31
7. Zou B, Zhang L, Kou L et al (2008) Characteristic analysis and simulation of SAR moving targets. Radar Science and Technology 6(2):116–122 (in Chinese)
8. Zhang L (2006) Using synthetic aperture radar (SAR) to imaging ground moving target (GMTI). Electronic Engineer 32(2):1–4 (in Chinese)

Research on Flying Catkins Detection and Removal in Target Video

Hualin Liu(&), Haipeng Wang, Limin Zhang, and Xueteng Li

Naval Aviation University, Yantai, China
{447687127,574862300}@qq.com, {whp5691,iamzlm}@163.com

Abstract. In order to solve the practical application problem of the computer-vision-based automatic target-scoring system under dynamic interference conditions such as flying catkins, this paper improves the traditional frame difference and mean value detection and removal methods and, by fully studying the characteristics of flying catkins in target video and the existing rain and snow removal algorithms, proposes an effective flying catkins detection and removal algorithm based on time-domain and brightness characteristics. Experiments show that the method can effectively solve the missed-detection problem caused by the multiple motion states of flying catkins and achieve rapid removal of flying catkins from target video with good robustness and timeliness.

Keywords: Target video · Flying catkins · Detection and removal

1 Introduction

With the development of image processing technology, the computer-vision-based automatic target-scoring system has been widely applied due to its advantages of speed, accuracy and low cost [1]. However, the system design is mostly based on interference-free video, and the quality of real-time images obtained in the real environment usually degrades to different degrees. A shooting range is generally built in an open space, with earth or brick walls and tall trees planted around it. In April and May, poplars produce a large amount of flying catkins. They not only increase the difficulty of shooting, but also reduce the visual quality of the target video and blur the target, which seriously interferes with the detection and identification of bullet holes in the target-scoring system. Therefore, the removal of flying catkins from target video has important practical significance. Research on dynamic disturbances such as flying catkins has long focused on the removal of rain and snow. For general rain and snow scenes, a model is usually built from the visual or physical characteristics of rain and snow for detection, and the detection results are then used to guide the removal of rain and snow and to restore clear video images [2, 3]. Examples include the raindrop optical model and dynamic model proposed by Gray and Nayar [4], the raindrop color model proposed by Zhang et al. [5], the snowflake removal filter based on the optical characteristics of snowflakes designed by Liu et al. [6], and the improved active contour model proposed by Sun et al. [7]. These methods remove rain and snow well under certain conditions.


Based on the actual application of the target-scoring system, this paper studies the problem of flying catkins interference in target video. Combining the analysis of flying catkins characteristics in target video with the actual needs of automatic target-scoring, this paper proposes an effective flying catkins removal algorithm based on time-domain and brightness characteristics. Based on the frame difference method, the algorithm improves the constraint condition for flying catkins detection, adds candidate positions of flying catkins, and realizes the detection of all flying catkins. On the basis of raindrop removal by the mean value method, a region-averaging operation on the current frame at the candidate positions of flying catkins is added, which effectively solves the problem of incomplete removal caused by partial overlap of flying catkins between video frames. Experiments show that the algorithm can quickly and effectively detect and remove flying catkins in target video.

2 Analysis of Flying Catkins Characteristics

Video images taken in the actual shooting range environment are seriously affected by flying catkins and exhibit irregular white interference in the form of dots, blocks or lines. These interferences have high brightness, various shapes and sizes, and complex and diverse motion features, which not only reduce the visual quality and application value of the images, but also seriously affect the detection and extraction of bullet holes in the automatic target-scoring system.

2.1 Physical Characteristics of Flying Catkins

Flying catkins are usually white flocculent clusters formed by poplar catkins sticking together. They have no fixed shape, small mass and a light body, and they vary in size but usually do not exceed 3 cm². Their motion state is complex: the speed and direction of falling catkins are affected by a variety of factors, including their own weight, volume and shape, wind speed and other natural environmental factors, and they can move at high speed, at low speed, or even stop for a short time.

2.2 Brightness Characteristics of Flying Catkins

Flying catkins are white and, under the reflection of light, have a brightness higher than the background. At the same time, owing to their structure and composition, they appear somewhat transparent. Therefore, when a pixel in the image is covered by flying catkins, its brightness is affected by both the background and the catkins. If the frame rate of the target video is V, the exposure time of each frame is 1/V; let the time for which a pixel is covered by flying catkins be s, with s far less than 1/V. The brightness of the pixel I_{bf} is then

I_{bf} = \int_0^{s} E_f \, dt + \int_{s}^{1/V} E_b \, dt    (1)


where E_f is the flying catkins brightness and E_b is the background brightness. Let E_b be the average brightness of the background and E_f the average brightness of the flying catkins, and let \alpha = s/(1/V). The brightness change \Delta I caused by the coverage of flying catkins is then given by

I_f = E_f (1/V)    (2)

I_b = E_b (1/V)    (3)

I_{bf} = \alpha I_f + (1 - \alpha) I_b    (4)

\Delta I = I_{bf} - I_b = \alpha I_f - \alpha I_b    (5)

I_f and I_b are the total brightness of the flying catkins and of the background at the pixel position within the exposure time 1/V, respectively.
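For completeness, the step from the exposure-time integral (1) to the mixing model (4) and the brightness change (5) can be written out explicitly:

```latex
% Substituting the average brightnesses into Eq. (1) and using \alpha = s/(1/V):
\[
  I_{bf} = E_f\, s + E_b\!\left(\tfrac{1}{V} - s\right)
         = \alpha\, E_f \tfrac{1}{V} + (1-\alpha)\, E_b \tfrac{1}{V}
         = \alpha I_f + (1-\alpha) I_b ,
  \qquad
  \Delta I = I_{bf} - I_b = \alpha\,(I_f - I_b).
\]
```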

2.3 Time Domain Characteristics of Flying Catkins

In the actual shooting range environment, flying catkins move randomly in the air under their own gravity and external forces, so in the target video no pixel of a video frame is covered by flying catkins all the time. Along the whole timeline a pixel therefore has two states: a high brightness value indicates the presence of flying catkins, and a low brightness value indicates their absence.

3 Flying Catkins Detection and Removal

As an important preprocessing step of the computer-vision-based automatic target-scoring system, the removal of flying catkins from target video must not affect bullet hole detection or the overall target-scoring work of the system, and the algorithm complexity should be reduced as far as possible to improve operating efficiency and lay the foundation for real-time target-scoring. Following the relevant research on rain and snow removal algorithms, the removal of flying catkins from target video can be divided into dynamic interference detection and removal [3].

3.1 Frame Difference Method Raindrop Removal Algorithm

Let the mass of a raindrop be m, the ambient wind speed be v_1, and the falling speed be v_2. Under the influence of gravity g, air resistance F_1, buoyancy F_2, etc. [8], raindrops move uniformly after reaching a constant speed during their fall. Gunn and Kinzer [9] described the relationship between the final falling velocity v of a raindrop and its diameter d as

v = 200 \sqrt{d}    (6)

Meanwhile, the movement direction of raindrops [10] can be described by

F_1 = \rho V g    (7)

F_2 = 3 \pi \mu D v_2    (8)

\Phi = \tan^{-1}\!\left[ \frac{m}{3\pi\mu D} \left( g - \frac{\rho V g}{m} \right) \frac{1}{v_1} \right]    (9)

where \rho is the air density, V is the volume of the raindrop, and \mu is the viscosity. It can be seen from the above analysis that raindrops have relatively clear motion characteristics. Gray and Nayar [4, 11] observed that, in video disturbed by rain, the same pixel is not occupied by raindrops all the time. Combining this prior information with the brightness characteristics of raindrops, they proposed the frame difference method and the mean value method to remove raindrops. In this method, three consecutive frames I_{n-1}, I_n and I_{n+1} of the video are taken on the premise that the background is static, and the brightness constraint is established by inter-frame subtraction, as shown in (10). The positions of pixels covered by raindrops in the current frame can thus be roughly obtained; the motion characteristics of raindrops are then used to eliminate interference and refine the raindrop positions. Finally, the mean of the pixel values at these positions in the two frames before and after the current frame is used to replace the pixel values at the raindrop positions of the current frame, so as to effectively remove the raindrops.

\Delta I = I_n - I_{n-1} = I_n - I_{n+1} \ge c    (10)

where c is the brightness increase caused by the interference, usually taken as 2. Based on the research of Gray et al., Zhang et al. [12] changed the way video frames are selected and adopted a five-frame difference method for raindrop detection, with the constraint condition correspondingly becoming

\Delta I = I_n - I_{n-2} = I_n - I_{n+2} \ge c    (11)

Regarding the removal of raindrops, Zhang et al. proposed two methods. In the first, the four frames before and after the current frame are used, and the result of (12) determines whether the mean of I_{n-1} and I_{n+1} or the mean of I_{n-2} and I_{n+2} is used for image restoration. In the second, the pixel values at the corresponding positions in the four frames are sorted, and the mean of the two smaller pixel values is used for raindrop removal. The improved method effectively handles the situation in which the same pixel is covered by different raindrops in two consecutive frames.

A = \frac{I_{n-1} + I_{n+1}}{2} - \frac{I_{n-2} + I_{n+2}}{2}    (12)

3.2 A Flying Catkins Removal Algorithm Based on Time Domain and Brightness Characteristics

Flying catkins are dynamic interference, in contrast to a bullet hole, which remains unchanged once it is on the target; within a certain time interval, the same position is not continuously disturbed by the same catkin. Therefore, the frame difference method can be considered for flying catkins detection. Flying catkins are light, and their speed and direction are affected much more by external forces such as wind than by gravity. Unlike raindrops, which keep moving at a relatively high speed before reaching the ground [13], flying catkins move randomly and irregularly in space, producing a variety of motion states, including slow motion over short periods. As shown in Figs. 1, 2 and 3, pixels in two adjacent frames can be continuously blocked by the same catkin, so the traditional frame difference method, which differences adjacent frames, cannot detect all flying catkins completely.

Fig. 1. Current frame image

Fig. 2. Previous frame


Fig. 3. The next frame

According to the characteristic analysis of the flying catkins, in addition to the brightness and time domain characteristics, there is no clear attribute feature that can be used to effectively detect and extract it. Meanwhile, the movement of leaves and other background in the actual target video will bring interference to the detection and extraction of bullet holes. Therefore, from the perspective of the brightness characteristics, combined with the time domain information of the target video, based on the five-frame difference method raindrop removal algorithm [12], the flying catkins detection constraints are improved, and In2 , In and In þ 2 frame images in target video are selected to detect the brightness change of pixels in time-domain information through frame difference, as shown in (13), that is, the body position of flying catkins, wherein c is the brightness increase value when there is interference, usually 2. On this basis, the morphological processing [14] is carried out by using the expansion method of the main position of the flying catkins, and the candidate positions that may contain part of the flying catkins are obtained by image algebraic operation. At the same time, the interference fine detection part is removed and the dynamic interference detection is used to roughly replace the flying catkins detection to detect all the dynamic interference in the target video. 

$$DI_1 = I_n - I_{n-2} \geq c, \qquad DI_2 = I_n - I_{n+2} \geq c \qquad (13)$$
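A minimal sketch of the detection constraint (13) using NumPy and OpenCV is given below; the frame variables, the threshold c and the dilation kernel size are illustrative assumptions, not values fixed by the paper.

```python
import cv2
import numpy as np

def detect_catkins(frame_prev2, frame_cur, frame_next2, c=2, kernel_size=5):
    """Detect the main body of flying catkins via the five-frame difference
    constraint (13): a pixel is flagged only if it is brighter than the same
    pixel in both I_{n-2} and I_{n+2} by at least c (grayscale uint8 frames)."""
    cur = frame_cur.astype(np.int16)
    di1 = cur - frame_prev2.astype(np.int16)   # I_n - I_{n-2}
    di2 = cur - frame_next2.astype(np.int16)   # I_n - I_{n+2}
    main_body = ((di1 >= c) & (di2 >= c)).astype(np.uint8) * 255

    # Morphological dilation gives candidate positions that may contain
    # part of a catkin (Sect. 3.2); the extra ring is the candidate region.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    dilated = cv2.dilate(main_body, kernel)
    candidates = cv2.subtract(dilated, main_body)
    return main_body, candidates
```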

For the removal of flying catkins, once their positions have been detected, the time-domain information can be exploited, because the same pixel in a dynamic scene cannot be covered by catkins all the time: scenes not covered by flying catkins are used to restore the pixel values of the interfered image. From the detection method it is known that the corresponding positions in the frames $I_{n-2}$ and $I_{n+2}$ are not disturbed by the flying catkins. In this paper, the average brightness of the pixels at those positions is selected as the image restoration value, which effectively removes the main body of the flying catkins. At the candidate positions, a regional mean computed on the current frame is used to replace the pixel values of the region, which effectively solves the partial miss-detection caused by the motion state of the flying catkins.
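The restoration step can be sketched as follows; the window size for the regional mean is an illustrative choice, and the inputs follow the same assumptions as the detection sketch above.

```python
import cv2
import numpy as np

def remove_catkins(frame_prev2, frame_cur, frame_next2, main_body, candidates, win=7):
    """Restore pixels disturbed by flying catkins.
    Main-body pixels take the temporal mean of I_{n-2} and I_{n+2};
    candidate pixels take a regional (spatial) mean of the current frame."""
    restored = frame_cur.copy()

    # Temporal restoration: the same positions in I_{n-2} and I_{n+2} are clean.
    temporal_mean = ((frame_prev2.astype(np.uint16) +
                      frame_next2.astype(np.uint16)) // 2).astype(np.uint8)
    restored[main_body > 0] = temporal_mean[main_body > 0]

    # Spatial restoration: regional mean of the current frame for candidate pixels.
    regional_mean = cv2.blur(frame_cur, (win, win))
    restored[candidates > 0] = regional_mean[candidates > 0]
    return restored
```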

3.3 Analysis of Results

In this paper, the current frame of the target video is selected as shown in Fig. 3. The size and shape of the flying catkins are extremely similar to bullet holes, which seriously interferes with bullet hole detection in the automatic target-scoring system. The current frames subject to flying catkins interference are processed by the traditional frame difference method and by the improved interference removal algorithm of this paper, respectively. The binary images of the detected dynamic interference, including flying catkins, are shown in Figs. 4, 5 and 6. It can be clearly seen that the improved interference removal algorithm detects the flying catkins more completely and accurately than the traditional frame difference method, avoiding the partial omission of catkins caused by slow movement over a short time. The removal of flying catkins relies on the accurate detection of their positions. Figures 7 and 8 show the results of the traditional frame difference method and the improved interference removal algorithm, respectively. The method of this paper not only effectively solves the problem of detecting and removing slow-moving flying catkins over short intervals, but also removes the dynamic interference that affects bullet hole detection, obtaining a good interference removal effect while improving the operating efficiency of the algorithm.

Fig. 4. The location of flying catkins


Fig. 5. The main body of flying catkins

Fig. 6. Candidate location of flying catkins

Fig. 7. Interference removal result 1


Fig. 8. Interference removal result 2

4 Summary and Prospect

This paper studies problems related to rain and snow removal and analyzes the physical and visual characteristics of flying catkins in a real shooting-range environment. Under the premise that bullet hole detection is not affected, the frame-difference-based raindrop removal algorithm is improved, and a flying catkins removal algorithm based on time-domain and brightness characteristics is proposed, which effectively handles the removal of flying catkins in special motion states and realizes the effective removal of all flying catkins in the target video. Experiments show that the method is simple, fast and effective; it can complete the target video pre-processing for the automatic target-scoring system and, to some extent, eliminate the impact of jitter and illumination changes.

References

1. Luo J, Zhang Z (2016) Survey on automatic target-scoring system based on image processing technology. Laser J 37(07):1–6
2. Xu Y (2017) Video enhancement under bad light. Beijing University of Posts and Telecommunications, China
3. Shi X (2016) Research on rain detection and removal in single image. Beijing Jiaotong University, China
4. Garg K, Nayar SK (2004) Detection and removal of rain from videos. In: Proceedings of the 2004 IEEE computer society conference on computer vision and pattern recognition, vol 1, pp I–I
5. Zhang X, Li H, Qi Y et al (2006) Rain removal in video by combining temporal and chromatic properties. In: 2006 IEEE international conference on multimedia and expo, pp 461–464
6. Liu H, Ma L, Cai X et al (2009) A closed-form solution to video matting of natural snow. Inf Process Lett 109(18):1097–1104


7. Sun Y, Duan X, Yu Z (2011) Research on removal algorithm of rain and snow from images based on improved snake model. Appl Res Comput 28(05):1991–1993
8. Batchelor CK, Batchelor GK (1967) An introduction to fluid dynamics. Cambridge University Press, pp 214–215
9. Gunn R, Kinzer GD (1949) The terminal velocity of fall for water droplets in stagnant air. J Meteorol 6(4):243–248
10. Zhou P (2017) Review of rain removal techniques in videos and images. J Graph 38(5):629–646
11. Garg K, Nayar SK (2007) Vision and rain. Int J Comput Vision 75(1):3–27
12. Zhang Y, Chen Q, Liu Y (2007) Research on the method of rain detection and removal in video image. Microcomput Appl (12):16–20+68
13. Cui X (2014) The removal of rain and snow from video images based on statistical learning of spatiotemporal property. Beijing University of Posts and Telecommunications, China
14. Gonzalez RC, Woods RE, Eddins SL (2005) Digital image processing using MATLAB. Publishing House of Electronics Industry, Beijing, pp 255–261

Robust Context-Aware Tracking with Temporal Regularization

Tianhao Li1, Tingfa Xu1,2(B), Yu Bai1, Axin Fan1, and Ruoling Yang1

1 Image Engineering and Video Technology Lab, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
2 Key Laboratory of Photoelectronic Imaging Technology and System, Ministry of Education of China, Beijing 100081, China

Abstract. Discriminative correlation filters demonstrate superior tracking capabilities, yet still suffer from background clutter. The context-aware correlation filter (CACF) framework effectively reduces the interference of background noise by explicitly incorporating global context information. However, sequential (temporal) context information is still not considered. This work proposes a robust context-aware tracker based on hand-crafted features by adding a temporal regularization. The temporal regularization term provides temporal information for learning the filter, which limits abrupt changes of the filter. Experiments on OTB-100 show that our tracker achieves excellent accuracy and significantly improves the robustness of CF trackers and of trackers in the CACF framework.

Keywords: Correlation filter · Context-aware tracking · Temporal regularization

1 Introduction

Computer vision is one of the current research hotspots. Object tracking, as a core problem in computer vision, plays a major role in surveillance, military strikes and robotics. Consistent with other image processing tasks, object tracking consists of three parts: obtaining images from imaging devices, processing the images on a computer or embedded system, and providing an intelligent interface to operating personnel or higher-level signals to the next system. In the field of object tracking, a large number of tracking datasets, such as OTB-2013 [19], OTB-2015 [20], TC-128 [14] and UAV123 [16], and competitions such as VOT and MOT, provided by various laboratories and research organizations, not only offer great convenience for research but also attract a large number of scholars. In this paper, we target single-object tracking. The bounding box of the tracked object is obtained by manual annotation or by a target detection method in the first frame of the tracking


video. Then, in the remaining frames of the video, the tracker maintains the tracking state of the target and draws the bounding box. With the improvement of computing performance, the applications of object tracking become more and more prevalent. However, object tracking still faces many difficult situations such as occlusion, illumination variation and scale variation. In this paper, we propose a robust context-aware tracker that learns temporal structure. We use a rich context-aware filter that combines temporal information with the background information around the target. The model considers both the spatial constraint and the temporal constraint, which provides more robust tracking performance. At the same time, APCE [18] is used as the occlusion criterion for the model update.

2 Related Works

Recently, tracking-by-detection methods have performed well for single-target tracking. These methods are usually composed of target feature extraction, model representation, model update and position detection. Correlation filter tracking is one of the tracking-by-detection methods with superior performance and fast computation. Cross-correlation or correlation filters are commonly used in signal processing to measure the correlation of signals, and are now widely employed in target detection and object tracking. The minimum output sum of squared error (MOSSE) filter [3] first used a correlation filter for target tracking. It learns a correlation filter from a single frame by solving a model equation designed to make the response of the correlation between the filter and the candidate area as close as possible to a standard Gaussian response. CSK [9] uses circulant samples and the kernel trick to train the filter with dense samples, which improves the tracking performance of correlation filtering. Danelljan et al. [4] adopt the color-naming feature to improve performance under deformation. Based on CSK, KCF [10] adopts the HoG feature instead of raw pixels; the HoG feature describes objects better even under deformation. To handle scale variation, SAMF [13] learns filters at different scales to detect both the position and the scale of the target, while DSST [5] detects the position first and then estimates the scale with a dedicated scale filter. SRDCF [6] penalizes correlation filter coefficients with a spatial regularization component, which considers spatial weights in the learning process. BACF [11] suppresses the negative impact of the background by enlarging the search box and cropping effectively. Recently, Mueller et al. [17] proposed the context-aware correlation filter framework, which incorporates global contextual background information during correlation filter learning and achieves more advanced performance. Further, approaches [2,7,8,12,15] using features extracted from deep neural networks have shown notable improvements. C-COT [7] transforms discrete features into a continuous space by interpolation, resulting in more accurate position prediction. In addition, time constraints [12] have been added to the training process, achieving excellent performance.

3 Context-Aware Correlation Filter Framework

To improve the robustness of the tracker, Mueller et al. propose a new framework named CACF [17] (context-aware correlation filter), which adds more background information as negative samples during filter training. Compared with the conventional CF, additional samples are collected separately in several directions around the target area, and the circulant matrices generated from these samples serve as additional hard negative samples that effectively suppress background interference.

$$\min_{\omega} \|X\omega - y\|^2 + \lambda_1\|\omega\|^2 + \lambda_2\sum_{i=1}^{k}\|X_i\omega\|^2. \qquad (1)$$

Here, $X_i$ is the $i$th group of additional negative samples. The optimization requires $X_i\omega$ to be as close as possible to zero, which means $X_i$ acts as a negative sample matrix. This allows us to train a filter that produces a high response at a reliable positive sample and a near-zero response at cluttered background samples. By stacking the additional negative samples with the original samples, Eq. 1 can be rewritten as

$$\min_{\omega} \|A\omega - \bar{y}\|^2 + \lambda_1\|\omega\|^2, \qquad (2)$$

where

$$A = \begin{bmatrix} X \\ \sqrt{\lambda_2}X_1 \\ \vdots \\ \sqrt{\lambda_2}X_k \end{bmatrix} \quad \text{and} \quad \bar{y} = \begin{bmatrix} y \\ 0 \\ \vdots \\ 0 \end{bmatrix}. \qquad (3)$$

It still has a closed-form solution with one-dimensional features:

$$\omega = (A^T A + \lambda_1 I)^{-1} A^T \bar{y}. \qquad (4)$$

The solution in the Fourier domain is obtained as for the standard CF tracker:

$$\hat{\omega} = \frac{\hat{x}^* \odot \hat{y}}{\hat{x}^* \odot \hat{x} + \lambda_1 + \lambda_2\sum_{i=1}^{k}\hat{x}_i^* \odot \hat{x}_i}. \qquad (5)$$
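A minimal NumPy sketch of this closed-form update is given below; with the default lam3 = 0 and no previous filter it reproduces Eq. (5), while supplying lam3 > 0 and the previous filter gives the temporally regularized variant of Eq. (8) in Sect. 4. The function and variable names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def train_filter(x, y, context, w_prev_hat=None, lam1=1e-4, lam2=0.4, lam3=0.0):
    """Closed-form context-aware filter in the Fourier domain (single channel).
    x: 1-D target feature; y: desired Gaussian-shaped response;
    context: list of k context patches (hard negative samples);
    w_prev_hat: DFT of the filter from the previous frame (optional)."""
    x_hat = np.fft.fft(x)
    y_hat = np.fft.fft(y)
    if w_prev_hat is None:
        w_prev_hat = np.zeros_like(x_hat)

    num = np.conj(x_hat) * y_hat + w_prev_hat
    den = np.conj(x_hat) * x_hat + lam1 + lam3
    for xi in context:                          # hard negative (context) terms
        xi_hat = np.fft.fft(xi)
        den += lam2 * np.conj(xi_hat) * xi_hat
    return num / den                            # \hat{\omega}
```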

4 Proposed Method

4.1 Rich Context Aware Tracker

In the CACF framework, additional negative samples are extracted from the background located at the top, bottom, left and right of the target, which provides a spatial regularization in training the filter. In order to obtain richer context information, a temporal regularization is adopted as well. Specifically, considering that the deformation of the target in consecutive frames is relatively small


due to the short time interval, the filter should not deviate too much from the one learned in the previous frame:

$$\min_{\omega} \|X\omega - y\|^2 + \lambda_1\|\omega\|^2 + \lambda_2\sum_{i=1}^{k}\|X_i\omega\|^2 + \lambda_3\|\omega - \omega_{t-1}\|^2. \qquad (6)$$

Setting the gradient of Eq. 6 to zero, the optimal $\omega$ is obtained:

$$\omega = (A^T A + \lambda_1 I + \lambda_3 I)^{-1}(A^T \bar{y} + \omega_{t-1}). \qquad (7)$$

Transforming the solution into the Fourier domain gives

$$\hat{\omega} = \frac{\hat{x}^* \odot \hat{y} + \hat{\omega}_{t-1}}{\hat{x}^* \odot \hat{x} + \lambda_1 + \lambda_2\sum_{i=1}^{k}\hat{x}_i^* \odot \hat{x}_i + \lambda_3}. \qquad (8)$$

4.2 Multi-channel Features

In order to improve the discriminative ability, we use HOG (histogram of oriented gradients) and CN (color-naming) features, which are complementary and multi-dimensional. These features can effectively adapt to target deformation, illumination changes, etc. Using Eq. 3, the objective becomes

$$\min_{\omega} \Big\|\sum_{d=1}^{D} A^{(d)}\omega^{(d)} - \bar{y}\Big\|^2 + \lambda_1\sum_{d=1}^{D}\|\omega^{(d)}\|^2 + \lambda_3\sum_{d=1}^{D}\|\omega^{(d)} - \omega_{t-1}^{(d)}\|^2. \qquad (9)$$

The superscript $(d)$ refers to the $d$th channel of the features. Equation 9 can be expressed in the frequency domain as

$$\min_{\hat{\omega}} \|\hat{A}\hat{\omega} - \hat{\bar{y}}\|^2 + \lambda_1\|\hat{\omega}\|^2 + \lambda_3\|\hat{\omega} - \hat{\omega}_{t-1}\|^2, \qquad (10)$$

where

$$\hat{A} = \begin{bmatrix} \mathrm{diag}(\hat{x}^{(1)}) & \mathrm{diag}(\hat{x}^{(2)}) & \cdots & \mathrm{diag}(\hat{x}^{(D)}) \\ \mathrm{diag}(\sqrt{\lambda_2}\hat{x}_1^{(1)}) & \mathrm{diag}(\sqrt{\lambda_2}\hat{x}_1^{(2)}) & \cdots & \mathrm{diag}(\sqrt{\lambda_2}\hat{x}_1^{(D)}) \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{diag}(\sqrt{\lambda_2}\hat{x}_k^{(1)}) & \mathrm{diag}(\sqrt{\lambda_2}\hat{x}_k^{(2)}) & \cdots & \mathrm{diag}(\sqrt{\lambda_2}\hat{x}_k^{(D)}) \end{bmatrix} \quad \text{and} \quad \hat{\omega} = \begin{bmatrix} \hat{\omega}^{(1)} \\ \hat{\omega}^{(2)} \\ \vdots \\ \hat{\omega}^{(D)} \end{bmatrix}. \qquad (11)$$

Setting the gradient of Eq. 10 to zero, we obtain the closed-form solution

$$\hat{\omega} = (\hat{A}^H\hat{A} + \lambda_1 I + \lambda_3 I)^{-1}(\hat{A}^H\hat{\bar{y}} + \hat{\omega}_{t-1}). \qquad (12)$$

Equation 12 is similar to Eq. 7 solved in one dimension. In Eq. 7 the component $A^T A$ can be diagonalized by the circulant matrix theorem, which is impossible in Eq. 12, because the samples of different channels are not independent and we cannot treat each dimension separately. Since the size of $\hat{A}$ is $(k+1)n \times nD$ and the size of $\hat{A}^H\hat{A}$ is $nD \times nD$, which are particularly large, the time required to compute the matrix inverse directly is unacceptable.


Fortunately, the equation can be decomposed into n sub-equations because all of the blocks in $\hat{A}^H\hat{A}$ are diagonal, and the computational complexity is reduced accordingly. In the $i$th sub-equation, the $i$th element of $\hat{\omega}^{(d)}$ for all $d \in D$ can be solved simultaneously:

$$\begin{bmatrix} \hat{\omega}^{(1)}(i) \\ \hat{\omega}^{(2)}(i) \\ \vdots \\ \hat{\omega}^{(D)}(i) \end{bmatrix} = \left(\lambda I + \begin{bmatrix} k_{11}(i) & k_{12}(i) & \cdots & k_{1D}(i) \\ k_{21}(i) & k_{22}(i) & \cdots & k_{2D}(i) \\ \vdots & \vdots & \ddots & \vdots \\ k_{D1}(i) & k_{D2}(i) & \cdots & k_{DD}(i) \end{bmatrix}\right)^{-1}\left(\begin{bmatrix} p_1(i) \\ p_2(i) \\ \vdots \\ p_D(i) \end{bmatrix} + \begin{bmatrix} \hat{\omega}^{(1)}_{t-1}(i) \\ \hat{\omega}^{(2)}_{t-1}(i) \\ \vdots \\ \hat{\omega}^{(D)}_{t-1}(i) \end{bmatrix}\right), \qquad (13)$$

where

$$\lambda = \lambda_1 + \lambda_3, \qquad (14)$$

$$k_{mn} = \hat{x}^{(m)*}\odot\hat{x}^{(n)} + \lambda_2\sum_{i=1}^{k}\hat{x}_i^{(m)*}\odot\hat{x}_i^{(n)}, \qquad (15)$$

and

$$p_m = \hat{x}^{(m)*}\odot\hat{y}. \qquad (16)$$

In addition, we can also use the Gauss–Seidel or conjugate gradient method, which gives an approximate solution after a few iterations. When HoG is used and the features of different channels are assumed to be independent, the filter for each channel can be solved independently [17].
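The per-pixel solve of Eq. (13) can be sketched as follows; the feature array shapes and variable names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def solve_multichannel(x_hat, y_hat, context_hat, w_prev_hat,
                       lam1=1e-4, lam2=0.4, lam3=0.175):
    """Solve Eq. (13) pixel by pixel.
    x_hat:       (n, D) DFT of target features, one column per channel
    y_hat:       (n,)   DFT of the desired response
    context_hat: (k, n, D) DFTs of the k context patches
    w_prev_hat:  (n, D) DFT of the previous filter"""
    n, D = x_hat.shape
    lam = lam1 + lam3                                   # Eq. (14)

    # k_mn(i) for all pixels at once: shape (n, D, D), Eq. (15)
    K = np.einsum('nm,nd->nmd', np.conj(x_hat), x_hat)
    K += lam2 * np.einsum('knm,knd->nmd', np.conj(context_hat), context_hat)

    p = np.conj(x_hat) * y_hat[:, None]                 # p_m(i), Eq. (16)
    rhs = p + w_prev_hat

    w_hat = np.empty_like(x_hat)
    eye = np.eye(D)
    for i in range(n):                                  # one D x D system per pixel
        w_hat[i] = np.linalg.solve(lam * eye + K[i], rhs[i])
    return w_hat
```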

5 Experiment

In this section, we present the results of our tracking method. The experiments are implemented in MATLAB R2017a. We test the performance on the OTB-2015 [20] dataset published at CVPR 2015. The sequences in the dataset are manually tagged with 9 attributes, which represent different tracking challenges, and in some cases a single sequence contains several challenging situations. We use a precision plot based on the location error threshold and a success plot based on overlap scores. The precision score is the percentage of frames whose location error is less than 20 pixels, and the area under the curve (AUC) of each success plot is the success score. In order to test the performance, we compare our proposed method with other trackers, including DSST [5], KCF [10], DCF_CA [17], SAMF [13], SAMF_CA [17] and STAPLE [1]. The parameters are fixed for all sequences in the dataset and are set as follows. The regularization parameters $\lambda_1$, $\lambda_2$ and $\lambda_3$ are set to 1e-4, 0.4 and 0.175. The learning rate and the padding are set to 0.005 and 2, respectively. The number of context patches k is set to 4. We choose 7 scale factors to train filters at different scales, similar to SAMF [13]. We jointly use the historical average of APCE and the maximum response as the update criterion.

5.1 Quantitative Analysis

Figure 1 shows the precision plot and the success plot, which indicate that our method is more robust than other state-of-the-art trackers on OTB-2015. Our tracker achieves a precision score of 0.810 and a success score of 0.591, ranking first. The precision and success scores of the baseline DCF_CA are 0.743 and 0.512, ranking 5th and 6th. The SAMF tracker, which integrates powerful features including HOG and color-naming together with a scale prediction process, achieves 0.751 and 0.553. Mueller et al. at CVPR 2017 applied the CA framework to SAMF, DCF and MOSSE and improved their original performance with additional background patches. SAMF_CA increases the precision and success scores by 3% and 2%, which shows the great potential of the CA framework, ranking second and third. Our proposed method obtains an increase in precision and success scores of 1.9% and 1.8% compared with SAMF_CA, as well as 2.6% and 1% compared with STAPLE. This benefit comes from the abundant context information, which provides sufficient background suppression and temporal constraints in training the filter.

Fig. 1. Location error (precision plots of OPE) and success plots of OPE on the OTB-2015 dataset. Legend (precision / success): ours 0.810 / 0.591, SAMF_CA 0.791 / 0.573, STAPLE 0.784 / 0.581, SAMF 0.751 / 0.553, DCF_CA 0.743 / 0.512, KCF 0.696 / 0.477, DSST 0.680 / 0.513, MOSSE_CA 0.598 / 0.447.

5.2 Qualitative Evaluation

We visualize cases in which our tracker outperforms the other trackers under some challenging and common difficulties, as shown in Fig. 2.

Fig. 2. Results of ours, DCF_CA and STAPLE on the jogging1 and biker sequences

6 Conclusion

We propose a robust context-aware tracker that learns temporal structure, which provides benefits over the CA framework in some situations. By

introducing a temporal regularization term, the training process of the filter is partially constrained in the temporal sequence, which adds effective information and improves the overall stability of the tracker. In addition, we use HoG and color-naming features to describe the tracking target more completely and comprehensively. We also use APCE and the maximum response as our update strategy to improve performance when the target is occluded. Extensive experiments on OTB-2015 show that our proposed method achieves a precision rate and a success rate of 0.810 and 0.592 respectively, which are better than other outstanding trackers. The experiments fully demonstrate the robustness and accuracy of the proposed algorithm.

Acknowledgements. This work was supported by the Major Science Instrument Program of the National Natural Science Foundation of China under Grant 61527802 and the General Program of National Nature Science Foundation of China under grants 61371132 and 61471043.

References

1. Bertinetto L, Valmadre J, Golodetz S, Miksik O, Torr PH (2016) Staple: complementary learners for real-time tracking. In: Proceedings of the IEEE conference on computer vision and pattern recognition. IEEE Press, Las Vegas, pp 1401–1409
2. Bhat G, Johnander J, Danelljan M, Shahbaz Khan F, Felsberg M (2018) Unveiling the power of deep tracking. In: European conference on computer vision. Springer, Munich, pp 483–498
3. Bolme DS, Beveridge JR, Draper BA, Lui YM (2010) Visual object tracking using adaptive correlation filters. In: 2010 IEEE computer society conference on computer vision and pattern recognition. IEEE Press, San Francisco, pp 2544–2550
4. Danelljan M, Shahbaz Khan F, Felsberg M, Van de Weijer J (2014) Adaptive color attributes for real-time visual tracking. In: Proceedings of the IEEE conference on computer vision and pattern recognition. IEEE Press, Columbus, pp 1090–1097


5. Danelljan M, Häger G, Khan F, Felsberg M (2014) Accurate scale estimation for robust visual tracking. In: British machine vision conference. BMVA Press, Nottingham
6. Danelljan M, Hager G, Shahbaz Khan F, Felsberg M (2015) Learning spatially regularized correlation filters for visual tracking. In: Proceedings of the IEEE international conference on computer vision. IEEE Press, Boston, pp 4310–4318
7. Danelljan M, Robinson A, Khan FS, Felsberg M (2016) Beyond correlation filters: learning continuous convolution operators for visual tracking. In: European conference on computer vision. Springer, Amsterdam, pp 472–488
8. He Z, Fan Y, Zhuang J, Dong Y, Bai H (2017) Correlation filters with weighted convolution responses. In: Proceedings of the IEEE international conference on computer vision. IEEE Press, Honolulu, pp 1992–2000
9. Henriques JF, Caseiro R, Martins P, Batista J (2012) Exploiting the circulant structure of tracking-by-detection with kernels. In: European conference on computer vision. Springer, Florence, pp 702–715
10. Henriques JF, Caseiro R, Martins P, Batista J (2015) High-speed tracking with kernelized correlation filters. IEEE Trans Pattern Anal Mach Intell 37(3):583–596
11. Kiani Galoogahi H, Fagg A, Lucey S (2017) Learning background-aware correlation filters for visual tracking. In: Proceedings of the IEEE international conference on computer vision. IEEE Press, Honolulu, pp 1135–1143
12. Li F, Tian C, Zuo W, Zhang L, Yang MH (2018) Learning spatial-temporal regularized correlation filters for visual tracking. In: Proceedings of the IEEE conference on computer vision and pattern recognition. IEEE Press, Salt Lake City, pp 4904–4913
13. Li Y, Zhu J (2014) A scale adaptive kernel correlation filter tracker with feature integration. In: European conference on computer vision. Springer, Zurich, pp 254–265
14. Liang P, Blasch E, Ling H (2015) Encoding color information for visual tracking: algorithms and benchmark. IEEE Trans Image Process 24(12):5630–5644
15. Ma C, Huang JB, Yang X, Yang MH (2015) Hierarchical convolutional features for visual tracking. In: Proceedings of the IEEE international conference on computer vision. IEEE Press, Boston, pp 3074–3082
16. Mueller M, Smith N, Ghanem B (2016) A benchmark and simulator for UAV tracking. In: European conference on computer vision. Springer, Amsterdam, pp 445–461
17. Mueller M, Smith N, Ghanem B (2017) Context-aware correlation filter tracking. In: Proceedings of the IEEE conference on computer vision and pattern recognition. IEEE Press, Honolulu, pp 1396–1404
18. Wang M, Liu Y, Huang Z (2017) Large margin object tracking with circulant feature maps. In: Proceedings of the IEEE conference on computer vision and pattern recognition. IEEE Press, Honolulu, pp 4021–4029
19. Wu Y, Lim J, Yang MH (2013) Online object tracking: a benchmark. In: Proceedings of the IEEE conference on computer vision and pattern recognition. IEEE Press, Portland, pp 2411–2418
20. Wu Y, Lim J, Yang M (2015) Object tracking benchmark. IEEE Trans Pattern Anal Mach Intell 37(9):1834–1848

Research on Motor Speed Estimation Method Based on Electric Vehicle

Jian He and Bo Li(&)

School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China
[email protected], [email protected]

Abstract. This paper classifies and describes common methods for estimating motor speed. Methods that depend completely on the physical parameters of the motor and the electromagnetic equation are easy to implement, but have poor robustness and anti-noise ability. Methods that depend partially on the physical parameters of the motor and the electromagnetic equation are introduced; the Model Reference Adaptive System (MRAS) and the Sliding Mode Observer (SMO) are simulated and compared. Methods that are independent of the physical motor parameters and electromagnetic equations are introduced, together with common artificial intelligence algorithms. The applicability of the various algorithms is summarized.

Keywords: Motor speed estimation · Model reference adaptive system · High frequency signal injection

1 Introduction

With the shortage of energy and the growing problem of environmental pollution, the research and development of electric vehicles, with significant advantages such as low noise, zero emissions, high efficiency and energy saving, has received great attention worldwide. As complex systems, electric vehicles need to cope with various complicated situations and emergencies, and in recent years fault-tolerant control has become a hot topic as control systems become increasingly automated. For electric vehicles, fault-tolerant control includes actuator fault tolerance, controller fault tolerance, and sensor fault tolerance. Sensor fault-tolerant control does not require changes to the motor and controller structure, making it easy to implement and apply to existing electric vehicles. Therefore, this paper focuses on common methods of motor speed estimation [1]. This paper introduces the current common speed-sensorless motor control methods [2] and some new algorithms, and predicts future development trends. As a complex nonlinear control object, the motor is difficult to characterize with accurate physical parameters. The accuracy of the mathematical model established by an observer depends on the accuracy of these parameters and affects the accuracy of the estimation. Meanwhile, high-performance control requires a control system with anti-noise ability. Therefore, the common speed estimation algorithms are classified according to the robustness and the anti-interference ability of the system:


1. Completely dependent on the physical parameters of the motor and the electromagnetic equation;
2. Partially dependent on the physical parameters of the motor and the electromagnetic equation;
3. Independent of the physical parameters of the motor and the electromagnetic equation.

2 Completely Dependent on the Physical Parameters of the Motor and the Electromagnetic Equation

According to the voltage equation of the motor and the detected current and voltage values, the back electromotive force (EMF) is calculated based on the electromagnetic model of the motor, and then the rotor position and speed are computed [3]. This easy-to-implement algorithm directly calculates the back-EMF; it is an open-loop speed estimation and does not require proportional-integral (PI) control or adaptive closed-loop regulation. Because of the poor parameter robustness and the use of the tangent function to obtain the rotor position, the algorithm's anti-noise ability is weak. Taking a permanent magnet motor as an example, the electromagnetic equation in the stationary coordinate system is shown in Eq. (1). The subscript $\alpha\beta$ denotes the stationary coordinate system, $dq$ denotes the rotating coordinate system, $R_s$ denotes the stator resistance, and $\omega_r$ and $\theta$ denote the rotational speed and rotor position. The speed and rotor position in Eq. (1) are unknown. If the initial position of the rotor is known, the speed and rotor position can be calculated from one of them.

$$\begin{bmatrix} u_\alpha \\ u_\beta \end{bmatrix} = \begin{bmatrix} R_s + pL_d & \omega_r(L_d - L_q) \\ -\omega_r(L_d - L_q) & R_s + pL_d \end{bmatrix}\begin{bmatrix} i_\alpha \\ i_\beta \end{bmatrix} + \big[(L_q - L_d)(\omega_r i_d - p i_q) + \omega_r \psi_m\big]\begin{bmatrix} -\sin\theta \\ \cos\theta \end{bmatrix} \qquad (1)$$

3 Partially Dependent on the Physical Parameters of the Motor and the Electromagnetic Equation

The static error of the open-loop speed estimation is large, and it is sensitive to parameter changes and environmental noise. Speed observers based on the adaptive principle can improve parameter robustness and anti-noise ability. The estimation algorithms based on the adaptive principle mainly include:

1. Model Reference Adaptive System (MRAS);
2. Luenberger Observer (LO);
3. Extended Kalman Filter (EKF);
4. Sliding Mode Observer (SMO).

3.1 Model Reference Adaptive System

The basic principle of the model reference adaptive method is to make an adjustable model, containing the variables to be estimated, follow the reference model through an appropriate adaptive law. The basic block diagram is shown in Fig. 1.

Fig. 1. The scheme of MRAS

The accuracy of the MRAS estimation is judged by the output error of the two models. The adjustable model is built according to the actual motor model. Therefore, the accuracy of the motor model parameters determines the accuracy of the MRAS speed estimation, which affects the control accuracy of the speed-sensorless algorithm. Although the adaptive algorithm gives the system a certain parameter robustness, MRAS is still very sensitive to parameter changes, especially the stator resistance. Because the motor parameters are affected by temperature and other factors, they will not match the nominal values, which makes the MRAS algorithm produce a large estimation error at low speed. In this paper, a 1.1 kW surface permanent magnet synchronous motor with an MRAS speed-sensorless control algorithm based on vector control is simulated in the MATLAB/Simulink environment. Figure 2a, b shows the actual speed and estimated speed for the rated desired speed and for 150 rad/s, respectively. It can be seen that the rotor position estimation error increases significantly and the dynamic performance degrades in the lower speed region. As the desired speed decreases further, the estimation error increases and the system may tend to become unstable. Unlike MRAS, the speed estimates of other speed observers are mostly based on a back-EMF observer or a flux observer. When the motor speed is low, the back-EMF is small, so the back-EMF error observed by the observer is relatively large, resulting in an increased speed estimation error. Therefore, observer methods based on back-EMF are not applicable to speed estimation at zero and low speed.


Fig. 2. The error of rotor speed and rotor position


3.2 Luenberger Observer

The speed-sensorless control block diagram of the LO is shown in Fig. 3. The state of the motor is reconstructed according to Eqs. (1) and (2). The control quantities $u_\alpha$, $u_\beta$ of the motor driver are used as the input of the observer. At the same time, the observer needs to include the speed variable $\hat{\omega}$ to be estimated in order to obtain the estimated currents $\hat{i}_\alpha$, $\hat{i}_\beta$. The measured three-phase currents are subjected to the Clark transformation to obtain $i_\alpha$, $i_\beta$. The estimated back-EMF values $\hat{e}_\alpha$, $\hat{e}_\beta$ can be obtained from Eq. (2), and the adaptive law is established using the current estimation error and the back-EMF, as shown in Eq. (3).

$$\vec{\psi}_s = \int (\vec{u}_s - R_s \vec{i}_s)\, dt + \vec{\psi}_{s0} \qquad (2)$$

$$\hat{\omega} = K_p(\hat{e}_\alpha e_\alpha - \hat{e}_\beta e_\beta) + K_I\int_0^t (\hat{e}_\alpha e_\alpha - \hat{e}_\beta e_\beta)\, dt \qquad (3)$$

The rotational speed estimate obtained by Eq. (3) is returned to the state observer to correct the estimated current value and the back-EMF, thereby forming a closed loop estimation of the rotational speed.
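As an illustration of this closed-loop structure, a minimal discrete-time sketch of the PI adaptive law (3) is given below; the gains, sampling period and variable names are placeholders, not values from the paper.

```python
def adaptive_speed_update(e_hat_alpha, e_hat_beta, e_alpha, e_beta,
                          integral_state, Kp=50.0, Ki=2000.0, Ts=1e-4):
    """One step of the PI adaptive law (3): the speed estimate is driven by
    the products of observed and estimated back-EMF components, and the result
    is fed back to the state observer to correct the currents and back-EMF."""
    err = e_hat_alpha * e_alpha - e_hat_beta * e_beta
    integral_state += err * Ts              # discrete integral term
    omega_hat = Kp * err + Ki * integral_state
    return omega_hat, integral_state
```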

Fig. 3. The scheme of LO

3.3 Extended Kalman Filter (EKF)

The Kalman Filter (KF) is an optimal recursive data processing algorithm for linear systems. The Extended Kalman Filter (EKF) is a generalized form of the Kalman filter for nonlinear systems [4]. It is a nonlinear estimation algorithm with strong anti-noise and anti-interference ability. Since the EKF uses a recursive algorithm, its filter gain is automatically adjusted during the recursion, making it an adaptive system; therefore, it also has a certain parameter robustness.


The EKF algorithm is complex and requires matrix inversion. To meet real-time control requirements, it places certain demands on the speed and accuracy of the digital signal processor.

3.4 Sliding Mode Observer (SMO)

The error between the output of the sliding mode observer and the corresponding state quantity of the motor is converted into a sliding-mode quantity by a switching signal and then fed back to the input of the observer as a control quantity. The block diagram of the control system based on the sliding mode observer is shown in Fig. 4.

Fig. 4. The scheme of SMO

After the system phase point of the sliding mode structure reaches the switching surface, the system operation mode depends only on the switching surface equation and is independent of the original parameters of the system. Therefore, the sliding mode structure has better robustness and anti-interference ability than MRAS. In the MATLAB/Simulink environment, this paper simulates the speed-sensorless control of a 1.1 kW, 50 Hz surface permanent magnet synchronous motor based on a sliding mode observer. Figure 5a, b shows the motor speed estimation and rotor position estimation errors at the rated operating point and at 10 rad/s, respectively. At steady state, the observation error of the sliding mode observer is very small at both high and low speed, but in the startup phase the estimation error is large. This is because the initial state of the system is not on the sliding mode surface, so the stability of the system from the initial state to the sliding mode surface is not guaranteed. It is necessary to select a suitable control quantity to make the system move quickly to the sliding surface. Compared with the MRAS estimation results mentioned above, the steady-state estimation of the SMO is better than that of MRAS at both high and low speed. The steady-state error analysis of the two estimation algorithms is given in Table 1.

Table 1. Steady-state speed estimation error (%)

Error (%)    MRAS (%)   SMO (%)
10 rad/s     50
70 rad/s
314 rad/s    5          5

Fig. 5. The error of rotor speed and rotor position

4 Independent of the Physical Parameters of the Motor and the Electromagnetic Equation

The estimation performance of the first three types of algorithms at high speed has been confirmed in experiments and applications, but they are affected by noise and parameter errors in the zero-to-low speed range, where the estimation error is large. The methods in this category are divided into the external high-frequency signal injection method [5, 6] and the PWM-based harmonic signal excitation method [7] according to the signal excitation mode. The difference between the two is that the frequency and amplitude of the injected signal can be freely adjusted in the former, while the latter is limited by the PWM switching frequency.

4.1 External High Frequency Signal Injection

The external high-frequency signal injection method (hereinafter referred to as the high-frequency injection method) injects a high-frequency signal into the control system. A block diagram of the high-frequency injection method in which a rotating voltage vector signal is injected on the $\alpha\beta$ axes is shown in Fig. 6.

Fig. 6. The scheme of high frequency signal injection of rotated voltage vector


The high-frequency injection method is also called the carrier injection method. Its variants are classified and described below from the perspective of the type of injected signal. By type, the injected signal can be divided into rotating voltage vector injection and amplitude-modulated voltage vector injection. The injection signal of the rotating voltage vector method is a rapidly rotating voltage vector whose rotating frequency is much higher than the fundamental frequency of the motor [6]. The amplitude-modulated voltage vector injection also uses a rotating voltage vector, but its rotating frequency is the same as the electrical frequency of the motor, and the voltage amplitude changes sinusoidally; the signal is at a high frequency and resembles an amplitude-modulated signal. In this paper, a speed-sensorless simulation based on the rotating voltage vector injection method is implemented for a 1.1 kW, 50 Hz interior permanent magnet synchronous motor. The simulation results for the motor running at an expected electromagnetic speed of 10 rad/s are shown in Fig. 7.

Fig. 7. The error of rotor speed and rotor position under the speed of 10 rad/s

It can be seen that the rotor position and speed can be estimated relatively accurately when the motor speed is low. The higher the speed, the slower the dynamic response and the greater the position error. The reason is that the high-frequency injection method requires the injected signal frequency to be much higher than the motor fundamental frequency and the back-EMF to be negligible. If the motor is in the high-speed range, the back-EMF can no longer be ignored, and the high


frequency injection method is no longer applicable. In addition, the injection of external signals reduces the signal-to-noise ratio and the stability of the system. Filtering is required in the controller to remove the influence of the high-frequency signal, which introduces delay and phase shift into the system.

4.2 High Frequency Signal Excitation Method Based on PWM Modulation

In electric vehicles, the inverter switching frequency is much higher than the fundamental frequency of the motor, so the motor response to the three-phase square-wave signal applied to the stator has the same characteristics as under the high-frequency injection method. The difference is that the frequency, amplitude and direction of the injected signal are controllable in the high-frequency injection method, whereas the frequency of the "high-frequency signal" based on PWM modulation is determined by the switching frequency, and its amplitude and direction are uncontrollable. In recent years, some other forms of high-frequency signals have been used to detect the salient poles of motors; their basic principle is consistent with that of conventional high-frequency injection methods.

4.3 Artificial Intelligence Algorithm

Intelligent control algorithms have also been a hot topic in recent years, although mature theoretical research and proofs are still lacking. Applications of intelligent algorithms in motor speed estimation are usually of two types: one improves existing classical speed estimation algorithms, such as the high-frequency injection method; the other uses the intelligent algorithm itself to achieve speed estimation. Intelligent algorithms are more capable of dealing with complex environmental factors than traditional algorithms. Although they are currently limited by computing speed and data quantity, faster computation and larger-capacity chips will appear with the advancement of technology, and intelligent algorithms will receive more and more attention.

5 Conclusion

MRAS and observer methods are more suitable for the medium- and high-speed regions; they have been proved theoretically and experimentally to provide good stability and control accuracy, and their control performance is related, to varying degrees, to the accuracy of the motor parameters. Control strategies based on the salient-pole effect and artificial intelligence algorithms do not depend on the motor model and have better robustness and anti-noise ability. However, the high-frequency injection method is not applicable to the medium-to-high speed zone. Artificial intelligence algorithms have had a short development time, and their design and debugging are more complicated.


Acknowledgements. This work was supported by the Science and Technology Department of Sichuan Province (Grant No. 2017GZYZF0014), by the Science and Technology Department of Yibin (Grants No. 2018JZ0050, Grants No. 2018SF020, Grants No. 2018ZSF001).

References

1. Kim T, Lee HW, Ehsani M (2007) Position sensorless brushless DC motor/generator drives: review and future trends. Electr Power Appl IET 1(4):557–564
2. Asher GM (1998) Sensorless estimation for vector controlled induction motor drives. In: IEE Colloquium on vector control revisited (Digest No. 1998/199)
3. Vas P (1998) Sensorless vector and direct torque control. Oxford University Press
4. Bendjedia M, Ait-Amirat Y, Walther B et al (2012) Position control of a sensorless stepper motor. IEEE Trans Power Electron 27(2):578–587
5. Jansen PL, Lorenz RD (1995) Transducerless position and velocity estimation in induction and salient AC machines. IEEE Trans Ind Appl 31(2):240–247
6. Tursini M, Petrella R, Parasiliti R (2003) Sensorless control of an IPM synchronous motor for city-scooter applications. In: IAS annual meeting
7. Holtz J, Pan H (2004) Elimination of saturation effects in sensorless position-controlled induction motors. IEEE Trans Ind Appl 40(2):623–631

A Novel Virtual Cell Power Allocation and Interference Merging Algorithm in UDN

Liting Song(B), Weidong Gao, Gang Chuai, and ZiWei Si

Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing, China
{SongLiTing,gaoweidong}@bupt.edu.cn

Abstract. A new user-centered power allocation and interference merging scheme for virtual cells in ultra-dense networks is proposed in this paper, which can reduce system interference and improve system performance. The proposed scheme consists of two steps. The first step allocates power to users in the virtual cell to improve the utilization of system resources; for this we propose a simplified proportional power allocation algorithm, which reduces complexity. The second step merges virtual cells based on their interference strength. Since user access is generally based on RSRP (Reference Signal Received Power) rather than path loss, we use RSRP as the interference merging indicator, which represents the interference strength of a virtual cell better. Simulation results show that the proposed proportional power allocation scheme has performance similar to traditional proportional power allocation, while, compared with the traditional path-loss-based interference merging algorithm, our interference merging scheme performs better.

1 Introduction

In order to cope with the significantly higher data rates and traffic capacity expected in the next generation of communication systems, 5G will support ultra-dense deployments and provide specific requirements for users [1]. 5G networks take advantage of ultra-dense deployment and massive connectivity using radio access virtualization strategies [2]; coordinated multi-point (CoMP), as an effective scheme to mitigate interference, has recently attracted great interest and has become a key technology for future UDN. However, the network deployment of UDN makes the inter-cell interference environment more complicated. The author Gang Chuai's earlier research can be retrieved at https://ieeexplore.ieee.org/document/8322940.


Therefore, an effective interference merging scheme is important for reducing interference in the UDN. To facilitate the control of interference in heterogeneous and dense UDN deployments, the concept of a control and user plane split has also recently gained strong interest [3]. Under the C/U split concept, small base stations (BSs) can be configured to form virtual cells (VCs) or super cells as in [4,5], which are dynamic cooperative clusters formed according to user locations and/or traffic demands. By cooperative transmission or reception among the multiple TPs (Transmission Points) in a cluster, a significant capacity gain can be obtained, as inter-cell co-channel interference (CCI) is mitigated [6]. Intra-cluster interference is eliminated by using precoding techniques, including ZF (zero-forcing), ZF-BD (zero-forcing block diagonalization), MMSE (minimum mean square error), etc. [7]. We can transform the original inter-cell interference into intra-cell interference through the merging of virtual cells and then use precoding techniques to eliminate this part of the interference. Therefore, a reasonable interference merging scheme can be used to reduce system interference. Previous literature has studied the problem of user access point selection, i.e., virtual cell construction, and virtual cell construction schemes have been extensively studied. We adopt a user-centric virtual cell scheme, which eliminates the cell edge: the TPs around a user constitute a virtual cell and serve the user together [8] to maximize system capacity and resource availability. It is assumed that the information of all channel matrices is completely known. The coordinated TPs in the virtual cell perform an interference-aware scheme to avoid interference, and the selected TPs simultaneously transmit data to a given UE; joint transmission is then implemented using a precoding technique. The proposed virtual cell interference merging algorithm achieves a satisfactory level of spectral efficiency. This paper is structured as follows: Sect. 2 presents the system model of the virtual cell network, Sect. 3 describes the proposed scheme in detail, Sect. 4 provides the simulation results, and finally Sect. 5 concludes this paper.

2 System Model

We consider a downlink OFDMA network. Multi-antenna macro base stations undertake signaling transmission, while single-antenna TPs undertake the task of data transmission. The TPs are modeled as a stationary Poisson point process (PPP) $\Phi_T$ with density $\lambda_T$, and the single-antenna users are modeled as a PPP $\Phi_U$ with density $\lambda_U$. The set of TPs is $\mathcal{B}$ with $|\mathcal{B}| = B$, and the set of UEs is $\mathcal{K}$ with $|\mathcal{K}| = K$, where $B > K$.


Fig. 1. User-centric virtual.

As shown in Fig. 1, the user selects TPs according to some scheme as its virtual cell in the initial stage. The virtual cell can be formed by different criteria: clusters formed according to the geographical distribution of the TPs [9], or by merging partially overlapping virtual cells whose overlapped TPs lie in a severe interference region [10]. TPs within a virtual cell cluster fully share the user's data and can perform coordinated transmission by beamforming across all the TP antennas in the cluster. In other words, each TP m sends a signal to its serving user set $\mathcal{K}_m$, and each user k receives a signal from its serving TP set $\mathcal{M}_k$. For UE k with virtual cell $v_k$, the received signal is

$$y_k = \sum_{l=1}^{M}\sum_{j\in\mathcal{K}_l}\sqrt{p_j}\,h_{k,v_l}x_{j,v_l} + n_k = \sqrt{p_k}\,h_{k,v_k}x_{k,v_k} + \sum_{j\in\mathcal{K}_m,\, j\neq k}\sqrt{p_j}\,h_{k,v_k}x_{j,v_k} + \sum_{l=1,\,l\neq m}^{M}\sum_{j\in\mathcal{K}_l}\sqrt{p_j}\,h_{k,v_l}x_{j,v_l} + n_k \qquad (1)$$


The received signal of user k is composed of four parts: the useful signal of UE k, intra-cluster interference, inter-cluster interference, and noise. Here $p_j$ represents the transmit power of the base station, $h_{k,v_l}$ the channel gain, $x_{j,v_l}$ the transmitted signal, and $n_k$ additive white Gaussian noise, where $h_{k,v_l} = L_{k,v_l} f_{k,v_l}$. $L_{k,v_l}$ and $f_{k,v_l}$ are the large-scale and small-scale fading, respectively. $f_{k,v_l}\in\mathbb{C}^{|v_l|\times 1}$ is the small-scale random fading, assumed to follow a Rayleigh distribution with $E(|f_{k,v_l}|^2)=1$ for all $k\in\mathcal{K}$. The large-scale fading is given by

$$L_{k,v_l} = \|r_{b_v} - r_{i_K}\|^{-\alpha/2} \qquad (2)$$


where rbv and riK denote the location of the TP and user, respectively. α is path loss factor.

3 Proposed Power Allocation and Interference Merging Algorithm

In a conventional virtual cell structure, power is evenly distributed to the UEs in a virtual cell, which ignores the impact of power allocation on system performance [10]. There are several basic power allocation algorithms, namely the equal power allocation algorithm, the proportional power allocation algorithm and the water-filling power allocation algorithm, described below.

(1) Equal power allocation:

$$p_{mk} = \frac{p_m}{|\mathcal{K}_m|} \qquad (3)$$

(2) Proportional power allocation:

$$p_{mk} = \frac{|h_{km}|^2}{\sum_{k\in\mathcal{K}_m}|h_{km}|^2}\, p_m \qquad (4)$$

(3) Water-filling power allocation:

$$p_{mk} = \left[\beta_m - \frac{\sigma_k^2}{|g_{kk}|^2}\right]^{+} \qquad (5)$$

where $[x]^{+} = \max\{x, 0\}$ and $\beta_m$ is the water-filling level. The equal (average) power allocation algorithm does not provide each user with an appropriate transmission power: regardless of the channel quality of a UE, the power obtained by UEs in the same virtual cell is always the same. The water-filling algorithm provides higher transmission power for users with good channel quality, but it is more complicated because multiple iterations are needed to find the best solution. Since large-scale fading dominates in wireless communication, in order to simplify the algorithm we propose a simplified proportional power allocation scheme based on path loss:

$$p_{mk} = \frac{|L_{km}|^2}{\sum_{k\in\mathcal{K}_m}|L_{km}|^2}\, p_m \qquad (6)$$


The traditional proportional power allocation algorithm requires perfect channel state information (CSI), whereas the simplified proportional power allocation algorithm proposed here can be computed with imperfect CSI: we replace the original channel gain with the large-scale fading, which reduces signaling overhead. Based on the power obtained by the UEs, the following quantities are then calculated.
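A minimal sketch of the simplified proportional power allocation (6) and the resulting RSRP (7) is given below; the array shapes and variable names are illustrative assumptions.

```python
import numpy as np

def allocate_power(L, p_tp):
    """Simplified proportional power allocation, Eq. (6).
    L:    (K,) large-scale fading (path-loss based gains) of the UEs
          served within one TP's virtual cell
    p_tp: total transmit power of the TP
    Returns per-UE powers p_mk and the corresponding RSRP values, Eq. (7)."""
    weights = np.abs(L) ** 2
    p_mk = weights / weights.sum() * p_tp   # proportional to |L_km|^2
    rsrp = p_mk * L                         # RSRP_km = p_mk * L_km
    return p_mk, rsrp
```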


The RSRP at user $k\in\mathcal{K}$ can be expressed as

$$RSRP_{km} = p_{km}\times L_{km} \qquad (7)$$

The SINR at user $k\in\mathcal{K}$ can be expressed as

$$SINR_k = \frac{p_k|h_{k,v_m}x_{k,v_m}|^2}{I_k + n_k} \qquad (8)$$

where

$$I_k = \sum_{j\in\mathcal{K}_m,\, j\neq k} p_j|h_{k,v_m}x_{j,v_m}|^2 + \sum_{l=1,\, l\neq m}^{M}\sum_{j\in\mathcal{K}_l} p_j|h_{k,v_l}x_{j,v_l}|^2$$


After the simplified power allocation, the SINR of the UE is more realistic. The user's rate is

$$R_k = B\log_2(1 + SINR_k) \qquad (9)$$

and the user's spectral efficiency is

$$SE_k = \log_2(1 + SINR_k) \qquad (10)$$

After the construction of the virtual cells, some interference is generated between them, and strongly interfering virtual cells cause serious interference to each other's transmissions. For the target user, it is natural to consider grouping the virtual cells that produce the strongest ICI; the direct solution is to merge the initial virtual cells that overlap each other [8]. The users in a merged virtual cell are served cooperatively by all the TPs in the merged cell, and the intra-cell interference generated by the merging is then eliminated by precoding. When calculating the interference strength between two virtual cells, we do not use the traditional calculation based on large-scale fading; instead we use the RSRP value, which is more realistic. We introduce a relative interference matrix and then strictly constrain the size of the grouped virtual cells by adjusting the merging threshold. To describe the potential interference strength between two virtual cells, the relative interference matrix based on RSRP values can be expressed as

$$\delta = \{\delta_{i,j}\}_{K\times K} = \begin{cases} +\infty, & i = j \\[4pt] \dfrac{\overline{RSRP}^{(1,2)}_{v_j,i}}{\overline{RSRP}^{(1,2)}_{v_i,i}} + \dfrac{\overline{RSRP}^{(1,2)}_{v_i,j}}{\overline{RSRP}^{(1,2)}_{v_j,j}}, & \text{otherwise} \end{cases} \qquad (11)$$

where $\overline{RSRP}^{(1,2)}_{v_j,i}$ is the average of the first and second strongest RSRP values from user i to the virtual cell $v_j$, and $\overline{RSRP}^{(1,2)}_{v_i,i}$ is the average of the first and second strongest RSRP values from user i to its own virtual cell $v_i$.

882

L. Song et al.

Generally, i = j, δi,j ∈ [0, 2], the larger δi,j indicates more severe interference between the two virtual cells, and the more motivation to group them together. Furthermore, we introduce the interference graph G, and each user’s initial virtual cell is modelled as a vertex in an undirected graph G, where the edge connect two vertexes is modelled as the relative interference δi,j between them. The merging threshold is 1. Based on G, we can contribute a binary matrix T ∈ C |K|×|K| as a merging strategy matrix. We judge whether two virtual cells need to be merged according to the matrix T. The merging strategy matrix T is defined as: 1, when i, j are grouped together, (12) T = {ti,j }|K|×|K| = 0, otherwise The interference merging algorithm is as follows:

Algorithm 1 : Interference merger Require: TPs density λT and UEs density λU ; Simulation radius R; Virtual cell {vi , i ∈ K} of each UE Ensure: Updated virtual cell {vi∗ , i ∈ K} of each UE 1: Initialize: k = 1 : K, m = 1 : B. 2: for (k = 1 : K) do 3: for (m = 1 : K) do 4: calculates Lkm according to (2) 5: calculates pkm according to (6) 6: calculates RSRPkm according to (7) 7: end for 8: end for 9: for (i = 1 : K) do 10: for (j = 1 : K) do 11: calculates δij according to (11) 12: calculates Tij according to (12) 13: if (Ti,j = 1) then 14: vi∗ = vj∗ = vi ∪ vj . 15: end if 16: end for 17: end for

A Novel Virtual Cell Power Allocation and Interference

4

883

Simulation Results and Analysis

Table 1. Simulation parameters. Parameters

Unit

Values

Operating frequency

GHz

30

Bandwidth

MHz

20 2

TP density

Cells/Km

UE density

Cells/Km2

AP transmit power

dBm

30

Area of simulation

m2

500*500

110 40

Numerical simulations are carried out to evaluate the performance of the proposed scheme in a dense TPs environment. We assume that the positions of TPs and users both obey the Poisson point process. Table 1 is the setting of the simulation parameters.

Fig. 2. Comparison of power allocation algorithms.

Figure 2 shows the performance comparison of the traditional proportional power allocation algorithm with our proposed simplified proportional power allocation score. We can see that the performance of these two power allocation algorithms is similar. This verifies that our proposed path-based simplified proportional power allocation algorithm reduces complexity and ensures system performance.

884

L. Song et al.

Fig. 3. CDF of SE for PowerAllocation scheme compared to Average PowerAllocation scheme.

Figure 3 shows a comparison of the proposed system performance of the simplified power allocation scheme and the average power allocation scheme. We can see that our proposed power allocation algorithm can improve system performance. There are about 50% of all users having a SE of more than 0.5 bps/Hz. This is because we allocate higher transmission power for UEs with better channel quality.

Fig. 4. CDF of SE for Proposed Interference merger scheme compared to PassLoss Interference merger scheme.

A Novel Virtual Cell Power Allocation and Interference

885

Fig. 5. Performance comparison between proposed power allocation algorithm and traditional path loss combining algorithm and proposed power control and RSRP algorithm.

Figure 4 shows a comparison of the proposed interference merging algorithm with the performance of a conventional interference merging algorithm. We can see that there are about 50% of all users having a SE of more than 0.8 bps/Hz. This is because the traditional interference combining algorithm cannot accurately calculate the interference strength between two virtual cells. Because it averages all the interfering signals. While the algorithm we proposed ignores the weaker interference signals. In Fig. 5, compared with power allocation algorithm and interference merging based path loss algorithm, the spectral efficiency of 50% of all users our proposed algorithms is increased by 0.8 and 1.1 bps/Hz respectively. Compared with the interference merging algorithm, the power allocation algorithm does not improve the system performance very well. This is because the mergence of virtual cells causes some inter-cell interference to become intra-cell interference. Inter-cell interference can be eliminated by precoding techniques.

5 Conclusion

This paper proposes a novel power allocation and interference merging algorithm. The simulation results show that the proposed simplified proportional power allocation algorithm has performance similar to the traditional proportional power allocation algorithm. At the same time, our proposed RSRP-based virtual cell interference merging algorithm can more accurately represent the interference strength between two virtual cells. However, as the size of a virtual cell increases, the complexity of the precoding technique also increases. Therefore, in future work we will study the virtual cell size that yields the best system performance.


Acknowledgement. This work was funded by the National Science and Technology Major Project: No. 2018ZX03001029-004.


Device-Free Sensing for Gesture Recognition by Wi-Fi Communication Signal Based on Auto-encoder/decoder Neural Network

Yi Zhong1(B), Yan Huang2, and Ting Jiang1

1 School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China
2 School of Electrical and Data Engineering, The University of Technology, Sydney, Australia
[email protected], [email protected], [email protected]

Abstract. Gesture recognition has been found to be a vital mission for a variety of applications, such as smart surveillance, elder care, virtual reality, advanced user interfaces, etc. Recently, an emerging sensing technology, namely device-free sensing (DFS), has been introduced to the domain of gesture recognition; it only uses radio-frequency (RF) signals without the need to equip any devices or extra hardware support, so it is a natural choice to fully leverage the ubiquitous Wi-Fi signals in almost every modern building. Although the feasibility of using this technology for gesture recognition has been explored to some extent, we observe that it still cannot perform promisingly for some gestures which may look nearly identical at a certain instant. Therefore, in this paper, we conduct experiments with several typical hand gesture pairs with identical movement in opposite directions, based on a proposed Auto-Encoder/Decoder (Auto-ED) deep neural network, to address gesture recognition in our case. Compared with several traditional learning methods, experimental results demonstrate that our proposed approach can best tackle the challenge of recognizing nearly identical motions, which indicates its potential application value in the near future.

Keywords: Device-free sensing (DFS) · Auto-encoder and decoder (Auto-ED) · Gesture recognition

1 Introduction

Gesture recognition has been found very useful to improve the quality of life in several meaningful ways, ranging from smart buildings to smart cities. Previously, most studies focused on using either cameras or wearable devices for this application. However, the traditional approaches have been revealed to be deficient under some complex cases. For example, a wearable device is not


practical for constant use, because it is intrusive and users may forget to wear it. Although computer vision is the most widely used technique for gesture recognition, video sensors are not preferred for long-lasting missions because they consume large amounts of power and require a large memory size. Moreover, imaging sensors usually suffer from line-of-sight (LOS) constraints and lighting conditions. Especially, as far as privacy is concerned, people are not willing to be recorded by a camera in many scenarios (e.g., in a shower). There is an emerging class of sensing technology, known as device-free sensing (DFS), which only uses radio frequency (RF) transceivers as sensing devices [1, 2]. Since each transceiver [3] within a wireless network must have the capability to generate and transmit RF signals, it is a natural choice to fully utilize this capability not only for data communication but also for sensing. In the past several years, many efforts have been undertaken to investigate DFS technology for gesture recognition [4, 5]. The concept is to utilize the variations in the transmission channel to recognize gestures in a given environment. Amongst them, Wi-Fi-based gesture recognition systems have recently proliferated due to the ubiquitous availability of commercial Wi-Fi devices. Although significant progress has been made in recognizing gestures by analyzing the received signal strength (RSS) and channel state information (CSI) of Wi-Fi signals affected by hand motions, deploying this approach for gestures that produce similar features is still not practical. This is because the posture of a hand in different motions may look nearly identical at a certain instant. Consequently, classification accuracy degrades significantly in such situations. Recently, many attempts have been made to use deep learning approaches for recognition tasks [6, 7]. To tackle the challenge of recognizing gestures that produce similar features, unlike conventional machine learning approaches that first extract object representations and then apply a metric learning algorithm, in this paper an end-to-end Auto-Encoder and Decoder deep neural network (Auto-ED) is proposed, which can mitigate the disturbance produced by nearly identical motions through an automatic interaction scheme between the feature extraction and metric learning modules. To demonstrate the performance of the presented approach, a variety of data samples are collected by a Wi-Fi communication system with four gesture pairs with identical movement in opposite directions. Experimental results demonstrate that the proposed Auto-ED approach can well handle the new challenge presented in this paper.

2 Experimental Setup and Data Collection

To verify the feasibility of the aforementioned approach, and in particular the accuracy of distinguishing between different gestures with identical motions, measurements were taken using the Sora platform [8], a fully programmable software radio platform on commodity PC architectures. By utilizing the Sora platform, we have developed a demonstration Wi-Fi communication system with the IEEE 802.11a/b/g wireless protocol stack using commodity general-purpose PCs. In addition, the


prototype is connected to a PC so that the collected data can be digitally acquired and recorded. The experimental setup is given in Fig. 1. The measurements were performed in an office environment. The transceivers are placed at fixed locations at a constant height, 1.5 m above the floor. The distance between the transmitter and the receiver is approximately 0.5 m.

Fig. 1. Measurement setup.

To ensure that the experimental setup is appropriate and the prototype is fully functional, a measurement was first taken without any performed gestures. Once the functionality of the prototype was confirmed, different types of gestures were then performed between the transmitter and the receiver for testing purposes. As shown in Fig. 2, eight gesture types with identical motions in opposite directions, including (i) move right, (ii) move left, (iii) move forward, (iv) move backward, (v) move upward, (vi) move downward, (vii) rotate clockwise, and (viii) rotate counter-clockwise, are used for classification. Note that measurements were performed with only a single human subject performing one hand gesture at a time. It is also worthwhile to mention that the overall duration of each dynamic gesture is about 1 s, which requires the human subject to repeat the gesture at a uniform speed.

3 Methodology

3.1 Data Preprocessing

The Sora platform with the IEEE 802.11a wireless protocol stack is selected in the present approach. The transmitted frame format in IEEE Std 802.11a consists of three



Fig. 2. Eight different gestures. (i) Move right. (ii) Move left. (iii) Move forward. (iv) Move backward. (v) Move upward. (vi) Move downward. (vii) Rotate clockwise. (viii) Rotate counter-clockwise.

parts: preamble, signal, and data, in which the preamble, consisting of 10 short symbols and two long symbols, is used for synchronization [9]. Since the preamble symbols are constant, the different motions of gestures performed between the transceivers will result in the preamble having different patterns at the receiver. By identifying and interpreting these patterns, it is possible to detect and even classify the different gesture types. It is noted that, to ensure the accuracy of the presented approach, only the two long preamble symbols are selected from each 802.11a frame. Therefore, in this paper, we use the 128 recorded samples of the modulated long preamble symbols from each frame for feature extraction.

3.2 Higher-Order Cumulant Feature for Encoding

The Higher-Order Cumulant (HOC)-based features are used as the input vector to the encoder in our proposed Auto-ED neural network. We choose HOC because it can enhance the immunity of the system against noise, although this requires considerable computation. In addition, HOC contains both the amplitude and phase information of a signal, which is very useful for extracting distinct features for target classification [10, 11]. In this paper, the following procedure is used for feature extraction from the received modulated long preamble symbols. First, since the average received signal is nonzero, zero-mean processing is applied to the raw modulated long preamble symbols so that the HOCs can be simplified. Second, the one-dimensional (1-D) slice of the fourth-order cumulants (denoted as cˆ4x(0, 0, τ)) is used to extract useful information, because it provides a simple but effective way to extract the important information from the received signal. Given a k-dimensional signal source {x1, x2, . . . , xk}, its corresponding fourth-order cumulant can be


calculated by

\hat{c}_{4x}(0,0,\tau) = \frac{1}{N}\sum_{n=1}^{N}\left[x(n)-\hat{m}_{1x}\right]^{3}\left[x(n+\tau)-\hat{m}_{1x}\right] - \frac{3}{N^{2}}\sum_{n=1}^{N}\left[x(n)-\hat{m}_{1x}\right]\left[x(n+\tau)-\hat{m}_{1x}\right]\times\sum_{n=1}^{N}\left[x(n)-\hat{m}_{1x}\right]^{2}, \qquad (1)

where N and \tau respectively represent the total number of signal samples and the time delay, and \hat{m}_{1x} is the mean value across all the samples (i.e., x(1), x(2), \ldots, x(N)), defined as

\hat{m}_{1x} = \frac{1}{N}\sum_{n=1}^{N} x(n). \qquad (2)

Finally, the mean and variance of the extracted 1-D slice of fourth-order cumulants of each gesture are computed over a one-second overlapping sliding time window. Although the 1-D slices of fourth-order cumulants can represent the features of the received modulated Wi-Fi preambles, there is a high chance that the HOC features extracted from gestures with similar postures will look nearly identical. Therefore, in this paper, we extract the 510-dimensional variance and mean of the 1-D slice of fourth-order cumulants as the input to the proposed Auto-ED network, since this has the potential to reduce false-alarm triggering based on its instantaneous and time-varying features.
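As a minimal illustration of Eqs. (1) and (2), the NumPy sketch below estimates the 1-D fourth-order cumulant slice of a real-valued preamble sample vector and forms mean/variance features over a set of delays; the delay set and windowing are assumptions for illustration and are not the exact 510-dimensional feature construction used in the paper.

import numpy as np

def c4_slice(x, tau):
    """Estimate c4x(0, 0, tau) of Eq. (1) for a real-valued 1-D sample vector x."""
    xc = x - x.mean()                          # zero-mean processing, Eq. (2)
    n = len(xc) - tau
    a, b = xc[:n], xc[tau:tau + n]
    return np.mean(a ** 3 * b) - 3.0 * np.mean(a * b) * np.mean(a ** 2)

def hoc_features(preamble, taus=range(1, 33)):
    """Mean and variance of the cumulant slice over a set of delays (illustrative only)."""
    vals = np.array([c4_slice(preamble, t) for t in taus])
    return np.array([vals.mean(), vals.var()])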


Fig. 3. The architecture of the proposed Auto-ED network.

3.3 Auto-encoder/decoder Deep Neural Network

After extracting the variance and mean of the 1-D slice of fourth-order cumulants from the received preamble samples, we propose an Auto-ED deep neural


network to simultaneously learn deep representations with high discriminability and optimize a deep classifier for gesture recognition. As shown in Fig. 3, the deep architecture of the proposed Auto-ED network consists of an encoder and a decoder. For training purposes, the Encoder utilizes four consecutive Fractionally-Strided Convolution (FS-Conv) layers to automatically map the variance and mean of the 1-D slice of fourth-order cumulants (1 × 510) to a high-dimensional tensor. Then, a sequence of Convolution (Conv) layers follows the Encoder to learn deep representations for classifying the different gestures. Finally, a softmax loss function is applied to calculate the bias between the predicted probability of an input and its ground-truth label. During the training process, the network iteratively executes a forward propagation (calculating the deep representation from input→Encoder→Decoder→softmax loss) and a backward propagation (updating the learnable parameters of the Encoder and the Decoder according to the bias: softmax loss→Decoder→Encoder→input) as different inputs are fed into the network. More details of the Auto-Encoder/Decoder deep neural network we used can be found in [12].
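The following PyTorch sketch illustrates an encoder/decoder classifier in the spirit of Fig. 3. The channel widths, kernel sizes, strides and dropout rate are assumptions chosen for illustration, not the exact configuration of the Auto-ED network, for which [12] should be consulted.

import torch
import torch.nn as nn

class AutoED(nn.Module):
    def __init__(self, num_classes=8):
        super().__init__()
        def fs_conv(c_in, c_out):   # fractionally-strided (transposed) convolution block
            return nn.Sequential(nn.ConvTranspose1d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                                 nn.BatchNorm1d(c_out), nn.ReLU(inplace=True))
        def conv(c_in, c_out):      # ordinary convolution block
            return nn.Sequential(nn.Conv1d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                                 nn.BatchNorm1d(c_out), nn.ReLU(inplace=True))
        # Encoder: map the 1 x 510 feature vector to a higher-dimensional tensor
        self.encoder = nn.Sequential(fs_conv(1, 8), fs_conv(8, 16), fs_conv(16, 32), fs_conv(32, 64))
        # Decoder: convolution layers that learn discriminative representations for 8 gestures
        self.decoder = nn.Sequential(conv(64, 32), conv(32, 16), conv(16, 8), conv(8, 4),
                                     nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                     nn.Dropout(0.5), nn.Linear(4, num_classes))

    def forward(self, x):           # x: (batch, 1, 510)
        return self.decoder(self.encoder(x))

# Training would feed batches of shape (batch, 1, 510) and optimize nn.CrossEntropyLoss(),
# which combines the softmax and the loss computation.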

4 Experimental Results and Discussion

There are eight different gesture classes involved in the data obtained in Sect. 3.1. Each of them includes 2000 consecutive samples, 16,000 samples overall. All samples in the dataset are divided into two categories: training and testing data sets. To form the required training data set, 1400 data samples are selected from each gesture, 11,200 data samples overall. For the testing data set, 600 data samples are selected from each gesture, 4,800 data samples overall. It is worthwhile to mention that there are no overlapping samples between the two sets, which makes the evaluation results more meaningful. Given the original data samples, the variance and mean of the 1-D slice of fourth-order cumulants are extracted (see Sect. 3.2) as inputs of the Auto-ED model. The confusion matrix of this approach is summarized in Table 1. As illustrated, we can observe that the majority of gestures are classified into the correct category. In order to demonstrate the performance improvement obtained by using the Auto-ED neural network, other classical machine learning methods are also used as benchmarks, including the conventional Support Vector Machine (SVM), the Backward Propagation Neural Network (BPNN), and the K-Nearest Neighbor (KNN). The comparison results are shown in Table 2. As can be seen, the presented approach outperforms all the others in gesture recognition, in terms of not only the average accuracy over the eight gestures but also the accuracy of each gesture pair with identical movement in opposite directions.

5 Conclusion

In this paper, we propose a deep learning approach named “Auto-ED” to achieve gesture recognition in an office environment. As a result, the adverse effect of


some gestures that produce similar changes due to nearly identical postures can be mitigated. To demonstrate the feasibility of the proposed approach, we use data that involve four pairs of gestures with identical movement in opposite directions. Experimental results demonstrate that, by using the proposed approach, we achieve the best performance compared with other classical machine learning methods. Therefore, we can conclude that the presented approach is feasible for gesture recognition and, in particular, has a certain robustness in classifying gestures with similar movements.

Table 1. Confusion matrix of the proposed Auto-ED approach (classification rate, %)

Rows and columns are indexed by the gestures (i)-(viii). The diagonal (correct-classification) rates are 95.33, 95.67, 91.67, 92.50, 96.67, 96.17, 92.33 and 93.33% for gestures (i)-(viii), respectively, and every off-diagonal confusion is below 5%.

Table 2. Summary of classification accuracy using different classifiers

Gesture index   Classification rate (%)
                Auto-ED   SVM     BPNN    KNN
(i)             95.33     91.50   90.33   90.67
(ii)            95.67     91.83   91.00   91.17
(iii)           91.67     82.83   81.67   82.33
(iv)            92.50     86.67   85.83   86.17
(v)             96.67     93.17   92.17   92.5
(vi)            96.17     92.83   93.00   93.17
(vii)           92.33     85.83   85.17   84.83
(viii)          93.33     88.83   87.83   88.33
Average         94.21     89.19   88.38   88.65

Acknowledgements. This research is supported by NSFC 61671075, NSFC 61631003 and Beijing Institute of Technology Research Fund Program for Young Scholars.


References

1. Youssef M, Mah M, Agrawala A (2007) Challenges: device-free passive localization for wireless environments. In: Proceedings of the 13th annual ACM international conference on mobile computing and networking. ACM, pp 222–229
2. Zhong Y, Dutkiewicz E, Yang Y, Zhu X, Zhou Z, Jiang T (2017) Internet of mission-critical things: human and animal classification: a device-free sensing approach. IEEE Internet Things J
3. Zhong Y, Yang Y, Zhu X, Dutkiewicz E, Shum KM, Xue Q (2017) An on-chip bandpass filter using a broadside-coupled meander line resonator with a defected-ground structure. IEEE Electron Device Lett 38(5):626–629
4. Pu Q, Gupta S, Gollakota S, Patel S (2013) Whole-home gesture recognition using wireless signals. In: Proceedings of the 19th annual international conference on mobile computing and networking. ACM, pp 27–38
5. Zhong Y, Zhou Z, Jiang T (2015) A novel gesture recognition method by Wi-Fi communication signal based on fourth-order cumulants. In: 2015 IEEE international conference on communication workshop (ICCW). IEEE, pp 2519–2523
6. Huang Y, Sheng H, Zheng Y, Xiong Z (2017) Deepdiff: learning deep difference features on human body parts for person re-identification. Neurocomputing 241:191–203
7. Huang Y, Xu J, Wu Q, Zheng Z, Zhang Z, Zhang J (2019) Multi-pseudo regularized label for generated data in person re-identification. IEEE Trans Image Process 28(3):1391–1403
8. Tan K, Liu H, Zhang J, Zhang Y, Fang J, Voelker GM (2011) Sora: high-performance software radio using general-purpose multi-core processors. Commun ACM 54(1):99–107
9. O'Hara B, Petrick A (2005) IEEE 802.11 handbook: a designer's companion. IEEE Standards Association, Piscataway
10. Zhong Y, Yang Y, Zhu X, Dutkiewicz E, Zhou Z, Jiang T (2017) Device-free sensing for personnel detection in a foliage environment. IEEE Geosci Remote Sens Lett 14(6):921–925
11. Zhong Y, Yang Y, Zhu X, Huang Y, Dutkiewicz E, Zhou Z, Jiang T (2018) Impact of seasonal variations on foliage penetration experiment: a WSN-based device-free sensing approach. IEEE Trans Geosci Remote Sens
12. Huang Y, Zhong Y, Wu Q, Dutkiewicz E, Jiang T (2018) Cost-effective foliage penetration human detection under severe weather conditions based on auto-encoder/decoder neural network. IEEE Internet Things J

Detection of Sleep Apnea Based on Cardiopulmonary Coupling

Haojing Zhang(B), Weidong Gao, and Peizhi Liu

Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
{HaojingZhang,gaoweidong}@bupt.edu.cn, [email protected]

Abstract. Sleep plays an important role in human life activities, and obstructive sleep apnea (OSA) is a very important factor affecting sleep health. Sleep apnea is a common sleep-related respiratory disease. Polysomnography (PSG) is the gold standard for detecting sleep apnea, but PSG is a contact device: there can be a first-night effect, and some people may even be disturbed by long-term incompatibility. This study proposes to extract the cardiopulmonary coupling (CPC) strength from the ballistocardiogram (BCG) signal in order to further improve the accuracy of OSA detection. We extract heart rate and breathing from the BCG signal. Then, the time-domain and frequency-domain features of the heartbeat interval sequence over a fixed length of time are extracted. The coupling strength of the two signals is further analyzed to generate cardiopulmonary coupling features. A classification model is then used to determine whether sleep apnea occurs within a fixed length of time, and the accuracy can be further improved by adding the cardiopulmonary coupling feature.

Keywords: Sleep apnea · BCG signal · Cardiopulmonary coupling feature

1 Introduction

Studies have shown that sleep apnea is a very important factor that seriously affects people's sleep health. Therefore, the detection of sleep apnea syndrome can provide people with more information about their sleep health. OSA refers to respiratory disorders during sleep [1,2]. PSG is the gold standard for sleep apnea testing. However, the user is prone to the first-night effect, and the device may fall off in the middle of the night. The BCG signal is a non-contact signal, which can effectively avoid the above problems. In this study, heart rate signals and respiratory signals were extracted from BCG signals. At the same time, not



only are the time-domain and frequency-domain features extracted from the heart rate signal, but the coupling strength between the heart rate signal and the respiratory signal derived from the BCG is also analyzed, which further improves the accuracy of OSA discrimination, so that OSA can be detected effectively.

2 Feature Extraction

2.1 Heart Rate Variability Feature Extraction

For the extraction of heart rate from the BCG signal, there are currently local-extremum or global spectrum analysis methods. However, such algorithms cannot adapt to individual differences or to differences in physiological signals at different time periods. Here, an adaptive method is used to extract the R-peak signal from the BCG signal. The BCG signal generally includes a J peak, and the J peak contains more energy than the other peaks. However, the presence of other peaks causes interference in the extraction of the J-peak interval, so it is difficult to separate the effective signal directly in the frequency domain. The cyclic change of the pulse energy is consistent with the heart rate in the medical sense, and the noise interference in the frequency band in which the pulse signal is located is small. Therefore, the pulse signal of the BCG signal is extracted first. The collected physiological signal Xorg(t) is filtered using a Butterworth bandpass filter with a passband of 0.7–3.0 Hz to obtain a pulse signal Xpulse(t). The cyclic change of the heartbeat can be roughly obtained by envelope extraction of the pulse signal. We combine statistical principles to perform spectral analysis of the pulse signal and set the parameters of an adaptive narrow-band filter, which is then used to filter the original signal; the heart rate of each signal segment varies within a band of about 0.1–0.3 Hz. The filtering formula is

X_{BCG-1}(t) = X_{org}(t) * H_{2}(t). \qquad (1)

In formula (1), H2(t) represents the adaptive narrow-band filter and XBCG−1(t) is the preliminary BCG signal. After the signal is processed by the adaptive narrow-band filter, the interference of invalid noise can be removed well. However, filtering alone is not sufficient: the heart rate period obtained by filtering is still not accurate enough, since the filtering process often only captures the global period information and loses local information. Next, a multi-layer wavelet transform is performed on Xorg(t), and the result is combined with XBCG−1(t) to locate the peak positions more accurately. The Xorg(t) signal is discretely sampled to obtain X(n), and the wavelet decomposition and reconstruction of the signal can be realized by sub-band filtering [3]. F0(n) and F1(n) are the filter coefficients corresponding to the low-pass filter and the high-pass filter, respectively.
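As a minimal sketch of the fixed band-pass step that produces Xpulse(t), the SciPy snippet below applies a 0.7–3.0 Hz Butterworth filter; the sampling rate and filter order are assumptions for illustration, and the subsequent adaptive narrow-band filter H2(t) would be designed from the spectrum of this pulse signal as described above.

import numpy as np
from scipy.signal import butter, filtfilt

def extract_pulse(x_org, fs=100.0, order=4):
    """Band-pass X_org(t) to 0.7-3.0 Hz to obtain the pulse signal X_pulse(t)."""
    b, a = butter(order, [0.7, 3.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, x_org)   # zero-phase filtering of the raw BCG recording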


The decomposition process of the signal is as follows: the signal X(n) is passed through the low-pass filter and the high-pass filter and then downsampled by 2 to obtain c(n) and d(n), respectively. The formulas for signal decomposition are

c_{j+1}(n) = \sum_{m} c_{j}(m) F_{0}(m - 2n), \qquad (2)

d_{j+1}(n) = \sum_{m} c_{j}(m) F_{1}(m - 2n). \qquad (3)

The signal reconstruction process is as follows: the upsampled signals of c(n) and d(n) are filtered and added, and the reconstructed signal Xrec(n) is output. The formula for signal reconstruction is

c_{j}(n) = \sum_{m} \left[ c_{j+1}(m) F_{0}(n - 2m) + d_{j+1}(m) F_{1}(n - 2m) \right]. \qquad (4)
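A minimal PyWavelets sketch of the sub-band decomposition and reconstruction in (2)–(4) is given below, using the db4 wavelet mentioned later in the text; the decomposition level and the choice of which bands to keep for reconstruction are assumptions for illustration.

import numpy as np
import pywt

def wavelet_reconstruct(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)                 # c(n), d(n) per level, (2)-(3)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]  # keep only the approximation band
    return pywt.waverec(coeffs, wavelet)                           # reconstructed X_rec(n), (4)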

The obtained reconstructed signal Xrec(n) is combined with the initially obtained BCG signal XBCG−1(t) to accurately locate the position of the heart-rate J peak, which is equivalent to obtaining the position of the R peak. Heart rate variability (HRV) is then calculated from the heart rate. The adaptive heart rate extraction algorithm greatly enhances the adaptability of the method, and it is very sensitive to the change of heart rate in different frequency bands for different individuals. Taking a 20-minute segment as an example, the average person's heart rate during sleep changes by no more than about 24 bpm over 20 minutes, so in a short period of time the range of heart-rate frequency variation is very narrow, about 0.2–0.4 Hz. If the filtering accuracy is not sufficient and the band is enlarged, a large amount of noise will be introduced, which will interfere with the extraction of the R peaks. The wavelet transform is based on the db4 wavelet, whose waveform is similar to the standard ballistocardiogram. According to current medical research, heart rate variability is strongly correlated with sleep apnea. HRV refers to the small differences between successive heartbeat intervals. It is caused by the autonomic nervous system's modulation of sinus node automaticity, and is usually a difference or fluctuation of tens of milliseconds in the heartbeat interval. HRV reflects the tension and balance of cardiac sympathetic and vagal activity. The high-frequency component of the HRV power spectrum can be used as a quantitative indicator of the level of cardiac vagal modulation activity, while the low-frequency component increases with the enhancement of sympathetic activity. LF/HF can be used as a quantitative index to evaluate the cardiac vagal-sympathetic balance [4, 5]. The steps to extract the time-domain and frequency-domain features of the heartbeat interval sequence over a fixed length of time are as follows. First, we extract features in the time domain, including the mean value, variance, maximum, minimum, standard deviation of normal-to-normal intervals (SDNN), standard deviation of RR intervals, and so on.

SDNN = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(RR_{i} - \overline{RR}\right)^{2}}. \qquad (5)

Then, a fast Fourier transform is performed on the HRV to obtain a heart rate power spectrum. After that, feature extraction of each frequency band is performed by wavelet transform.

Table 1. Frequency-domain features

Frequency-domain feature                 Meaning
Total power (TP)                         Sum of 0–0.4 Hz power
Very low frequency (vLF)                 Sum of 0–0.04 Hz power
Low frequency (LF)                       Sum of 0.04–0.15 Hz power
High frequency (HF)                      Sum of 0.15–0.4 Hz power
Normalized low frequency power (nLF)     LF/(TP − vLF)
Normalized high frequency power (nHF)    HF/(TP − vLF)
Ratio of LF to HF (LF/HF)                LF/HF

The high frequency (HF) energy, low frequency (LF) energy, very low frequency (VLF) energy, total power (TP), LF/HF (MF) and peak rate of each segment can be obtained by power spectrum estimation (Fig. 1 and Table 1).

Fig. 1. Partial frequency-domain features.
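For illustration, the sketch below computes the SDNN of Eq. (5) and the band powers of Table 1 from an R-R interval series. The use of Welch's method, the resampling rate of the R-R series and the spectral parameters are assumptions, not necessarily the estimator used in this work.

import numpy as np
from scipy.signal import welch

def sdnn(rr):
    return float(np.sqrt(np.mean((rr - rr.mean()) ** 2)))      # Eq. (5)

def band_powers(rr_resampled, fs=4.0):
    f, pxx = welch(rr_resampled, fs=fs, nperseg=min(256, len(rr_resampled)))
    def power(lo, hi):
        m = (f >= lo) & (f < hi)
        return float(np.trapz(pxx[m], f[m]))
    tp, vlf = power(0.0, 0.4), power(0.0, 0.04)
    lf, hf = power(0.04, 0.15), power(0.15, 0.4)
    return {"TP": tp, "vLF": vlf, "LF": lf, "HF": hf,
            "nLF": lf / (tp - vlf), "nHF": hf / (tp - vlf), "LF/HF": lf / hf}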

2.2 Feature Extraction of Cardiopulmonary Coupling

Cardiopulmonary coupling analysis evaluates the coupling strength between heart rate and respiration. The correlation between the R-R interval series and the respiratory signal is analyzed by Fourier transform. The R-R interval series has been extracted above, and the respiratory signal is extracted next. The respiratory signal is the envelope information of the stabilized BCG signal: after the body motion signal is removed, the BCG signal energy is concentrated in the respiratory band. Spectrum analysis can be used to estimate the frequency range of the respiratory signal; after filtering, it is compared with the original BCG signal from which the body motion signal has been removed, and finally the respiratory signal is obtained [6, 7]. After obtaining heart rate and respiration from the BCG signal, the coupling degree of the R-R interval and respiration is analyzed from two aspects [8]. One is to calculate the product of the power of the two signals at a given frequency: if both signals have large vibration amplitudes at that frequency, it is likely that they are coupled to each other, and the product of their powers gives the cross-spectral power at that frequency. The second is to calculate the coherence of the two signals. If RXX(ω) and RYY(ω) are the auto-power spectra of the R-R interval and the respiration, respectively, and Rxy(ω) is their cross-power spectrum, then the coherence coefficient is

C_{xy} = |R_{xy}(\omega)|^{2} / \left[ R_{XX}(\omega) R_{YY}(\omega) \right]. \qquad (6)

Then the coupling degree is quantified by the product of the coherence and the cross-spectral power. A fast Fourier transform is used here. The cross-power spectrum and coherence between the R-R interval series and the respiratory signal are calculated over a fixed time window. Excessive power in the low-frequency band is associated with periodic breathing during sleep apnea [9], while excessive power in the high-frequency band is associated with physiological respiratory sinus arrhythmia and deep sleep [10]. Therefore, the coupling strengths of the low-frequency band (0.01–0.1 Hz) and the high-frequency band (0.1–0.4 Hz) are selected, respectively.
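A minimal SciPy sketch of this coupling measure is shown below: the cross power spectrum is weighted by the coherence of Eq. (6) and summed over the low (0.01–0.1 Hz) and high (0.1–0.4 Hz) bands. The common sampling rate of the resampled R-R and respiration series and the spectral parameters are assumptions for illustration.

import numpy as np
from scipy.signal import csd, coherence

def cpc_strength(rr, resp, fs=4.0):
    nper = min(256, len(rr))
    f, pxy = csd(rr, resp, fs=fs, nperseg=nper)          # cross power spectrum R_xy
    _, cxy = coherence(rr, resp, fs=fs, nperseg=nper)    # coherence of Eq. (6)
    coupling = np.abs(pxy) * cxy                         # coherent cross power per frequency
    low = float(coupling[(f >= 0.01) & (f < 0.1)].sum())
    high = float(coupling[(f >= 0.1) & (f <= 0.4)].sum())
    return low, high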

3 OSA Classification Model

In order to determine whether sleep apnea occurs, we can regard it as a two-category (binary) classification problem, that is, sleep apnea versus the normal sleep state. Commonly used classifier models are Logistic Regression, the Support Vector Machine, the Gradient Boosting Decision Tree, etc. Logistic regression is a linear classification model. The support vector machine constructs a plane separating sleep apnea from non-sleep apnea through support vectors [11]. The gradient


boosting tree is not sensitive to noisy data and its performance is good, but its complexity is high [12]. On the basis of a single model, the classification effect can be improved through model integration. There are many strategies for model integration; the most common methods are voting, averaging, bagging, boosting, and stacking. This article uses the method of model fusion (Fig. 2).

Fig. 2. Model fusion process figure.

The specific steps of the model fusion classification algorithm are as follows (a sketch is given after this list):
(1) The training set is divided into K parts, and the training process requires K models.
(2) In the training phase, for each of the K models, K − 1 parts of the data are selected as its training set and the remaining part is its validation set, and the validation set of each model is specified to be different. After the model parameters have been tuned on its training set, the validation set is fed in and its predicted output is taken as the new feature of the i-th part. In the test phase, the mean of the predictions of the K models is directly used as a new feature.
(3) In this way, all K parts of the training set are trained, and the K models are also trained.
(4) The features of the test set are input into the model and the final prediction is obtained through a voting mechanism.
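The scikit-learn sketch below follows steps (1)–(4) above; the particular base learners and K = 5 are assumptions for illustration and not necessarily the configuration used in this work.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

def fuse(X_train, y_train, X_test, k=5):
    bases = [LogisticRegression(max_iter=1000), SVC(), GradientBoostingClassifier()]
    new_train_features, test_votes = [], []
    for model in bases:
        # steps (1)-(3): K-fold out-of-fold predictions become the new feature of each part
        new_train_features.append(cross_val_predict(model, X_train, y_train, cv=k))
        # test phase: refit on the whole training set and predict the test samples
        test_votes.append(model.fit(X_train, y_train).predict(X_test))
    votes = np.column_stack(test_votes)
    # step (4): final prediction on the test set by majority voting
    y_pred = (votes.sum(axis=1) * 2 > votes.shape[1]).astype(int)
    return y_pred, np.column_stack(new_train_features)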

4 Result Analysis

From Fig. 3, we can see that adding the coupling features of the R-R interval and the respiratory signal can effectively improve the accuracy while the precision rate and the recall rate are not significantly decreased, and the accuracy rate is effectively increased by 6.1 percentage points at 80 s.


Fig. 3. Comparison of precision, recall and accuracy results.

Fig. 4. Sleep apnea test results at different time intervals when coupling features are not included (left) and included (right).

As can be seen from Fig. 4, the accuracy rate is highest at 20 s, but the recall rate and precision rate are too low; in this case, high accuracy is meaningless. The recall rate and precision rate at 100 s are high, but the accuracy rate drops a lot. The values of the precision rate, the recall rate and the accuracy rate are most reasonable at 80 s. It can also be seen from Fig. 4 that, regardless of whether or not the coupling features of the R-R interval and the respiratory signal are added, a time length of 80 s is the most reasonable choice: a good accuracy rate is obtained while the precision rate and the recall rate are relatively high. We can also observe that the precision, recall and accuracy at 100 s with the coupling feature added are not lower than those at 80 s without the coupling feature.


We also compared with other sleep apnea detection studies [13]: electrocardiogram (ECG) signals can give more accurate heart rate and respiratory signals, and the accuracy is correspondingly better. However, the ECG is a contact signal whereas the BCG is a non-contact signal, so BCG-based detection can be performed without affecting people's normal rest, and at the same time we have achieved comparable accuracy through the BCG. We only use HRV features and CPC features here, and features such as the respiratory signal can be added in later studies to further improve the accuracy.

5 Conclusion

This article is based on a non-contact sleep monitoring system for the detection of sleep apnea syndrome. The physiological signals of the subject during sleep are extracted without disturbing the subject's rest, and features related to sleep apnea are extracted over a fixed length of time to determine whether sleep apnea occurs. When the time length is 80 s and the coupling feature is not added, the accuracy is 84% with a precision rate of 70.5% and a recall rate of 71.9%, so whether sleep apnea has occurred can be detected effectively. When the time length is 80 s and the coupling feature is added, the precision rate is 74.3%, the recall rate is 75.1%, and the accuracy reaches 90.1%, which effectively improves the accuracy rate by 6.1 percentage points. The results show that the cardiopulmonary coupling features extracted from the BCG effectively improve the accuracy of OSA detection. At present, the BCG-based method of detecting sleep apnea uses only the features obtained from the two signals of heartbeat and respiration, and does not make full use of blood oxygen and snoring information. If blood oxygen information and snoring signals are added, more features can be extracted to depict the changes during sleep apnea, the characteristics of sleep apnea syndrome can be analyzed from more dimensions, and better analysis results will be obtained.

Acknowledgements. This work is supported by National Key R&D Program of China under grant: No. SQ2018YFC200148-03.

References

1. Ali NJ, Pitson DJ, Stradling JR (1993) Snoring, sleep disturbance, and behaviour in 4–5 year olds. Arch Dis Child 68(3):360–366
2. Gislason T, Benediktsdottir B (1995) Snoring, apneic episodes, and nocturnal hypoxemia among children 6 months to 6 years old: an epidemiologic study of lower limit of prevalence. Chest 107(4):963–966
3. Mallat S (1999) A wavelet tour of signal processing. Elsevier, Amsterdam
4. Gula LJ, Krahn AD, Skanes A et al (2003) Heart rate variability in obstructive sleep apnea: a prospective study and frequency domain analysis. Ann Noninvasive Electrocardiol 8(2):144–149
5. Pincus SM (1991) Approximate entropy as a measure of system complexity. Proc Natl Acad Sci 88(6):2297–2301
6. Moody GB, Mark RG, Zoccola A et al (1985) Derivation of respiratory signals from multi-lead ECGs. Comput Cardiol 12(1985):113–116
7. Moody GB, Mark RG, Bump MA et al (1986) Clinical validation of the ECG-derived respiration (EDR) technique. Comput Cardiol 13(1):507–510
8. Whipp BJ, Ward SA (1982) Cardiopulmonary coupling during exercise. J Exp Biol 100(1):175–193
9. Shiomi T, Guilleminault C, Sasanabe R et al (1996) Augmented very low frequency component of heart rate variability during obstructive sleep apnea. Sleep 19(5):370–377
10. Penzel T, Kantelhardt JW, Lo CC et al (2003) Dynamics of heart rate and sleep stages in normals and patients with sleep apnea. Neuropsychopharmacology 28(S1):S48
11. Leslie C, Eskin E, Noble WS (2002) The spectrum kernel: a string kernel for SVM protein classification. Biocomputing 2001:564–575
12. Chen T, Guestrin C (2016) Xgboost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. ACM, pp 785–794
13. Chen L, Zhang X, Song C (2014) An automatic screening approach for obstructive sleep apnea diagnosis based on single-lead electrocardiogram. IEEE Trans Autom Sci Eng 12(1):106–115

Study on a Space-Air-Ground Integrated Data Link Networks Architecture

Jia Guo1(&), Shasha Zhang1, Fan Lu1, Jingshuang Cheng1, Yuanqing Zhao1, and Nuo Xu2

1 Institute of Spacecraft System Engineering, China Academy of Space Technology, Beijing, China
[email protected]
2 Institute of Telecommunication Satellite, China Academy of Space Technology, Beijing, China

Abstract. A networked data link system that connects space, air and ground nodes through various communication links can enable efficient information sharing. This paper studies an architecture for space-air-ground integrated data link networks, including the system architecture, information flow process and protocol structure, which is conducive to data link system design.

Keywords: Space information networks · Data link · Architecture

1 Introduction

The data link realizes information transmission and sharing in the coverage area through the interconnection of combat units such as detection platforms, strike platforms, and command systems, so as to support the cooperative operation of the nodes. With the development of information technology and space technology, the expansion of operational scope and the emergence of new combat modes have placed new demands on data link technology [1, 2]. The current situational awareness, intelligence reconnaissance and coordination data link technologies have their own characteristics, but there are still problems such as limited application scope and limited heterogeneous node adaptability [3, 4]. This paper proposes a space-air-ground integrated data link system architecture, which makes full use of the wide-area coverage characteristics of space-based satellite platforms: air-based and ground-based data links extend to wide-area networks through space-based platforms, and the space-based network extends to the air-based and ground-based platform layers. The system can expand the scope of application and improve collaboration capabilities. The remainder of the paper is structured as follows. Section 2 presents the space-air-ground integrated data link network system architecture. The analysis of the information flow process is presented in Sect. 3. Section 4 introduces the protocol structures of the system. We conclude in Sect. 5.



2 System Architecture

With the satellite constellation as the wide-area coverage medium, the system forms network connections through Ground-Satellite Links (GSLs) and Inter-Satellite Links (ISLs), and integrates space-based and ground-based operation and command platforms and satellites as network nodes into the system. The entire process of information acquisition, transmission and processing is networked, and various types of information are shared among the nodes in the network to support applications such as collaborative detection and rapid response.

2.1 System Composition

The system can be divided into a space-based network, an air-based network and a ground-based network according to the location level of the network nodes. According to the data link system application, it can be divided into the data link network, the Low Earth Orbit (LEO) constellation network, the space backbone network and the ground backbone network. Each LEO satellite and its covered data link nodes form a data link network autonomous system (AS), as shown in Fig. 1.

Fig. 1. System architecture block diagram


In order to ensure the low-latency transmission of time-sensitive information, the system is equipped with a globally covering LEO constellation network. The LEO constellation satellites and the ground data link nodes are connected through the satellite data link communication links, and the control process of the link connection does not need to be planned in advance. A self-organizing, distributed network structure with autonomous features is adopted: dynamic node discovery, link establishment, link switching, dynamic node logout, etc. are completed independently by the satellites carrying the data link terminals and the ground data link nodes. Node relationships are peer-to-peer and data is shared within the data link AS. The LEO satellites distribute the received data link information through the ISLs to the data link ASs connected to other LEO satellite nodes, and can also transmit the information back to the ground backbone network through the space backbone network (commonly consisting of Geostationary Earth Orbit satellites) for distribution, thereby extending the data link transmission distance and coverage. With IP technology as the basic framework and protocol, each layer of the network is connected by satellite-satellite, satellite-ground, satellite-air, air-air, air-ground, and ground wired and wireless links. The LEO satellites are responsible for constellation access, real-time processing, routing and forwarding, and protocol conversion for user terminals in different regions of the earth. The air-based network and the ground-based network are mainly composed of various types of user terminals, including airborne, shipboard, and vehicle-mounted terminals. Through the data link application terminals, communication, access, and switching with the LEO satellites can be realized, and through the processing and forwarding of the LEO satellite network, real-time communications among the air and ground terminals can be realized.

2.2 System Functionality

In order to meet the application requirements, the system should have the following functionalities.
• The satellites support a data link user registration and authentication mechanism, realizing data link user identity authentication, identification information confirmation, location management, etc.
• The satellites communicate with the ground data link nodes and the ground station through the GSLs, and provide the data link AS routing function.
• Protocol conversion between GSLs and ISLs.
• Reception of satellite navigation and positioning signals.
• The data link terminals support dynamic user access and handover.
• Support for service level guarantee based on prioritization.
• The on-board data link processing payload provides user access, real-time data processing, and protocol conversion functions; the constellation routing and forwarding function is implemented by the on-board router.
• The ground and air data link terminals have the capability to access the satellites with omnidirectional and directional communication links, have the function of converting various data link protocols into the satellite data link protocol, and support the self-organized and distributed network structure.


3 Information Flow Process

As shown in Fig. 2, the typical information flow process of the system can be described as follows.
• The ground or air data link node enters the coverage of the satellite onboard omnidirectional antenna and transmits Hello information and location information.
• The satellite discovers neighbor nodes, sends identity authentication information and location information, and establishes neighbor links through directional antennas. Ad hoc routing protocol information is then broadcast rapidly over the directional transmission links.
• The satellite receives the location information of other nodes from its neighbor nodes, broadcasts all location information to the other satellites in the constellation, and returns the network registration of all data link nodes to the ground control center through the domestic satellite.
• The ground control center comprehensively judges the cooperative relationship of each data link AS according to the registration status of the access nodes, generates routing tables and data link communication destination satellite address tables in

Fig. 2. System application information flow process diagram


real time, and uploads them to the constellation; or the satellites generate routing tables and destination satellite address tables based on the rules developed by the control center. The destination address of the data link information transmission can be specified by the ground; the default address is the ground control center.
• The data link user node can also apply for service through the destination satellite. The constellation network responds to the access satellite according to the registration status of the networked nodes.
• When a data link user node A is connected to a node B in the same AS, the information transmitted by user A is converted by node B and transmitted to the access satellite through the GSLs. The access satellite completes the protocol conversion from GSLs to ISLs and the route matching, transmitting the information to the destination satellite via ISLs. The destination satellite completes the protocol conversion from ISLs to GSLs, sends the information through the GSLs to the data link AS connected to it, and distributes it to each node in the network. Each node converts the information to the destination user node's format according to its external connections.
• In the case that the delay requirement is not high, the information can be forwarded via the LEO-to-GEO ISLs through the space-based backbone network to the ground gateway station, and then transmitted via the ground-based backbone network to the registered ground systems equipped with satellite data link terminals. The ground system distributes the information to the data link AS to which it is connected.
• If a satellite has not received an updated routing table for a long time, it switches to network broadcast mode, that is, after the access satellite receives the uplink information, it broadcasts the information to the satellites of the entire constellation, and then the satellites distribute the information within their coverage areas.
• When the ground data link node leaves the coverage of its access satellite, it can no longer send or receive network information; the original network cannot update its location information and disconnects it after a timeout, and the ground data link node switches to another access satellite.

In addition, for application scenarios in which multiple data link ASs exist within the coverage of a single satellite, the access satellite can also manage the information of the covered user nodes, and process and forward the information received by the data link payload through the on-board network, so as to achieve single-satellite-based cross-domain information sharing.

4 Protocol Structure

In order to ensure efficient exchange of data link information among the nodes in the network, a layered structure needs to be adopted. Each layer performs specific functionalities, which reduces the coupling of protocol processing. The protocol structure, the functionality models of each layer, the data formats, the inter-layer services and interface

Fig. 3. Typical protocol structure—user node to access satellite

Fig. 4. Typical protocol structure—access satellite to relay satellite


relationships, and network protocol interoperability should be designed to ensure openness, scalability, and connectivity with external nodes (including terrestrial networks). From the requirement of ensuring the bandwidth and real-time performance of various types of service information, it is necessary to ensure efficient exchange and to consider technology evolution and compatibility. Referring to CCSDS and the terrestrial TCP/IP protocols, the typical protocol configurations when user nodes communicate with satellites are shown in Figs. 3 and 4. Figure 3 shows the typical protocol structure from a user node to its access satellite, and Fig. 4 shows the typical protocol structure from the access satellite to a relay satellite; the protocol structure from the relay satellite to the destination user node is similar and is not repeated here. First of all, considering compatibility with the future space information network, the network layer should support the IP protocols to ensure that data link data can be processed uniformly at all types of nodes in space, in the air and on the ground, and that data can be exchanged efficiently and reliably. Secondly, the design draws on the existing CCSDS space communication protocol reference model [5] and remains compatible with the current common protocols of satellite systems, to ensure that data can be exchanged and transmitted within a satellite system and between satellite systems. Thirdly, for non-IP private data link protocols, the dedicated user message format should be converted to the IP protocol format by a gateway at the ground user node, and an efficient protocol conversion capability is added on the basis of retaining the existing protocols to ensure backward compatibility.
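Purely as an illustration of the layered conversion implied by Figs. 3 and 4 (not a definition of the actual frame formats), the sketch below shows how an IP packet carried in a GSL frame could be re-encapsulated by the access satellite into the IPoC/ENCAP/AOS stack used on the ISLs; all names and fields are assumptions.

from dataclasses import dataclass

@dataclass
class Packet:
    layers: tuple        # encapsulation layers, innermost first
    payload: bytes       # the user data-link message

def gsl_encapsulate(user_message: bytes) -> Packet:
    # user node / satellite station side: user frame carried in IP over the GSL
    return Packet(("user frame", "IP", "GSL frame"), user_message)

def gsl_to_isl(pkt: Packet) -> Packet:
    # access satellite: strip the GSL framing, keep the IP packet, re-wrap it for the ISL
    assert pkt.layers[-1] == "GSL frame"
    return Packet(("user frame", "IP", "IPoC", "ENCAP", "AOS frame"), pkt.payload)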

5 Conclusion

This paper proposes an integrated networked system architecture for space-air-ground data link networking, and analyzes the system architecture, information flow process, protocol structure, etc., providing a design reference for networked data link systems. With the advancement of the integration of space, air and ground networks, the efficient management and autonomous control of the data link network need further study, for which IoT (Internet of Things) technology could serve as a technical reference.

References

1. Schug T, Dee C et al (2011) Air force aerial layer networking transformation initiatives. In: IEEE military communications conference, pp 1974–1978
2. David F, Bharat D et al (2006) Military satellite communications: space-based communications for the global information grid. Johns Hopkins APL Tech Digest (Appl Phys Lab) 27:32–40
3. Ratna W et al (2013) Enhancements for the broadband satellite network architecture. In: 31st AIAA international communications satellite systems conference, pp 1–8
4. Steven P (2007) A routing architecture for the airborne network. In: IEEE military communications conference, pp 1–7
5. CCSDS Secretariat (2014) CCSDS 130.0-G-3 Overview of space communications protocols. CCSDS, pp 18–20

Similar Cluster Based Continuous Bag-of-Words for Word Vector Training

Weikai Sun(&), Yinghua Ma, Shenghong Li, and Shiyi Zhang

School of Cyber Space Security, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240, China
[email protected]

Abstract. With the increasing use of natural language processing, how to build word vectors that contain more semantic information has become a top priority. A word vector is used to represent the most basic unit in language, the word, and is the basis of neural natural language processing models. Therefore, the quality of word vectors directly affects the performance of various applications. In the continuous bag-of-words model, limited by their frequency of occurrence, some words do not get enough training. At the same time, because of the minimum-frequency setting, some low-frequency words are ignored by the model. In this paper, we build similar clusters from a semantic dictionary and integrate them into the CBOW model with the help of a multi-classifier. We improve the word vectors and use them to complete a semantic similarity comparison task. Compared with the original word vectors built by CBOW, the proposed method achieves higher accuracy, which shows that some semantic information is integrated and the word vectors of low-frequency words are improved.

Keywords: Natural language processing · Word vector · Similar cluster · Continuous bag-of-words

1 Introduction

Natural language processing (NLP) is a subfield of artificial intelligence and linguistics. Since neural network architectures [1] became widely applied in machine learning, many NLP models have been proposed in succession. Bengio proposed the neural network language model in 2003 [2]. In 2013, T. Mikolov proposed two new neural network language models: the continuous bag-of-words model and the skip-gram model [3, 4]. These two models, which first used words from both the past and the future to describe the semantic information of the key word, are also known as Word2Vec. The word vector, which expresses the relationships between words in the form of a vector, is a fundamental component of NLP [5]. Therefore, how to build better word vectors that carry more semantic information has attracted the interest of researchers. In 2017, Mikolov proposed a new approach based on the skip-gram model, in which each word is seen as a bag of character n-grams [6]. Compared with previous neural network language models, word vectors built by Word2Vec better capture the interrelationships of words. However, due to the fact that the distribution of words is extremely uneven, especially in small text datasets, those


low-frequency words which appear only several times in the whole text are not trained well because of insufficient training times and single context matches. Therefore, we consider that we build the relationship between those low-frequency words and other words. Motivated by this, we try to utilize the similarity between low-frequency words and high-frequency words from the semantic dictionary. Through multi-classifier, we add the similarity to the CBOW model and expect word vectors could ‘remember’ the similarity. The contributions of this paper can be summarized as follows: 1. We proposed a new word vector training method which adds similar cluster to CBOW model. 2. We built a relationship between high-frequency vocabulary and low-frequency vocabulary by softmax regression to combine their word vectors. 3. We applied our word vectors into the task of text similarity comparison and achieve some improvements. The remainder of this paper is organized as follows. Section 2 gives a summary of the background which have motivated our work of this paper. Section 3 presents the methods of similar cluster based CBOW. And the experiments and result analysis have been conducted to demonstrate the effectiveness of our method in Sect. 4. Finally, Sect. 5 makes a conclusion.

2 Related Work
2.1 Continuous Bag-of-Words

Continuous Bag-of-Words (CBOW), first proposed by Mikolov in 2013, can be used to learn high-quality word vectors from huge datasets containing large quantities of words. Compared with previous models, CBOW permits richer input, since it also uses words from the future. Mikolov provides two ways to update the vectors: hierarchical softmax and negative sampling. In this paper only CBOW with hierarchical softmax is introduced. It consists of three parts: the input layer, the projection layer, and the output layer.
(1) Input layer. The input layer consists of the vectors of the context of the key word. At the beginning, all words get their initial vectors randomly. Every word is placed in a sliding window.
(2) Projection layer. The main function of the projection layer is to sum all the vectors of the context. The sum, used as the input vector of the key word, contains the contextual information of the key word.
(3) Output layer. The essence of the output layer is hierarchical softmax, which is organized as a binary tree with three kinds of nodes: the root node, branch nodes, and leaf nodes. There is a parameter vector at the root node and at each branch node, and all the words in the vocabulary are placed on the leaf nodes. During the training procedure, the input vector is multiplied by a parameter vector to get a probability, which


represents the probability of choosing the left branch. Repeating this step by step defines a walk from the root node to a leaf node. More precisely, the binary tree searches for the key word $x$ and determines a path from the root node to the corresponding leaf node. The $n$-dimensional input vector $X_x$ is multiplied by the $n$-dimensional parameter vector $\theta^x_j$ to obtain a probability, which is compared with the code $d^x_j$ of the current path node. The difference between them is used to update the parameter vector $\theta^x_{j-1}$ and the input vector $X_x$.
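As a rough illustration of this root-to-leaf walk, the sketch below scores one key word along its Huffman path and applies the update step described above. It is a minimal sketch with randomly initialized vectors; the names `path_thetas` and `path_codes` are our own placeholders, not notation from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_dim = 50                      # word-vector dimension n
alpha = 0.025                   # learning rate

# Summed context vector X_x (projection-layer output) and the parameter
# vectors theta_j stored at the internal nodes on the key word's path.
X_x = rng.normal(scale=0.1, size=n_dim)
path_thetas = rng.normal(scale=0.1, size=(3, n_dim))   # 3 internal nodes
path_codes = np.array([0, 1, 0])                       # Huffman code bits d_j

grad_for_context = np.zeros(n_dim)
for theta, d_j in zip(path_thetas, path_codes):
    p_left = sigmoid(X_x @ theta)        # probability of taking the "left" branch
    g = alpha * (1 - d_j - p_left)       # mismatch between prediction and code bit
    grad_for_context += g * theta        # accumulated update for the context vectors
    theta += g * X_x                     # update the node's parameter vector in place

# Each context word vector would then be shifted by grad_for_context.
print(grad_for_context[:5])
```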

2.2 Softmax Regression

Softmax regression is a typical classification algorithm; it is a generalization of logistic regression to the case where we want to perform multi-class classification [7, 8]. In logistic regression the labels are assumed to be binary, whereas in softmax regression they can take multiple values. As a supervised learning model, its input consists of a feature vector x and a verification (label) vector y. It continuously updates the weight matrix W by comparing the prediction y′ with y, and after several iterations the weight matrix W tends to converge.
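A minimal numpy sketch of one softmax-regression update, assuming a one-hot verification vector y and a plain gradient-descent step (all variable names are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

n_features, n_classes = 50, 10
W = rng.normal(scale=0.1, size=(n_classes, n_features))   # weight matrix W
b = np.zeros(n_classes)                                    # bias
alpha = 0.05                                               # learning rate

x = rng.normal(size=n_features)        # feature vector
y_true = np.zeros(n_classes)
y_true[3] = 1.0                        # one-hot verification vector y

logits = W @ x + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()                   # softmax normalization

err = probs - y_true                   # gradient of cross-entropy w.r.t. logits
W -= alpha * np.outer(err, x)          # update weight matrix
b -= alpha * err

print(float(probs[3]))
```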

3 Proposed Method Before training, we need to divide the sentences in the datasets into words. As mentioned above, we modify the word vectors by building similarity relationships between low-frequency words and high-frequency words. We choose a semantic dictionary, Chinese WordNet [9], as the basis for these relationships. Chinese WordNet is a vocabulary network that expresses the relationships between Chinese words, including synonyms, antonyms, hypernyms, and hyponyms. Based on the training text, the low-frequency words are divided into several categories according to the synonym table, and a high-frequency word is added to each category; each such category is a similar cluster. Before training, we set several parameters. C is the size of the text window, which determines the size of the context. V(x) is the n-dimensional vector of word x, randomly initialized before training; the branch-node parameter vectors θ are initialized in the same way. Besides, d is the Huffman code of a word, constructed according to word frequency. The dimension n, the learning rate α, the weight matrix W, and the number of epochs are also required. The front end of the model is similar to CBOW: a sliding window is used to choose the key word and its context, and the context vectors are summed as the input of the key word. In the projection layer of the model, the difference is that we add a bypass so that the word vectors can absorb the similar clusters. We consider that, ignoring polysemy, if similar words appear in different sentences, the semantics they express will be consistent to some degree. When a word belonging to a similar cluster appears in the sliding window, the input is additionally sent to the bypass. A multi-classifier is placed in the bypass, and word vectors are classified according to the check matrix Y. In the original softmax regression, the weight matrix W is updated constantly to reach the most suitable value, while the input, which usually consists of fixed features, is not modified. In this model, we need to update the word vectors while modifying the


weight matrix. So we distribute the gradient of the loss to the vector of each word in the context:

$$V(\mathrm{Context}(x)) \mathrel{+}= k_1 \cdot \alpha \cdot \frac{e^{y}}{\sum_i e^{y_i}} \cdot W^T \cdot W_j$$

In this way, while updating the weight matrix W, we constantly adjust the word vectors of the context so that the context word vectors of key words belonging to the same similar cluster tend to be consistent. At the same time, the update of the word vectors affects the parameters in the trunk and finally changes the whole system. We expect the word vectors to be constrained not only by the context but also by the similarity relationships (Fig. 1).

Fig. 1. Similar cluster based CBOW

In addition, both the softmax regression and the hierarchical softmax update their parameters according to their own losses. We combine their loss gradients to connect the bypass and the trunk and thereby reinforce the similarity relationships, which gives the following formula:


$$r_1 = \left(1 - d^x_i - \mathrm{sigmoid}\!\left(X_x^T \cdot \theta^x_i\right)\right)\cdot(1 + r_2)\cdot\alpha$$

We considered combining the two gradients in an additive way at the beginning, but we found that this makes the gradient change drastically. The parameters and word vectors are continually updated until the iterations are completed. Through the joint influence of the two models on the parameters, we get the final word vectors.

Algorithm 1 Similar cluster based CBOW
Require: S: similar cluster; V(x): the word vector of word x; W: weight matrix; $d^x$: the Huffman code of word x; c: the length of the sliding window; $\theta^x_j$: the parameter vector of the j-th path node of word x; b: bias matrix; Y: the check matrix
1: Initialize V(x), W, $d^x$, c, $\theta^x_j$
2: repeat
3: Choose 2c + 1 continuous words from the text; the middle word becomes the key word, the others become the context.
4: Sum the word vectors of the context; the result $X_x$ is treated as the input of the key word.
5: if the key word is in S:
   Get y as follows: $y = W \cdot X_x + b$
   Perform the softmax operation and normalize the vector y:
   $$r_2 = \begin{cases} \dfrac{e^{y_i}}{\sum e^{y_i}} - 1, & Y_i = 1 \\[4pt] \dfrac{e^{y_i}}{\sum e^{y_i}}, & Y_i = 0 \end{cases}$$
   Update the weight matrix W and V(Context(x)) as follows:
   $$W \mathrel{-}= \alpha \cdot (X_x \cdot r_2) \cdot (1 + r_1)$$
   $$V(\mathrm{Context}(x)) \mathrel{+}= k_1 \cdot \alpha \cdot \frac{e^{y}}{\sum e^{y_i}} \cdot W^T \cdot W_j$$
6: repeat
7: Find a path according to the Huffman code $d^x$.
8: Multiply $\theta^x_j$ by $X_x$ and compare the result with the Huffman code $d^x_j$.
9: Update the parameter vector $\theta^x_j$ as follows:
   $$\theta^x_j \mathrel{+}= \alpha \cdot \left(1 - d^x_i - \mathrm{sigmoid}\!\left(X_x^T \cdot \theta^x_i\right)\right)\cdot(1 + r_2)\cdot X_x$$
10: Save the gradient $r_1$ to update the word vectors:
   $$r_1 = 1 - d^x_i - \mathrm{sigmoid}\!\left(X_x^T \cdot \theta^x_i\right)$$
11: until the leaf node is reached
12: Update V(Context(x)) as follows:
   $$V(\mathrm{Context}(x)) \mathrel{+}= k_2 \cdot \alpha \cdot \sum_{i=1}^{\mathrm{len}(P^x_d)-1}\left(1 - d^x_i - \mathrm{sigmoid}\!\left(X_x^T \cdot \theta^x_i\right)\right)\cdot(1 + r_2)\cdot X_x$$
13: until the iterations are completed
Output: V(x) as the final word vector.


4 Results
4.1 Dataset

We collected a 100 M Chinese corpus to verify our method. The corpus is composed of 126 Chinese novels, containing about 504,000 different sentences. The novels are classified into 63 groups, and novels in the same group are translated from the same English novel. We reorganized two of the novels with some alignment algorithms so that their semantically identical sentences are matched one by one. In this way, we can use the corpus to perform semantic similarity comparison experiments to test the quality of the word vectors we obtain. In addition, we take 10,000 pairs of synonyms from Chinese WordNet and choose dozens of words as the similar cluster for each pair, ensuring that 70% of the sentences are related to the words in the similar clusters.

4.2 Result of Word Vectors

We compare our method with the traditional CBOW. The same initialization conditions are used to reduce other interference. We choose some word vectors from the results and measure their similarity by the cosine distance of the two vectors; the results are shown in Table 1, where the frequency of each word is given in parentheses next to it. It is evident that the word vectors of words belonging to the same similar cluster become closer than the word vectors produced by traditional CBOW. The word vectors of words with very low frequency often do not express their semantics well, but we alleviate this problem by establishing connections with high-frequency vocabulary. Although only some words pass through the bypass, all word vectors change because of the sliding window: words adjacent to a word in a similar cluster obtain closer word vectors, and other words move closer to or further away from their original positions due to the joining of similar clusters.

Table 1. Comparison between the word vectors of the two methods

Pair of words        Cosine distance (origin)  Cosine distance (improved)
沙丁鱼(16)-鱼(278)    0.245                     0.945
沙丁鱼(16)-飞鱼(9)    0.722                     0.964
大人(1)-父母(3)       0.552                     0.716
深蓝(1)-黄色(9)       0.439                     0.749
鱼钩(9)-鱼叉(24)      0.091                     0.782
海洋(13)-海(8)        0.418                     0.826

4.3 Result of Text Similarity Comparison

We use two sets of word vectors to build the sentence vectors and calculate the cosine distance of sentence vectors of sentences with similar meanings. The result is presented in Table 2. It demonstrates the accuracy of two sets of word vectors under semantic


similarity experiments. Although we perform the classification only on a subset of words, almost all word vectors change to some degree, which shows that the word vectors built by our method are closer in the vector space. Focusing on the sentences that contain words from the similar clusters, we find that our method achieves a much higher accuracy on them than the overall accuracy. Not only are the word vectors of the key words improved, but the word vectors of the context of words in similar clusters also benefit. By building new peer relationships between low-frequency words and high-frequency words, we improve the quality of the word vectors of low-frequency words in the text similarity comparison.

Table 2. The results of the two methods in the text similarity comparison task

Method                      Correct rate  Error rate
CBOW                        0.460         0.540
Similar cluster based CBOW  0.516         0.484
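The cosine comparisons reported in Tables 1 and 2 can be reproduced with a few lines. The sketch below also builds a naive sentence vector by averaging word vectors, which is one simple reading of the sentence-vector construction; the lookup table `vectors` is a hypothetical placeholder for the trained vectors.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical trained word vectors, e.g. vectors["鱼"] -> np.ndarray
vectors = {w: np.random.default_rng(len(w)).normal(size=100)
           for w in ["沙丁鱼", "鱼", "老人", "大海"]}

print(cosine(vectors["沙丁鱼"], vectors["鱼"]))

def sentence_vector(words):
    # Average the word vectors of the in-vocabulary words of a sentence.
    vs = [vectors[w] for w in words if w in vectors]
    return np.mean(vs, axis=0)

s1 = sentence_vector(["老人", "大海", "沙丁鱼"])
s2 = sentence_vector(["老人", "大海", "鱼"])
print(cosine(s1, s2))
```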

5 Conclusion In this essay, we propose a new CBOW model based on similar clusters to build new connection between high-frequency words and low frequency words. We add the softmax regression model to the CBOW to classify and modify the word vectors. The experiments have been conducted to show that the word vectors of low-frequency words could be improved through adding extra relationships with other words. From this experiment, we realized that the frequency of words is highly correlated with the semantic information contained. So, we consider train the word vectors according to a grading strategy. Besides, we will add an attention model to the projection layer to improve the input word vectors. Acknowledgements. This work was supported by the National Key Research and Development Project of China under Grant 2016YFB0801003.

References 1. Basheer IA, Hajmeer M (2000) Artificial neural networks: fundamentals, computing, design, and application. J Microbiol Methods 43:3–31 2. Bengio Y, Ducharme R, Vincent P (2003) A neural probabilistic language model. J Mach Learn Res 3:1137–1155 3. Mikolov T, Sutskever I, Chen K, Corrado G, Dean J (2013) Distributed representations of words and phrases and their compositionality. Accepted to NIPS 4. Mikolov T, Chen K, Corrado G, Dean J (2013) Efficient estimation of word representations in vector space. In: ICLR workshop 5. Hinton GE (1986) Learning distributed representations of concepts. In: Proceedings of the eighth annual conference of the cognitive science society, pp 1–12


6. Bojanowski P, Grave E, Joulin A, Mikolov T (2016) Enriching word vectors with subword information. arXiv Preprint arXiv:1607.04606 7. Ruczinski I, Kooperberg C, LeBlanc M (2003) Logic regression. J Comput Gr Stat 12:475– 511 8. Mingyang J, Yanchun L, Xiaoyue F (2016) Text classification based on deep belief network and softmax regression. Neural Comput Appl, 1–10 9. Huang C-R, Hsieh S-K (2010) Infrastructure for cross-lingual knowledge representation— towards multilingualism in linguistic studies. Taiwan NSC-granted Research Project (NSC 96-2411-H-003-061-MY3)

Research on Integrated Waveform of FDA Radar and Communication Based on Linear Frequency Offsets
Lin Zhang1, Kefei Liao1,2(&), Shan Ouyang1,2, Yuan Ma1, Jingjing Li1, Ningbo Xie1, and Gaojian Huang1
1 School of Information and Communication, Guilin University of Electronic Technology, Guilin Guangxi 541004, China [email protected]
2 State and Local Joint Engineering Research Center for Satellite Navigation and Location Service, Guilin University of Electronic Technology, Guilin Guangxi 541004, China

Abstract. To address the difficulty of separating radar-communication integrated waveforms and their heavy use of radar resources, the frequency diverse array transmit signal model and its waveform loaded with a communication signal are studied and designed. Considering various factors, a frequency diverse array radar-communication integrated transmission waveform with linear frequency intervals is designed. The random communication signal is modulated onto the frequency offsets between the frequency-diverse elements, and a linear frequency interval is added between the communication signals of adjacent elements. As a result, the transmitted signal can be easily demodulated at the communication receiver without affecting the beam characteristics of the frequency diverse array or the radar target location task.

Keywords: Integration of radar and communication · Frequency diverse array (FDA) · Waveform design · Target location · Data communication

1 Introduction Due to the rapid development of modern combat platforms and the increasingly complex electromagnetic environment, single-function systems can no longer meet the requirements of modern electronic countermeasures, so it is of great significance to realize the multifunctional integration of radar and communication [1]. Communication systems and radar systems can share hardware infrastructure such as transmitters, receivers, and transmitting and receiving antennas [2]. If the transmitted signals can also be shared to form an integrated waveform, the degree of integration of the radar and communication systems can be further improved. The phased array has become a research focus for the integration of radar and communication because of its high gain and its ability to provide a shared channel for the radar and communication systems [3, 4]. However, the beam pattern of a phased array depends only on the angle. At present, radar-communication integration still suffers from difficult signal separation and low utilization of radar resources.


Therefore, frequency diverse array (FDA) is introduced into the integration of radar and communication. The existing integrated wave forms of frequency diverse array radar and communication are mainly divided into two design methods. One is based on a chirp signal, and the base band signal is modulated onto the radar signal; the other is to associate the frequency deviation of the frequency offset array with an orthogonal frequency division multiplexing technique (OFDM) [5, 6]. However, the above integrated method of designing waveform has the problem that the separation of the radar and the communication signal is difficult, and the problem of occupying more radar pulse resources. In this paper, a waveform based on linear frequency interval for frequency diverse array radar and communication is proposed. Random communication signals are modulated onto the frequency offset by a Multi-frequency shift keying modulation (MFSK) technique [7], and in order to be able to isolate the communication signals between the elements in the spectrum, a linear frequency interval is added between each array element to form a new integrated signal waveform. The waveform can complete the target positioning task of the radar without affecting the beam characteristics of the frequency diverse array, and can easily demodulate the communication signal in the case of the transmitting device and the receiving device.

2 Integration of FDA Radar and Communication
2.1 Application Background

Figure 1 shows the application model of radar communication integration in bistatic radar. It can realize data communication between base station radars and also detect and locate far-field targets. The ground transmitting base station transmits a pulse to the air target. After the pulse is reflected, the receiving end of the other base station receives the echo signal, and performs communication signal processing and radar signal processing, thereby realizing communication data transmission and radar target positioning.


Fig. 1. Application model of bistatic radar


As the ground-based base station radar transmitting device (array elements 0 to N − 1 in Fig. 1), the frequency diverse array radar adopts a uniform linear array with identical transmit and receive geometry, and the spacing of the array elements is d. Assume that the distance and angle from the n-th transmitting element to the air target are $R_{Tn}$ and $\theta_T$, and that the first element of the transmitting base station is used as the reference element, whose distance to the air target is $R_{T0}$. After the integrated signal is transmitted, it is reflected by the air target, and the receiving end of the other ground base station receives the echo signal with a single antenna (the N-th array element in Fig. 1); the distance and angle from the receiving base station to the air target are $R_R$ and $\theta_R$, respectively. Each ground base station can transmit and receive the integrated signal. The rest of this paper studies the integrated system under the condition that the integrated signal is totally reflected by the air target ($\sigma = 1$).

2.2 A Modulated Signal Loaded with Communications

In order to load the communication signal onto the radar pulse signal, the communication signal can be modulated by the multiple frequency-shift keying (MFSK) technique onto the unique frequency increments (frequency offsets) between the array elements of the frequency diverse array. The frequency offsets between the elements of the integrated transmit signal change with the communication signal; since the communication signals are random, the frequency offsets between the array elements are nonlinear. Assume that the original communication information is random binary data; it can be transformed into M-ary data satisfying $M = 2^k$, where k is any positive integer. The M-ary data are modulated onto the frequency offsets, so the frequency offset of a single array element of the frequency diverse array becomes $c_n \Delta f$, with $c_n \in \{1, 2, \ldots, M\}$. Taking into account the need for the communication information of each array element to be spectrally isolated, a frequency interval of $M\Delta f$ is added between the communication signals of adjacent array elements, so that the communication signals are separated in frequency and aliasing is avoided. Therefore, the initial carrier frequency of the n-th array element of the frequency diverse array is

$$f_n = f_c + c_n \Delta f + nM\Delta f, \quad n = 0, 1, \ldots, N-1 \qquad (1)$$

According to the above formula, the signal emitted by the n-th array element is

$$s_n(t) = \exp\!\left[j2\pi\left(f_c + c_n \Delta f + nM\Delta f\right)t\right], \quad n = 0, 1, \ldots, N-1 \qquad (2)$$

The transmitted signal of all array elements is

$$s(t) = \sum_{n=0}^{N-1} s_n(t) = \sum_{n=0}^{N-1} \exp\!\left[j2\pi\left(f_c + c_n \Delta f + nM\Delta f\right)t\right] \qquad (3)$$
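A small numerical sketch of Eqs. (1)–(3): random bits are grouped into M-ary symbols $c_n$, mapped to the per-element frequency offsets, and the sum signal is sampled. The parameter values and the baseband simplification are our own illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16                 # number of array elements
k = 2                  # bits per symbol, M = 2**k
M = 2 ** k
fc = 10e9              # carrier frequency (Hz)
df = 3e3               # basic frequency offset Delta f (Hz)

# Random binary data -> M-ary symbols c_n in {1, ..., M}, one per element.
bits = rng.integers(0, 2, size=N * k)
c = bits.reshape(N, k) @ (2 ** np.arange(k)[::-1]) + 1

# Eq. (1): initial carrier frequency of the n-th element.
n = np.arange(N)
f_n = fc + c * df + n * M * df
print(f_n[:4])

# Eqs. (2)-(3) sampled at baseband (carrier fc removed so the offsets can be
# represented with a modest sampling rate).
f_bb = c * df + n * M * df
fs = 16 * M * df * N                      # well above the largest offset
t = np.arange(0, 1 / df, 1 / fs)          # one symbol interval
s_n = np.exp(1j * 2 * np.pi * np.outer(f_bb, t))   # per-element signals
s = s_n.sum(axis=0)                                # total transmitted signal
```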


Suppose there is an ideal far-field observation point at a certain moment, the angle between the line from the observation point to the reference element and the array normal is θ, and the slant distance from the observation point to the reference element is R. The time delay of the transmit beam of the n-th array element is

$$\tau_n = \frac{R - nd\sin\theta}{c} \qquad (4)$$

In the above formula, d is the spacing of the elements and c is the propagation speed of electromagnetic waves. Therefore, the transmit beam pattern of the integrated signal is

$$A(R,\theta,t) = \sum_{n=0}^{N-1} s_n(t-\tau_n) = \sum_{n=0}^{N-1} \exp\!\left\{j2\pi\left[f_c + c_n\Delta f + nM\Delta f\right]\left(t - \frac{R - nd\sin\theta}{c}\right)\right\}$$
$$= \sum_{n=0}^{N-1} \exp\!\left[j2\pi\left(f_c + c_n\Delta f\right)\left(t - \frac{R}{c}\right)\right]\exp\!\left\{j2\pi n\left[\frac{(f_c + c_n\Delta f)d\sin\theta}{c} + M\Delta f\, t - \frac{M\Delta f\, R}{c} + \frac{nM\Delta f\, d\sin\theta}{c}\right]\right\} \qquad (5)$$

Since $R \gg nd\sin\theta$ and $f_c \gg \Delta f$, the term $nM\Delta f\, d\sin\theta/c$ in the above formula can be ignored, and the formula can be rewritten as

$$A(R,\theta,t) = \sum_{n=0}^{N-1} \exp\!\left[j2\pi\left(f_c + c_n\Delta f\right)\left(t - \frac{R}{c}\right)\right]\exp\!\left\{j2\pi n\left[\frac{(f_c + c_n\Delta f)d\sin\theta}{c} + M\Delta f\, t - \frac{M\Delta f\, R}{c}\right]\right\} \qquad (6)$$

It can be seen from the above equation that the transmit beam pattern is related to the four important parameters of distance, angle, time, and frequency offset, and also to the value of $c_n$ and the number M. To observe the beam pattern of the signal model, the transmit beam pattern of the frequency diverse array based on linear frequency intervals is simulated in the MATLAB environment. The simulation parameters are $t = 0\ \mu s$, $d = \lambda/2$, $N = 50$, $f_c = 8 \times 10^9$ Hz, $\Delta f = 3 \times 10^3$ Hz, $M = 2$, and $c_n \in \{1, 2\}$ (Fig. 2). When M = 0, the transmitted signal does not carry a communication signal, and the transmit beam pattern is that of a conventional S-shaped linear-frequency-offset frequency diverse array signal. The value of the communication signal is an order of magnitude smaller than the frequency interval between the array elements, so the frequency interval has a greater influence on the transmit beam pattern than the communication signal. Therefore, the beam pattern is neither a traditional S-shaped beam pattern nor a point beam pattern. The more the frequency offsets of the frequency diverse array deviate from the linear rule, the narrower the main-lobe width of the beam pattern and the more concentrated the main-lobe energy, but the higher the side-lobe amplitude. The reason is that the nonlinear frequency offsets destroy the side-lobe distribution of the traditional linear frequency offset; the larger the deviation, the closer the grating-lobe energy moves toward the main lobe [8].

Fig. 2. The beam direction diagram of the radar communication integration signal
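A beam pattern of the kind shown in Fig. 2 can be evaluated directly from Eq. (6). The sketch below fills a range–angle grid with |A(R, θ, t)| using the simulation parameters quoted above; plotting is omitted, and the random symbol draw is our own assumption.

```python
import numpy as np

c0 = 3e8
N, M = 50, 2
fc, df = 8e9, 3e3
d = c0 / fc / 2                       # half-wavelength spacing d = lambda/2
t = 0.0                               # snapshot time t = 0

rng = np.random.default_rng(0)
cn = rng.integers(1, M + 1, size=N)   # communication symbols c_n in {1, 2}

theta = np.deg2rad(np.linspace(-90, 90, 181))
R = np.linspace(0, 3e5, 300)
TH, RR = np.meshgrid(theta, R)        # range-angle grid

A = np.zeros_like(TH, dtype=complex)
for n in range(N):
    f_n = fc + cn[n] * df             # f_c + c_n * Delta f as in Eq. (6)
    phase = (f_n * (t - RR / c0)
             + n * (f_n * d * np.sin(TH) / c0 + M * df * t - M * df * RR / c0))
    A += np.exp(1j * 2 * np.pi * phase)

pattern_db = 20 * np.log10(np.abs(A) / np.abs(A).max())
print(pattern_db.shape)
```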

2.3 MUSIC Algorithm for Multi-target Positioning

The range–angle decoupling performance of the frequency diverse array based on linear frequency intervals is limited: achieving radar target positioning with a single set of frequency offsets would require very large offset values. In practice, however, the frequency offsets of a frequency diverse array with about 100 elements are generally chosen from a few kHz to several tens of kHz [9]. Therefore, two sets of different frequency offset values are used to achieve multi-target positioning. Assume that the frequency offsets are $\Delta f_m$, $m = 1, 2$. The echo signal of the n-th array element of the frequency diverse array after reflection from the targets is

$$y_{m,n}(t) = \sum_{i=1}^{I} S_i(t)\exp\!\left\{j2\pi\left[f_c + c_n\Delta f_m + nM\Delta f_m\right]\left(t - \frac{2R_i}{c} + \frac{2nd\sin\theta_i}{c}\right)\right\} + n_{m,n}(t) \qquad (7)$$

where $(R_i, \theta_i)$ is the position of the i-th target, $S_i(t)$ is the signal returned by the i-th target, and $n_{m,n}(t)$ is additive noise. Transforming the signal to baseband for processing gives

$$y_{m,n}(t) = \sum_{i=1}^{I} S_i(t)\exp\!\left\{j2\pi\left[f_c + c_n\Delta f_m + nM\Delta f_m\right]\left(-\frac{2R_i}{c} + \frac{2nd\sin\theta_i}{c}\right)\right\} + n_{m,n}(t) \qquad (8)$$

At the h-th sampling instant, the signal can be represented in vector form as

$$Y(h) = A(R,\theta)S(h) + N(h) \qquad (9)$$

where h is the sampling index with $h \in \{1, 2, \ldots, H\}$, and $Y(h)$ is an $N \times 2$ matrix. The target estimation can then be achieved using the MUSIC algorithm [10]. Based on the above derivation, and to further verify the feasibility of the transmitted signal for radar target positioning, the positioning is simulated in MATLAB. The simulation parameters are as follows: the number of array elements is N = 30; the multi-target positions are (−15°, 9974 m), (15°, 10,040 m), (0°, 10,020 m), (10°, 10,000 m), and (−10°, 10,000 m); the frequency offsets are $\Delta f_1 = 10^4$ Hz and $\Delta f_2 = 10^4$ Hz; M = 16; the radar carrier frequency is 10 GHz; the communication information $c_n$ ranges over $c_n \in \{1, 2, \ldots, 16\}$; the scattering coefficient is $\sigma = 1$; and the signal-to-noise ratio is SNR = 10 dB.

Fig. 3. The effect picture of the multiple target location

According to the simulation parameters, the radar pulse width is $T_p = 1/\Delta f = 0.1$ ms, the pulse repetition frequency is $f_r = 1000$ Hz, the pulse repetition period is $T_r = 1/f_r = 1$ ms, the number of sampling points is H = 128, the sampling interval is $\Delta t = T_p/H = 0.00078125$ ms, and the sampling frequency is $f_s = 1.28 \times 10^6$ Hz. It can be seen from Fig. 3 that the spectral function reaches its maximum at the target positions, which verifies the validity and feasibility of target location with the transmitted signal based on linear frequency intervals.
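For completeness, the sketch below shows the core of a MUSIC-style spectrum search over a range–angle grid, assuming the snapshot model of Eqs. (8)–(9). It is a simplified single-offset illustration with a deterministic steering model of our own choosing, not the exact two-offset procedure of [10].

```python
import numpy as np

c0 = 3e8
N = 30                      # array elements
fc = 10e9
df = 1e4                    # one frequency offset set
M = 16
d = c0 / fc / 2
n = np.arange(N)

def steering(R, theta):
    # Steering vector implied by Eq. (8) (baseband, offsets only, c_n omitted).
    f_n = fc + M * df * n
    return np.exp(1j * 2 * np.pi * f_n * (-2 * R / c0 + 2 * n * d * np.sin(theta) / c0))

# Simulated snapshots for two targets plus noise.
rng = np.random.default_rng(0)
targets = [(10000.0, np.deg2rad(10)), (10000.0, np.deg2rad(-10))]
H = 128
S = rng.normal(size=(len(targets), H)) + 1j * rng.normal(size=(len(targets), H))
Y = sum(np.outer(steering(R, th), S[i]) for i, (R, th) in enumerate(targets))
Y += 0.1 * (rng.normal(size=(N, H)) + 1j * rng.normal(size=(N, H)))

# Noise subspace from the sample covariance, then the MUSIC spectrum.
Rxx = Y @ Y.conj().T / H
w, V = np.linalg.eigh(Rxx)
En = V[:, : N - len(targets)]           # eigenvectors of the smallest eigenvalues

def music(R, theta):
    a = steering(R, theta)
    return 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)

print(music(10000.0, np.deg2rad(10)), music(10000.0, np.deg2rad(30)))
```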

2.4 Analysis of Communication Performance

The integrated signal is equivalent to modulating the communication signal onto the frequency offsets by MFSK, and the received echo signal is detected by N sets of filters at the receiving end, whose outputs are signals containing noise. According to the bit error rate analysis of MFSK modulation, the bit error rate [7] of the integrated signal is

$$P_b \le \frac{2^{k-1}}{2^k - 1}\exp\!\left[-k\left(\frac{r_b}{2} - \ln 2\right)\right] \qquad (10)$$

where k is the number of bits contained in each communication symbol and $r_b$ is the signal-to-noise ratio per bit. If the per-bit signal-to-noise ratio satisfies $r_b/2 - \ln 2 > 0$, i.e. $r_b > 1.42$ dB, increasing the value of k can make the bit error rate arbitrarily small. It can be seen from the above equation that the per-bit signal-to-noise ratio $r_b$ and the modulation order $M = 2^k$ of the communication system determine the $P_b$ of the system. In the frequency diverse array radar-communication integrated system, if the random binary communication data are transformed into M-ary data, each array element of the frequency diverse array carries $\log_2 M$ bits. Assuming the integrated transmitting platform has N array elements, each transmitted pulse carries $N \cdot \log_2 M$ bits. With the pulse repetition frequency of the integrated system set to $f_r = 1000$ Hz and channel noise ignored, the data rate of the integrated system [7] can be expressed as

$$R_b = N \cdot \log_2 M \cdot f_r \qquad (11)$$

As can be seen from the above equation, when the pulse repetition frequency of the integrated system is fixed, the data rate of the communication system depends on the number of elements N and the value of M. Since the communication signal modulation is equivalent to MFSK, the band utilization [7] of the system is

$$\eta = \frac{R_b}{B_N} = \frac{N \cdot \log_2 M \cdot f_r}{N \cdot M \cdot \Delta f} = \frac{\log_2 M \cdot f_r}{M \cdot \Delta f} \qquad (12)$$

where $B_N$ is the bandwidth of the integrated system. Both the band utilization and the data rate are related to the system bandwidth. When the pulse repetition frequency of the integrated system is fixed, the band utilization depends on the frequency offset of the frequency diverse array and on the value of M.
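Plugging representative numbers into Eqs. (10)–(12) gives a quick feel for the trade-offs. The helper names and the chosen per-bit SNR in the sketch below are our own assumptions; N, M, $f_r$, and $\Delta f$ follow the values quoted in the text.

```python
import math

def ber_bound(k, rb_linear):
    # Eq. (10): upper bound on the bit error rate of M-FSK with M = 2**k.
    return (2 ** (k - 1)) / (2 ** k - 1) * math.exp(-k * (rb_linear / 2 - math.log(2)))

def data_rate(N, M, fr):
    # Eq. (11): bits per pulse times pulse repetition frequency.
    return N * math.log2(M) * fr

def band_utilization(M, df, fr):
    # Eq. (12): data rate divided by the total bandwidth B_N = N*M*df.
    return math.log2(M) * fr / (M * df)

k, N, fr, df = 4, 30, 1000.0, 1e4
M = 2 ** k
rb_db = 6.0                        # assumed per-bit SNR, above the 1.42 dB threshold
rb = 10 ** (rb_db / 10)

print(ber_bound(k, rb))            # decreases as k grows once rb > 1.42 dB
print(data_rate(N, M, fr))         # 30 * 4 * 1000 = 120 kbit/s
print(band_utilization(M, df, fr))
```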


3 Conclusion In order to solve the problem that the integrated signal is difficult to separate at the receiving end and occupy more radar resources, this paper proposes a waveform design method, which introduces the beam advantage of frequency diverse array into the field of radar communication integration, thus forming a waveform of FDA radar and communication integration based on linear frequency interval. The random communication signal is modulated by the MFSK technique onto the frequency offset between the frequency diverse array elements, and a linear frequency interval is added between each communication signal, so that the integrated signal does not affect the radar target positioning and is easy to demodulate at the communication receiving end to realize communication data transmission. In this paper, from the two aspects of radar target positioning simulation experiment and communication system performance analysis, it is verified that the signal waveform is suitable for radar communication integration. Acknowledgements. This work was supported by the National Natural Science Foundation of China (61631019, 61701128, 61871425), the National Natural Science Foundation of Guangxi (2017GXNSFBA198032), and the Guangxi science and technology department project (AA17202048, AD18281061).

References 1. Zhang MY (2010) Introduction to radar-electronic warfare-communication integration. National Defense Industry Press, Beijing, pp 87–101 2. Liu Y, Zhao JY, Gao ZY (2013) Research on the integration of radar and communication. Technol Dev Enterp 3. Zuo B, Feng L, Gan QG (2017) Research on integration technology of ship-borne phased array radar and over-the-horizon communication. Radio Eng 1:67–70 4. Hu YP (2008) Research on integrated communication system based on shipborne phased array radar. Modern Radar 30(1):26–29 5. Yuan D, Zhang J, Fusco V (2015) Frequency diverse array OFDM transmitter for secure wireless communication. Electron Lett 51(17):1374–1376 6. Shi C, Wang F, Sellathurai M et al (2018) Power minimization-based robust OFDM radar waveform design for radar and communication systems in coexistence. IEEE Trans Signal Process 66(5):1316–1330 7. Fan CX, Cao LN (2001) Principle of communication. National Defense Industry Press, Beijing, pp 170–320 8. Gao KD (2016) Research on optimization design of frequency array radar array and its target parameter estimation method. University of Electronic Science and Technology 9. Li JC (2015) Research on frequency array characteristics and its transmit beam control. University of Electronic Science and Technology 10. Gu KL, Ouyang S, Li JJ et al (2017) Frequency diverse array radar target location method based on MUSIC algorithm. J Guilin Univ Electron Technol 37(02):87–91

Research on Parameter Configuration of Deep Neural Network Applied on Speech Enhancement
Xiaoyu Zhan1, Yongjing Ni2,3, and Ting Jiang1(&)
1 Beijing University of Posts and Telecommunications, Haidian District, Beijing, China [email protected]
2 College of Information Science and Engineering, Hebei University of Science and Technology, Shijiazhuang, Hebei 050000, China
3 College of Information Science and Engineering, Yanshan University, Qinhuangdao, Hebei 066000, China

Abstract. This paper studies the parameter configuration of deep neural networks applied to speech enhancement. In recent years, deep learning based methods have become a hot topic in the field of speech enhancement. Parameter selection is important to the performance of deep neural networks and can even play a decisive role. From the perspective of engineering practice, this paper analyzes five basic parameters and their influence when applied to speech enhancement. Through detailed analysis and a large number of experiments, we give proposals on how to configure the parameters of deep networks for speech enhancement.

Keywords: Speech enhancement · Deep learning · Parameter configuration · DNN

1 Introduction Speech Enhancement (SE) has been a hot research field in recent years. Its goal is to improve the quality and intelligibility of speech signals corrupted by noise, which is important in audio signal processing, especially in applications such as automatic speech recognition, mobile communication, and hearing aids. In the past few decades, many SE solutions have been proposed: spectral subtraction [1], Wiener filtering, Kalman filtering [2], MMSE-STSA based methods [3], signal-subspace based methods, and wavelet transform based methods are all classical approaches. The success of deep learning since 2006, when Hinton proposed an effective way to build multilayer neural networks on unsupervised data, has accelerated developments in many fields, including speech enhancement [4, 5]. Xie proposed a method to learn the relationship between noisy and clean speech through an artificial neural network in the frequency domain [6]. In 2013, Narayanan and Wang proposed the Ideal Ratio Mask (IRM) as the training target [7–9]. In 2014, Xu et al. used a regression method and a deep learning network to study the amplitude spectrum characteristics of


clean speech [10]. In 2016, Kumar et al. used DNN training strategy based on psychological model to enhance the coding of noisy speech [11]. In short, the use of deep learning methods for speech enhancement research in recent years is a very hot topic. And parameter selection is important for the performance of deep neural networks and can even determine the performance of the network. This paper discusses the configuration of deep neural network parameters for general single-channel speech signals from the perspective of engineering practice. In the following, we give an overview of basic DNN model for speech enhancement (Sect. 2). Next, we study some common influencing parameters (Sect. 3). Then, we describe the experimental setup and report the results (Sect. 4). Finally, we discuss some conclusions (Sect. 5).

2 Speech Enhancement Based on DNN In order to study the parameter configuration scheme, we build a speech enhancement model based on DNN. Figure 1 shows the workflow of our research work.

Fig. 1. Process diagram for building DNN for speech enhancement

As can be seen from Fig. 1, the DNN-based speech enhancement algorithm can be divided into four steps: dataset construction, feature extraction, deep learning, and speech reconstruction.
Dataset Construction. This paper considers additive noise, and the frequency-domain expression for generating noisy speech is

$$Y(t,d) = X(t,d) + \alpha N(t,d) \qquad (1.1)$$

Noisy speech $Y(t,d)$ is composed of clean speech $X(t,d)$ and noise $N(t,d)$, where t and d denote the frame and frequency-band indices, respectively. Setting different values of α changes the signal-to-noise ratio of the noisy speech.
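A common way to realize Eq. (1.1) in practice is to scale the noise so that a target SNR is met before adding it to the clean waveform. A minimal sketch, with our own function and variable names:

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` and add it to `clean` so the mixture has the given SNR."""
    noise = noise[: len(clean)]                     # align lengths
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    alpha = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + alpha * noise                    # Y = X + alpha * N

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)   # stand-in for speech
noise = rng.normal(size=16000)
noisy = mix_at_snr(clean, noise, snr_db=5)
```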


Feature Extraction. Research indicates that the spectral characteristics of speech signals carry clear acoustic significance [12] and that the human ear's perception of frequency is logarithmically related to the frequency magnitude [13]. Therefore, the logarithmic power spectrum feature is adopted in this paper, as shown in formulas (1.2) and (1.3):

$$\hat{X}^f(d) = e^{\hat{X}^l(d)/2 + j\angle Y^f(d)} \qquad (1.2)$$

$$X_{LPS}(d) = \log\left|\hat{X}^l(d)\right|^2 \qquad (1.3)$$

We obtain the power spectrum by taking the short-time Fourier transform of the speech signal and then compute the logarithmic power spectrum $X_{LPS}(d)$. In order to preserve context information, we concatenate the preceding and following s frames [14], as shown in formula (1.4):

$$V_n = [X_{n-s}, \ldots, X_{n-1}, X_n, X_{n+1}, \ldots, X_{n+s}] \qquad (1.4)$$

where $X_n$ denotes the short-time Fourier transform features of the speech signal in frame n, and $V_n$ denotes the context-expanded features of frame n.
Deep Learning. The neural network can be regarded as a non-linear filter, as formula (1.5) shows:

$$\hat{X}_d = H(X_d) \qquad (1.5)$$

where H denotes the nonlinear filter learned by the deep network. Generally, the process is divided into two stages: the training stage and the enhancement stage. The training stage learns the weights of the network.
Speech Reconstruction. The reconstruction of the speech waveform is the inverse of feature extraction. It should be pointed out that the ear is not sensitive to phase changes, so this paper uses the phase information taken from the corresponding noisy speech to reconstruct the estimated clean speech [15].
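The feature pipeline of Eqs. (1.2)–(1.4) can be sketched with a plain STFT: log-power-spectrum frames, context expansion with s = 3 neighbours on each side, and reconstruction that reuses the noisy phase. This is a simplified illustration under our own helper names, not the authors' code.

```python
import numpy as np

def stft(x, frame_len=512, hop=256):
    # Hann-windowed short-time Fourier transform, 50% overlap.
    win = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * win for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)               # (n_frames, n_bins)

def log_power_spectrum(spec):
    # Eq. (1.3): X_LPS = log |X|^2 (small floor added for numerical safety).
    return np.log(np.abs(spec) ** 2 + 1e-12)

def add_context(lps, s=3):
    # Eq. (1.4): concatenate the s previous and s following frames.
    padded = np.pad(lps, ((s, s), (0, 0)), mode="edge")
    return np.stack([padded[i : i + 2 * s + 1].reshape(-1)
                     for i in range(lps.shape[0])])

def reconstruct_spectrum(est_lps, noisy_spec):
    # Eq. (1.2): magnitude from the estimated LPS, phase from the noisy speech.
    return np.exp(est_lps / 2) * np.exp(1j * np.angle(noisy_spec))

x = np.random.default_rng(0).normal(size=16000)      # stand-in waveform, 16 kHz
spec = stft(x)
features = add_context(log_power_spectrum(spec))
est = reconstruct_spectrum(log_power_spectrum(spec), spec)
print(features.shape)                                 # (n_frames, 257 * 7)
```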

3 Common Influencing Parameter We mainly study the influence of five parameters to DNN for speech enhancement in this paper. These parameters are briefly described in this chapter as shown below. Training Data Volume When the training data is small, the learning effect is often unsatisfactory. As the training data increases, the effect gradually increases, but when the training set reaches a larger scale, the increase in effect gradually slows down. Depth In Ref. [16], by adding a small deviation to the original input, the relationship between input and output deviation at each level is calculated. It showed that the good performance is already obtained when there are two layers. But increasing the number of layers of the network will first improve and then reduce the ability of learning speech performance.


Width The network width of the hidden layer represents the number of neurons used by the layer to learn the characteristic state of the upper layer network. It should be roughly equivalent to the total number of input features. It can be less than the characteristic number, but not less than an order of magnitude. Activation Function Activation function is used to introduce non-linear elements into the network, so that the neural network can learn complex non-linear relations. We choose four common activation functions to study. Among then, Sigmoid function (also known as Logistic function) [17] and Tanh function (also called hyperbolic tangent function) are saturated activation functions, ReLU and Leaky ReLU are unsaturated. Loss Function Loss function refers to the minimization function in deep learning. The speech enhancement problem in this paper belongs to regression problem, so three loss functions suitable for regression problem are selected to study. They are MSE, MAE and Huber.

4 Experiments and Analysis The experimental dataset used in this paper is the DARPA TIMIT Acoustic-Phonetic Continuous Speech Corpus [18]. Noise sources come from two noise libraries: Noise92 and a library of 100 kinds of noise collected by Wang Deliang's laboratory at Ohio State University [19]. Four SNR levels, 0, 5, 10, and 15 dB, are used to mix clean speech and noise additively. Four kinds of noise are used in the training stage: white noise, babble noise, factory1 noise, and traffic noise (n45: traffic and car noise). Two new kinds of noise are added in the test stage, namely vehicle interior noise (volvo) and audience noise (n2: crowd noise). The sampling rate of the speech waveform is 16 kHz, the frame length is 32 ms (512 sampling points), and the frame shift is 50%. Before being fed into the network, the feature data are normalized to zero mean and unit variance. The learning rate is 0.0001 and the mini-batch size is 500. When extracting time-frequency features, the context window length is 7.

4.1 Experiments and Results

Training Data Volume. Different numbers of speech samples are randomly selected from the training set for training. After testing on the test set, the average PESQ scores of the enhanced speech are obtained, as shown in Fig. 2. It can be seen that as the amount of data increases, the system performance gradually improves: the larger the training set, the better the learning effect. However, as the amount grows, the performance gain becomes smaller and smaller.



Fig. 2. Average PESQ scores of enhanced speech under different training set data volumes

Depth. Different experimental groups use different numbers of hidden layers. Figure 3 shows the average PESQ values of DNNs with different depths over all test sets. As can be seen from the figure, as the number of hidden layers increases, the speech enhancement effect keeps improving, but beyond three layers the enhancement effect begins to decline.


Fig. 3. Effect of DNN network depth on enhancement performance

Width The number of layers in the DNN network is three, and each layer is set with 256, 512, 1024, and 2048 neurons for comparison. The PESQ scores of the estimated speech obtained by the experiment are shown in Fig. 4. It can be seen that the enhancement effect of noisy speech with matching noise is better than that of nonmatching speech. From the trend of the curve, as the network width increases, the enhancement effect of both types of speech is increasing, but the growth trend is gradually slow.


Fig. 4. Average PESQ score of enhanced speech under different network widths

Activation Function Four common activation functions are selected in the experiment, and each network uses only one activation function. Figure 5 shows the experimental results. It can be seen that the unsaturated activation functions ReLU and LeakyReLU work better than the two easy-to-saturation activation functions (Sigmoid and Tanh). In the speech enhancement of DNN, the unsaturated function such as LeakyReLU is a better activation function selection.

Fig. 5. Average PESQ score of enhanced speech under different activation functions

Loss Function The experiment compares several common objective functions for regression problems. The learning curve is shown in Fig. 6. It can be seen that the Huber function has the fastest convergence rate, and the MAE is second, followed by MSE. However, the performance of MAE and MSE is similar.


Fig. 6. Comparison of loss curves of common objective functions

4.2 Analysis

The proposals on DNN network configuration are summarized as follows.
1. When choosing the amount of training data, a small number of speech samples can be selected first, and the amount of data can then be doubled gradually according to the learning effect.
2. The optimal number of layers of the DNN for the speech enhancement task in this paper is three; performance decreases with fewer or more layers.
3. The number of neurons in each hidden layer should be greater than or equal to the total number of features in the input layer.
4. Unsaturated activation functions perform better than saturated ones, and Leaky ReLU works best.
5. Among the loss functions commonly used for regression problems, Huber has the fastest convergence speed.
After configuring the network according to the above proposals, the enhanced speech performance indicators are shown in Table 1.

Table 1. Speech enhancement performance scores after parameter adjustment

SNR (dB)  PESQ  STOI  SDR   LLR   SNR    SNRseg  WSS
0         2.33  0.78  5.29  0.78  7.37   3.55    34.55
5         2.75  0.84  7.27  0.64  10.29  5.50    27.49
10        2.98  0.88  8.11  0.49  11.75  7.04    22.00
15        3.17  0.92  8.14  0.38  12.43  8.24    17.88
CLEAN     4.5   1     inf   0     inf    inf     0

From Table 1, we can see that the speech performance is improved under various SNR conditions. At low SNR, all performance indicators are improved. At high SNR, the enhancement method reduces the global SNR to some extent, but it still improves the perceived quality of the speech.
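A configuration that follows these proposals might look like the PyTorch sketch below: three hidden layers of 2048 units (at least the 1799-dimensional input size if the 257-bin log-power spectrum with a 7-frame context from Sect. 2 is assumed), Leaky ReLU activations, the Huber loss, and the learning rate and batch size quoted in Sect. 4. It is our illustrative sketch, not the authors' released code.

```python
import torch
from torch import nn

input_dim = 257 * 7          # log-power-spectrum bins times the 7-frame context
output_dim = 257             # enhanced log-power spectrum of the centre frame
hidden = 2048                # network width >= number of input features

model = nn.Sequential(
    nn.Linear(input_dim, hidden), nn.LeakyReLU(),
    nn.Linear(hidden, hidden), nn.LeakyReLU(),
    nn.Linear(hidden, hidden), nn.LeakyReLU(),
    nn.Linear(hidden, output_dim),
)

criterion = nn.HuberLoss()                       # fastest-converging loss in Fig. 6
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One mini-batch step (batch size 500 as in the experimental setup).
features = torch.randn(500, input_dim)
targets = torch.randn(500, output_dim)
optimizer.zero_grad()
loss = criterion(model(features), targets)
loss.backward()
optimizer.step()
```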

5 Conclusion This paper first describes the research process and introduces the basic DNN used for speech enhancement. Then, for the training data volume, network depth, network width, activation function, and loss function, it studies and analyzes how to optimize the parameters of the DNN for the speech enhancement task and verifies the analysis through experiments. Finally, the proposals for DNN parameter configuration for speech enhancement are summarized. The DNN configured according to these proposals shows a good speech enhancement effect. There are many other parameters that influence DNN-based speech enhancement, and we will study them in the future. Acknowledgements. The work was supported by the National Natural Science Foundation of China under Grant 61671075 and the Major Program of the National Natural Science Foundation of China under Grant 61631003.

References 1. Boll S (2003) Suppression of acoustic noise in speech using spectral subtraction. IEEE Trans Acoust Speech Signal Process 27(2):113–120 2. Paliwal K, Basu A (1987) A speech enhancement method based on Kalman filtering. In: IEEE international conference on acoustics, Speech, and signal processing, ICASSP’87, vol 12. IEEE, pp 177–180 3. Ephraim Y, Malah D (1984) Speech enhancement using a minimum-mean square error short-time spectral amplitude estimator. IEEE Trans Acoust Speech Signal Process 32 (6):1109–1121 4. Hinton GE, Salakhutdinov RR (2006) Reducing the dimensionality of data with neural networks. Science 313(5786):504–507 5. Qiao K, Yang Z (2015) Research on speech enhancement based on deep neural network. Tech Exch 06(015):62–64 (in Chinese) 6. Xie F, Van Compernolle D (1994) A family of MLP based nonlinear spectral estimators for noise reduction. In: 1994 IEEE international conference on acoustics, speech, and signal processing, ICASSP-94, vol 2. IEEE, pp II/53–II/56 7. Narayanan A, Wang DL (2013) Ideal ratio mask estimation using deep neural networks for robust speech recognition. In: 2013 IEEE International conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 7092–7096 8. Wang Y, Wang DL (2013) Towards scaling up classification-based speech separation. IEEE Trans Audio Speech Lang Process 21(7):1381–1390 9. Wang Y, Narayanan A, Wang DL (2014) On training targets for supervised speech separation. IEEE/ACM Trans Audio Speech Lang Process (TASLP) 22(12):1849–1858 10. Xu Y, Du J, Dai LR et al (2015) A regression approach to speech enhancement based on deep neural networks. IEEE/ACM Trans Audio Speech Lang Process (TASLP) 23(1):7–19 11. Xu Y, Du J, Dai LR et al (2014) An experimental study on speech enhancement based on deep neural networks. IEEE Signal Process Lett 21(1):65–68 12. Loizou PC (2012) Speech enhancement: theory and practice


13. Du J, Huo Q (2008) A speech enhancement approach using piecewise linear approximation of an explicit model of environmental distortions. In: Ninth annual conference of the international speech communication association 14. Xu Y (2015) Research on speech enhancement based on deep neural network. University of Science and Technology of China. (in Chinese) 15. Wan EA, Nelson AT (1999) Networks for speech enhancement.Handbook of neural networks for speech processing. Artech House, Boston, USA 16. Research on speech enhancement algorithm based on deep neural network. Beijing University of Technology, 2016. (in Chinese) 17. Han J, Moraga C (1995) The influence of the sigmoid function parameters on the speed of backpropagation learning. In: International workshop on artificial neural networks. Springer, Berlin, Heidelberg, 1995, pp 195–201 18. Garofolo JS, Lamel LF, Fisher WM et al (1993) DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1–1.1. NASA STI/Recon technical report n, 1993, 93 19. http://web.cse.ohio-state.edu/pnl/corpus/HuNonspeech/HuCorpus.html

Mid-Infrared Characteristic Analysis of Stability Index of Vehicle Gasoline
Lianling Ren1(&), Hongjian Li1, Lei Guo1, Deyan Wang2, Jianping Song1, and Xin Hu1
1 Institute of Systems Engineering, Fengtai, Beijing, China [email protected]
2 Beijing Aviation Engineering Technology Development Center, Beijing, China

Abstract. This experimental study was carried out to verify the correlation between infrared spectral characteristics and the storage stability of automotive gasoline. A long-term storage test of the gasoline was performed using the 43 °C accelerated storage test method. By means of mid-infrared spectroscopy, the relationship between the integrated area of selected absorption intervals and the stability indexes of gasoline was analyzed quantitatively. The results show that a quality decay model of gasoline established with the integrated area of the 648–628 cm−1 interval can well characterize the quality decay of gasoline during storage.

1 Introduction The stability of oil products decreases slowly during storage and transportation [1, 2]. In order to investigate the degree of oil quality decay, it is necessary to continuously monitor and closely control the oil quality. Conventional detection methods are tedious and time-consuming, while instrumental analysis can be used for rapid analysis and monitoring. The main methods available at present are: near infrared spectroscopy, middle infrared spectroscopy and nuclear magnetic resonance. Among them, infrared spectroscopy has been widely used. For example, the near-infrared quality analyzer developed by the Domestic General Oil Research Institute, PIONIR 1024 and other foreign instruments can measure octane number, induction period, oxygen content, steam pressure and other multiple physical and chemical indexes in one minute by establishing spectral model [3, 4]. However, the stability of gasoline has not been studied systematically and comprehensively by using mid-infrared technology. In this study, different types of gasoline products produced by several domestic refineries were collected. The unwashed gum, actual gum and acidity were measured. The middle infrared spectrum characteristics of motor gasoline during storage were investigated by correlation analysis method, and the correlation between gasoline infrared spectrum and acidity, unwashed gum and actual gum were established.


2 Experiments
2.1 Collection of Experimental Samples

Four motor gasoline samples were collected from several domestic refineries, numbered CS0002, CS0008, CS0009, CS0013.

2.2 Testing Program

According to Appendix B of SH/T 0237-1992, "Test method for the storage stability of gasoline", a 40-week accelerated storage test was conducted using the 43 °C accelerated storage method. The unwashed gum, actual gum, acidity, and other stability indexes were measured once every 8 weeks, and infrared spectra were recorded with a Tensor 27 FT-IR mid-infrared spectrometer produced by Bruker. The unwashed gum and the actual gum were determined according to GB/T 8019-2008, "Determination of gum content by the jet evaporation method"; the acidity was determined according to GB/T 258-1977, "Determination of acidity of gasoline, kerosene and diesel oil".

3 Results and Discussion
3.1 Correlation Analysis of Gasoline Stability Index and Infrared Spectrum During Storage

According to the test data of 43 °C accelerating storage, among the four samples of CS0002, CS0008, CS0009 and CS0013, CS0008 has the worst storage stability, that is, with the extension of storage time, the gum increases significantly and the acidity also increases. The storage stability of other oil samples is relatively good, and the gum and acidity change little during the storage process. Figure 1 shows the changing trend of the actual gum and acidity with the storage time. Figure 2a shows the correlation coefficient diagram of each quality index and infrared spectrum during CS0008 storage. Figure 2b shows the correlation coefficient of infrared spectrum and the actual gum during storage process of different gasoline. It can be clearly seen that: (1) During storage, the change of fuel quality index is closely related to IR spectral correlation coefficient and the attribute of spectral peak. Some of them are positively correlated and some of them are negatively correlated. Some intervals have high correlation coefficients, some have low correlation coefficients. The characteristic wave number which can best reflect the decay of the quality index can be determined by the correlation coefficient diagram. (2) The correlation coefficients of different quality indexes of the same sample are approximate, indicating that for the same fuel, both gum and acidity can reflect the oxidative metamorphism of the fuel, which can be characterized by infrared spectroscopy. (3) The correlation coefficient diagrams of the same quality index of different gasoline samples are obviously different, which indicates that different fuel quality decays in different forms. The positive and negative attributes and absorption intervals of the correlation coefficients of the four oil stability indexes are classified and analyzed one by one, as


shown in Table 1. The results show that, whether it is unwashed gum, actual gum or acidity, the interval of correlation coefficient greater than 0 is mainly the aromatic and carbonyl absorption interval, while the interval less than 0 is the olefin absorption interval. This indicates that during the process of gasoline storage, olefin is oxidized and aromatized, which leads to the increase of aromatic hydrocarbon and the content of carbonyl substance.
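The correlation analysis described above boils down to a Pearson correlation between each stability index and the absorbance integrated over a wavenumber interval. The sketch below uses made-up measurement arrays purely to show the computation; the real values come from the 40-week accelerated storage test.

```python
import numpy as np

# Storage times (weeks) at which indexes and spectra were measured.
weeks = np.array([0, 8, 16, 24, 32, 40])

# Hypothetical measurements: actual gum values and a spectrum matrix
# (rows = sampling times, columns = absorbance at each wavenumber).
actual_gum = np.array([1.5, 2.0, 2.8, 3.9, 5.5, 7.6])
wavenumbers = np.arange(4000, 600, -2.0)
spectra = np.random.default_rng(0).random((len(weeks), len(wavenumbers)))

def band_area(spectra, wavenumbers, hi, lo):
    # Approximate the integrated absorbance over [lo, hi] cm^-1 (2 cm^-1 grid).
    mask = (wavenumbers <= hi) & (wavenumbers >= lo)
    return spectra[:, mask].sum(axis=1) * 2.0

area = band_area(spectra, wavenumbers, hi=648, lo=628)
r = np.corrcoef(area, actual_gum)[0, 1]       # correlation with the stability index
print(r)
```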

Fig. 1. The actual gum and acidity data of automotive gasoline during the acceleration storage process

Fig. 2. Correlation coefficients between the infrared spectrum and the quality indexes during the accelerated storage process

Table 1 (Unwashed gum)

Interval (cm−1)  Correlation coefficient
3068–3032        >0
1944–1925        >0
1896–1847        >0
1762–1714        >0
1625–1479        >0
1302–1333        >0
1059–1021        >0
849–804          >0
688–796          >0
994–942          <0

(4)

where γth is the threshold that satisfies the SU-DU communication.

3

Outage Analysis

In this section, we formulate the power allocation problem by minimizing the outage probability while fulfilling the interference inflicted on the cellular user is below a predetermined threshold. Since we assume the direct link between SU and DU can be neglected due to obstacles or deep shadowing, the problem is formulated as min 1 − 1+ γth1pc ηcr 1+ γth1pr ηrr 1+ γth1pc ηcd ps ηsr

ps ηsr

pr ηrd

s.t. ps gsb + pr grb ≤ Ith ,

0 < ps ≤ pmax , 0 < pr ≤ pmax .

(5)

where Ith is the predetermined interference threshold. The optimization problem can be simplified to

$$\max_{p_s,\,p_r}\; \frac{1}{1+\frac{\gamma_{th} p_c \eta_{cr}}{p_s \eta_{sr}}}\cdot\frac{1}{1+\frac{\gamma_{th} p_r \eta_{rr}}{p_s \eta_{sr}}}\cdot\frac{1}{1+\frac{\gamma_{th} p_c \eta_{cd}}{p_r \eta_{rd}}} \qquad (6)$$
$$\text{s.t. } p_s g_{sb} + p_r g_{rb} \le I_{th},\quad 0 < p_s \le p_{max},\quad 0 < p_r \le p_{max}.$$



We define a function f(p_s, p_r) as follows:

$$f(p_s, p_r) = \frac{1}{1+\frac{\gamma_{th} p_c \eta_{cr}}{p_s \eta_{sr}}}\cdot\frac{1}{1+\frac{\gamma_{th} p_r \eta_{rr}}{p_s \eta_{sr}}}\cdot\frac{1}{1+\frac{\gamma_{th} p_c \eta_{cd}}{p_r \eta_{rd}}} = \frac{p_s^2\, p_r}{(p_s + A)(p_s + Bp_r)(p_r + C)} \qquad (7)$$

where $A = \gamma_{th} p_c \eta_{cr}/\eta_{sr}$, $B = \gamma_{th}\eta_{rr}/\eta_{sr}$, and $C = \gamma_{th} p_c \eta_{cd}/\eta_{rd}$. The following results can be obtained by taking the first-order partial derivatives of $f(p_s, p_r)$ with respect to $p_s$ and $p_r$ and setting them equal to zero:

$$Bp_r^3p_s^2 + (A + CB)p_s^2p_r^2 + 2ABp_sp_r^3 + ACp_s^2p_r + 2ABCp_r^2p_s = 0$$
$$Cp_s^4 + ACp_s^3 - Bp_r^2p_s^3 - ABp_s^2p_r^2 = 0 \qquad (8)$$

A solution of the above equations would be a critical point of the region $\{(p_s, p_r)\,|\, p_s \in (0, p_{max}],\ p_r \in (0, p_{max}]\} \cap \{(p_s, p_r)\,|\, p_s g_{sb} + p_r g_{rb} \le I_{th}\}$. However, no such solution exists, i.e., there is no interior extreme point, so the maximum falls on the boundary. There are three kinds of boundary conditions: A: $\{(p_s, p_r)\,|\, p_s \in (0, p_{max}],\ p_r \in (0, p_{max}],\ p_s g_{sb} + p_r g_{rb} = I_{th}\}$; B: $\{(p_s, p_r)\,|\, p_s \in (0, (I_{th} - p_{max} g_{rb})/g_{sb}],\ p_r = p_{max}\}$; C: $\{(p_s, p_r)\,|\, p_s = p_{max},\ p_r \in (0, (I_{th} - p_{max} g_{sb})/g_{rb}]\}$. The three boundary cases are discussed as follows.

3.1 Boundary Condition A

Under this boundary condition, the objective function $f(p_s, p_r)$ can be converted to $f(p_s, (I_{th} - p_s g_{sb})/g_{rb})$. Taking its first-order derivative with respect to $p_s$ and setting the result equal to zero, we can get

$$-g_{sb}g_{rb}^3Cp_s^4 + ACg_{rb}^3p_s^2(I_{th} - 2p_sg_{sb}) - 2g_{rb}ABI_{th}(I_{th} - p_sg_{sb})^2p_s + g_{rb}ABp_s^2(I_{th} - p_sg_{sb})^2g_{sb} - Bg_{rb}I_{th}(I_{th} - p_sg_{sb})^2p_s^2 - Ag_{rb}^2p_s^2(I_{th} - p_sg_{sb})^2 - BCg_{rb}^2p_s^2(I_{th} - p_sg_{sb})^2 - ABCg_{rb}^2p_s(I_{th} - p_sg_{sb})^2 = 0 \qquad (9)$$

After expanding the above formula, a quartic equation in one variable is obtained as
$$ap_s^4 + bp_s^3 + cp_s^2 + dp_s = 0 \qquad (10)$$
where $a = -g_{sb}g_{rb}^3C + g_{rb}g_{sb}^3AB - Bg_{rb}g_{sb}^2I_{th} - Ag_{rb}^2g_{sb}^2 - BCg_{rb}^2g_{sb}^2$, $b = -2ACg_{rb}^3g_{sb} - 2g_{rb}g_{sb}ABI_{th} - 2I_{th}g_{rb}g_{sb}^2AB + 2Bg_{rb}g_{sb}I_{th}^2 + 2AI_{th}g_{rb}^2g_{sb} + 2I_{th}BCg_{rb}^2g_{sb} - ABCg_{rb}^2g_{sb}^2$, $c = ACg_{rb}^3I_{th} + 5g_{rb}g_{sb}ABI_{th}^2 - Bg_{rb}I_{th}^3 - Ag_{rb}^2I_{th}^2 - BCg_{rb}^2I_{th}^2 + 2I_{th}ABCg_{rb}^2g_{sb}$, and $d = -2g_{rb}ABI_{th}^3 - ABCg_{rb}^2I_{th}^2$. We assume that the maximum point obtained is $(p_{s1}, p_{r1})$.



3.2 Boundary Condition B

In boundary condition B, $p_r = p_{max}$, and the function $f(p_s, p_r)$ can be simplified to $f(p_s, p_{max})$. Taking the first-order partial derivative with respect to $p_s$ and setting the result equal to zero, we have
$$Bp_{max} - A - \frac{ABp_{max} + ABC}{p_s} - CB - \frac{AC}{p_{max}} = 0 \qquad (11)$$

In this case, the resulting solution does not satisfy the constraints, so the maximum point does not lie on this boundary.

3.3 Boundary Condition C

For boundary condition C, $p_s = p_{max}$, and the function $f(p_s, p_r)$ similarly simplifies to $f(p_{max}, p_r)$. Taking the first-order partial derivative with respect to $p_r$ and setting the result equal to zero, we have
$$ap_r^2 + bp_{max}^2 + cp_{max} = 0 \qquad (12)$$
where $a = Bp_{max} + AB$, $b = -C$, $c = -AC$. We assume that the maximum point obtained is $(p_{s2}, p_{r2})$. Finally, considering the three boundary cases together, the optimal power allocation scheme can be expressed as
$$(p_s^*, p_r^*) = \arg\max_{(p_s, p_r) \in \Phi} f(p_s, p_r) \qquad (13)$$
where $\Phi = \{(p_{s1}, p_{r1}), (p_{s2}, p_{r2})\}$.
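To make the boundary search concrete, the following is a minimal Python sketch (not the authors' implementation) that evaluates candidate points on boundaries A and C and applies Eq. (13). The grid search used on boundary A in place of solving the quartic of Eq. (10), and all function names, are illustrative assumptions.

```python
import numpy as np

def f(ps, pr, A, B, C):
    """Objective from Eq. (7): a larger f means a lower outage probability."""
    return (ps**2 * pr) / ((ps + A) * (ps + B * pr) * (pr + C))

def optimal_power(A, B, C, gsb, grb, Ith, pmax, n_grid=10000):
    candidates = []

    # Boundary A: ps*gsb + pr*grb = Ith, scanned numerically instead of
    # solving the quartic in Eq. (10).
    ps = np.linspace(1e-9, pmax, n_grid)
    pr = (Ith - ps * gsb) / grb
    mask = (pr > 0) & (pr <= pmax)
    if mask.any():
        vals = f(ps[mask], pr[mask], A, B, C)
        i = np.argmax(vals)
        candidates.append((ps[mask][i], pr[mask][i]))

    # Boundary C: ps = pmax, pr from Eq. (12), i.e. pr = sqrt(C*pmax/B),
    # kept only if it satisfies the power and interference constraints.
    pr_c = np.sqrt(C * pmax / B)
    if 0 < pr_c <= pmax and pmax * gsb + pr_c * grb <= Ith:
        candidates.append((pmax, pr_c))

    if not candidates:
        raise ValueError("constraint set is empty for the given parameters")
    # Eq. (13): pick the candidate maximizing f (boundary B has no valid point).
    return max(candidates, key=lambda p: f(p[0], p[1], A, B, C))
```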

4 Numerical and Simulation Results

In this section, numerical and simulation results are given to validate that the power allocation obtained from the above theoretical analysis is optimal. It should be noted that the channels of the full duplex relay system follow independent but non-identical Rayleigh distributions. In this simulation, 10,000 channel realizations are generated randomly. In addition, we set the path-loss factor α to 4 and the maximum transmission power to 26 dB, and we fix the value of pc to 10 dB. Regarding the distances, we set the distance between S and R and the distance between D and R to 100 m, the distance between S and D and the distance between C and R to 200 m, and the distance between C and S and the distance between C and D to 100√5 m. Figures 2 and 3 illustrate the relationship between Ps and Pr, respectively, and the outage probability. To facilitate comparison, when examining the relationship between Ps and the outage probability we set Pr to its maximum value, and when examining the relationship between Pr and the



outage probability we set Ps to its maximum value. It can be seen that, although the outage probability decreases continuously as Ps and Pr increase, the extent of the decrease shrinks significantly once the transmission power reaches a certain level. Using a very large transmission power can therefore reduce the outage probability, but it wastes resources and energy. The proposed power allocation scheme keeps the outage probability small while saving resources and energy.

Fig. 2. The relationship between ps and outage probability when pr is fixed

Figure 4 shows the comparison of the outage probability between case A and case B when Ith takes different values. It is obvious that, for different self-interference, the outage probability under case A is always lower than that under case B.

5 Conclusion

This paper proposed an optimal power allocation scheme for full-duplex relay-assisted D2D communication systems in wireless cellular networks. We derived the outage probability of the full-duplex relay-assisted D2D link in an interference-limited scenario and minimized the outage probability while keeping the interference inflicted on the cellular user below a predetermined threshold. By using this scheme, the outage probability can be obtained with the smallest possible transmission power, which saves resources and improves the outage performance.



Fig. 3. The relationship between pr and outage probability when ps is fixed

Fig. 4. The change of outage probability with threshold Ith under different self-interference.



Acknowledgement. This work was supported by the National Natural Science Foundation of China (61701345), Natural Science Foundation of Tianjin (18JCZDJC31900), and Tianjin Education Commission Scientific Research Plan (2017KJ121).


Scene Text Recognition Based on Deep Learning Yunxue Shao and Yuxin Chen(&) College of Computer Science, Inner Mongolia University, Inner Mongolia, Hohhot, People’s Republic of China [email protected], [email protected]

Abstract. Images are used everywhere in our life and carry abundant information. Unlike general visual elements, text contains a wealth of semantic information, and obtaining the text content of an image helps us understand the image better. Therefore, it is crucial to recognize and understand text in scene images. In recent years, deep learning technology has developed rapidly, has taken a leading role in the field of traditional optical character recognition (OCR), and has achieved good results. On this basis, this paper further studies scene text recognition with deep learning methods.

Keywords: Deep learning · Scene text recognition · Convolutional neural networks · Recurrent neural network

1 Introduction Text is one of the main ways of information transmission and interaction among human beings and plays an indispensable role in our life. In the natural scene image, it often contains a lot of text information, and extracting text information from the natural scene image can help us to understand images further. Therefore, the recognition of scene text is of great significance. Scene text recognition is the process of recognizing a sequence of letters from a cropped text picture. Scene text recognition is a challenging problem, in addition to its complex text background, random distribution of text and abundant type of text, its output length is not fixed. Accordingly, it is different from the general classification problem. Scene text recognition is a problem of recognizing sequences with unfixed length from images.

2 Background Knowledge The text recognition process is usually divided into the following steps: Pre-processing: Before the text recognition, the image shall be pre-processed as necessary to highlight the text part and scale the image to a suitable size for processing.




Feature extraction: It is often difficult to achieve an ideal effect by directly recognizing words at the pixel level, so a set of features needs to be defined to represent the image.
Recognition: Think of a text recognition task as a special classification task, with each character representing a category. The recognizer takes the extracted features as input and outputs the corresponding characters or words.

2.1 Scene Text Recognition Based on Deep Learning Method

Scene text recognition based on deep learning has achieved remarkable research results along with the development of deep learning. With the successful application of deep neural networks in computer vision tasks [1], many researchers began to use deep neural networks for scene text recognition. The common building blocks are the convolutional neural network (CNN) and the recurrent neural network (RNN). In 2015, Jaderberg used a neural network to classify words directly [2], with each word treated as a category. However, this method has high complexity, time-consuming training, and needs a large number of learning samples. To solve this problem, Shi proposed the convolutional recurrent neural network (CRNN) structure [3], which integrates CNN and RNN to perform recognition. Compared with a traditional DCNN, this structure can learn directly from image data, places no constraint on the length of sequence objects, and has stronger generalization ability. Secondly, because scene text is often irregular, Jaderberg proposed the spatial transformer network (STN) [4] to correct oriented or curved text. On this basis, many models have been derived: Shi proposed a robust recognition model for irregular text in 2016 [5]. The model consists of an STN and a sequence recognition network (SRN); the STN corrects an oriented or curved text image into a regular, readable image, and the SRN directly recognizes the input text image as a text sequence. In 2017, Bartz proposed the STN-OCR method [6], which can detect and recognize text through an STN under semi-supervision. In 2018, Shi proposed the ASTER method [7], which uses an STN framework to rectify images and then uses a CNN and an RNN to extract and recognize image features. Generally speaking, these recognition methods integrate CNN and RNN to achieve recognition.

3 Scene Text Recognition Text recognition can be modeled as a sequence recognition problem. Here we introduce a typical sequence-to-sequence network structure, CRNN [3]. The network architecture consists of three parts: (1) Convolution layer: extracting feature sequences from the input image. (2) Recurrent layer: it uses LSTM to analyze the feature sequences, and predicts the label distribution of each frame for the feature sequence. (3) Transcription layer: transform the prediction of each frame into the final label sequence.


3.1 Improved Sequence Recognition Algorithm

Since CRNN can learn directly from sequences, detailed annotation is not required, there is no constraint on the length of the sequence object, recognition performance is good, and the model is lightweight and occupies little storage space. Therefore, this paper improves the algorithm on the basis of CRNN.
3.1.1 Image Pre-processing
Many images in the training set contain factors that affect text recognition and make feature extraction difficult. To alleviate this, the image must be pre-processed prior to recognition: it is corrected, cropped, and scaled so as to highlight the text portion and to fit the input size of the network. The correction of the image is first done using a connected-region algorithm. This stage obtains the connected areas by labelling the objects in the image, but it also includes irrelevant areas. In order to eliminate these regions, some typical letter features are used for filtering: (1) the height of the circumscribed rectangle is greater than its width; (2) the ratio of the height to the width of the circumscribed rectangle is between 1 and 5. Areas that do not satisfy the above conditions are filtered out, and the tilt angle is then estimated to correct the image. The second step is cropping: after correction, the excess parts are cut away to obtain the exact letter region as far as possible. Schematic diagrams of each stage are shown below.

Fig. 1. Original image

Fig. 2. Connected region diagram

Fig. 3. Corrected image
Fig. 4. Cropped image

Finally, the scaling of the image is completed. After the image is cropped, it is scaled to a 128 × 32 image in order to obtain an input that fits the size of the network structure (Figs. 1, 2, 3 and 4).
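As an illustration of this pre-processing pipeline, the following is a minimal Python/OpenCV sketch. The paper does not specify the binarization method or how the tilt angle is estimated, so Otsu thresholding and a minimum-area-rectangle angle estimate are used here as assumptions; function and variable names are hypothetical.

```python
import cv2
import numpy as np

def preprocess(image):
    """Connected-component filtering, tilt correction, cropping, scaling to 128x32."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Mark connected areas and keep only letter-like components:
    # height greater than width, and height/width ratio between 1 and 5.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    keep = np.zeros_like(binary)
    for i in range(1, n):
        w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        if h > w and 1 <= h / w <= 5:
            keep[labels == i] = 255

    # Estimate a tilt angle from the retained pixels and rotate to correct it
    # (angle conventions of minAreaRect vary across OpenCV versions).
    pts = cv2.findNonZero(keep)
    if pts is not None:
        angle = cv2.minAreaRect(pts)[-1]
        if angle > 45:
            angle -= 90
        h_img, w_img = gray.shape
        M = cv2.getRotationMatrix2D((w_img / 2, h_img / 2), angle, 1.0)
        image = cv2.warpAffine(image, M, (w_img, h_img))
        keep = cv2.warpAffine(keep, M, (w_img, h_img))

    # Crop to the text region and scale to the 128x32 network input size.
    pts = cv2.findNonZero(keep)
    if pts is not None:
        x, y, w, h = cv2.boundingRect(pts)
        image = image[y:y + h, x:x + w]
    return cv2.resize(image, (128, 32))
```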

3.1.2 Feature Extraction



After image pre-processing is completed, feature extraction is performed with a CNN [8]. A BatchNormalization layer is added to the CRNN; BatchNormalization improves the convergence speed of the convolutional neural network, reduces the network's sensitivity to the initialization of the weights, and also acts as a regularizer. The input of the CNN is an image represented by a third-order tensor of size 128 × 32 × 3, whose dimensions are the width, height, and number of channels of the image. The final output of the CNN is a 32 × 1 × 512 feature map. The details of each network layer are shown in Table 1.

Table 1. CNN configuration details
Type    Kernel  Stride  Output
Conv1   3×3     1×1     128×32×64
BN1     –       –       128×32×64
Pool1   2×2     2×2     64×16×64
Conv2   3×3     1×1     64×16×128
BN2     –       –       64×16×128
Pool2   2×2     2×2     32×8×128
Conv3   3×3     1×1     32×8×256
BN3     –       –       32×8×256
Conv4   3×3     1×1     32×8×256
BN4     –       –       32×8×256
Pool4   1×2     1×2     32×4×256
Conv5   3×3     1×1     32×4×512
BN5     –       –       32×4×512
Conv6   3×3     1×1     32×4×512
BN6     –       –       32×4×512
Pool6   1×2     1×2     32×2×512
Conv7   2×2     1×2     32×1×512
BN7     –       –       32×1×512
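A minimal PyTorch sketch of the Conv + BatchNorm building block described above is given below; the ReLU activation and the padding value are assumptions not stated in Table 1, and channel sizes are passed in per Table 1.

```python
import torch.nn as nn

def conv_bn_relu(c_in, c_out, kernel=3, stride=1, padding=1):
    # Convolution followed by BatchNorm, as described for the modified CRNN;
    # BN speeds up convergence and reduces sensitivity to weight initialization.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel, stride, padding),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )
```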

3.1.3 Processing of Context Information
GRU is an effective variant of the LSTM network that is simpler than LSTM: it requires less computation and fewer parameters, while its performance is still very good, so it is also a widely used network. GRU contains only two gate structures: the update gate z and the reset gate r. The update gate z and the reset gate r receive the same input: the input signal at the current moment and the context of the previous moment. The inputs to these two gates are defined by Eqs. (1) and (2), respectively, where x_t is the input signal at the current time and h_{t−1} is the context of the previous moment.



$$z_t = \sigma(W_z \cdot [h_{t-1}, x_t]) \qquad (1)$$
$$r_t = \sigma(W_r \cdot [h_{t-1}, x_t]) \qquad (2)$$

The final output of the GRU is determined by the context of the previous moment, the activation of the hidden layer, and the activation of the reset gate. The activation state of the hidden layer is defined by Eq. (3), and the output is defined by Eq. (4). This output is stored in the memory unit h as the context of the current time.

$$\tilde{h}_t = \tanh(W \cdot [r_t * h_{t-1}, x_t]) \qquad (3)$$
$$h_t = (1 - z_t) * h_{t-1} + z_t * \tilde{h}_t \qquad (4)$$

There are 512 memory cell modules in each GRU layer of this module, corresponding to the dimension of the CNN features. The input layer has 512 neurons in total and is fully connected to the hidden layer. The hidden layer is then fully connected to the output layer, which is classified using softmax into 102 categories (including the ten digits 0–9, the upper- and lower-case English letters, and special symbols). The result of the bidirectional GRU prediction is y = {y1, y2, …, yt}, whose length is the same as the length of the input feature sequence.
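As an illustration, a minimal PyTorch sketch of this recurrent part is given below. The use of two stacked bidirectional GRU layers and all class/function names are assumptions, and the transcription step of Sect. 3.1.4 is not included.

```python
import torch
import torch.nn as nn

class SequenceHead(nn.Module):
    """Bidirectional GRU over the CNN feature sequence, then per-frame classification."""

    def __init__(self, feat_dim=512, hidden=512, num_classes=102):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, conv_features):
        # conv_features: (batch, 512, 1, 32) from the CNN of Table 1.
        x = conv_features.squeeze(2).permute(0, 2, 1)   # -> (batch, 32, 512)
        x, _ = self.rnn(x)                              # -> (batch, 32, 1024)
        return self.fc(x).log_softmax(dim=-1)           # per-frame class scores

# y has shape (batch, 32, 102); the transcription step turns the 32 per-frame
# predictions into the final character sequence.
y = SequenceHead()(torch.randn(2, 512, 1, 32))
```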

4 Experiment and Analysis

4.1 Data Sets and Evaluation Criteria

The data set used in this study is the COCO-Text 2017 cropped-word dataset, in which the training set contains 42,618 cropped images and the validation set contains 9,896 cropped images. The task is to recognize each word image as a sequence of characters. These cropped word images cover various natural scenes and different interference factors, which makes them suitable for verifying the validity of a text recognition algorithm. The evaluation of text recognition results depends on two measures: Edit Distance (ED) and Correctly Recognised Words (CRW). This article focuses on the word recognition rate, that is, the ratio of correctly recognized words to the total number of words in the test set. The ICDAR2017 evaluation criteria have two cases: case sensitive and case insensitive. The network trained here is case-sensitive, so the following experimental results are also case-sensitive.
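A minimal Python sketch of the CRW metric as described above (the function name is illustrative):

```python
def crw(predictions, ground_truths, case_sensitive=True):
    """Correctly Recognised Words: fraction of word images recognised exactly."""
    if not case_sensitive:
        predictions = [p.lower() for p in predictions]
        ground_truths = [g.lower() for g in ground_truths]
    correct = sum(p == g for p, g in zip(predictions, ground_truths))
    return correct / len(ground_truths)
```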

4.2 Results and Analysis

The improved model was tested on the original images, with the added BN layers, and on the images after rotation and cropping; the results are shown in Table 2 (Figs. 5 and 6).

Table 2. Recognition results
Different network models        CRW (case insensitive) (%)
Original image + CRNN           23.06
Original image + CRNN + BN      24.50
Pre-processing + CRNN + BN      26.03

Fig. 5. Recognition result of adding BN layer

Fig. 6. Recognition results after pre-processing



The following are partial recognition results (Fig. 7).

Fig. 7. Partial recognition results on ICDAR2017

Analyzing the above recognition results together with the images that cause the low recognition rate, the main problems are the following [9]: (1) Special characters: in addition to the 10 digits and the upper- and lower-case English letters, our model also has to classify a large number of punctuation marks, which are smaller than the text and are not easy to recognize. (2) Labeling errors: the ground-truth labels of some samples are wrong, which also lowers the recognition rate; for example, our recognition results are case-sensitive while some labels do not distinguish case. (3) Long words: some strings in the images to be recognized are very long, which increases the difficulty of recognition.

5 Conclusion Through the analysis of the research status, the model structure of the existing method is determined. It is proposed to first pre-process the image, highlight the text part, and then scale the image to the appropriate size to be sent to the improved network identification model. Through the comparison and analysis before and after the improvement, it is verified that the recognition accuracy has been improved. Acknowledgements. This study was supported by the National Natural Science Foundation of China (NSFC) under Grant no. 61563039.



References 1. Ren S, He K, Girshick R, Sun J (2017) Faster r-cnn: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 39(6):1137–1149 2. Jaderberg M, Simonyan K, Vedaldi A, Zisserman A (2016) Reading text in the wild with convolutional neural networks. Int J Comput Vision 116(1):1–20 3. Shi B, Bai X, Yao C (2015) An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. IEEE Trans Pattern Anal Mach Intell 39(11):2298–2304 4. Jaderberg M, Simonyan L, Zisserman A, Kavukcuoglu K (2015) Spatial transformer networks 5. Shi B, Wang X, Lyu P, Yao C, Bai X (2016) Robust scene text recognition with automatic rectification 6. Bartz C, Yang H, Meinel C (2017) Stn-ocr: a single neural network for text detection and text recognition 7. Baoguang S, Mingkun Y, Xinggang W, Pengyuan L, Cong, Y, Xiang B (2018) Aster: an attentional scene text recognizer with flexible rectification. In: IEEE Transactions on pattern analysis and machine intelligence, pp 1–1 8. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556 9. Xia Y Research on natural scene text detection and recognition algorithm based on deep learning. (in Chinese)

Spectrum Sensing Algorithm Based on Twin Support Vector Machine Xiaorong Wang1(&), Zili Wang1, Dongyang Guo2, and Huiling Zhou3 1

Department of Information Engineering, Engineering University of PAP, Xi-an, China {1650465703,841137311}@qq.com 2 Clerk of Information Department, Beijing, China 3 PLA Unit 95746, Chengdu 611531, China

Abstract. Spectrum sensing is the key of implementing cognitive radio technology. As a kind of machine learning method based on statistical learning theory, support vector machine has the advantages of global optimization, nonlinearity and good generalization ability. The use of support vector machines in spectrum sensing can solve the problem that parameters in spectrum sensing are difficult to determine by learning historical data. Aiming at the problem that the training time of the spectrum perception algorithm based on support vector machine is too long, this paper proposes a spectrum sensing algorithm based on twin support vector machine based on fuzzy mathematics and twin support vector machine. The algorithm extracts the leading eigenvector as the input features that can reflect the signal correlation and calculate the membership degree according to the proportion of the maximum eigenvalue. The twin support vector machine is used to solve the classification hyperplane. The algorithm complexity is only 1/4 of the SVM algorithm, which can greatly reduce the training time. The simulation results show that under equal prior condition, when the number of users and the number of samples are the same, the spectrum sensing algorithm based on TWSVM can obtain a lower minimum error probability than the SVM-based spectrum sensing algorithm and energy detection. As the number of users and the number of sampling points increase, the minimum error probability of the TWSVM-based spectrum sensing algorithm decreases. Keywords: Spectrum sensing Membership

 Twin support vector machine  Eigenvalue 

1 Introduction With the rapid development of wireless communication, spectrum resources are becoming increasingly strained [1]. In order to solve the problem of low spectrum utilization, the concept of cognitive radio is proposed. Spectrum sensing is the core of cognitive radio technology. Traditional spectrum sensing methods include energy detection [2], cyclic stationary detection, matched filtering detection and covariance detection [3]. Common detection methods based on covariance matrix include ratio detection of maximum and minimum eigenvalues (MME) [4], difference between the



maximum and minimum eigenvalues (DMM) [5] and so on. These detection methods based on covariance matrix are blind detections which do not need prior information, and can overcome the impact of noise uncertainty. Threshold size, node weight and other parameters in spectrum sensing are important factors affecting the detection performance of spectrum sensing directly. The inaccuracy of parameter setting will greatly degrade the detection performance. In order to solve the problem that the parameters in spectrum sensing are difficult to be determined, machine learning is further applied to spectrum sensing. Machine learning includes statistics, approximation theory and other disciplines. By machine learning the data produced in the process of spectrum sensing, the spectrum sensing parameters can be obtained efficiently and accurately. Support vector machine (SVM) is a machine learning method based on statistical learning theory put forward by Vapnik, SVM has the advantages of global optimization, nonlinearity and good generalization ability with the structure risk minimization. In paper [6–8], spectrum sensing algorithm based on support vector machine was proposed. In paper [6], SVM was used for spectrum sensing on the basis of using PCA dimension reduction, which could achieve better detection performance than energy detection at low SNR. In paper [7], a spectrum sensing algorithm was proposed based on hybrid kernel function support vector machine for the singularities and parameter uncertainties of kernel function selection, and used quantum particle swarm optimization to optimize the parameters. The classification accuracy was improved compared with SVM using only a single kernel function. In paper [8], a new feature vector was constructed with the decision statistics of various covariance matrix detection methods, which was input into SVM for training further and obtained better detection performance than ED and MME. The SVM complexity and SVM training time are not considered in the above SVM spectrum sensing algorithms. When the SVM is used to solve the quadratic programming problem, if the data is too large, the time of spectrum perception will be directly increased, resulting in poor perception performance. Aiming at the problem that the training time of spectrum perception algorithm based SVM is so long, this paper proposes a spectrum perception algorithm based on twin support vector machine based on fuzzy mathematics and twin support vector machine. This algorithm uses the twin support vector machine to solve the classification hyperplane, and introduces the membership degree according to the proportion of eigenvalues. Its algorithm complexity is only 1/4 of that of SVM algorithm [9], which can greatly reduce the training time and the error probability of spectrum perception.

2 System Model In this paper, the centralized cooperative perception model is adopted, SU in different positions detect the authorized spectrum and transmit the detected information to the fusion center in real time. Suppose the number of SU in the model is K, and the number of sampling points of each SU in the perception time is N. yi ðnÞ is the signal of the nth sampling point transmitted by the ith SU, i ¼ 1; 2; . . . k, n ¼ 1; 2; . . . N. H0 means the



spectrum is free without PU and H1 means the spectrum is occupied with PU. Based on the above signal analysis in discrete form, the binary hypotheses about the PU signal can be written as
$$y_i(n) = \begin{cases} w_i(n), & H_0 \\ h_i(n)s(n) + w_i(n), & H_1 \end{cases} \qquad (1)$$

$w_i(n)$ is additive white Gaussian noise (AWGN) with zero mean and variance $\sigma_n^2$, i.e., $w_i(n) \sim N(0, \sigma_n^2)$, and the $y_i(n)$ are independent of each other. $s(n)$ is the PU signal with $E[|s(n)|^2] = \sigma_s^2$. The channel impulse response between the PU and the ith SU at the nth sampling point is $h_i(n)$. $Y = [y_1, y_2, \ldots, y_K]^T$ denotes the received sample matrix at the FC and $W = [w_1, w_2, \ldots, w_K]^T$ the received noise matrix.

$$Y = [y_1, y_2, \ldots, y_K]^T = \begin{bmatrix} y_1(1) & y_1(2) & \cdots & y_1(N) \\ y_2(1) & y_2(2) & \cdots & y_2(N) \\ \vdots & \vdots & \ddots & \vdots \\ y_K(1) & y_K(2) & \cdots & y_K(N) \end{bmatrix} \qquad (2)$$
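As an illustration of this system model, the following is a minimal Python sketch that generates a K × N sample matrix under H0 or H1. The real-valued BPSK simplification, unit noise variance, and Rayleigh channel gains held constant over the sensing window are assumptions, not part of the paper.

```python
import numpy as np

def sample_matrix(K, N, snr_db, pu_present, rng=None):
    """Generate the K x N received sample matrix Y of Eq. (2) under H0 or H1."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, 1.0, (K, N))            # w_i(n) ~ N(0, 1)
    if not pu_present:                               # H0: noise only
        return noise
    s = np.sign(rng.standard_normal(N))              # BPSK PU symbols s(n)
    h = rng.rayleigh(scale=1.0, size=(K, 1))         # per-SU channel gains
    amp = np.sqrt(10 ** (snr_db / 10))                # scale to the target SNR
    return amp * h * s + noise                        # H1: signal plus noise
```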

3 Spectrum Sensing Algorithm Based on TWSVM

3.1 Cognitive Process

As is shown in the Fig. 1, the spectrum sensing process based on TWSVM is divided into two stages. The first stage is the training stage. FC collects enough sample matrices to form the training sample set and extracts features from the sample matrix. The extracted features are input into the TWSVM for training, so as to obtain the optimal separation plane. The second stage is the detection stage. FC collects new signal samples, extracts features in the same way and achieve detection decision. Finally the detection results are output.

Fig. 1. Spectrum sensing process based on TWSVM (training data set → feature extraction → TWSVM training; collected signal → feature extraction → detection decision → output)

3.2 Feature Extraction

In general, the PU signal has a strong correlation while the noise correlation is very poor, which results in a large difference between the covariance matrix of the received signal and that of the noise. This difference can be used to distinguish signal from noise. In this paper, the covariance matrix is used for feature extraction. For the received sample matrix, the covariance matrix is calculated by
$$\hat{R} = \frac{1}{N}YY^H \qquad (3)$$

According to the assumptions of the model, the covariance matrix $\hat{R}$ is mainly composed of four parts: the autocorrelation matrix of the signal $R_s$, the autocorrelation matrix of the noise $R_n$, and the cross-correlation matrices of signal and noise $E(sw^T)$ and $E(ws^T)$. The signal is independent of the noise, so both $E(sw^T)$ and $E(ws^T)$ are zero. The original hypotheses are therefore equivalent to
$$H_0: \hat{R} = R_n, \qquad H_1: \hat{R} = R_s + R_n \qquad (4)$$

Carry out eigenvalue decomposition of $\hat{R}$, with the eigenvalues arranged in descending order as $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_K$ and corresponding eigenvectors $v_1, v_2, \ldots, v_K$. An eigenvector represents a direction of signal correlation, and the corresponding eigenvalue reflects the intensity of the signal in that direction. For a signal with strong correlation, the strongly correlated direction corresponds to a large eigenvalue, while the weakly correlated noise corresponds to small eigenvalues with little spread. In this paper, the main eigenvector $v$ is selected as the feature of the signal. The main eigenvector $v$ is the eigenvector corresponding to the maximum eigenvalue $\lambda_1$ and represents the most important direction of signal correlation. A single eigenvalue reflects the intensity of correlation in a single direction, while the sum of all eigenvalues reflects the intensity of the overall correlation of the signal. To measure the degree to which the selected main eigenvector reflects the overall correlation, the membership degree of a collected sample is obtained from the proportion of the maximum eigenvalue among all eigenvalues, $S = \lambda_1 / \sum_{i=1}^{K}\lambda_i$. Suppose $M$ sample matrices are collected in the training stage; the main eigenvector of the ith sample matrix is $v_i$, $i = 1, 2, \ldots, M$, the corresponding label is $y_i \in \{-1, +1\}$, and the membership degree is $s_i$, $i = 1, 2, \ldots, M$. The final data set obtained through feature extraction is $T = \{(v_1, y_1, s_1), (v_2, y_2, s_2), \ldots, (v_M, y_M, s_M)\}$.
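A minimal Python sketch of this feature extraction step (leading eigenvector plus membership degree) is given below; the function name is illustrative.

```python
import numpy as np

def extract_feature(Y):
    """Leading eigenvector and membership degree from a K x N sample matrix Y."""
    N = Y.shape[1]
    R = Y @ Y.conj().T / N                    # sample covariance matrix, Eq. (3)
    eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    v = eigvecs[:, 0]                         # leading eigenvector (input feature)
    s = eigvals[0] / eigvals.sum()            # membership degree S = lambda_1 / sum(lambda_i)
    return v, s
```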

3.3 TWSVM Training

In order to solve the problem that SVM training time is too long, Jayadeva proposed TWSVM, which transforms the binary classification problem from the original solution of one maximum hyperplane to two non-parallel hyperplanes by solving two quadratic



convex programming problems. The method requires each class of samples to be as close as possible to its own hyperplane while keeping some distance from the hyperplane of the other class. A new sample is assigned to one of the categories based on its proximity to the two non-parallel hyperplanes. There are m separable sample points $\{(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)\}$ in the n-dimensional real space $R^n$, where $x_i \in R^n$ is the ith sample point and $y_i \in \{+1, -1\}$ is its class label, $i = 1, 2, \ldots, m$. There are $m_1$ sample points belonging to the positive class, denoted $A = \{x_i^+ : i = 1, 2, \ldots, m_1\}$, and $m_2$ sample points belonging to the negative class, denoted $B = \{x_i^- : i = 1, 2, \ldots, m_2\}$. The $m_1 \times n$ matrix A represents the positive sample points and the $m_2 \times n$ matrix B represents the negative sample points. The purpose of TWSVM is to find two non-parallel separating hyperplanes:

$$x^Tw_1 + b_1 = 0, \qquad x^Tw_2 + b_2 = 0 \qquad (5)$$

The two hyperplanes of TWSVM are obtained by solving the following quadratic programming problems:
$$\min_{w_1, b_1, \xi_2}\ \tfrac{1}{2}(Aw_1 + e_1b_1)^T(Aw_1 + e_1b_1) + c_1e_2^T\xi_2 \quad \text{s.t. } -(Bw_1 + e_2b_1) + \xi_2 \ge e_2,\ \xi_2 \ge 0$$
$$\min_{w_2, b_2, \xi_1}\ \tfrac{1}{2}(Bw_2 + e_2b_2)^T(Bw_2 + e_2b_2) + c_2e_1^T\xi_1 \quad \text{s.t. } -(Aw_2 + e_1b_2) + \xi_1 \ge e_1,\ \xi_1 \ge 0 \qquad (6)$$

2

1 > > > min ðBw2 þ e2 b2 ÞT ðBw2 þ e2 b2 Þ þ c2 SA eT1 n1 > > w2 ;b2 ;n1 2 > > : s:t:  ðAw2 þ e1 b2 Þ þ n1  e1 ; n1  0

ð7Þ

Spectrum Sensing Algorithm Based on Twin Support Vector

1149

c1 ; c2 [ 0 is the penalty parameter, e1 ; e2 respectively represents the column vectors whose elements are all 1, n2 , n2 is the relaxation variable, and SA and SB are the fuzzy membership degree in the sample. By introducing Lagrange multiplier method and KKT condition, the dual problem of the above equation is obtained, and the optimal solution of the above equation is obtained by solving the dual problem. 8 1 > > max eT2 a  aT GðH T HÞ1 GT a > > a 2 > > < s:t: 0  a  c1 SB 1 > > max eT1 c  cT HðGT GÞ1 H T c > > c 2 > > : s:t: 0  c  c2 SA

ð8Þ

where H = [A; e1 ; G = [B; e2 ; a 2 Rm2 ; c 2 Rm1 are the Lagrange multiplier vectors. Suppose the optimal solution of the dual problem is, a ; c ; u ¼ ½w1 b1 T ; v ¼ ½w2 b2 T , the optimal solution of the original problem can be obtained as 

3.4

u ¼ ðH T HÞ1 GT a v ¼ ðGT GÞ1 H T c

ð9Þ

Detection Decision

In the detection and decision stage, FC collects a new signal sample matrix, extracts the main feature vector in the same way and determines its category according to the obtained classification decision function: f ðvÞ ¼ arg

  min vT wl þ bl 

l¼1;2

ð10Þ

If f ðvÞ ¼ 1, the detection decision will be H1 , that indicates the presence of a PU signal. If f ðvÞ ¼ 2, the detection decision will be H0 , indicating that no primary user signal exists.

4 Simulation The error probability is one of the indexes to measure the spectrum sensing performance. This paper simulates and verifies the minimum error probability of the spectrum sensing algorithm based on TWSVM with matlab, and compares it with that of the spectrum sensing algorithm based on SVM and energy detection. Paper [10] gives the threshold of energy detection under the condition of minimum error probability. SVM aims to minimize structural risks, and the training threshold enables the spectrum sensing algorithm based on TWSVM and SVM to obtain the minimum error

1150

X. Wang et al.

probability. Assumes that PU signal is BPSK signal, the power is 1 W and the modulation frequency w is 5  104 Hz. Suppose the number of SU is K, the number of sampling points is N, the SU sampling frequency is twice the modulation frequency, and SNR represents signal to noise ratio. RBF kernel is selected for all kernel functions and its parameters are obtained by cross validation.

0.5 ED SVM TWSVM

0.45 0.4 0.35

Pe

0.3 0.25 0.2 0.15 0.1 0.05 0 -20

-18

-16

-14

-12

-10

-8

-6

-4

-2

0

SNR/dB

Fig. 2. Minimum error probability against different SNR

Figure 2 shows the minimum error probability of the three detection algorithms at different SNR under minimum mean error criterion when K ¼ 3, N ¼ 200. In the Fig. 2, ED represents energy detection, SVM represents spectrum perception algorithm based on support vector machine, TWSVM represents spectrum perception algorithm based on twin support vector machine. Simulation results show that the minimum error probabilities of all three algorithms approach 0.5 in very low SNR environments. The minimum error probability of TWSVM algorithm is always lower than other algorithms in different SNR, followed by SVM algorithm, and the error probability of ED algorithm is the highest. And computational complexity of TWSVM is only a quarter of that of SVM algorithm, which means the spectrum sensing algorithm based on TWSVM proposed in this paper can reduce the error probability and reduce the training time at the same time. Figure 3 shows the minimum error probability performance of the TWSVM algorithm at different user numbers when N = 200. Figure 4 shows the detection performance of TWSVM algorithm under different sampling points when K = 3. As is shown in Figs. 3 and 4, with the increase of the number of users and sampling points, the error probability of TWSVM algorithm decreases, indicating that increasing the number of users and sampling points can reduce the minimum error probability of the algorithm and improve the detection performance of TWSVM algorithm.

Spectrum Sensing Algorithm Based on Twin Support Vector

1151

0.5 K=3 K=5 K=7

0.45 0.4 0.35

Pe

0.3 0.25 0.2 0.15 0.1 0.05 0 -20

-18

-16

-14

-12

-10

-8

-6

-4

-2

0

SNR/dB

Fig. 3. Minimum error probability with different user numbers 0.5 0.45

N=200

0.4

N=500 N=800

0.35

Pe

0.3 0.25 0.2 0.15 0.1 0.05 0 -20

-18

-16

-14

-12

-10

-8

-6

-4

-2

0

SNR/dB

Fig. 4. Minimum error probability at different sampling points

5 Conclusions The inaccurate threshold setting of blind spectrum sensing algorithms directly affects the sensing performance. SVM can construct a classifier with better performance by learning from historical data to realize detection. Aiming at the problem that the training time of the traditional SVM-based spectrum sensing algorithm is too long, this paper proposes a spectrum sensing algorithm based on fuzzy mathematics and the twin support vector machine. The simulation results show that under the



same conditions, the spectrum sensing algorithm based on TWSVM can achieve a lower minimum error probability than the spectrum sensing algorithm based on SVM and energy detection.

References 1. Ali A, Hamouda W (2016) Advances on spectrum sensing for cognitive radio networks: theory and applications. IEEE Commun Surveys Tutorials PP(99):1–1 2. Gokceoglu A, Dikmese S, Valkama M et al (2014) Energy detection under IQ imbalance with single- and multi-channel direct-conversion receiver: analysis and mitigation. In: IEEE International symposium on computer aided control system design, IEEE international conference on control applications 3. Xi Y, Peng S, Lei K et al (2009) Spectrum sensing based on covariance matrix under noise uncertainty. In: International conference on wireless communications 4. Penna F, Stanczak S (2013) Decentralized eigenvalue algorithms for distributed signal detection in cognitive networks. IEEE Trans Signal Process 63(2):427–440 5. Xu W, Li Y, Xu H et al (2018) Cooperative spectrum sensing algorithms based on random matrix non-asymptotic spectrum theory. J Electron Information 40(01):123–129 6. Zhang Z (2011) SVM-based spectrum sensing in cognitive radio. In: International conference on wireless communications. IEEE 7. Zhai X, Yang B, Meng T (2016) Spectrum sensing algorithm based on PCA and hybrid kernel function QPSO_SVM. Electron Measur Technol 9:87–90 8. Yao D, Huijie H, Jie Liu et al (2018) Cognitive radio spectrum sensing based on support vector machine. Electron Design Eng 26(21):7–11 9. Binbin Gao, Xia Liu, Qiulin Li (2012) A fast classification algorithm for improved twin support vector machine. J Chongqing Univ Technol 26(11):98–103 10. Xuping Z, Haigen H, Guoxin Z (2010) Optimal threshold and weighted cooperative data combining rule in cognitive radio network. In: IEEE international conference on communication technology. IEEE

Applicability Analysis of Plane Wave and Spherical Wave Model in Blue and Green Band Songlang Li(&), Zhongyang Mao, Chuanhui Liu, and Min Liu Naval Aviation University, Yantai 264001, China [email protected]

Abstract. In view of the existing wireless optical communication analysis, a single model of plane wave or spherical wave model is adopted when calculating turbulence model parameters without considering turbulence intensity and receiving aperture size, causing a large difference between two models among the actual gaussian beam, thus calculating the quoted error of scintillation index and signal-to-noise ratio (SNR) when using different models in different turbulence intensity in blue-green laser communication. The results show that the plane wave model has smaller error in weak turbulence, spherical wave model has smaller error in strong turbulence, but both models have larger error in medium turbulence. Therefore, plane wave model can be used to simplify the calculation in weak turbulence, spherical wave model can be used to simplify the calculation in strong turbulence, but any approximate models should not be used in medium turbulence. Keywords: Gaussian beam  Turbulence model  Scintillation index  Signal to noise ratio  Blue-green laser communication

1 Introduction The actual beam is the Gaussian beam; the plane wave and the spherical wave are two approximate models of the Gaussian beam, which approximate it well in different scenarios. However, in the existing analysis, the plane wave or spherical wave model is often selected without considering the turbulence intensity and the receiving aperture when calculating the turbulence model parameters. In reference [1], the scintillation indexes of the plane wave and the spherical wave are studied. Literature [2] deduces the probability distribution model of the intensity fluctuation of a Gaussian beam based on the modified Rytov method. Literature [3] verifies that, in the infrared gamma-gamma turbulence model, the Gaussian beam with d = D/(2√(L/k)) ≤ 1 is closer to the spherical wave,

and the Gaussian beam with d = 3 is closer to the plane wave, where D is the receiving aperture, L is the transmission distance, and k is the wave number in vacuum. Literature [4] analyzes the effect of the aperture averaging effect on the SNR for a Gaussian beam. However, the existing




researches are all carried out in the infrared band. Since the actual systems of bluegreen laser communication and infrared communication are quite different, it is necessary to research the difference between the two approximate models and the gaussian beam in the blue-green band.

2 Gamma-Gamma Turbulence Model of Three Kinds of Beams
The relationship between the Rytov variance $\sigma_l^2$ of a plane wave and the structure constant $C_n^2$ is as follows [5]:
$$\sigma_l^2 = 1.23\,C_n^2\,k^{7/6}\,L^{11/6} \qquad (1)$$

The logarithmic light intensity variance of large scale turbulence and small scale turbulence can be expressed as [6]
$$\sigma_{\ln x,pl}^2(D) = \frac{0.49\sigma_l^2}{\left(1 + 0.65d^2 + 1.11\sigma_l^{12/5}\right)^{7/6}} \qquad (2)$$
$$\sigma_{\ln y,pl}^2(D) = \frac{0.51\sigma_l^2\left(1 + 0.69\sigma_l^{12/5}\right)^{-5/6}}{1 + 0.90d^2 + 0.62d^2\sigma_l^{12/5}} \qquad (3)$$
For spherical waves, the Rytov variance is
$$\beta_0^2 = 0.4\sigma_l^2 \qquad (4)$$
The logarithmic light intensity variance of large scale turbulence and small scale turbulence can be expressed as [6]
$$\sigma_{\ln x,sph}^2(D) = \frac{0.49\beta_0^2}{\left(1 + 0.18d^2 + 0.56\beta_0^{12/5}\right)^{7/6}} \qquad (5)$$
$$\sigma_{\ln y,sph}^2(D) = \frac{0.51\beta_0^2\left(1 + 0.69\beta_0^{12/5}\right)^{-5/6}}{1 + 0.90d^2 + 0.62d^2\beta_0^{12/5}} \qquad (6)$$



The calculation for the Gaussian beam is more complex, and its Rytov variance is [7]
$$\sigma_B^2 \approx 3.86\sigma_l^2\left\{0.40\left[(1+2\Theta_L)^2 + 4\Lambda_L^2\right]^{5/12}\cos\left[\frac{5}{6}\tan^{-1}\frac{1+2\Theta_L}{2\Lambda_L}\right] - \frac{11}{16}\Lambda_L^{5/6}\right\} \qquad (7)$$
The logarithmic light intensity variance of large scale turbulence and small scale turbulence can be expressed as [7]
$$\sigma_{\ln x,gau}^2(D) \approx 0.49\sigma_l^2\left(\frac{\Omega_G - \Lambda_L}{\Omega_G + \Lambda_L}\right)^2\left(\frac{1}{3} - \frac{1}{2}\bar{\Theta}_L + \frac{1}{5}\bar{\Theta}_L^2\right)\eta_x^{7/6}\Big/\left[1 + \frac{0.40\,\eta_x(2-\bar{\Theta}_L)}{\Lambda_L + \Omega_G}\right]^{7/6} \qquad (8)$$
$$\sigma_{\ln y,gau}^2(D) \approx \frac{1.27\sigma_l^2\,\eta_y^{-5/6}}{1 + 0.40\,\eta_y/(\Lambda_L + \Omega_G)} \qquad (9)$$

Turbulence parameters can be expressed as
$$\alpha = \frac{1}{\exp\!\left(\sigma_{\ln x}^2\right) - 1} \qquad (10)$$
$$\beta = \frac{1}{\exp\!\left(\sigma_{\ln y}^2\right) - 1} \qquad (11)$$
The total scintillation index is
$$\sigma_I^2 = \frac{1}{\alpha} + \frac{1}{\beta} + \frac{1}{\alpha\beta} \qquad (12)$$

The total scintillation index is r2I ¼

Particularly, when calculating the Gaussian beam model for D = 0, an error occurs in Eqs. (8) and (9), and the two equations become [3]. r2ln x;gau ðDÞ ¼ 

r2ln y;gau ðDÞ ¼ 

0:49r2B 12=5

1 þ 0:56rB

0:51r2B 12=5

1 þ 0:69rB

7=6

ð13Þ

5=6

ð14Þ
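To illustrate how the scintillation index is obtained from these formulas, the following is a minimal Python sketch for the plane-wave model (Eqs. (1)–(3) together with (10)–(12)). The aperture parameter d = D/(2√(L/k)), the example constants in the comment, and the function name are assumptions based on the values quoted in this paper.

```python
import numpy as np

def scintillation_index_plane(Cn2, L, wavelength, D):
    k = 2 * np.pi / wavelength               # optical wave number
    d = D / (2 * np.sqrt(L / k))             # aperture parameter (assumed definition)
    s2 = 1.23 * Cn2 * k**(7/6) * L**(11/6)   # Rytov variance, Eq. (1)
    lnx = 0.49 * s2 / (1 + 0.65 * d**2 + 1.11 * s2**(6/5))**(7/6)        # Eq. (2)
    lny = (0.51 * s2 * (1 + 0.69 * s2**(6/5))**(-5/6)
           / (1 + 0.90 * d**2 + 0.62 * d**2 * s2**(6/5)))                # Eq. (3)
    a = 1 / (np.exp(lnx) - 1)                # Eq. (10)
    b = 1 / (np.exp(lny) - 1)                # Eq. (11)
    return 1/a + 1/b + 1/(a*b)               # Eq. (12)

# Example: Cn2 = 3.19e-18 m**(-2/3), L = 8e3 m, wavelength = 0.532e-6 m gives
# a plane-wave Rytov variance of roughly 0.01, one of the three values
# considered in this paper.
```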

1156

S. Li et al.

3 Simulation Analysis of Turbulence Model The weak turbulence structure constant is Cn2 ¼ 3:19  1013 , obtained from the observation value of Maui light station high altitude of air force of Haleakala volcano, Hawaii, USA [8], and the medium turbulence structure constant is Cn2 ¼ 3:19  1015 , taken from the observation value near the sea surface of Yantai [9] and the strong turbulence typical value is Cn2 ¼ 3:19  1018 . When the communication distance is L ¼ 8 km and the wavelength is k ¼ 0:532 lm, the Rytov variance of the plane wave are r2l ¼ 0:01, r2l ¼ 10 and r2l ¼ 1000. Simulate the scintillation index r2I of different beam models under three turbulence intensities, and calculate the quoted error c ¼ r2I;pl r2I;gau r2I;gau;max

or c ¼

r2I;sph r2I;gau r2I;gau;max

of plane wave and spherical wave (Fig. 1 and Table 1).

1.4

Plane wave Spherical wave

1.2

Scintillation index

Gaussian beam

1

1.4 1.2

0.8

1 0.8

0.6

0.6 0.4

0.4

0.2

0.2 0

0

0

1

0

2

0.05

3

4

0.1

5

0.15

6

7

0.2

8

9

10

d

Fig. 1. Scintillation index of the three models ðr2l ¼ 1000Þ

Table 1. Quoted error of scintillation index ðr2l ¼ 1000Þ Model Plane wave Spherical wave

cðd ¼ 0Þ (%) −7.9 3.0

cðd ¼ 0:1Þ (%) −8.8 1.7

cðd ¼ 10Þ (%) −3.3 2.0

In strong turbulence, the scintillation index error of spherical wave is smaller in all the value of d (Fig. 2 and Table 2).

Applicability Analysis of Plane Wave and Spherical Wave Model

Scintillation index

1.8

1157

Plane wave

1.6

Spherical wave

1.4

Gaussian beam

1.2 1.0 0.8 0.6 0.4 0.2 0 0

1

2

3

4

5

6

7

8

9

10

d

Fig. 2. Scintillation index of the three models ðr2l ¼ 10Þ Table 2. Quoted error of scintillation index ðr2l ¼ 10Þ Model Plane wave Spherical wave

cðd ¼ 0Þ (%) −26.5 −4.4

cðd ¼ 1Þ (%) −17.7 5.2

cðd ¼ 2Þ (%) −5.0 10.5

cðd ¼ 3Þ (%) −0.3 10.9

cðd ¼ 10Þ (%) 1.0 2.6

In medium turbulence, the scintillation index error of the spherical wave is smaller in the case of d  1, and the scintillation index error of the plane wave is smaller in the case of d  2 (Fig. 3 and Table 3). 0.01

Plane wave Spherical wave Gaussian beam

0.009

-4 x 10 4 3.5 3 2.5 2 1.5 1 0.5 5 6

Scintillation index

0.008 0.007 0.006 0.005 0.004 0.003 0.002

7

8

9

10

0.001 0

0

1

2

3

4

5

6

7

8

9

10

d

Fig. 3. Scintillation index of the three models ðr2l ¼ 0:01Þ

1158

S. Li et al. Table 3. Quoted error of scintillation index ðr2l ¼ 0:01Þ

Model Plane wave Spherical wave

cðd ¼ 0Þ (%) 47.1 41.2

cðd ¼ 1Þ (%) 39.7 0.0

cðd ¼ 5Þ (%) 1.0 0.5

cðd ¼ 9Þ (%) 0.2 0.5

cðd ¼ 10Þ (%) 0.2 0.1

In weak turbulence, the scintillation index error of spherical wave in 1  d  8 and d ¼ 10 is smaller, and the error of plane wave in d  5 is smaller, but the scintillation index error of three models in d  5 is relatively close. Based on the above simulation, when calculating scintillation index, spherical wave can be used to replace gaussian beam in strong turbulence. In the case of medium turbulence, spherical wave can be used to substitute when d  1, and plane wave can be used to substitute when d  2. In the case of weak turbulence, spherical wave can be used to replace when d  1, and plane wave can also be used to replace when d  5.

4 Simulation Analysis of SNR Model According to literature [3], the SNR model is 

 cSNR0 ffi cSNR0 ¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 1 þ 1:63ðr2l Þ6=5 KL þ Ac2SNR0

ð15Þ

where cSNR0 is the SNR without considering turbulence, KL is the parameter related to communication distance and divergence Angle, A is the aperture average factor, and A is defined as A ¼ r2I ðDÞ=r2I ð0Þ. When turbulence is not taken into account, the SNR can reach 25 dB. Simulate the SNR of different beam models under different turbulence hcSNR0 ipl hcSNR0 igau intensities, and calculate the quoted errors c ¼ or c ¼ hcSNR0 igau;max hcSNR0 isph hcSNR0 igau of plane wave and spherical wave (Fig. 4 and Table 4). hcSNR0 igau;max

Applicability Analysis of Plane Wave and Spherical Wave Model

1159

8 7 6 7 6 5 4 3 2 1 0 -1

SNR /dB

5 4 3 2 1

0 0.05

0.1

0.15

0.2 Plane wave Spherical wave Gaussian beam

0 -1

0

1

2

3

4

5

6

7

8

9

10

d

Fig. 4. SNR of the three models ðr2l ¼ 1000Þ Table 4. Quoted error of SNR ðr2l ¼ 1000Þ cðd ¼ 0Þ (%) 0.0 0.0

Model Plane wave Spherical wave

cðd ¼ 0:5Þ (%) 31.4 −6.3

cðd ¼ 2Þ (%) 27.0 −9.5

In strong turbulence, the SNR error of spherical wave is smaller in all the value of d (Fig. 5 and Table 5).

12

Plane wave Spherical wave Gaussian beam

10

SNR /dB

8 6 4 2 0 -2 0

1

2

3

4

5

6

7

8

d

Fig. 5. SNR of the three models ðr2l ¼ 10Þ

9

10

1160

S. Li et al. Table 5. Quoted error of SNR ðr2l ¼ 10Þ

Model Plane wave Spherical wave

cðd ¼ 0Þ (%) 0.0 0.0

cðd ¼ 1Þ (%) 10.5 −4.0

cðd ¼ 2Þ (%) 2.1 −11.7

cðd ¼ 3Þ (%) −5.5 −18.2

cðd ¼ 10Þ (%) −23.8 −31.9

In medium turbulence, the SNR error of spherical wave is small when d  1, the SNR error of plane wave is 0 when d ¼ 2:3, and the error of plane wave is smaller than that of spherical wave when d  2, but the difference is still large after d [ 3 (Fig. 6 and Table 6).

12

Plane wave Spherical wave

10

Gaussian beam

SNR /dB

8 6

0.6 0.5 0.4 0.3 0.2 0.1 0 -0.1 0 0.1

4 2 0 -2

0

1

2

3

4

5

d

6

0.2 0.3 0.4 0.5 7

8

9

10

Fig. 6. SNR of the three models ðr2l ¼ 0:01Þ

Table 6. Quoted error of SNR ðr2l ¼ 0:01Þ Model Plane wave Spherical wave

cðd ¼ 0Þ (%) 0.0 0.0

cðd ¼ 0:5Þ (%) −0.6 −1.8

cðd ¼ 1Þ (%) −7.0 −11.7

cðd ¼ 10Þ (%) 6.0 −12.9

In weak turbulence, no matter the value of d, the SNR error of plane wave is small, and in the case of d ¼ 0:21 and d ¼ 2, the SNR error of plane wave is 0. And in d\1, the SNR error of spherical wave is also small. Based on the above simulation, when calculating SNR, the spherical wave can be used to replace the gaussian beam in strong turbulence. In the case of medium turbulence, spherical wave can be used to replace when d  1, and plane wave can be used

Applicability Analysis of Plane Wave and Spherical Wave Model

1161

to replace when 2  d  3, but the SNR error of two models are large in d [ 3. In the case of weak turbulence, plane wave can be used to replace, and spherical wave can also be used to replace when d\1.

5 Conclusion This article selects three typical structure constants of different turbulence intensity, compare the quoted error of the three kinds of beam model when calculating the scintillation index and the SNR, thus come to a conclusion: plane wave is closer to the gaussian beam in weak turbulence, and spherical wave is closer to the gaussian beam in strong turbulence, but the scintillation index and SNR in medium turbulence have different laws, when d [ 3 the SNR error of plane wave and spherical wave are both large but the scintillation index error are small, so neither plane wave nor spherical wave can be used to replace in medium turbulence. Therefore, the turbulence intensity and the size of d must be paid attention to when selecting the approximation model, otherwise the error with the actual beam will be too large.

References 1. Cao RQ (2007) Research of atmosphere channel for space-to-ground optical communication. Master, Huazhong University of Science and Technology 2. Han LQ (2011) Research on performance of free space optical communication through atmospheric turbulence and its compensation method. Doctor, Harbin Institute of Technology 3. Liu M, Liu XG, Wang HX et al (2014) Study of gamma-gamma model under Gaussian beam. Adv Laser Opt 1:35–40 4. Cao H (2012) Analysis of the effect of aperture average on effective SNR in turbulent atmosphere. Mod Ele Tec (19):35 5. Andrews LC (2001) Laser beam scintillation with applications. SPIE Press, USA 6. Al-Habash MA, Andrews LC, Phillips RL (2001) Mathematical model for the irradiance probability density function of a laser beam propagating through turbulent media. Opt Eng 40 (8):1554–1562 7. Andrews LC, Phillips RL, Hopen CY (1999) Theory of optical scintillation. J Opt Soc Am A 16(6):206–209 8. Miller MG, Zieske PL (1979) Turbulence environment characterization 9. Wu XJ (2015) Research on atmospheric channel characteristics of free space optical communication under the sea environment. Doctor, Naval Aeronautical Engineering University

A Study of the Influence of Resonant Frequency in Wireless Power Transmission System Xiaohui Lu, Xiu Zhang(&), Ruiqing Xing, Xin Zhang, Yupeng Li, and Liang Han Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin 300387, China [email protected]

Abstract. Wireless power transmission (WPT) technology has been applied to many areas that driven by batteries. WPT system based on coupling coils is a popular method to realize WPT. In general, two coils are used in such system including a transmitting coil and a receiving coil. Two coils have to reach the same resonant frequency and impedance matching to assure high power transfer efficiency. In this paper, the influence of resonant frequency on transfer efficiency is studied. Inductance is closely related with frequency and capacitor. Hence, the influence of compensation capacitor on efficiency is also studied. Simulations have been done to perform the study. When the frequency is 130 kHz and the capacitance is 46.7 nF, the efficiency of the system reaches its maximum, and the efficiency of the system is 82.26%. Keywords: Wireless power transmission  Resonant frequency Compensation capacitor  Power transmission efficiency


1 Introduction Wireless power transmission is a kind of non-contact power transmission. With the development of WPT technology, it has greatly changed the way that we use electric energy in the production and life. Without the constraints of cables and wires, people can use electricity more conveniently. From the practical point of view, the WPT realizes the convenient power supply. From the perspective of safety, WPT technology reduces the potential safety hazards caused by electric spark and wire aging caused by contact friction, and also avoids the safety accidents caused by electricity leakage and discharge. From the perspective of environmental protection and economy, WPT technology complies with the development trend of the new era and solves the problem of difficult charging of electric vehicles. In a word, WPT technology has a good development prospect and is an important research direction in the future. In this paper, it introduces the background and meaning of the WPT firstly. Secondly, the paper introduces the research status at the present. Then, in the case of a definite inductance, the paper analyzes the relations of resonant frequency and compensation


capacitance with system efficiency. Finally, the variation trends of efficiency with frequency and capacitance are obtained through simulation experiments.

2 Related Works of WPT As our life becomes more and more intelligent, people are pursuing simplicity and directness. From small devices such as mobile phones and home appliances to large ones such as cars and drones, in the past they could only be charged with electric wires. With the continuous use of charging cables, people often replace several data cables before the appliance itself breaks, and there are many cases of electrical damage caused by mismatched replacement charging cables. In this context, the need for wireless charging has only intensified. Therefore, wireless power transmission technology has a broad prospect of development and application. In a WPT system, in order to measure the mutual inductance of the coupler, one study took filamentary circular coils and cylindrical helix coils as the research object. Based on Neumann's formula, it deduced formulas for calculating the mutual inductance coefficients of the two kinds of coils; experiments confirmed the influence of axial distance, lateral and angular misalignment on the mutual inductance coefficient, and the experimental and calculated results are basically identical [1]. If a WPT system has multiple receivers, cross coupling happens between transmitter and receivers, which changes the transmission characteristics compared with a single receiver. The authors therefore established a mutual-inductance coupling circuit model of a double-relay multi-load transmission system and a mutual inductance calculation model for coplanar electromagnetic coils. The experimental results show that the cross-coupling effect can cause resonance frequency shift and even resonance frequency splitting; for a given load, a multi-load system can improve the total transmission power and efficiency [2]. Applications of wireless power transmission in medicine are also increasing. In order to reduce the damage of the patient's wound to the surrounding tissues, the authors designed a mm-sized implantable medical device consisting of a distributed neural interface. The device is fully implanted, free from any wire connection, and such brain–computer interfaces will play an extremely important role in the coming years [3]. Leadless pacemakers that use a 50 Ω miniaturized rectenna for wireless energy harvesting can likewise reduce wound damage to surrounding tissues and achieve good results. In order to obtain the desired received power, a matching network is designed between the rectifier and the antenna. When the input power is −20 dBm with a load of more than 10 kΩ, the power conversion efficiency reaches about 40%. The system can be placed in a cylinder with a radius of 5 mm and a height of 3.2 mm. This system is small, can measure very small power, and is low cost, so it can be widely used in the medical field [4]. While we are enjoying the convenience brought by WPT technology, we should also pay attention to the harm and loss caused by electromagnetic leakage to human beings and the environment, so it is urgent to study its safety and keep the electromagnetic leakage received within a certain distance of the transmission channel within a certain range. The authors suggest that a mu-near-zero metamaterial, with permeability near zero, can be used as a shielding material to shield


the magnetic field at a specific frequency. At 13.56 MHz, the optimized permeability of the metamaterial is close to zero [5].

3 Research Method
WPT technology has been widely applied in daily life, for example in health care and environmental protection [6, 8, 10–12]. It makes the use of electricity more convenient. A WPT system based on coupled inductors is a common approach [7, 9, 13]. In general, the system uses two coils, a transmitting coil and a receiving coil. The two coils must reach the same resonant frequency and be impedance matched to ensure high power transmission efficiency; with such a transmitting-coil and receiving-coil model, WPT can be applied effectively in various fields. This paper studies the effect of the resonant frequency on the transmission efficiency. Since the inductance is closely related to the resonant frequency and the compensation capacitance, the effect of the compensation capacitance on efficiency is also studied. In this paper, it is assumed that the coil system, and hence the inductance, is pre-determined. The influence of changes in resonant frequency and compensation capacitance on system efficiency is analyzed. The model of the WPT system is shown in Fig. 1 and the parameters of the transmitting coil and receiving coil are listed in Table 1. With the parameters fixed, the relationship between the resonant frequency, the inductance and the capacitance is determined by formula (1).

Table 1. Parameters of the system model
Parameters                                                               Value (mm)
The distance between the transmitting coil and the receiving coil (D)   50
Inner diameter of the coil (R1)                                          50
Outer diameter of the coil (R2)                                          72.5
Diameter of the copper wire (L1)                                         2
The space of the two turns (L2)                                          4.5
Number of turns                                                          5

Fig. 1. Model of system simulation

f = \frac{1}{2\pi\sqrt{LC}}    (1)

where f is the resonant frequency, L is the coil inductance, and C is the compensation capacitance. Rearranging (1) gives

C = \frac{1}{4\pi^2 f^2 L}    (2)

In this model, the resonant frequency ranges from 50 to 150 kHz in steps of 5 kHz, so the capacitance value corresponding to each resonant frequency can be calculated. For each pair of resonant frequency and compensation capacitance, we can then evaluate the system efficiency and the electromagnetic field distribution, and thus obtain the frequency and capacitance at which the system efficiency is maximal. In practical applications, these frequency and capacitance values can be adopted for better transmission performance.
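As a quick illustration of formula (2), the short Python sketch below computes the compensation capacitance for each frequency in the 50–150 kHz sweep. The inductance is not reported in the paper; here it is back-calculated (roughly 32 µH) from the stated 130 kHz / 46.7 nF operating point, so it should be treated only as an assumed example value.

```python
import numpy as np

# Assumed coil inductance, back-calculated from f = 130 kHz, C = 46.7 nF;
# not a value reported in the paper.
L = 1.0 / (4 * np.pi**2 * (130e3)**2 * 46.7e-9)   # about 3.2e-5 H

# Frequency sweep of the simulation: 50-150 kHz in 5 kHz steps.
freqs = np.arange(50e3, 150e3 + 1, 5e3)

# Formula (2): C = 1 / (4 * pi^2 * f^2 * L)
caps = 1.0 / (4 * np.pi**2 * freqs**2 * L)

for f, c in zip(freqs, caps):
    print(f"f = {f/1e3:6.1f} kHz  ->  C = {c*1e9:6.1f} nF")
```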

4 Simulation Results The efficiency of the system is measured for each pair of frequency and capacitance. When the frequency is 130 kHz and the capacitance is 46.7 nF, the efficiency of the system reaches its maximum of 82.26%. From the data measured in the experiment, we obtain the curve of system efficiency versus frequency (Fig. 2) and the curve of system efficiency versus capacitance (Fig. 3).


Fig. 2. Efficiency changing with frequency



Fig. 3. Efficiency changing with compensation capacitance

At a frequency of 130 kHz, the waveforms of transmitting and receiving voltage are shown in Fig. 4, and the waveforms of transmitting and receiving current are shown in Fig. 5. Under the maximum-efficiency condition, the electromagnetic field distribution is shown in Fig. 6.


Fig. 4. The waveform of transmitting voltage and receiving voltage



Fig. 5. The waveform of transmitting current and receiving current

Fig. 6 The electromagnetic field distribution

5 Conclusion Under the conditions of the system model (that is, with the inductance fixed), changes in the resonant frequency and compensation capacitance have a strong effect on efficiency. When the frequency is 130 kHz and the capacitance is 46.7 nF, the efficiency of the system reaches its highest value of 82.26%. WPT technology can be applied in many industries. In daily life, wireless charging can avoid the use of data cables and the inability to charge caused by mismatched charging interfaces. From the perspective of environmental protection,


electric cars can be seen all over the streets, yet charging must currently be done at fixed places; wireless charging technology can make this much more convenient. High system efficiency is needed for practical use, so it is critical to choose the right frequency and capacitance. Acknowledgements. This research was supported in part by the National Natural Science Foundation of China (Project No. 61601329, Project No. 61603275, Project No. 61704122, Project No. 61701345, Project No. 61801327), the Natural Science Foundation of Tianjin (Project No. 18JCQNJC70900, Project No. 18JCZDJC31900), and the Tianjin Higher Education Creative Team Funds Program.

References
1. Zhang X, Meng H, Wei B, Wang S, Yang Q (2019) Mutual inductance calculation for coils with misalignment in wireless power transfer. J Eng 2019(16):1041–1044
2. Li C, Cao J, Hu D, Zhang H (2019) Transmission characteristics of magnetic resonance coupling-based multi-load wireless power transmission system. J Eng 2019(13):132–137
3. Jia Y, Mirbozorgi SA, Zhang P, Inan OT, Li W, Ghovanloo M (2019) A dual-band wireless power transmission system for evaluating mm-sized implants. IEEE Trans Biomed Circ Syst (to appear). https://doi.org/10.1109/tbcas.2019.2915649
4. Abdi A, Aliakbarian H (2019) A miniaturized UHF-band rectenna for power transmission to deep-body implantable devices. IEEE J Transl Eng Health Med 7, Article No. 1900311
5. Lu C et al (2019) Shielding the magnetic field of wireless power transfer system using zero-permeability metamaterial. J Eng 2019(16):1812–1815
6. Zhao J et al (2019) Control method of magnetic resonant WPT maintaining stable transmission power with wide misalignment tolerance. J Eng 2019(16):3392–3395
7. Luo Z, Ker MD (2018) A high-voltage-tolerant and power-efficient stimulator with adaptive power supply realized in low-voltage CMOS process for implantable biomedical applications. IEEE J Emerg Sel Top Circ Syst 8(2):178–186
8. Jia Y, Khan W, Lee B (2018) Wireless opto-electro neural interface for experiments with small freely behaving animals. J Neural Eng 15(4), Article No. 046032
9. Zhang X, Zhang X, Fu WN (2017) Fast numerical method for computing resonant characteristics of electromagnetic devices based on finite element method. IEEE Trans Magn 53(6), Article No. 16599203
10. Kumwenda B, Mwaku W, Mulongoti D (2017) Integration of solar energy into the Zambia power grid considering ramp rate constraints. In: IEEE PES power Africa, Accra, Ghana, pp 254–259
11. Gagnon-Turcotte G, LeChasseur Y, Bories C (2017) A wireless headstage for combined optogenetics and multichannel electrophysiological recording. IEEE Trans Biomed Circ Syst 11(1):1–14
12. Wang J (2016) Integrated device for combined optical neuromodulation and electrical recording for chronic in vivo applications. J Neural Eng 13(3), Article No. 039501
13. Zhang X, Ho SL, Fu WN (2012) Quantitative design and analysis of relay resonators in wireless power transfer system. IEEE Trans Magn 48(11):4026–4029

Direction of Arrival Estimation Based on Support Vector Regression Baoyu Guo, Jiaqi Zhen(&), and Xiaoli Zhang College of Electronic Engineering, Heilongjiang University, Harbin 150080, China [email protected]

Abstract. In order to improve direction-finding accuracy under a low signal-to-noise ratio, this paper presents a method for estimating the directions of multiple signals using a support vector machine. The signal subspace of signals with known directions is extracted as the input of the model, the fitting ability of support vector regression for nonlinear functions is used to build the model, and the directions of arrival are finally estimated. The proposed method does not need a peak search, which improves both direction-finding accuracy and speed. Keywords: Support vector regression · Signal subspace · Direction of arrival · Signal to noise ratio

1 Introduction The form of signals is becoming more and more complex, which brings greater challenges to passive direction-finding technology. The traditional multiple signal classification (MUSIC) algorithm [1] has low direction-finding accuracy and poor applicability to actual complex environments. In recent years, some intelligent learning algorithms have been applied to super-resolution direction finding, such as neural networks [2] and high-order cumulants [3]. At present, when a neural network is used to estimate the direction of arrival (DOA), a very large network structure is needed to establish the fitting model; the training time is long and the prediction accuracy hardly meets the requirements. In contrast, the support vector machine (SVM) offers better super-resolution direction-finding performance. SVMs have been used in many fields, such as global sensitivity analysis [4] and predicting the sorption capacity of lead (II) [5].


2 Uniform Linear Array Model and Directions of Arrival Estimation Model
When the signal sources impinge on the spatial array from directions \theta_n, n = 1, \ldots, N, the covariance matrix of the array is

R = E\{XX^H\} = A R_S A^H + R_N    (2.1)

where X = [X_1, X_2, \ldots, X_M]^T is the data vector of the array and A is the array manifold. R_S and R_N are the signal covariance matrix and noise covariance matrix, respectively, and H indicates the complex conjugate transpose. For spatially ideal Gaussian white noise, the noise power is \sigma^2. The eigendecomposition of R is

R = U \Sigma U^H    (2.2)

where U is the eigenvector matrix and the diagonal matrix consisting of the eigenvalues is

\Sigma = \mathrm{diag}(\lambda_1, \ldots, \lambda_M)    (2.3)

In the above formula, the eigenvalues satisfy \lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_N > \lambda_{N+1} = \ldots = \lambda_M = \sigma^2. \Sigma_S is the diagonal matrix composed of the large eigenvalues, and the signal subspace U_S corresponds to these large eigenvalues. R_S can be further written as

R_S = U_S \Sigma_S U_S^H    (2.4)

If x = U_S^T, with x \in \mathbb{C}^{N \times M} and \theta \in \mathbb{R}^N, we obtain a mapping G: \theta \to x. In this paper, \varepsilon-SVR is used to establish the DOA estimation model: the optimal linear regression hyperplane is defined and found by solving a convex programming problem [6–8]. The optimal regression function constructed by the SVM in a high-dimensional space is

F(x) = \sum_{i=1}^{l} (\alpha_i - \alpha_i^*) \exp(-\sigma \|x - x_i\|^2) + b    (2.5)
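The Python sketch below illustrates how such an ε-SVR regression model can be built from signal-subspace features, here with scikit-learn's SVR and an RBF kernel. The feature construction (phase-normalized dominant eigenvectors, stacked as real and imaginary parts), the simulated array, and the parameter values are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.svm import SVR

def array_snapshots(theta_deg, m=6, d=0.5, wavelength=1.0, snapshots=200, snr_db=15, rng=None):
    """Simulate snapshots of an m-element ULA for a single source at theta_deg."""
    rng = np.random.default_rng(rng)
    k = 2 * np.pi / wavelength
    steering = np.exp(1j * k * d * np.arange(m) * np.sin(np.deg2rad(theta_deg)))
    s = (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)) / np.sqrt(2)
    noise_std = 10 ** (-snr_db / 20)
    n = noise_std * (rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots))) / np.sqrt(2)
    return steering[:, None] * s[None, :] + n

def subspace_feature(x, n_sources=1):
    """Signal-subspace feature: dominant eigenvectors of the sample covariance."""
    r = x @ x.conj().T / x.shape[1]
    _, vecs = np.linalg.eigh(r)              # eigenvalues in ascending order
    us = vecs[:, -n_sources:]
    us = us * np.exp(-1j * np.angle(us[0:1, :]))   # fix the phase ambiguity
    return np.concatenate([us.real.ravel(), us.imag.ravel()])

# Training set: angles sampled every 1 degree in (-90, 90).
angles = np.arange(-90, 91, 1.0)
X = np.array([subspace_feature(array_snapshots(a)) for a in angles])
model = SVR(kernel="rbf", C=9.81, gamma=0.1, epsilon=1.0)  # illustrative parameters
model.fit(X, angles)

test = subspace_feature(array_snapshots(25.0))
print("estimated DOA:", model.predict(test[None, :])[0])
```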

3 Experimental Results
3.1 SVR Test Results
The antenna array is a 6-element uniform linear array with an element spacing of 0.5 m. The signal angle range is \theta \in (-90°, +90°); the angles are sampled with an interval of 1°, so the number of samples is 181, from which a training set of 154 samples and a test set of 27 samples are generated. The parameters optimized through a genetic algorithm are C = 9.81, \sigma = 0.1 and \varepsilon = 7.38. The SNR is set to 15 dB. It can be seen from Fig. 1 that the absolute error of the estimates of the proposed method is mostly within 1°.


Fig. 1. Absolute error values of the DOAs of test sample

3.2 Simulation Analysis of DOA Estimation Accuracy and Speed
Suppose two narrow-band signals are incident on the uniform linear array. The results are summarized in Fig. 2, which shows the absolute error for the two waves at 25° and 45°. It can be seen from Fig. 2 that as the SNR increases the absolute error becomes smaller and smaller, and the error of the method stays between 0.2° and 0.4°. Even at low SNR (−5 dB), the estimation accuracy of the SVR method is still relatively high, and over the whole simulated SNR range the proposed method is more robust than the MUSIC algorithm. In the simulation, the direction-finding time of the SVR is 6.978 ms, while the MUSIC algorithm takes 31.2 ms, so the direction-finding speed of the proposed method is obviously faster than that of the MUSIC algorithm.


Fig. 2. Absolute error of the DOAs versus SNR: comparison of SVR with MUSIC

4 Conclusions An intelligent algorithm is used to solve the DOA estimation problem: the feature vector of the signal subspace is extracted as the input feature of each sample, and the complex function approximation ability of the SVM is used to construct a model for DOA estimation. The experiments show that the approach has high direction-finding accuracy and speed, and thus good application value. Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grant 61501176, Natural Science Foundation of Heilongjiang Province F2018025, University Nursing Program for Young Scholars with Creative Talents in Heilongjiang Province UNPYSCT-2016017, and the postdoctoral scientific research developmental fund of Heilongjiang Province in 2017 LBH-Q17149.


References
1. Schmidt R, Schmidt RO (1986) Multiple emitter location and signal parameters estimation. IEEE Trans Antennas Propag 34:276–280
2. Randazzo A, Abou-Khousa MA, Pastorino M et al (2007) Direction of arrival estimation based on support vector regression: experimental validation and comparison with MUSIC. IEEE Trans Antennas Propag 6:379–382
3. Gardner WA (1988) Simplification of MUSIC and ESPRIT by exploitation of cyclostationarity. Proc IEEE 76:845–847
4. Cheng K, Lu ZZ, Zhou YC et al (2017) Global sensitivity analysis using support vector regression. Appl Math Model 49:587–598
5. Parveen N, Zaidi S, Danish M (2016) Support vector regression model for predicting the sorption capacity of lead (II). Perspect Sci 8:629–631
6. Vapnik VN (1995) The nature of statistical learning theory. Springer, New York
7. Vapnik VN (1998) Statistical learning theory. Wiley, New York
8. Scholkopf B, Smola AJ, Williamson RC et al (2000) New support vector algorithms. Neural Comput 12:1207–1245

Bistatic ISAR Radar Imaging Using Missing Data Based on Compressed Sensing Luhong Fan(&), Zongjie Cao, Jin Li, Rui Min, and Zongyong Cui University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, Sichuan, China [email protected]

Abstract. When a bistatic inverse synthetic aperture radar (ISAR) system fails to collect complete radar cross section (RCS) datasets, bistatic ISAR images produced by the conventional Fourier transform (FT)-based imaging algorithm are usually corrupted. To overcome this problem, this paper proposes a new bistatic ISAR image reconstruction method that consists of three steps: sparse dictionaries are constructed according to the range and cross-range resolution units of the imaging domain, so that the echoes can be regarded as the interaction between the two-dimensional distribution of point scatterers and the sparse dictionary; the observation matrix is constructed and low-dimensional observation samples are obtained; and the scattering distribution of the target is reconstructed using a nonlinear reconstruction algorithm. To validate the reconstruction capability of the proposed method, bistatic-scattered field data generated with a point-scatterer model is used for bistatic ISAR image reconstruction. The results show that the proposed imaging method, which combines spatial reconstruction of the bistatic ISAR signal model with compressive sensing (CS) theory, can yield high reconstruction accuracy for incomplete bistatic RCS data compared to conventional FT-based imaging methods. Keywords: Bistatic radar · Radar imaging · Inverse synthetic aperture radar · Compressed sensing · Sparse signal reconstruction

1 Introduction Inverse synthetic aperture radar (ISAR) imaging can obtain high-resolution images of moving targets in all-day and all-weather conditions, and has a wide range of applications in military and civilian fields, such as analysis of a target's scattering mechanism and detection and classification of targets. Compared with monostatic ISAR, bistatic ISAR uses one radar as a transmitter and another as a receiver, with the transmitter and receiver spatially separated [1–5]. Bistatic ISAR has high security and strong anti-interference ability. It can obtain the target's forward-scattering information, and multiple receivers can be configured for interference processing. As with monostatic ISAR, a bistatic ISAR image can be easily obtained by the conventional range-Doppler imaging method [6], which is computationally efficient and robust against noise. In practice, the electromagnetic waves transmitted and received by the radar are susceptible to external interference, resulting in missing data in the received dataset. When the receiver and


transmitter are placed separately, this phenomenon is more pronounced, and the form of the missing data differs for different detection scenarios. To solve this problem, we can apply a sparse reconstruction algorithm based on compressive sensing (CS) theory [7–10] to image from the incomplete echoes. Compressive sensing is a new signal acquisition and decoding theory that exploits the sparse and compressible nature of signals to sample them efficiently. The basic principle of ISAR imaging is to obtain the spatial distribution of the target's reflectivity from the echo dataset received by the radar system; the radar imaging process is therefore a process of reconstructing the target representation from the echo dataset [11]. Studies of the electromagnetic scattering properties of targets show that when the size of the radar target is much larger than the transmitted wavelength, the echo can be regarded as the superposition of the echoes of multiple point scatterers on the target. This characteristic of the echo dataset meets the sparsity requirements of compressive sensing theory [12–15], so compressive sensing can be applied to radar imaging systems [16–25].

2 Methods
In this section, we briefly describe the observation geometry and signal model of bistatic ISAR, and propose a compressive sensing imaging algorithm based on the sparse representation of the signal.
2.1 Radar Echo Model
Bistatic radar can be described by an equivalent monostatic radar, but when the target moves, the range from the equivalent radar to the target changes over time. Assuming the echo data has been compensated for translational motion, we take equivalent turntable-target imaging as an example. The geometric relationship between the radar and the target in the imaging domain is shown in Fig. 1. Let the target coordinate system in the radar imaging domain be XOY and the reference coordinate system be X_sOY_s, where the X_s axis coincides with the equivalent radar line of sight; X_s is the range direction and Y_s is the cross-range direction. The electromagnetic scattering distribution function of the target in the coordinates X_sOY_s is d(x, y). The distance between the rotation center O and the equivalent radar is R_0(t). The rotation angle of the target relative to the initial radar line of sight is \theta(t), with the counterclockwise direction taken as positive. The distance between a point scatterer and the radar is then

R(t) \approx R_0(t) + x\cos\theta(t) - y\sin\theta(t)    (1)


Fig. 1. Discretization using a two-dimensional space grid


Fig. 2. SFCW signal with missing data in time—frequency domain


The stepped-frequency waveform is used for the analysis; the signal bandwidth is B, and the transmitted signal is

s_T(t) = \sum_{n=0}^{N-1} \mathrm{rect}\left(\frac{t - nT_r}{\tau}\right) \exp\{j2\pi(f_0 + n\Delta f)t\}    (2)

where rect(·) is the unit rectangular function, N is the number of sub-pulses, T_r is the pulse repetition period, \tau is the pulse width, f_0 is the starting frequency of the transmitted signal, \Delta f is the frequency step, B = (N-1)\Delta f, and n = 0, 1, \ldots, N-1. An M-burst stepped signal with N sub-pulses is transmitted, and sub-pulses in the burst sequence are randomly lost due to interference during echo reception. The time–frequency structure of the stepped-frequency pulse signal is shown in Fig. 2, where the shaded portions are sub-pulses randomly lost when the echo is received. Assume the target contains I scattering centers, d_i = d(x_i, y_i) is the scattering intensity of the i-th scattering center, R_i(t) is the distance from that scattering center to the radar, and the corresponding delay is \tau_i(t) = 2R_i(t)/c. The target echo is then

s_R(t) = \sum_{i=1}^{I} \sum_{n=0}^{N-1} d_i\, \mathrm{rect}\left(\frac{t - nT_r - \tau_i(t)}{\tau}\right) \exp\{j2\pi(f_0 + n\Delta f)(t - \tau_i(t))\}    (3)

For convenience of analysis, let the distance from the target reference point to the radar, which the radar can obtain accurately, be R_0, with the corresponding reference delay \tau_0(t) = 2R_0(t)/c. The reference signal is

s_{ref}(t) = \sum_{n=0}^{N-1} \mathrm{rect}\left(\frac{t - nT_r - \tau_0(t)}{\tau}\right) \exp\{j2\pi(f_0 + n\Delta f)(t - \tau_0(t))\}    (4)

The echo after coherent mixing can be expressed as

u_R(t) = \sum_{i=1}^{I} \sum_{n=0}^{N-1} d_i\, \mathrm{rect}\left(\frac{t - nT_r - \tau_i(t)}{\tau}\right) \exp\{-j2\pi(f_0 + n\Delta f)(\tau_i(t) - \tau_0(t))\}    (5)

Let the rotation angle of the target at slow time t_m be \theta_m = \theta(t_m) = m\Delta\theta, m = 0, 1, \ldots, M-1, where \Delta\theta is the angle sampling step and M is the number of angle samples; at this time, \tau_i(t) - \tau_0(t) = 2(x_i\cos\theta_m - y_i\sin\theta_m)/c. The echo data of the n-th sampled frequency in the m-th view is obtained by sampling (5) at time t_n = nT_r + t_s (t_s is the time delay corresponding to the starting position of the range image). Hence,

u_{n,m} = \sum_{i=1}^{I} d_i \exp\left\{-j2\pi(f_0 + n\Delta f)\,\frac{2(x_i\cos\theta_m - y_i\sin\theta_m)}{c}\right\}    (6)


Under small-angle observation conditions, \cos\theta \approx 1 and \sin\theta \approx \theta, so the echo model can be written as

u_{n,m} = \sum_{i=1}^{I} d_i \exp\left(-j2\pi f_0\frac{2x_i}{c}\right) \exp\left(-j2\pi n\Delta f\frac{2x_i}{c}\right) \exp\left(-j2\pi\frac{2y_i m\Delta\theta}{\lambda_n}\right)    (7)

where \lambda_n = c/f_n = c/(f_0 + n\Delta f). The first exponential term is related only to the location of the point scatterer, so let d'_i = d_i \exp(-j4\pi f_0 x_i/c), absorbing it into the amplitude information. If the scattering function is sampled over the target observation space, the two-dimensional distribution of point scatterers d = [d_{pq}]_{N \times M} is obtained, where d_{pq} = d'(x_p, y_q), x_p = p\Delta x_0, y_q = q\Delta y_0; \Delta x_0 = c/(2(N-1)\Delta f) is the traditional range resolution and \Delta y_0 = \lambda_0/(2(M-1)\Delta\theta) is the traditional cross-range resolution, with \lambda_0 = c/f_0, p = 0, 1, \ldots, N-1, q = 0, 1, \ldots, M-1. In general, the relative bandwidth of a broadband radar is small, so \lambda_n \approx \lambda_0. Equation (7) can then be written as

u_{n,m} = \sum_{q=0}^{M-1}\sum_{p=0}^{N-1} d_{pq} \exp\left(-j2\pi\frac{pn}{N}\right) \exp\left(-j2\pi\frac{qm}{M}\right)    (8)

Because the kernel is two-dimensionally separable, Eq. (8) can be written as

u_{n,m} = \sum_{p=0}^{P-1} \exp\left(-j2\pi\frac{pn}{P}\right) \sum_{q=0}^{Q-1} d_{pq} \exp\left(-j2\pi\frac{qm}{Q}\right)    (9)

It can be seen from (8) that the two-dimensional image of the target is obtained by a two-dimensional Fourier transform of the data. At the same time, the generalized linear operator can be regarded as a combination of the longitudinal and transverse discrete intervals, and Eq. (9) can be thought of as the combination of this generalized linear operator with the distribution of the target's point scatterers. Therefore, only a few or even a single pulse can be used to obtain range–Doppler super-resolution of the target. Traditional stepped-frequency radar imaging requires a long data acquisition time, and when part of the signal is missing the Fourier-transform image is damaged or blurred. To solve these problems, CS is introduced, which greatly reduces the number of transmitted pulses needed to achieve high-resolution imaging when data is missing, while also reducing the hardware complexity of the imaging system without affecting image quality.
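For reference, with a complete echo matrix the conventional FT-based image implied by Eq. (8) is simply an inverse two-dimensional DFT of u_{n,m}; a minimal numpy sketch (sign and shift conventions assumed) is:

```python
import numpy as np

def ft_image(u):
    """Conventional image from a complete N x M echo matrix u[n, m]:
    an inverse 2-D DFT recovers d[p, q] of Eq. (8), up to shift conventions."""
    return np.fft.fftshift(np.abs(np.fft.ifft2(u)))
```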

2.2 Two-Dimensional CS Decoupling Imaging Algorithm

The two-dimensional joint imaging model matches the observation model of the radar system. It has stable performance and strong robustness to noise and clutter, but the constructed dictionary has a high dimension, requires a large amount of storage space, and the computational cost is large, so it is sometimes difficult to realize. To further reduce the


amount of memory and computation, this section explores a more concise imaging algorithm based on the proposed method. From the previous derivation, when the relative bandwidth of the radar signal is small, let

h_{p,m} = \sum_{q=0}^{Q-1} d_{pq} \exp\left(-j2\pi\frac{qm}{Q}\right), \quad p = 0, 1, \ldots, P-1,\ m = 0, 1, \ldots, M-1    (10)

Then the observation model (8) can be written in decoupled form as

u_{n,m} = \sum_{p=0}^{P-1} h_{p,m} \exp\left(-j2\pi\frac{pn}{P}\right), \quad n = 0, 1, \ldots, N-1,\ m = 0, 1, \ldots, M-1    (11)

Let h^{(m)} = [h_{0,m}, h_{1,m}, \ldots, h_{P-1,m}]^T, \tilde{h}^{(p)} = [h_{p,0}, h_{p,1}, \ldots, h_{p,M-1}]^T, F = [F_{n,p}]_{N \times P} and \tilde{F} = [\tilde{F}_{m,q}]_{M \times Q}, where F_{n,p} = \exp(-j2\pi pn/P) and \tilde{F}_{m,q} = \exp(-j2\pi qm/Q), with n = 0, 1, \ldots, N-1, m = 0, 1, \ldots, M-1, p = 0, 1, \ldots, P-1, q = 0, 1, \ldots, Q-1. Then Eqs. (10) and (11) can be expressed as

u^{(m)} = F h^{(m)}, \qquad \tilde{h}^{(p)} = \tilde{F} d^{(p)}    (12)

Here h^{(m)} is the range profile at the m-th observation angle and, at high frequency, can be described by a few scattering centers; \tilde{h}^{(p)} is the data corresponding to the p-th range bin, d^{(p)} is the corresponding cross-range distribution, and \tilde{F} is the cross-range sparse dictionary. The same measurement matrix is used for every observation angle. Let the cross-range measurement matrix be \Phi_d and the measurement matrix at each observation angle be \Phi_r; the compressed sampling result can then be expressed as

y^{(l)} = \Phi_r F h^{(l)}, \qquad \tilde{y}^{(p)} = \Phi_d \tilde{F} d^{(p)}    (13)

where y^{(l)} is the compressed measurement data at the l-th view and \tilde{y}^{(p)} is the compressed measurement data of the p-th range bin.
According to (13), the range profile sequence h = [h^{(1)}, h^{(2)}, \ldots, h^{(L)}] under L observation angles is obtained by a CS algorithm, and the two-dimensional image of the target is then obtained from the scattering distribution, again by a CS algorithm. In summary, the specific steps of the two-dimensional decoupled CS imaging algorithm under missing data are as follows:
STEP 1 determines L random observation angles, constructs the cross-range random measurement matrix, and uses the same measurement matrix for each view;
STEP 2 uses the measurement matrix to obtain the compressed measurement data for the l-th view according to (13);

L. Fan et al.

STEP3 constructs the range sparse dictionary according to (19), and uses the nonlinear optimization algorithm to restore the one-dimensional scattering distribution of the target. STEP4 repeats steps 2 and 3 to get the range image sequence at all L perspectives; STEP5 constructs the cross sparse dictionary according to (18) and constructs the twodimensional scattering distribution of the target by non-linear optimization based on the measurement matrix and the range image sequence.

3 Results In this section, we use point target imaging to verify the accuracy of the proposed highresolution reconstruction method.

Fig. 3. Bistatic ISAR geometry for moving target that is modeled using dominant point scatterers

First, we consider a bistatic ISAR moving target imaging geometry model, as shown in Fig. 3. Radar A is used for transmitting, and Radar B is used for receiving; these two radars are 10 km away from the initial position of the target. The target is moving at a constant speed v ¼ 50 m=s in the positive direction of the x-axis, starting at 0 km from the initial position. The target of the ship’s shape is composed of isotropic point scatterers, as shown in Fig. 4.


Fig. 4. Simulation model with shape of ship

Radar A continuously transmits a signal so that the cross-range resolution matches the range resolution and the image shape is not distorted; each pulse group lasts 1.2 ms, and thus 120 bursts are sampled at equal intervals. The detailed simulation parameters of the bistatic ISAR scenario of Fig. 3 are listed in Table 1. The reference original image is constructed using a Fourier transform algorithm. Finally, in order to obtain an incomplete echo dataset from the complete set of echo data, we randomly remove a portion of the pulses and pulse groups from the complete echo dataset, as shown in Fig. 5a. The parameter p_m denotes the missing rate of the data.

Table 1. Simulation parameters for the geometry in Fig. 3
Parameters                               Values
Number of transmitted frequencies (M)    120
Number of bursts (N)                     120
Center frequency (GHz)                   9
Frequency bandwidth (MHz)                500
Range resolution (m)                     0.3

When the incomplete echo dataset is applied, the bistatic ISAR images obtained using the proposed method and the conventional method are shown in Fig. 5. Before using the incomplete echo dataset, we apply interpolation to the missing data to provide a uniformly sampled dataset. In the case of the incomplete echo


data, a clear image is obtained using the proposed method (Fig. 5c), whereas no clear image is obtained using the FT-based algorithm (Fig. 5b); in addition, the FT-based image still suffers from noise-like interference.

Fig. 5. Reconstructed Bistatic ISAR images when pm = 60%. a Incomplete Bi-RCS dataset with random missing data (pm = 60%). b FFT. c Proposed two-dimensional CS method

In order to test the reconstruction ability of the proposed method in noisy environments, we add white Gaussian noise to the complete echo dataset and then randomly remove part of the echo dataset (p_m = 60%). Complex white noise is added to the echo data with SNRs of 5, 10, and 15 dB, as shown in Fig. 6. As shown in Fig. 6a, when the signal-to-noise ratio is low, weak noise appears in the image background, but the geometry of the target is not destroyed. Moreover, as the signal-to-noise ratio increases, the number and intensity of noise artifacts decrease and the method reconstructs a clear bistatic ISAR image (Fig. 6b, c).


Fig. 6. Recovered Bistatic ISAR images using proposed method for different SNRs when pm = 60%. a 5 dB Bistatic ISAR images. b 10 dB Bistatic ISAR images. c 15 dB Bistatic ISAR images

4 Conclusion This paper presents a new bistatic ISAR image reconstruction method for incomplete echo datasets. When part of the collected two-dimensional echo dataset is missing, the sampling becomes non-uniform and the traditional FT-based algorithm cannot provide a clear and reliable image. The results show that the proposed bistatic ISAR image reconstruction method is able to reconstruct a clear radar image without the image-quality degradation caused by missing data. In addition, we quantitatively verify the reconstruction capability of the method: we evaluate the reconstruction accuracy in the two-dimensional image domain and compare it with other reconstruction methods in noisy environments and for different amounts of missing data. Even when an incomplete echo dataset is used in a noisy environment, analyzing the echoes with the point-scatterer model yields imaging results that are numerically superior to the traditional method. These benefits come at the expense of increased computational complexity. Acknowledgements. The authors would like to thank Professor Yiming Pi for sharing his expertise on ISAR and compressed sensing. This work was supported by the Science and Technology Department of Yibin under Grants 2018ZSF001 and the Science and Technology Department of Sichuan Province under Grants 2018JZ0050.


References
1. Martorella M, Palmer J, Homer J et al (2007) On bistatic inverse synthetic aperture radar. IEEE Trans Aerosp Electron Syst 43(3):1125–1134
2. Chen VC, Rosiers A, Lipps R (2009) Bistatic ISAR range-Doppler imaging and resolution analysis. In: Proceedings of IEEE international radar conference, pp 1–5
3. Martorella M (2011) Bistatic ISAR image formation in presence of bistatic angle changes and phase synchronisation errors. In: European conference on synthetic aperture radar. VDE, pp 1–4
4. Martorella M, Palmer J, Berizzi F et al (2009) Advances in bistatic inverse synthetic aperture radar. In: Radar conference—surveillance for a safer world, 2009. RADAR. International. IEEE, pp 1–6
5. Martorella M (2011) Analysis of the robustness of bistatic inverse synthetic aperture radar in the presence of phase synchronisation errors. IEEE Trans Aerosp Electron Syst 47(4):2673–2689
6. Ozdemir C (2012) Inverse synthetic aperture radar imaging with MATLAB algorithms. Wiley, Hoboken, NJ
7. Baraniuk R (2008) Compressive sensing. In: Conference on information sciences and systems, 2008. CISS 2008. IEEE, pp iv–v
8. Wang CY, Xu J (2015) Improved optimization algorithm for measurement matrix in compressed sensing. Syst Eng Electron 37(4):752–756
9. Dong X, Zhang Y (2014) A novel compressive sensing algorithm for SAR imaging. IEEE J Sel Top Appl Earth Observations Remote Sens 7(2):708–720
10. Bhattacharya S, Blumensath T, Mulgrew B et al (2007) Fast encoding of synthetic aperture radar raw data using compressed sensing. In: IEEE/SP workshop on statistical signal processing. IEEE Computer Society, pp 448–452
11. Du X (2005) Sparse component analysis and its applications in radar imaging processing. Ph.D. dissertation, Department of Electronics, National University of Defense Technology, Changsha, Hunan, P. R. China
12. Bae JH, Kang BS, Lee SH et al (2016) Bistatic ISAR image reconstruction using sparse-recovery interpolation of missing data. IEEE Trans Aerosp Electron Syst 52(3):1155–1167
13. Duan H, Zhu D, Li Y et al (2016) Recovery and imaging method for missing data of the strip-map SAR based on compressive sensing. Syst Eng Electron 38(5):1025–1031
14. Liu J, Xu S, Gao X et al (2011) A review of radar imaging technique based on compressed sensing. Sig Process 27(2):251–260
15. Liu J (2012) Inverse synthetic aperture radar imaging technique based on compressed sensing. Ph.D. dissertation, Department of Electronics, National University of Defense Technology, Changsha, Hunan, P. R. China
16. Potter LC, Ertin E, Parker JT et al (2010) Sparsity and compressed sensing in radar imaging. Proc IEEE 98(6):1006–1020
17. Rao W, Li G, Wang X et al (2011) ISAR imaging of maneuvering targets with missing data via matching pursuit. In: Radar conference. IEEE, pp 124–128
18. Ye F, Liang D, Zhu J (2011) ISAR enhancement technology based on compressed sensing. Electron Lett 47(10):620–621
19. Zhang L, Qiao ZJ, Xing MD et al (2012) High-resolution ISAR imaging by exploiting sparse apertures. IEEE Trans Antennas Propag 60(2):997–1008
20. Khwaja AS, Zhang XP (2014) Compressed sensing ISAR reconstruction in the presence of rotational acceleration. IEEE J Sel Top Appl Earth Observations Remote Sens 7(7):2957–2970
21. Sun C, Wang B, Fang Y et al (2015) High-resolution ISAR imaging of maneuvering targets based on sparse reconstruction. Signal Process 108:535–548
22. Bae JH, Kang BS, Kim KT et al (2015) Performance of sparse recovery algorithms for the reconstruction of radar images from incomplete RCS data. IEEE Geosci Remote Sens Lett 12(4):860–864
23. Zhou M, Xu PC, He ZH et al (2017) Sparse chirp stepped-frequency ISAR super-resolution imaging method based on 2D-FISTA algorithm. In: International conference on wireless communication and sensor networks
24. Zhang L, Xing M, Qiu CW et al (2009) Achieving higher resolution ISAR imaging with limited pulses via compressed sampling. IEEE Geosci Remote Sens Lett 6(3):567–571
25. Zhuang Y, Xu S, Chen Z et al (2016) ISAR imaging with sparse pulses based on compressed sensing. In: Progress in electromagnetic research symposium. IEEE, pp 2066–2070

Medical Images Segmentation Using a Novel Level Set Model with Laplace Kernel Function Jianhua Song1,2(&), Zhe Zhang1, and Jiaqi Zhen1 1

2

College of Electronic Engineering, Heilongjiang University, Harbin, China [email protected] College of Physics and Information Engineering, Minnan Normal University, Fujian, China

Abstract. Medical image segmentation is a complex task due to problems such as noise, low contrast, and intensity inhomogeneity. A novel level set model is proposed in this study to segment medical images accurately. The kernel function that determines the size of the neighborhood of the central pixel is replaced with a Laplace kernel function, which is insensitive to the choice of parameters and is more suitable for segmenting medical images. Compared with several state-of-the-art models, both visual and objective experiments demonstrate the performance and superiority of the novel level set model. Keywords: Medical image · Image segmentation · Level set · Kernel function

1 Introduction Medical image segmentation has been studied comprehensively since it can help extract valuable information for clinical treatment. However, it is always influenced by the imperfections of imaging devices [1]. Thus, valuable information should be extracted exactly by the segmentation method. In the past decades, numerous segmentation methods have been presented, among which the level set model is one of the most successful [2]. The level set model proposed by Sethian and Osher [3] is applied to segment medical images due to its advantages such as smooth contours and fast convergence. Wang et al. used a kernel function to define a local Gaussian distribution fitting (LGD) energy [4], which fuses the level set model with a Gaussian distribution to improve its performance. Nonetheless, it cannot completely extract the weak boundaries of tissues since it is insensitive to variations of intensity. Li et al. then integrated spatial clustering with the level set model to enhance robustness [5]. However, Li's model can achieve accurate results only for images with well-defined boundaries, owing to the lack of a kernel function. Local intensity clustering (LIC) was presented to segment medical images and correct inhomogeneous intensity [6], but the kernel function used by LIC is sensitive to the choice of parameters. Therefore, a novel level set model enhanced by a Laplace kernel function is proposed in this study to segment medical images accurately; it reduces the sensitivity to parameters and thus improves robustness.


2 Level Set Formulation
A level set function \phi divides the image domain \Omega into two regions \Omega_1 and \Omega_2, represented by M_1(\phi) = 1 - H(\phi) and M_2(\phi) = H(\phi), where H(\cdot) is the Heaviside function. According to a previous study [7], the clustering criterion that the intensity of each tissue can be regarded as a constant is suitable for segmenting medical images. The data term is then defined as

\varepsilon = \int \left( \sum_{k=1}^{2} \int_{\Omega_k} K(y - x)\,|I(x) - b(y)c_k|^2\, M_k(\phi)\, dx \right) dy    (1)

where I(x) and b(x) represent the observed image and the bias field caused by inhomogeneous intensity, c_k is the clustering center, and K(y - x) is a kernel function that is often chosen as a Gaussian kernel. However, such a function is sensitive to the choice of parameters, and the performance of the model depends heavily on K since it controls the size of the neighborhood of the central pixel y. Therefore, K is re-defined by a Laplace kernel function in this study, which improves the performance of the model effectively. The kernel function K(y - x) is re-defined as

K(y - x) = \frac{1}{a}\exp\left(-\frac{|y - x|}{\sigma}\right)    (2)

where a is a constant chosen so that \int K(y - x) = 1 and \sigma is the standard deviation. Such a kernel function reduces the sensitivity to parameters effectively and is more suitable than the Gaussian kernel for segmenting images with noise and intensity inhomogeneity. The length term is given by

L(\phi) = \int |\nabla H(\phi)|\, dx    (3)

The zero level set is smoothed by L(\phi). The distance regularization term is defined by

R(\phi) = \int d(|\nabla\phi|)\, dx    (4)

where d(s) is the potential function given by d(s) = (s - 1)^2/2. The expensive re-initialization can be eliminated by R(\phi). Finally, the energy minimization with respect to the level set function \phi is obtained by solving the following gradient flow equation

\frac{\partial\phi}{\partial t} = -\delta(\phi)(e_1 - e_2) + \nu\,\delta(\phi)\,\mathrm{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right) + \mu\,\mathrm{div}\!\left(d_p(|\nabla\phi|)\nabla\phi\right)    (5)

where \delta(\cdot) is the derivative of H(\cdot), the function d_p(s) is given by d'(s)/s, and e_i represents the clustering criterion given in (1).
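As an illustration of Eqs. (1)–(2), the Python sketch below builds the normalized Laplace kernel and evaluates the clustering energy e_k by expanding the square in (1) into convolutions (e_k = I^2 − 2 c_k I (K∗b) + c_k^2 (K∗b^2), using ∫K = 1). The truncation radius and the use of scipy.ndimage.convolve are implementation assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def laplace_kernel(radius, sigma):
    """Truncated, normalized Laplace kernel of Eq. (2): K(u) proportional to exp(-|u|/sigma)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    K = np.exp(-np.sqrt(x**2 + y**2) / sigma)
    return K / K.sum()

def clustering_energy(I, b, c_k, K):
    """e_k(x) of Eq. (1), expanded with convolutions (K is symmetric)."""
    Kb = convolve(b, K, mode='nearest')
    Kb2 = convolve(b**2, K, mode='nearest')
    return I**2 - 2.0 * c_k * I * Kb + c_k**2 * Kb2

# Example use (I: observed image, b: bias-field estimate, c1/c2: cluster centers):
#   K = laplace_kernel(radius=7, sigma=3.0)
#   force = -(clustering_energy(I, b, c1, K) - clustering_energy(I, b, c2, K))
# 'force' is the first term of the gradient flow in Eq. (5) before delta(phi) weighting.
```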


3 Experiments The novel level set model is compared with Li's model, LGD, and LIC to show its superiority in both visual and objective evaluations. The segmentation results of the X-ray image of a vessel obtained by the four models are shown in Fig. 1; it can be seen that our model is more effective and accurate than the other models. The models are then used to segment the X-ray image of a hand, with the results displayed in Fig. 2. As shown in the first column, the original image is corrupted by weak boundaries and inhomogeneous intensity. Compared with the three other models, the segmentation accuracy of our model is superior, and the weak boundaries of the tissue are extracted exactly. Objective experimental results further prove the performance of our model: the average iterations and computational time of the four models are compared in Table 1. As shown in the last column, both the iteration count and the computational time demonstrate the superiority of our model objectively.

(a) Original image  (b) Li's model  (c) LGD  (d) LIC  (e) Our model

Fig. 1. Results of Li’s model, LGD, LIC, and our model on the X-ray image of vessel

(a) Original image  (b) Li's model  (c) LGD  (d) LIC  (e) Our model

Fig. 2. Results of Li’s model, LGD, LIC, and our model on the X-ray image of hand

Table 1. The average iterations and computational time of Li's model, LGD, LIC, and our model
              Li's model   LGD      LIC     Our model
Iteration     621          2458     153     112
Time (s)      8.091        25.261   5.161   4.458


4 Conclusion In this study, the novel level set model with a Laplace kernel function segments medical images successfully and obtains more accurate segmentation results. The kernel function used to control the size of the neighborhood of the central pixel is modified into a Laplace kernel function, which is more suitable for segmenting medical images. Both visual and objective experiments demonstrate the performance and superiority of this model in segmenting medical images. Acknowledgements. This study was supported in part by the Postgraduate Innovative Research Project of Heilongjiang University (Grant No. YJSCX2019-058HLJU), in part by the Research Project on Education and Teaching Reform of Minnan Normal University (Grant No. JG201920), and in part by the Fujian Provincial Natural Science Foundation Project (Grant No. 2017J01708).

References
1. Khadidos A, Li CT, Sanchez V (2017) Weighted level set evolution based on local edge features for medical image segmentation. IEEE Trans Image Process 26(4):1979–1991. https://doi.org/10.1109/TIP.2017.2666042
2. Fedkiw R, Gibou F, Osher S (2018) A review of level-set methods and some recent applications. J Comput Phys 385:82–109. https://doi.org/10.1016/j.jcp.2017.10.006
3. Osher S, Sethian JA (1988) Fronts propagating with curvature dependent speed: algorithms based on Hamilton-Jacobi formulations. J Comput Phys 79(1):12–49. https://doi.org/10.1016/0021-9991(88)90002-2
4. He L, Li CM, Mishra A et al (2009) Active contours driven by local Gaussian distribution fitting energy. Signal Process 89(12):2435–2447. https://doi.org/10.1016/j.sigpro.2009.03.014
5. Chang S, Chui CK, Li BN et al (2011) Integrating spatial fuzzy clustering with level set methods for automated medical image segmentation. Comput Biol Med 41(1):1–10. https://doi.org/10.1016/j.compbiomed.2010.10.007
6. Ding ZH, Huang R, Gatenby JC et al (2011) A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI. IEEE Trans Image Process 20(7):2007–2016. https://doi.org/10.1109/TIP.2011.2146190
7. Song JH, Zhang Z (2019) An adaptive fuzzy level set model with local spatial information for medical image segmentation and bias correction. IEEE Access 7:27322–27338. https://doi.org/10.1109/ACCESS.2019.2900089

Research on Multi-UAV Routing Simulation Based on Unity3d Cong Chen1,2, Yanting Liu1, Fusheng Dai1(&), Yong Li2, Weidang Lu3, and Bo Li1,2 1

2

Harbin Institute of Technology (Weihai), Weihai, China [email protected] Science and Technology on Communication Networks Laboratory, Shijiazhuang, China 3 Zhejiang University of Technology, Hangzhou, China

Abstract. With the development of communication technology, multi-UAV formations play an increasingly important role in military and civilian fields. In order to ensure reliable and effective communication between drones, it is important to construct a multi-UAV information interaction topology based on the performance of the UAV communication equipment, the geographic environment where the formation is located, the available wireless spectrum resources, and the quality of service (QoS) requirements. However, in current practical routing network planning there are some weaknesses: the degree of automation is not high enough, the cost of network planning is large, and the operation is inconvenient. Therefore, in this article we build a multi-UAV simulation system with the professional game engine Unity3d to address these problems. In this system, we construct a real terrain environment of Yantai (a city in Shandong province), import multiple UAVs to solve the automatic pathfinding simulation problem, and build the communication routing network. In addition, we propose adjustments for this system when an aircraft loses its connection.

 Outing network  Network planning  Unity3d 

1 Instruction The Multi-UAV simulation system is created to design, analyze, evaluate and make a decision on the Multi-UAV routing, especially dealing with some system accidents, dangers and disadvantages. The traditional simulation system is based on MATLAB, Simulink and some other software. This kind of system cannot achieve the ideal scene effect of the Multi-UAV. While the Unity3d software can create different sceneries according to the actual needs. It can also simulate real-world environmental factors by its convenient physics engine system. Based on the Unity3d software, we built a MultiUAV routing simulation system.

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 1190–1197, 2020 https://doi.org/10.1007/978-981-13-9409-6_140

Research on Multi-UAV Routing Simulation Based on Unity3d

1191

This simulation system aims to immediately construct a Multi-UAV information interaction topology, which means generating a networking solution to meet the requirements in a short time based on some constraints, such as the performance of UAV communication equipment, the geographic environment where the formation located, the wireless spectrum resources and interference factors, and the quality of service QoS requirements. Moreover, when the UAV is out of contact due to the environment and some other factors, this system can be adjusted quickly. In this case, this system can meet the information exchange needs that the UAV formation perform multiple tasks and respond to multiple emergencies.

2 Engineering Realization for Multi-UAV Routing Simulation 2.1

Simulation of Real Terrain

We create a real environment model and a natural geographical map, then render the overall environment to realize the real visual experience of the live simulation system. It is true that we can change the terrain parameters by adjusting the Terrain parameter of unity3d using components such as the raise or lower terrain and the smooth height paint texture to make the simulation system closer to the real environment. What is more, we can even directly download the terrain that other people have made by logging in the official Assert Store website, and then import the package into unity3d. However, in order to get closer to reality, we introduced the imported graphics of the reality. Firstly, we went to the Geographic Information Data Cloud website to download a real terrain IDM data, then we imported the real terrain by following the steps given in the Unity3D manual and attached the following C sharp code to the generated terrain. (This part of the C sharp code can refer to my CSDN blog [1]). After importing the terrain, we imported the Skybox resource to make a beautiful sky. We also created some simple UI interfaces through UGUI to observe some real-time data about UAV (Fig. 1). 2.2

Simulation of UAV

After importing the terrain environment model, we embarked solving the problems about the core of the research—UAV. First of all, there are two methods to accomplish the construction of the UAV model, one is downloading the packaged UAV model in the unity asset store and the other is making the UAV model by a professional modelling software—3D Studio Max. There is also a case must be taken into consideration that adding the rigid body component to the UAV to simulate its physical properties. At the same time, adding the collider component is necessary, although we never want to use it in practice.

1192

C. Chen et al.

Fig. 1. The real terrain environment of Yantai, Shandong Province

2.3

Problem of Automatic Pathfinding About UAV

As the routing network we studied is a study of Multi-UAV, the first person and the third person control method for single object mentioned in the paper [2] is unsuitable here. Therefore we need to take advantage of automatic navigation technology. Those who are familiar with unity3d may know that baking the ground environment first, then making it rasterized, and using the A* algorithm [3] can easily simulate the automatic navigation of ground objects. Unity also integrated this functionality into the NavMeshAgent component. However, this method can not be directly implemented in the air, so we improved it and came up with two solutions. (1) Solution 1: Intuitive thinking, since ground navigation is easy to implement, we can use ground navigation to simulate aerial navigation through projection transformation. At first, we let the aircraft bind an invisible ground object and keep an appropriate height. Then we let the ground object automatically navigate so that the aircraft can move following the ground object through the projection transformation. This method is simple and easy to understand, but it brings some problems. UAV will fluctuate with the fluctuation of the ground, and automatically avoid obstacles on the ground. These movements are meaningless for aircraft, so we proposed an improved Solution (Fig. 2).


Fig. 2. The automatic navigation method of Solution 1
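For readers unfamiliar with the grid-based navigation mentioned above, the following is a minimal Python sketch of A* on a rasterized occupancy grid. The grid contents and the 4-connected move set are illustrative assumptions, not the paper's actual NavMesh data; Unity's NavMeshAgent wraps an equivalent search internally.

import heapq

def a_star(grid, start, goal):
    """A* on a rasterized occupancy grid (0 = free, 1 = blocked), 4-connected moves."""
    rows, cols = len(grid), len(grid[0])
    def h(p):                                    # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    counter = 0                                  # tie-breaker so the heap never compares nodes
    open_set = [(h(start), 0, counter, start, None)]
    came_from, best_g = {}, {start: 0}
    while open_set:
        _, g, _, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue                             # already expanded with a better cost
        came_from[node] = parent
        if node == goal:                         # reconstruct the path back to the start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    counter += 1
                    heapq.heappush(open_set, (ng + h(nxt), ng, counter, nxt, node))
    return None                                  # goal unreachable

# Toy 4 x 4 grid: rasterized obstacles force the path around the blocked cells
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 2)))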

(2) Solution 2: since we only care about the obstacles the UAV can actually reach, why not introduce new navigation terrain in the sky where the aircraft flies? We imported several new navigation terrains and set the original terrain as an obstacle, then implemented aerial navigation in the same way as ground navigation. In this way the UAV navigates automatically in the sky and only needs to consider obstacles in the air, which makes the navigation path more realistic. As shown in Fig. 2, the black part is the unreachable area caused by obstacles in the air. Of course, we are not professional Unity3D developers, so the aerial navigation idea we propose still has a long way to go (Fig. 3).

Fig. 3. The automatic navigation method of Solution 2


3 Communication Routing Network of Multi-UAV

On the basis that each UAV can navigate autonomously, we need to simulate the communication routing network of the multi-UAV formation. Under the current research conditions, each UAV can perceive its geographical location, and information is transmitted by line of sight between UAVs. When the distance between two aircraft is less than the allowed communication distance N (for confidentiality the communication distance is referred to here simply as N) and there is no obstruction between them, we draw a line to match the two aircraft in Unity3D and set the corresponding entry of the adjacency matrix to 1. Whether each UAV can communicate through other UAVs as relays, and how many relays it needs, is derived with an improved reachability-matrix algorithm [4]. In this way the UAVs can be connected in pairs to obtain all available routing networks [5]. Next, we choose the optimal path among all paths while considering all the constraints; we will develop a new dynamic routing algorithm for multi-UAV systems based on previous work of our laboratory [6, 7] (Fig. 4).

Fig. 4. The communication routing network of Multi-UAV
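A minimal Python sketch of the adjacency/reachability construction described above. The positions, the communication range value, and the omission of the line-of-sight test against the terrain are all assumptions for illustration, not the paper's actual data or algorithm.

import numpy as np

def adjacency(positions, comm_range):
    """1 where two UAVs are within direct communication range (line-of-sight test omitted)."""
    n = len(positions)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) <= comm_range:
                A[i, j] = A[j, i] = 1
    return A

def reachability(A):
    """Warshall-style closure: R[i, j] = 1 if j is reachable from i through any chain of relays."""
    R = A.copy()
    n = len(R)
    np.fill_diagonal(R, 1)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i, j] = R[i, j] or (R[i, k] and R[k, j])
    return R

# Toy example: five UAVs on a line; direct range 12 m, so distant pairs connect only via relays
positions = np.array([[0, 0, 100], [10, 0, 100], [20, 0, 100], [30, 0, 100], [40, 0, 100]], float)
print(reachability(adjacency(positions, comm_range=12.0)))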


4 Adjustment Plan When the UAV Loses Connection

Since an aircraft may lose contact during actual flight because of obstacle avoidance or other factors, we have proposed three adjustment schemes. (1) Concealed adjustment: the aircraft keeps its former altitude and uses a nearby mountain as cover. When the routing topology is reorganized, the lost aircraft can only rebuild the communication topology by chasing the main group around the mountain. With this scheme, establishing the reconnection takes more time than usual, but it keeps the cluster of aircraft more concentrated and provides very good concealment. The tighter grouping reduces the probability of losing contact, but the aggregation also raises the risk of being discovered. (2) Mountain-climbing adjustment: the aircraft reconnects by climbing over the mountain after losing contact. As its altitude keeps increasing, the probability of being discovered also rises, so this method sacrifices concealment in exchange for rapidity; the mountain-climbing adjustment establishes the reconnection fastest. (3) Relay adjustment: the nearest aircraft is selected as the adjustment target for the missing aircraft. In this scheme we reduce the speed of the relay aircraft, similar to the idea of a relay race, to achieve a faster reconnection. The disadvantage is that the cohesion of the fleet is reduced, which means loss of contact may become more frequent after the adjustment. In summary, three algorithms are needed to implement the schemes: the missing-judgment algorithm, the arbitration algorithm and the adjustment-strategy algorithm. When an aircraft in the flying team goes missing, instead of adjusting immediately, the first step is to start the tolerance timer and enter the tolerant flight mode until the timer overflows. If some aircraft are still out of contact after that, the flight adjustment is started. The main group is then obtained by the arbitration algorithm, the three adjustment strategies are evaluated from several aspects according to the current environment, and finally the decision-making scheme is produced with the analytic hierarchy process [8] and the aircraft is adjusted until it reconnects (Fig. 5). The entire process is illustrated in Fig. 5.


Fig. 5. The illustration of the entire process
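The strategy choice described above relies on the analytic hierarchy process (AHP) [8]. The following is a minimal sketch of the AHP weight computation: the pairwise-comparison matrix and the criteria names below are made-up illustrations, not the paper's actual judgments.

import numpy as np

def ahp_weights(C):
    """Principal-eigenvector weights and consistency ratio for a pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(C)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()                               # normalized weight vector
    n = C.shape[0]
    ci = (vals.real[k] - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # standard random-index table values
    return w, ci / ri                             # weights and consistency ratio (should be < 0.1)

# Illustrative judgments over three criteria: concealment, reconnection speed, cluster cohesion
C = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1.0],
              [1/2, 1.0, 1.0]])
w, cr = ahp_weights(C)
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))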

5 Conclusions

Overall, our study has completed the construction of a Multi-UAV simulation system with the help of Unity3D. We constructed a real terrain environment of Yantai, Shandong Province, imported multiple UAVs, and solved the problem of simulating automatic pathfinding in the air. We built the communication routing network and, finally, proposed the adjustment plan for the case in which a UAV loses contact. The system is of great significance for research on UAV routing networking because it reduces development costs and risks. In the future, we intend to keep improving the routing algorithm and then take into account the geographic information that the UAV cannot obtain.


Particle swarm and ant colony algorithms may be added to our program as alternative adjustment schemes in future development work.

Acknowledgements. This work was supported in part by the Foundation of Science and Technology on Communication Networks Key Laboratory.

References
1. Chen C. Unity3d real terrain required calling code. Available via DIALOG. https://blog.csdn.net/qq_42310470. Cited 02 May 2019
2. Wang S, Yang J, Hu R et al (2018) Research on unmanned ship simulation on the basis of Unity3d. In: The 2018 international conference
3. Hart PE, Nilsson NJ, Raphael B (1968) A formal basis for the heuristic determination of minimum cost paths in graphs. IEEE Trans Syst Sci Cybern
4. Wang L et al (2010) Floyd-Warshall all-pair shortest path for accurate multi-marker calibration. In: 2010 9th IEEE international symposium
5. Dai F (2006) An algorithm to calculate entire routes between communication network nodes based on logic algebra. Math Practice Theory 02:186–192
6. Bao X, Dai F, Han Z (2012) Evaluation method of network invulnerability based on disjoint paths in topology. Syst Eng Electron 34(1):168–174
7. Zou Y (2017) Research on tactical network planning and anti-destruction reconstruction method under multi-constraint conditions. Harbin Institute of Technology, pp 7–70
8. Zheng X, Li L, Zeng H et al (2012) Research on computation methods of AHP weight vector and its applications. Math Practice Theory 42(7):93–100

Video Target Tracking Based on Adaptive Kalman Filter Futong He, Jiaqi Zhen(&), and Zhifang Wang College of Electronic Engineering, Heilongjiang University, Harbin 150080, China [email protected]

Abstract. Video tracking technology is a hot topic in computer vision research and is widely used, for example in robot vision, intelligent traffic management, medical diagnosis and intelligent monitoring. Therefore, it is of theoretical significance and practical value to study video target tracking technology. In this paper, the background subtraction method and an adaptive Kalman filter are combined to realize real-time video target tracking. The experimental results show that the proposed method can improve the tracking accuracy.

Keywords: Adaptive Kalman filter (AKF) · Background subtraction · Target tracking

1 Introduction

Video target tracking technology is one of the important research directions of computer vision. Because the Kalman filter has low computational cost and is easy to implement, it is often used for target tracking. In previous work, the state noise covariance matrix and the measurement noise covariance matrix of the Kalman filter are set to fixed values [1]. In order to improve the tracking accuracy, an adaptive-noise Kalman filter is used [2]. The adaptive Kalman filter has been widely applied in traffic system positioning [3], GNSS systems and so on. In this paper, the adaptive Kalman filter is applied to video target tracking. In the second part of the article, the adaptive Kalman filter algorithm is introduced in detail.

2 Related Algorithms

2.1 Background Subtraction Algorithm

There are many methods for extracting foreground objects, such as the frame difference method, the background subtraction method and the Gaussian mixture model method. This paper applies the background subtraction method [4]. The formulas are as follows:



B_t(i, j) = \frac{1}{N}\sum_{t=0}^{N-1} I_t(i, j) \qquad (2.1)

X_t(i, j) = \begin{cases} 1, & |I_t(x, y) - B_t(x, y)| > T \\ 0, & |I_t(x, y) - B_t(x, y)| < T \end{cases} \qquad (2.2)

where B_t(i, j) represents the background image at time t, I_t(x, y) represents the frame image at time t, T is the threshold and X_t(i, j) is the binary foreground image at time t.
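A minimal NumPy sketch of Eqs. (2.1)-(2.2): the background is the mean of N frames and the foreground mask thresholds the absolute difference. The synthetic frames and the threshold value T = 30 are illustrative assumptions.

import numpy as np

def background_model(frames):
    """Eq. (2.1): average the first N frames to estimate the background B_t."""
    return np.mean(frames, axis=0)

def foreground_mask(frame, background, T=30.0):
    """Eq. (2.2): 1 where |I_t - B_t| exceeds the threshold T, else 0."""
    return (np.abs(frame.astype(float) - background) > T).astype(np.uint8)

def target_center(mask):
    """Centroid of the foreground pixels, used later as the measurement y_k."""
    ys, xs = np.nonzero(mask)
    return (xs.mean(), ys.mean()) if xs.size else None

# Toy example: a static background with one bright synthetic target
rng = np.random.default_rng(0)
frames = rng.normal(100, 2, size=(10, 64, 64))
background = background_model(frames)
frame = frames[0].copy()
frame[20:30, 40:50] += 80
print(target_center(foreground_mask(frame, background)))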

2.2 Standard Kalman Filter Algorithm

In 1960, Kalman published a famous recursive method for solving the linear filtering problem for discrete data [5]. The standard Kalman filter (SKF) consists of a time update and a measurement update. The formulas are as follows:

Step 1: time update

\hat{x}_k^- = A x_{k-1} + B u_{k-1} \qquad (2.3)

\hat{p}_k^- = A p_{k-1} A^T + Q \qquad (2.4)

Step 2: measurement update

K_k = \hat{p}_k^- H^T \left[ H \hat{p}_k^- H^T + R \right]^{-1} \qquad (2.5)

\hat{x}_k = \hat{x}_k^- + K_k \left[ y_k - H \hat{x}_k^- \right] \qquad (2.6)

\hat{p}_k = \left[ I - K_k H \right] \hat{p}_k^- \qquad (2.7)

where x_{k-1} is the state, y_k is the measurement and u_{k-1} is the control input; \hat{p}_k^- is the predicted error covariance matrix and \hat{p}_k is the updated error covariance; Q is the state noise covariance matrix and R is the measurement noise covariance matrix; K_k is the Kalman gain, and A, B, H are transition matrices.

2.3 Adaptive Kalman Filter Algorithm

In the adaptive Kalman filter based on the maximum-likelihood criterion [6], the change in the statistical characteristics of the system noise is estimated in real time through \hat{Q}_k and \hat{R}_k, so that the filter can better adapt to this change.

\hat{z}_k = y_k - \hat{y}_k = y_k - H \hat{x}_k^- \qquad (2.8)

\hat{C}_k = \frac{1}{m}\sum_{i=k-m+1}^{k} \hat{z}_i \hat{z}_i^T \qquad (2.9)

S_k = H \hat{p}_k^- H^T + R_{k-1} \qquad (2.10)

\hat{R}_k = \hat{C}_k - H \hat{p}_k^- H^T \qquad (2.11)

\hat{Q}_k = K_k \hat{C}_k K_k^T \qquad (2.12)

where \hat{z}_k is the innovation sequence and \hat{C}_k is its actual covariance. The theoretical covariance of the innovation sequence is defined as S_k [7].
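A compact Python sketch combining one SKF time/measurement update (Eqs. 2.3-2.7) with the innovation-based noise estimates of Eqs. (2.8)-(2.12). The constant-velocity model, the window length m, the positive-definiteness safeguard on R and the synthetic track are illustrative assumptions, not parameters reported in the paper.

import numpy as np

def akf_track(measurements, dt=1.0, m=10):
    """Constant-velocity AKF in 2D: state [x, y, vx, vy], measurement [x, y]."""
    A = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q, R = np.eye(4) * 1e-2, np.eye(2)
    x, P = np.zeros(4), np.eye(4)
    innovations, estimates = [], []
    for y in measurements:
        x_pred = A @ x                           # Eq. (2.3)
        P_pred = A @ P @ A.T + Q                 # Eq. (2.4)
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Eq. (2.5)
        z = y - H @ x_pred                       # Eq. (2.8): innovation
        x = x_pred + K @ z                       # Eq. (2.6)
        P = (np.eye(4) - K @ H) @ P_pred         # Eq. (2.7)
        innovations.append(z)
        if len(innovations) >= m:                # adapt only once the sliding window is full
            C = np.mean([np.outer(v, v) for v in innovations[-m:]], axis=0)  # Eq. (2.9)
            R_new = C - H @ P_pred @ H.T         # Eq. (2.11)
            w, V = np.linalg.eigh((R_new + R_new.T) / 2)
            R = V @ np.diag(np.clip(w, 1e-3, None)) @ V.T   # keep R positive definite (safeguard)
            Q = K @ C @ K.T                      # Eq. (2.12)
        estimates.append(x[:2].copy())
    return np.array(estimates)

# Toy track: a point moving diagonally, observed with unit-variance position noise
t = np.arange(40)
truth = np.stack([2.0 * t, 1.5 * t], axis=1)
meas = truth + np.random.default_rng(1).normal(0, 1.0, truth.shape)
print(np.round(akf_track(meas)[-1], 2), "vs truth", truth[-1])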

3 Proposed Algorithm Steps

(a) Input the video sequence and initialize the Kalman filter parameters.
(b) Use the background subtraction method to obtain the center position of the target as the measured value y_k.
(c) Use the AKF method to obtain the estimated value \hat{x}_k.
(d) Repeat steps (a)-(c).

In the process above, r(i) is the root-mean-square error (RMSE) criterion; the smaller its value, the higher the tracking accuracy:

r(i) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left( (x_i - x'_i)^2 + (y_i - y'_i)^2 \right)} \qquad (3.1)

where x, y are the estimated values, x', y' are the measured values, and N represents the number of samples.

4 Experimental Results and Analysis

In order to test the proposed algorithm, we conducted experiments on the standard database (http://www.visual-tracking.net) [8]. Figure 1 shows the tracking results of the AKF.

Fig. 1. Video target tracking results (frames #16, #29 and #60)

The yellow ring marks the result of video target tracking. As can be seen from Fig. 1, in frames 16, 29 and 60 the proposed algorithm tracks the ball accurately. The analysis in Table 1 shows that the accuracy of the AKF algorithm is higher than that of the SKF algorithm.


Table 1. The comparison table of RMSE

Algorithm        RMSE
Original (SKF)   4.3768
Proposed (AKF)   0.0106

5 Conclusion

In this paper, the background subtraction method and an adaptive Kalman filter are combined to realize automatic real-time video target tracking. First, the background subtraction method is used to extract the target, and then the adaptive Kalman filter is used for target tracking. The experimental analysis shows that the proposed algorithm can improve the accuracy of video target tracking. In the future we will try to apply the adaptive Kalman filter to more complex video scenes.

Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grant 61501176, Natural Science Foundation of Heilongjiang Province F2018025, University Nursing Program for Young Scholars with Creative Talents in Heilongjiang Province UNPYSCT-2016017, and the postdoctoral scientific research developmental fund of Heilongjiang Province in 2017 LBH-Q17149.

References
1. Sriharsha KV, Rao NV (2015) Dynamic scene analysis using Kalman filter and mean shift tracking algorithms. In: 2015 6th international conference on computing, communication and networking technologies (ICCCNT)
2. Akhlaghi S, Zhou N, Huang Z (2017) Adaptive adjustment of noise covariance in Kalman filter for dynamic state estimation. In: 2017 IEEE power & energy society general meeting
3. Fuad AG, Anazida Z (2017) Improved vehicle positioning algorithm using enhanced innovation-based adaptive Kalman filter. Pervasive Mobile Comput 40:139–155
4. Kim ZW (2008) Real time object tracking based on dynamic feature grouping with background subtraction. In: IEEE conference on computer vision & pattern recognition
5. Rudolph EK (1960) A new approach to linear filtering and prediction problems. Trans ASME-J Basic Eng 82(Series D):35–45
6. Congshan QU, Hualong XU, Tan Y (2008) SINS/CNS integrated navigation solution using adaptive unscented Kalman filtering. In: International conference on modelling
7. Emami M, Taban MR (2018) A novel intelligent adaptive Kalman filter for estimating the submarine's velocity: with experimental evaluation. Ocean Eng 158:401–403
8. Sarala BV, Swathi BV (2017) Object tracking using block motion estimation with adaptive Kalman estimates. In: 2017 2nd IEEE international conference on recent trends in electronics, information & communication technology (RTEICT)

Compressed Sensing Image Reconstruction Method Based on Chaotic System Yaqin Xie, Erfu Wang(&), Jiayin Yu, Shiyu Guo, and Xiaomin Zhang Key Lab of Electronic and Communication Engineering, Heilongjiang University, No. 74 Xuefu Road, Harbin, People’s Republic of China [email protected]

Abstract. The construction of the measurement matrix is one of the core parts of compressed sensing. Its performance directly affects signal sampling and reconstruction, so it is very important to design a measurement matrix with good performance. Aiming at the problem that the commonly used random measurement matrices are difficult to implement in hardware and in practice, this paper proposes a compressed-sensing measurement matrix constructed from a chaotic system, and proves through simulation and comparative experiments that the proposed measurement matrix can reconstruct the information well.

Keywords: Compressive sensing · Measurement matrix · Chaos

1 Introduction

In recent years, the theory of compressed sensing has been widely used in signal recovery and image processing applications [1]. The emergence of compressed sensing provides a new idea for solving the problems of high sampling frequency, difficult hardware implementation and waste of data resources in current signal processing [2]. The construction of the measurement matrix plays an important role in compressed-sensing sampling and reconstruction, and its performance directly affects the signal reconstruction quality. Measurement matrices that currently meet the requirements of compressed sensing can be divided into three categories. The first category consists of random measurement matrices [3-5]; their advantage is universality, but the signals recovered by such matrices are uncertain and the matrices are difficult to implement in hardware. The second type randomly extracts several rows from an orthogonal matrix and then normalizes each column [6, 7]; these matrices are computationally fast but not universal. The third type is generated from a specific signal [8, 9]; the signals recovered by such matrices are also uncertain and a large number of experimental measurements is required. Chaotic systems have the characteristics of randomness, sensitivity to initial conditions and determinism, and the chaotic sequences they generate behave like random signals [10]. The measurement matrix in compressed sensing needs to be uncorrelated with the transform basis [11], and a matrix constructed from a chaotic sequence is random-like, which meets this requirement.


Therefore, it is possible to construct a compressed-sensing measurement matrix from a sequence generated by a chaotic system. Combining chaos with compressed sensing can also effectively overcome some of the uncertainties encountered by traditional compressed sensing when constructing measurement matrices and the need to perform multiple tests. In this paper, an image reconstruction method for compressed sensing based on a chaotic system is proposed. The algorithm generates the measurement matrix by controlling the parameters of the chaotic system and reconstructs the image information using the random properties of the observation matrix. The experimental simulation results show that the algorithm achieves the required compression and has excellent reconstruction performance.

2 Theory of Technology

Logistic mapping is a typical nonlinear dynamical chaotic system with good performance that is widely used. It is defined as follows:

x_{k+1} = \mu x_k (1 - x_k), \quad 0 < x_k < 1, \quad k = 0, 1, 2, \ldots \qquad (1)

where \mu \in (0, 4] and the initial value is x_0 \in (0, 1).

Fig. 1. Mathematical model of compressed sensing theory

The mathematical model of compressive sensing theory is shown in Fig. 1. The precondition for using compressive sensing is that the target signal itself is sparse; natural signals are generally not sparse, so they must be made sparse in an orthogonal transform domain or with a sparse dictionary. Suppose a finite-length signal X \in R^{N \times 1} is a column vector of length N that can be expressed on an orthogonal basis \Psi = \{\psi_1, \psi_2, \ldots, \psi_N\}. Its linear expression is

X = \sum_{k=1}^{N} \psi_k \theta_k = \Psi \theta \qquad (2)

where \Psi is the sparse basis, K is the sparsity and \theta_k = \langle X, \psi_k \rangle is the coefficient of the signal X on the orthogonal basis vector \psi_k. When the coefficient vector \theta has only K (K \ll N) large values and most of the remaining values are very small and close to zero, the signal X is sparse on the orthogonal basis \Psi. Using the measurement matrix \Phi \in R^{M \times N} to observe the signal X, the measurement Y is obtained, where the number of measurements is far smaller than the length of the original signal X:

Y = \Phi X \qquad (3)

Substituting (2) into (3) gives

Y = \Phi X = \Phi \Psi \theta = D \theta \qquad (4)

Because M \ll N, linear algebra tells us that reconstructing X from Y is NP-hard, so the original signal X cannot be recovered directly from the vector Y. Signal reconstruction is the most critical and important step of compressed sensing. At present, reconstruction algorithms can be roughly divided into two categories: one converts the non-convex problem into a convex optimization problem [1] and solves that; the other is the family of greedy pursuit algorithms, which obtain the original signal by selecting a locally optimal solution in each iteration [12]. In the reconstruction simulations in this paper, the orthogonal matching pursuit (OMP) algorithm from the greedy family is mainly used.
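Since OMP is the reconstruction algorithm used here, the following is a minimal NumPy sketch of OMP for y = Dθ. The random dictionary, sparsity level and test vector are illustrative assumptions.

import numpy as np

def omp(D, y, K):
    """Orthogonal matching pursuit: greedily pick K atoms of D to explain y."""
    residual, support = y.copy(), []
    theta = np.zeros(D.shape[1])
    for _ in range(K):
        k = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with the residual
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)  # least-squares on selected atoms
        residual = y - D[:, support] @ coef
    theta[support] = coef
    return theta

# Toy check: recover a K-sparse vector from M < N random measurements
rng = np.random.default_rng(2)
N, M, K = 64, 32, 5
theta_true = np.zeros(N)
theta_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)
D = rng.normal(size=(M, N)) / np.sqrt(M)
print(np.allclose(omp(D, D @ theta_true, K), theta_true, atol=1e-6))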

3 Establish System Model and Analysis Metrics

3.1 Model the System

Based on the characteristics of a chaotic system, an image reconstruction method based on compressed sensing is proposed in this paper. The Logistic chaotic system is used to generate the measurement matrix, and the system model of the algorithm is shown in Fig. 2. The image reconstruction method of compressed sensing based on a chaotic system is as follows:

1. Determine the initial value x_0 = 0.31 and the parameter \mu = 4 of the Logistic chaotic system, and let the system iterate to generate the chaotic sequence. Remove the first 100 values of the generated sequence and retain M × N elements. Convert the sequence into an M × N matrix to obtain the compressed-sensing measurement matrix, and quantize it.
2. At the same time, make the image information sparse in the discrete wavelet domain.


Fig. 2. The system model

3. Select the image sparsity K = 50 and use the measurement matrix generated by chaos in the previous step to compress and sample the image, obtaining a set of measured values, which are then normalized to the range [0, 255].
4. The measured values can be stored and transmitted directly through the channel. Since the data are already compressed by the measurement matrix, transmission and storage take less time and memory.
5. The measured values received through the channel must be reconstructed to obtain the original information; they are reconstructed with the OMP algorithm to recover the original image.
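A sketch of the matrix construction in step 1: iterate the logistic map of Eq. (1) from x_0 = 0.31 with μ = 4, discard the first 100 values, and reshape the next M·N values into an M × N matrix. The zero-mean column scaling used here as "quantization" is an assumption; the paper only states that the matrix is quantized.

import numpy as np

def logistic_measurement_matrix(M, N, x0=0.31, mu=4.0, burn_in=100):
    """Build an M x N compressed-sensing measurement matrix from a logistic-map sequence."""
    total = burn_in + M * N
    seq = np.empty(total)
    x = x0
    for i in range(total):
        x = mu * x * (1.0 - x)              # logistic map of Eq. (1)
        seq[i] = x
    phi = seq[burn_in:].reshape(M, N)
    phi = (phi - 0.5) * 2.0 / np.sqrt(M)    # assumed normalization: zero mean, CS-style scaling
    return phi

phi = logistic_measurement_matrix(M=154, N=256)   # about 0.6 compression for a 256-wide image
print(phi.shape, round(float(phi.mean()), 3))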

3.2 Analytical Method

In this paper, the performance of the reconstructed image is analyzed in terms of the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM). PSNR and SSIM are defined as follows:

PSNR = 10 \lg \frac{(2^l - 1)^2}{MSE} \qquad (5)

MSE = \frac{1}{N}\sum_{i=1}^{N} (Y_i - X_i)^2 \qquad (6)

where MSE is the mean square error and l is the bit depth of the image. In general, when the PSNR value is between 30 and 40, the human eye can no longer tell the difference, and the larger the value, the smaller the distortion after compression.

SSIM(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)} \qquad (7)

where, SSIM is between 0 and 1. When two signals are completely identical, the structural similarity is 1.
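A direct Python transcription of Eqs. (5)-(7). The single-window (global) form of SSIM and the standard constants c1 = (0.01L)^2, c2 = (0.03L)^2 are assumptions for brevity; practical SSIM is usually averaged over sliding windows.

import numpy as np

def psnr(x, y, bit_depth=8):
    """Eqs. (5)-(6): peak signal-to-noise ratio between images x and y."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return 10.0 * np.log10((2 ** bit_depth - 1) ** 2 / mse)

def ssim_global(x, y, bit_depth=8):
    """Eq. (7) evaluated once over the whole image (no sliding window)."""
    L = 2 ** bit_depth - 1
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

# Toy example: compare an image with a noisy copy of itself
rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)
print(round(psnr(img, noisy), 2), round(ssim_global(img, noisy), 4))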


4 Experimental Simulation and Analysis

The test data is the 256 × 256 grayscale Lena image, and the experimental simulation is carried out with sparsity K = 50 and compression ratio 0.86. The simulation results are as follows.

Fig. 3. Experimental simulation diagram: (a) original image; (b) measurement matrix; (c) image reconstruction

The simulated measurement matrix can be seen in Fig. 3. In the whole experiment, the compression ratio of the original image was 0.86 and the OMP algorithm was used for information reconstruction. The reconstructed image is consistent with the original image; the recovered image has a PSNR of 31.9054 and an SSIM of 0.7287.

Fig. 4. Influence of sparsity K on image reconstruction (PSNR of the reconstructed image versus K)


To verify the performance of the proposed measurement matrix and the effect of changing the sparsity on information reconstruction, the image sparsity K was varied in multiple simulations with the compression ratio of the Lena image fixed at 0.6; the results are shown in Fig. 4. As can be seen from Fig. 4, with a constant compression ratio of 0.6, the reconstruction produced by the chaos-generated measurement matrix improves as the image sparsity K increases. Table 1 shows the image reconstruction results for the Lena image under different compression ratios at a sparsity of K = 50. As can be seen from Table 1, the larger the compression ratio, the higher the PSNR of the reconstructed image and the larger the image similarity. This proves that the information can be reconstructed and recovered at different compression ratios, and that the measurement matrix generated by chaos can compress and reconstruct the information well.

Table 1. Reconstruction effect of the Lena image under different compression ratios (the original and reconstructed images shown in the printed table are omitted here)

Compression ratio   PSNR      SSIM
4:5                 31.4587   0.7155
3:4                 31.0045   0.7070
2:3                 29.8493   0.6648
1:2                 27.0394   0.5585

The above demonstrates the influence of sparsity and compression ratio on image reconstruction under the measurement matrix generated by the chaotic system. The two groups of simulations based on this chaos-generated measurement matrix can be summarized as follows: when the sparsity K is constant, the larger the compression ratio, the better the image reconstruction; when the compression ratio remains unchanged, the sparser the signal, the better the image reconstruction.


In this paper, a random measurement matrix, a Gaussian measurement matrix and the Logistic measurement matrix are used to reconstruct the Lena image. The sparsity in this experiment is 50, the compression ratio is 0.5, and OMP is the reconstruction algorithm in every case.

Table 2. Image reconstruction experiments with different measurement matrices

Measurement matrix   PSNR      MSE      SSIM
Rand                 26.4417   0.0943   0.5408
Gaussian             26.7908   0.0892   0.5602
Logistic             27.0394   0.0843   0.5585

It can be seen from Table 2 that, with the same parameters, the measurement matrix generated by chaos achieves a higher PSNR and a lower MSE than both the random and the Gaussian measurement matrices, and its structural similarity coefficient is higher than that of the random measurement matrix.

5 Conclusion

In this paper, a compressive-sensing measurement matrix based on a chaotic system is proposed to improve the stability and practicability of the measurement matrix. Simulation experiments are carried out on the proposed matrix, and the influence of each parameter on the reconstructed image is analyzed. It is verified that the proposed measurement matrix can reconstruct the original image information well. Finally, compared with the random measurement matrix and the Gaussian measurement matrix, the reconstructed image PSNR is better under the same parameters.

Acknowledgements. This work was supported in part by Natural Science Foundation of China (No. 61571181), Heilongjiang Province Postdoctoral Science Foundation under Grant (No. LBHQ14136) and Heilongjiang Natural Science Foundation (No. JJ2019LH1317).

References
1. Candès EJ, Romberg J, Tao T (2006) Robust uncertainty principles: exact signal frequency information. IEEE Trans Inf Theory 52(2):489–509
2. Candès EJ (2008) The restricted isometry property and its implications for compressed sensing. CR Math 346(9–10):589–592
3. Yang J, Liao X, Yuan X et al (2015) Compressive sensing by learning a Gaussian mixture model from measurements. IEEE Trans Image Process 24(1):106–119
4. Donoho DL (2006) Compressed sensing. IEEE Trans Inf Theory 52(4):1289–1306
5. Fang H, Zhang Q, Sui AW (2008) A method of image reconstruction based on sub-Gaussian random projection. J Comput Res Devel 45(8):210–214
6. Kuestner T, Wurslin C, Gatidis S et al (2016) MR image reconstruction using a combination of compressed sensing and partial Fourier acquisition: ESPReSSo. IEEE Trans Med Imaging 35(11):2447–2458
7. Tsaig Y, Donoho DL (2006) Extensions of compressed sensing. Sig Process 86(3):549–571
8. Dehghan H, Dansereau RM, Chan ADC (2015) Restricted isometry property on banded block Toeplitz matrices with application to multi-channel convolutive source separation. IEEE Trans Signal Process 63(21):5665–5676
9. Qiao H, Pal P (2015) Generalized nested sampling for compressing low rank Toeplitz matrices. IEEE Signal Process Lett 22(11):1844–1848
10. Cotler J, Hunter-Jones N, Liu J (2017) Chaos, complexity, and random matrices. J High Energy Phys 2017(11):48. https://doi.org/10.1007/jhep11(2017)048
11. Candes EJ, Tao T (2005) Decoding by linear programming. IEEE Trans Inf Theory 51(12):4203–4215
12. Chang LH, Wu JY (2014) An improved RIP-based performance guarantee for sparse signal recovery via orthogonal matching pursuit. IEEE Trans Inf Theory 60(9):405–408

An Underdetermined Blind Source Separation Algorithm Based on Variational Mode Decomposition Shiyu Guo, Erfu Wang(&), Jiayin Yu, Yaqin Xie, and Xiaomin Zhang Key Lab of Electronic and Communication Engineering, Heilongjiang University, No. 74 Xuefu Road, Harbin, People’s Republic of China [email protected]

Abstract. By seeking the optimal solution, the modal components and center frequencies of the variational mode functions are determined, and the separation of the mode functions is realized. This method effectively solves the mode-aliasing problem of the intrinsic mode functions in empirical mode decomposition. On this basis, the Variational Mode Decomposition algorithm is applied to the problem of underdetermined blind source separation. The decomposed variational modal components are extracted in multiple sequential extractions to convert the underdetermined problem into a positive definite one, and the Independent Component Analysis algorithm is then used to separate the source signals. Simulation results show that this method can effectively separate mixed image information.

Keywords: Variational mode decomposition · Underdetermined blind source separation · Independent component analysis · Sequential extraction · Chaotic mask

1 Introduction

In recent years, blind source separation for the determined, noise-free case has matured and is widely used in many fields; practical applications, however, often involve underdetermined system models. Existing research shows that commonly used time-frequency analysis methods include the wavelet transform, the Hilbert transform and Empirical Mode Decomposition (EMD) [1]. However, they all have disadvantages to different degrees: the wavelet transform requires the empirical selection of a wavelet basis function and decomposition levels, while the class of decomposition algorithms represented by EMD suffers from mode aliasing, end effects and other phenomena [2]. Many scholars are therefore trying to find new decomposition methods. In 2014, Konstantin Dragomiretskiy and Dominique Zosso proposed a new adaptive Variational Mode Decomposition (VMD) method [3]. The algorithm decomposes signals in a non-recursive way and has strong mathematical support; it effectively controls the mode-aliasing problem by controlling the number of variational modes obtained by decomposition [2]. Since then, the method has been developed and applied rapidly.


The VMD algorithm is widely used in mechanical fault diagnosis [4-6], signal denoising [7], pattern recognition [8, 9] and other fields. In this paper, the VMD algorithm is applied to blind signal processing and image signal processing. Using the VMD method together with sequential extraction [10], a VMDSE-FastICA algorithm is designed to achieve effective and accurate image separation and extraction.

2 Variational Mode Decomposition

In the VMD method, the variational mode functions are defined as FM-AM signals, written as

u_k(t) = A_k(t) \cos(\phi_k(t)) \qquad (1)

The essence of VMD is a constrained variational problem: seek K mode functions u_k(t) such that the sum of the estimated bandwidths of the decomposed variational mode components is minimized, while the sum of the variational mode components equals the original input signal x(t). The problem is constructed as follows [2-4]:

1. For each mode, the function u_k(t) is transformed by the Hilbert transform to obtain its unilateral frequency spectrum

\left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \qquad (2)

where \delta(t) is the Dirac function, * is the convolution operation and j^2 = -1.

2. Multiplying by e^{-j\omega_k t} modulates the spectrum of each mode to the corresponding baseband

\left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \qquad (3)

where \omega_k is the estimated center frequency of the mode.

3. The bandwidth is estimated through the squared L2 norm of the gradient of the demodulated signal, and the resulting constrained variational problem can be written as

\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_k \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \right\} \quad \text{s.t.} \quad \sum_k u_k(t) = x(t) \qquad (4)

where x(t) is the zero-mean observation signal, u_k = (u_1, u_2, \ldots, u_K) are the K variational mode components obtained by decomposition, and \omega_k = (\omega_1, \omega_2, \ldots, \omega_K) are the center frequencies of the K variational modal components.
where xðtÞ represents the observation signal with zero mean value; uk ¼ ðu1 ; u2 ; . . .; uk Þ is the k variational mode components obtained by decomposition, and wk ¼ ðw1 ; w2 ; . . .; wk Þ is the central frequency of the k variational modal components.

1212

S. Guo et al.

4. The quadratic penalty factor \alpha and the Lagrange multiplier \lambda(t) are introduced to solve the constrained problem, turning it into an unconstrained variational problem:

L(u_k, \omega_k, \lambda) = \alpha \sum_k \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 + \left\| x(t) - \sum_k u_k(t) \right\|_2^2 + \left\langle \lambda(t),\, x(t) - \sum_k u_k(t) \right\rangle \qquad (5)

5. Alternately update u_k^{n+1}, \omega_k^{n+1} and \lambda^{n+1} to find the "saddle point" of the extended Lagrangian, which is the optimal solution of the constrained variational problem expressed in Eq. (5). The iterative method is as follows:

u_k^{n+1} = \arg\min_{u_k} L\left( u_{i<k}^{n+1}, u_{i \ge k}^{n}, \omega_i^{n}, \lambda^{n} \right) \qquad (6)

\omega_k^{n+1} = \arg\min_{\omega_k} L\left( u_i^{n+1}, \omega_{i<k}^{n+1}, \omega_{i \ge k}^{n}, \lambda^{n} \right) \qquad (7)

\lambda^{n+1} = \lambda^{n} + \tau \left( x - \sum_k u_k^{n+1} \right) \qquad (8)
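The following is a minimal Python sketch of the ADMM iteration of Eqs. (6)-(8), carried out in the frequency domain on the one-sided (analytic) spectrum implied by the Hilbert step of Eq. (2). It is a simplified sketch, not the reference implementation (no signal mirroring, assumed initial center frequencies and parameter values), intended only to make the update equations concrete.

import numpy as np

def vmd(x, K=2, alpha=2000.0, tau=0.0, n_iter=500, tol=1e-7):
    """Minimal VMD sketch: ADMM updates of Eqs. (6)-(8) on the one-sided spectrum."""
    x = np.asarray(x, float)
    N = x.size
    freqs = np.fft.fftfreq(N)                     # normalized frequency axis
    X = np.fft.fft(x)
    # Hilbert step of Eq. (2): keep only non-negative frequencies (analytic spectrum)
    X_a = np.where(freqs > 0, 2.0 * X, np.where(freqs == 0, X, 0.0))
    u_hat = np.zeros((K, N), dtype=complex)
    omega = np.linspace(0.05, 0.35, K)            # initial center frequencies (assumed)
    lam = np.zeros(N, dtype=complex)
    for _ in range(n_iter):
        u_prev = u_hat.copy()
        for k in range(K):
            others = u_hat.sum(axis=0) - u_hat[k]
            # Eq. (6): Wiener-type update of mode k around its current center frequency
            u_hat[k] = (X_a - others + lam / 2.0) / (1.0 + 2.0 * alpha * (freqs - omega[k]) ** 2)
            # Eq. (7): center frequency = power-weighted mean frequency of the mode
            power = np.abs(u_hat[k]) ** 2
            omega[k] = np.sum(freqs * power) / (np.sum(power) + 1e-12)
        lam = lam + tau * (X_a - u_hat.sum(axis=0))          # Eq. (8): dual ascent
        if np.sum(np.abs(u_hat - u_prev) ** 2) <= tol * (np.sum(np.abs(u_prev) ** 2) + 1e-12):
            break
    modes = np.real(np.fft.ifft(u_hat, axis=1))   # back to the time domain
    return modes, np.sort(omega)

# Toy example: two tones at 40 and 150 cycles per record should land in separate modes
t = np.linspace(0, 1, 1024, endpoint=False)
x = np.cos(2 * np.pi * 40 * t) + 0.5 * np.cos(2 * np.pi * 150 * t)
modes, omega = vmd(x, K=2)
print(np.round(omega * 1024, 1))                  # center frequencies, roughly [40, 150]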

3 VMDSE-FastICA Algorithm

Based on the decomposition efficiency of the VMD method, this paper designs a VMDSE-FastICA algorithm. The steps are as follows:

Step 1: Build the underdetermined mixing model. Let the observation signal x(t) be a linear mixture of unknown zero-mean independent source signals s(t); the linear mixing model is

x = H s = \sum_{j=1}^{n} h_j s_j, \quad j = 1, 2, \ldots, n \qquad (9)

where x(t) = (x_1, x_2, \ldots, x_m)^T is the m-dimensional zero-mean random observation vector, s(t) = (s_1, s_2, \ldots, s_n)^T is the n-dimensional zero-mean independent source signal, H = [h_1, h_2, \ldots, h_n] is the m × n mixing matrix, and m < n. The number of observation signals is smaller than the number of source signals, which constitutes an underdetermined mathematical model [11, 12].

Step 2: Use VMD to make up the deficiency. The observed signal x(t) is decomposed into K variational modal components by the VMD algorithm, and the virtual array is completed with the multi-component complement method [10], converting the underdetermined separation model into a positive definite (determined) separation model.


Step 3: Use the FastICA algorithm to separate the source signals.
Step 4: Extract the effective component from the separated signals, then subtract this component from the observed signals to realize the reduction.
Step 5: Apply the sequential extraction algorithm to separate the source signals again.
Step 6: Repeat steps 3-5 until all source signals are separated.

The flowchart of the VMDSE-FastICA algorithm is shown in Fig. 1.

Fig. 1. The flowchart of the VMDSE-FastICA algorithm
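A minimal end-to-end sketch of Steps 1-3 on synthetic 1-D signals (image rows would be handled the same way): five sources are mixed down to three observations, each observation is decomposed into modes that serve as virtual channels, and scikit-learn's FastICA unmixes the resulting determined mixture. To keep the block self-contained, a crude FFT band split stands in for VMD; in the paper's method the VMD of Sect. 2 (for example the sketch after Eq. (8)) supplies the modes. The sources and mixing matrix are made-up illustrations.

import numpy as np
from sklearn.decomposition import FastICA

def band_split(x, K=3):
    """Stand-in decomposition: K spectral bands (substitute a VMD implementation in practice)."""
    Xf = np.fft.rfft(x)
    edges = np.linspace(0, Xf.size, K + 1).astype(int)
    modes = []
    for a, b in zip(edges[:-1], edges[1:]):
        Yf = np.zeros_like(Xf)
        Yf[a:b] = Xf[a:b]
        modes.append(np.fft.irfft(Yf, n=x.size))
    return np.array(modes)

def augment_and_separate(observations, n_sources, decompose=band_split):
    """Steps 2-3: add decomposition modes as virtual channels, then unmix with FastICA."""
    channels = list(observations)
    for obs in observations:
        modes = decompose(obs)
        order = np.argsort([np.sum(m ** 2) for m in modes])[::-1]   # most energetic modes first
        for k in order:
            if len(channels) >= n_sources:
                break
            channels.append(modes[k])
    X = np.stack(channels[:n_sources], axis=1)            # (samples, channels): now determined
    ica = FastICA(n_components=n_sources, random_state=0, max_iter=1000)
    return ica.fit_transform(X).T                          # one estimated source per row

# Step 1 (illustrative): five zero-mean sources observed through a random 3 x 5 mixing matrix
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 2048, endpoint=False)
S = np.stack([np.sin(2 * np.pi * 7 * t),
              np.sign(np.sin(2 * np.pi * 13 * t)),
              np.cos(2 * np.pi * 31 * t),
              rng.laplace(size=t.size),
              rng.uniform(-1, 1, t.size)])
X_obs = rng.normal(size=(3, 5)) @ S                        # underdetermined observation
print(augment_and_separate(X_obs, n_sources=5).shape)      # (5, 2048): five recovered components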


4 Simulation Experiment and Analysis

In this experiment, four 256 × 256 grayscale images from the standard test library were used. Together with the x component of the Chen chaotic system, they pass through a 3 × 5 mixing channel to achieve chaotic masking and underdetermined mixing. The four grayscale images (S1, S2, S3 and S4) are shown in Fig. 2 and the three mixed observation signals (X1, X2 and X3) are shown in Fig. 3.

Fig. 2. Source image information

Fig. 3. Observation signal information


VMD is applied to the observed signals x(t) with K = 5, α = 5000 and τ = 0. The obtained variational mode components are used as supplementary observation signals for the fourth and fifth channels, and the separated image information is obtained after independent component analysis, as shown in Fig. 4. As can be seen from the separation results, only two different source-signal images were separated after removing duplicates; their structural similarity coefficients were 0.9970 and 0.9990, respectively.

Fig. 4. First separation of images

The SSIM is used to evaluate the accuracy of the separated signals; the computed values are given in the "First extraction" part of Table 1. Next, the sequential reduction method is used to remove the two separated images from the observation signals, that is, to "remove duplicates", and independent component analysis is performed again on the remaining observation signals. The simulation results are shown in Fig. 5. It can be seen that after this two-step processing, the image information that could not be separated in the first pass is obtained; its structural similarity coefficients with the source images are 0.9982 and 0.9978. Finally, the separation of all source signals is completed. The computed SSIM values are given in the "Second extraction" part of Table 1.


Fig. 5. Separation results after sequential subtraction

Table 1. The SSIM between the separated images and the source images after the two separations

                                   Lena      Cameraman   Peppers   Lake
First extraction   Lena            0.9970    0.2436      0.1978    0.1693
                   Cameraman       0.9970    0.2436      0.1978    0.1693
                   Peppers         0.9970    0.2436      0.1978    0.1693
                   Lake            0.1693    0.1610      0.1684    0.9990
                   Chaotic signal  0.0082    0.0041      0.0060    −0.0047
Second extraction  Cameraman       0.2436    0.9982      0.1867    0.1610
                   Peppers         0.1978    0.1867      0.9978    0.1684
                   Chaotic signal  0.0167    0.0128      0.0126    0.0157

Thus, the VMDSE-FastICA algorithm realizes underdetermined blind extraction with five source signals and three observation channels. Further experiments show that the algorithm remains stable when the number of channels is changed.

5 Conclusion

In this paper, the VMD method is used to decompose the observed signals of the underdetermined mixing model, and the optimal variational mode components are selected to supplement the virtual array. The similarity coefficients between the separated images and the source images show that the underdetermined blind separation based on the VMD algorithm works well: the similarity coefficients are close to 1, and effective and accurate signal separation is achieved. In the next step, the application of the algorithm will be extended and a noise model will be studied, eliminating the distortion caused by additive noise and improving the robustness of the algorithm.

Acknowledgements. This work was supported in part by Natural Science Foundation of China (No. 61571181), Heilongjiang Province Postdoctoral Science Foundation under Grant (No. LBHQ14136) and Heilongjiang Natural Science Foundation (No. JJ2019LH1317).


References
1. Zhao Z, Huang Y (2017) Single-channel blind source separation algorithm based on empirical mode decomposition. Comput Appl Res 34(10):3010–3012
2. Wang T, Wang Y (2019) Gas signal adaptive compressive sensing algorithm based on VMD. J Xi'an Univ Sci Technol 39(02):366–373
3. Dragomiretskiy K, Zosso D (2014) Variational mode decomposition. IEEE Trans Signal Process 62(3):531–544
4. Zhang X, Liu X, Luan Z (2018) Fault diagnosis of rolling bearing based on VMD and FastICA. J Beijing Univ Inf Technol (Nat Sci Ed) 33(05):28–33, 87
5. Tang G, Luo G, Zhang W et al (2016) Underdetermined blind source separation with variational mode decomposition for compound roller bearing fault signals. Sensors 16(6):897
6. Huang Y, Lin J, Liu Z, Wu W (2019) A modified scale-space guiding variational mode decomposition for high-speed railway bearing fault diagnosis. J Sound Vibr 444
7. Chang Q, Gao B (2019) A 2D-VMD based medical image denoising algorithm. Autom Technol Appl 38(02):92–95
8. Bi F, Li X, Ma T (2018) Knock feature recognition method based on variational mode decomposition. Vibr Test Diagn 38(05):903–907, 1076
9. Sahani M, Dash PK (2018) Variational mode decomposition and weighted online sequential extreme learning machine for power quality event patterns recognition. Neurocomputing
10. Chen X (2018) Research on chaotic masking and blind extraction of image information. Heilongjiang University
11. Long D, Niu C, Zhou H, Huang R, Zhou W. Application of VMD algorithm in time-frequency analysis of seismic data. Prog Geophys:1–13 [2019-05-20]
12. Wu Z, Wang M (2018) Single-channel blind source separation algorithm based on variational mode decomposition. Commun Technol 51(04):774–777

A Ranking Learning Training Method Based on Singular Value Decomposition Yulong Lai and Jiaqi Zhen(&) College of Electronic Engineering, Heilongjiang University, Harbin 150080, China [email protected]

Abstract. With the development of artificial intelligence, using machine learning techniques to sort search results has become a very popular research field in recent years. In this paper, the relevance-annotated dataset and the non-annotated dataset are decomposed jointly by singular value decomposition, and the new feature set is added to the training set, so that information from the non-labeled data is introduced into the training set. The ranking models trained on the feature sets before and after this processing are compared experimentally. The experiments show that the feature set selected by SVD (singular value decomposition) helps to improve the ranking accuracy.

Keywords: Machine learning · Learning to rank · Information retrieval · SVD

1 Introduction

The core problem of a ranking model based on machine learning is how to construct a model that reflects, to the greatest degree, the relevance of a document to a query. The main process is as follows. Given a training set D of documents, each entry generally has the form (q, d, r), where r is the relevance between the document and the query, usually taking values in {0, 1, 2} (0 means irrelevant, 1 relevant, 2 very relevant), q is the query and d is the document feature set. The ranking model is learned from the training set and then evaluated with the test set to compute the relevance ranking of the documents. Current learning-to-rank work is based on algorithms and models for labeled data; the use of non-labeled data is still insufficiently discussed and researched. In view of this situation, this paper uses SVD to jointly decompose the annotated data and the non-labeled data, introduces non-labeled data information into the training set and eliminates existing noise, thereby improving the ranking performance.

2 Application of SVD in Ranking Training

Drawing on previous research [1, 2], the test set (non-labeled data) is introduced into the training process of the ranking model. The overall operation flow of the method used in this paper is shown in Fig. 1.

Fig. 1. SVD-based training process

2.1 Related Algorithms

Ranking learning algorithms mainly fall into three categories: pointwise, pairwise and listwise. Among them, listwise is currently the most researched and widely used. The listwise method directly optimizes the ordering of the entire document collection for a given query and has better sorting performance. Therefore, this paper selects the most widely recognized listwise algorithms, AdaRank [4], ListNet [5] and LambdaMART [3], for testing.

2.2 SVD Overview and Feature Extraction

Using SVD, the principal eigenvectors can be obtained implicitly and, at the same time, the dimensionality of the data can be reduced. SVD decomposes any matrix M with m rows and n columns into three matrices U, \Sigma and V^T, where U is an m × m orthogonal matrix, V^T is an n × n orthogonal matrix and \Sigma is an m × n diagonal matrix:

M = U_{m \times m} \Sigma_{m \times n} V^T_{n \times n} \qquad (2.1)

\Sigma has non-zero entries only on its diagonal; these are the singular values. Taking the first t non-zero singular values, the original matrix can be approximately restored [6] according to

M_{m \times n} \approx U_{m \times t} \Sigma_{t \times t} V^T_{t \times n} \qquad (2.2)

The algorithm for SVD feature extraction in this paper is as follows:

Input: training set D and test set T
Output: new training set Dnew and new test set Tnew

1. Combine the training set D and the test set T to obtain a new set M;
2. Apply SVD to M to obtain the matrices U, \Sigma and V^T;
3. Calculate the value of t;
4. Select the t eigenvectors in U corresponding to the t largest singular values to form the main eigenvector matrix F;
5. Split F according to the dimensions of the original training and test sets to obtain the new training set and test set.

A new training set carrying information from the non-labeled test set is obtained by the above algorithm.
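A minimal NumPy sketch of the extraction algorithm above, using a 90% energy threshold to choose t (the random 46-dimensional feature vectors below are placeholders, not the LETOR data):

import numpy as np

def svd_features(D_train, D_test, energy=0.90):
    """Steps 1-5: joint SVD of the merged feature matrices, keeping enough energy."""
    M = np.vstack([D_train, D_test])                    # 1. merge training and test sets
    U, s, Vt = np.linalg.svd(M, full_matrices=False)    # 2. SVD decomposition
    ratio = np.cumsum(s ** 2) / np.sum(s ** 2)
    t = int(np.searchsorted(ratio, energy)) + 1         # 3. smallest t reaching the threshold
    F = U[:, :t]                                        # 4. main (left singular) vector matrix
    return F[:len(D_train)], F[len(D_train):]           # 5. split back into new train / test sets

# Toy example with 46-dimensional LETOR-style feature vectors (random placeholder values)
rng = np.random.default_rng(5)
D_train, D_test = rng.normal(size=(200, 46)), rng.normal(size=(80, 46))
new_train, new_test = svd_features(D_train, D_test)
print(new_train.shape, new_test.shape)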

3 Experiments

LETOR 4.0 is a package of benchmark data sets for research on learning to rank. It uses the Gov2 web page collection and two query sets from the Million Query track of TREC 2007 and TREC 2008; we chose MQ2008. We use the open-source Java package RankLib to train on the datasets that have been processed by SVD. In engineering practice, feature extraction keeps 90% of the energy information in the matrix, which for this data means keeping the first 24 singular values after decomposition, so we take the corresponding 24 feature dimensions. Feeding in the data, we obtain the evaluation indicators for the unprocessed data (old) and for the processed data (new). The evaluation criterion is NDCG@K (Normalized Discounted Cumulative Gain). NDCG@K only cares whether the first K ranks are correct, and we chose K = 10, because studies have found that people usually only care about the top 10 search results. The NDCG@10 results are shown in Fig. 2.

Fig. 2. NDCG@10 comparative results for LambdaMART, AdaRank and ListNet (old vs. new feature sets)
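For concreteness, a small Python sketch of the NDCG@10 metric used above. The graded labels in the example are made up, and the gain/discount convention (2^rel − 1 with a log2 discount) is the common LETOR variant assumed here; RankLib computes the score internally.

import numpy as np

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k ranked documents."""
    rel = np.asarray(relevances, dtype=float)[:k]
    return np.sum((2 ** rel - 1) / np.log2(np.arange(2, rel.size + 2)))

def ndcg_at_k(relevances, k=10):
    """NDCG@k: DCG of the given ranking divided by the DCG of the ideal ranking."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Toy example: graded relevance labels {0, 1, 2} in the order the model ranked the documents
ranked_labels = [2, 1, 0, 2, 0, 1, 0, 0, 1, 0]
print(round(ndcg_at_k(ranked_labels, k=10), 4))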

4 Conclusion

In this paper, we construct new training and test data by performing principal feature extraction on the data set with SVD and adding non-labeled data information to the training set. The experiments show that the data processed in this way yield better ranking than the original data set, especially for LambdaMART. We also found that, among the three most popular listwise ranking algorithms, LambdaMART ranks best, ListNet second and AdaRank worst. In addition, we compressed the original 46-dimensional features into 24 dimensions, shortening the model training time.


Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grant 61501176, Natural Science Foundation of Heilongjiang Province F2018025, University Nursing Program for Young Scholars with Creative Talents in Heilongjiang Province UNPYSCT-2016017, and the postdoctoral scientific research developmental fund of Heilongjiang Province in 2017 LBH-Q17149.

References
1. Duh K, Kirchhoff K (2008) Learning to rank with partially-labeled data. In: ACM Special Interest Group on Information Retrieval (SIGIR 2008), Singapore, pp 251–258
2. Lin Y, Lin H, Su W (2009) A RankBoost-based ranking learning method using singular value decomposition
3. Burges JC (2010) From RankNet to LambdaRank to LambdaMART: an overview. TR-2010-82, Microsoft Research
4. Xu J, Li H (2007) AdaRank: a boosting algorithm for information retrieval. In: Proceedings of the 30th annual international ACM SIGIR conference on research and development in information retrieval, Amsterdam
5. Cao Z et al (2007) Learning to rank: from pairwise approach to listwise approach. In: International conference on machine learning. ACM
6. Ji JZ, Tian LG, He YX (2019) Improvement and implementation of SVD algorithm based on mass data. SAMSON 27:1–5

Research on Temperature Characteristics of IoT Chip Hardware Trojan Based on FPGA Junru An1, Zhiwei Cui1, Zhenhui Zhang1(&), Liji Wu2(&), and Xiangmin Zhang2 1

Heilongjiang University, 74 Xuefu Road, Harbin 150080, Heilongjiang Province, China {ajrzero,zhangzhenhui01}@163.com, [email protected] 2 Tsinghua University, Haidian District, Beijing, China {lijiwu,zhxm}@mail.tsinghua.edu.cn

Abstract. With the development of science and technology, people have gradually gained a deep understanding of IoT technology, but because IoT technology has been integrated into daily life so quickly in recent years, plenty of security risks have been brought about. A Trojan virus at the software level can be detected and cleared by software, but a Hardware Trojan (HT) on a chip is difficult to detect and can cause trouble for users, such as increased chip power consumption or the risk of exposing the user's privacy. In this paper, a Hardware Trojan is tested at different temperatures by means of a ring oscillator network (RON), with the whole model implemented on an FPGA. The results show that the detection capability of the RON for the HT changes correspondingly at different temperatures.

Keywords: IoT · Hardware Trojan (HT) · FPGA · Ring oscillator network (RON) · Different temperatures

1 Background

With the development of the integrated circuit (IC) industry, the technology for producing each part of a chip has become extremely complicated, and a single company cannot complete the whole manufacturing process; the various stages of manufacturing are therefore independent of each other, which leaves a large gap for a hacker to implant an HT and undoubtedly brings security risks to users. In 2007, the concept of the HT was first proposed by IBM [1]. HT, short for Hardware Trojan, refers to the implantation of malicious circuits or the tampering of original circuits during the entire life cycle of a hardware chip, covering R&D, manufacturing and package testing. Since then it has attracted the attention of the academic community, and corresponding detection theories have emerged. Currently, the main detection methods can be divided into physical detection [2], functional detection [3], Built-In Self Test (BIST) [4], Side Channel Analysis (SCA) [5, 6], delay-path analysis [7], etc. In the same year, Agrawal D et al. utilized the power-consumption information of the side channel to detect HT.


They input random vectors to the golden chip (a chip without HT), collected its power consumption as a standard library, and then compared the power consumption of the Trojan chip with the standard library to find out whether a Trojan is present. In 2015, Karimian N et al. proposed a method for detecting HT using a ring oscillator (RO) [8]. In this article the HT is briefly introduced first, then the circuits and algorithms are designed and experiments are carried out at different temperatures, and finally the whole experiment is summarized.

2 Preliminary Preparation

In this section, some preparations are introduced, namely the ring oscillator principle and the circuit structure.

2.1 Ring Oscillator Principle

Fig. 1. 2 types of ring oscillators

Figure 1 presents two simple three-stage ring oscillators (the number of stages must be odd). Figure 1a is composed of inverters, and Fig. 1b of NAND gates. As can be seen from the figure, one input of the second oscillator is connected to the power supply, so it is more sensitive to power-supply noise, but its area is correspondingly larger. The oscillation period of a ring oscillator can be written as

T = 2 \cdot n \cdot t_d \qquad (2.1)

where n denotes the number of stages of the RO and t_d is the delay of each stage [9]. An IC is composed of a large number of gates, and an HT is also made up of gate circuits. In current IC processes, the switching of a gate affects the supply voltage of the surrounding gate circuits, and the HT is an extra part of the IC; if it is working, it inevitably generates additional power-supply noise. If the HT is surrounded by a RO, the RO is affected by this power-supply noise and its oscillation frequency changes. This is the principle by which a RO detects an HT.

2.2 Circuit Configuration

An FPGA can implement many functions; in this experiment we designed it as a cryptographic application-specific integrated circuit (ASIC) with a detection circuit. The structure of the detection circuit is depicted in Fig. 2.

Fig. 2. Detection circuit structure

As can be seen from Fig. 2, the whole detection circuit consists of the RON, which is evenly distributed over a square region, an FSM, a decoder, a MUX, a counter and a serial port. Since the FPGA computes data in parallel, which is not convenient for reading out our final data, the FSM is used to control the decoder and the MUX. When the circuit starts to work, the FSM sends a signal to the decoder, which turns it into ENn to activate the corresponding ROn. The oscillation output OUTn of ROn is sent to the counter through the MUX. When the output data satisfy the preset requirements, they are transferred to the PC via the serial port, so we obtain the data we require and one RO counting cycle is completed. By repeating the above process n times, the count values of the n ROs are obtained.


3 Circuit Design and Data Processing Method

This section describes the design of the circuits and of the data-processing algorithms. Some of the circuits and data-processing algorithms designed in [10] are well suited to this research, so we follow some of those designs in this paper.

3.1 Circuit Design

It is well known that the Logic Elements (LEs) in different FPGAs differ. A conventional encryption algorithm does not use very many LEs and therefore cannot emulate an ASIC well, so the circuit in this design includes 6 SM3 encryption modules, 2 HT modules and 12 RO modules. In order to study the detection of a dynamic HT, the designed HT is made up of ripple-carry adders, whose working characteristics can be regarded as those of a dynamic circuit. Figure 3 shows the resources occupied by the entire circuit.

Fig. 3. Flow summary

The entire circuit was implemented on a Cyclone IV EP4CE30F23C8. As shown in the chip planner view in Fig. 4, there are 12 ROs and 2 HTs. Since this experiment focuses on the temperature characteristics of HT detection, the HTs are laid out around the ROs. As can be seen from the figure, most of the LEs in the chip are occupied; this lets the FPGA approximate an ASIC design, and the occupied LEs form a roughly circular region. To ensure that each RO can detect the surrounding power supply noise, the ROs are collectively distributed at the center of the entire chip.


Fig. 4. Layout in chip planner

Quartus II provides two very powerful features, Design Partition and LogicLock. By using these two functions flexibly, a lot of time can be saved when designing the whole circuit. It is not necessary to delete or add HTs in the code; the HT modules instantiated under these two functions can be compared directly, and with the appropriate setup, circuits with HT and circuits without HT can both be obtained.

3.2 Data Processing Method

This design uses 12 ROs, so each data sample is a vector of 12 elements, each element being the corresponding RO count. Each circuit was counted 100 times, giving 100 vectors. Observation of the data shows that each RO count is in the tens of thousands, while the count of an RO near an HT is only dozens or hundreds higher than that of an RO without an HT, so the effect is hard to see directly. We computed the average vector at 25 °C and obtained the line chart shown in Fig. 5.


Fig. 5. Average vector of 100 vectors at 25 °C. a Data output without HT; b Data output with HT

Based on the above figures, the data in the two plots are very similar, and it is impossible to judge whether the circuits contain an HT. Therefore, a special difference normalization process is employed here. The algorithm is given in Table 1.

Table 1. Difference normalization process code in MATLAB

sub  = abs(A_ave - B_ave);
add  = A_ave + B_ave;
C1   = mean(sub, 2);
C2   = mean(add, 2);
dis1 = (C2*sub)./(C1*add);
C3   = mean(dis1, 2);
dis2 = C3*ones(1, 12);
diff = dis1 - dis2;

Here A and B denote the sets of 100 count vectors (each 1×12) produced by the circuits with and without HTs; A_ave and B_ave are the 1×12 vectors obtained by averaging the corresponding elements over the 100 vectors; C1, C2 and C3 are scalars; dis1 and dis2 are 1×12 vectors; diff is the final result, also a 1×12 vector. Testing shows the advantage of this method: if the corresponding elements of the two averaged vectors differ strongly, the resulting value becomes large, whereas if the elements are similar, the value is small. Applying the difference normalization to the vectors of the circuits with and without HT at 25 °C yields the line graph shown in Fig. 6.
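For concreteness, the following is a minimal MATLAB sketch of the whole processing chain described above, from the 100 raw count vectors to the diff curve. The matrices A_raw and B_raw are hypothetical placeholders for the recorded RO counts; the synthetic numbers below only make the script runnable and do not come from the experiment.

% Minimal sketch of the averaging + difference-normalization pipeline.
% A_raw/B_raw are hypothetical 100x12 matrices of RO counts (with/without HT);
% they are filled with synthetic values purely so the script can run.
rng(0);
A_raw = 5e4 + 100*randn(100, 12);          % placeholder counts, circuit with HT
B_raw = 5e4 + 100*randn(100, 12);          % placeholder counts, circuit without HT
A_raw(:, [3 10]) = A_raw(:, [3 10]) + 80;  % mimic the small HT-induced offset

A_ave = mean(A_raw, 1);                    % 1x12 average vectors (cf. Fig. 5)
B_ave = mean(B_raw, 1);

% Difference normalization exactly as in Table 1
sub  = abs(A_ave - B_ave);
add  = A_ave + B_ave;
C1   = mean(sub, 2);
C2   = mean(add, 2);
dis1 = (C2*sub)./(C1*add);
C3   = mean(dis1, 2);
dis2 = C3*ones(1, 12);
diff = dis1 - dis2;

plot(1:12, diff, '-o'); xlabel('RO index'); ylabel('normalized difference');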


Fig. 6. Data of the normalized at 25 °C

It can be concluded from the figure that the polyline has two distinct peaks, at the No. 3 and No. 10 RO positions, which shows that between these two circuits the difference values of the No. 3 and No. 10 ROs are larger than the others. This result is consistent with the circuits we originally designed, which indicates that the HT is detected.

4 Test Results at Different Temperatures

The established test temperatures are 25 °C, 30 °C, 35 °C, 40 °C and 45 °C. Taking the 25 °C test as an example, the incubator is set to 25 °C. After the preset temperature is reached, the powered FPGA development board is placed in the incubator, because the temperature of the board differs between the power-on and power-off states. The code without HT is then downloaded to the development board and the board is heated for ten minutes. The first 100 vectors are not recorded, since the FPGA itself generates heat while working, which affects the overall temperature; recording therefore starts from the 101st vector and stops at the 200th. Next, the code with HT is downloaded to the development board, and recording again runs from the 101st to the 200th vector, which completes the data collection at 25 °C. Thereafter, the above procedure was repeated at 30 °C, 35 °C, 40 °C and 45 °C, and all test results were subjected to the difference normalization to obtain the five results in Fig. 7.


Fig. 7. The results at different temperature. a at 25 °C; b at 30 °C; c at 35 °C; d at 40 °C; e at 45 °C

5 Conclusion

Employing a RON to detect HT is a very effective method, and applying the special difference normalization to amplify the difference between two similar data sets improves the detection accuracy. Based on the five plots we obtained, it can be seen that as the temperature increases, the effectiveness of the RON for HT detection increases; however, a very high temperature also affects the development board itself, so in a high-temperature environment the test results show a certain amount of jumping.

References

1. Agrawal D, Baktir S, Kara KD et al (2007) Trojan detection using IC fingerprinting. In: 2007 IEEE symposium on security and privacy. IEEE, pp 296–310
2. Wang XX, Tehranipoor M, Plusquellic J (2008) Detecting malicious inclusions in secure hardware: challenges and solutions. In: 2008 IEEE international workshop on hardware-oriented security and trust. IEEE, pp 15–19
3. Wolff F, Papachriston C, Bhunia S et al (2008) Towards Trojan-free trusted ICs: problem analysis and detection scheme. In: 2008 design, automation and test in Europe. IEEE, pp 1362–1365
4. Chakraborty RS, Narasimham S, Bhunia S (2009) Hardware Trojan: threats and emerging solutions. In: 2009 IEEE international high level design validation and test workshop. IEEE, pp 166–171
5. Wang X, Salmani H, Tehranipoor M et al (2008) Hardware Trojan detection and isolation using current integration and localized current analysis. In: 2008 IEEE international symposium on defect and fault tolerance of VLSI systems. IEEE, pp 87–95
6. Wang LW, Luo HW (2011) A power analysis based approach to detect Trojan circuits. In: 2011 international conference on quality, reliability, risk, maintenance, and safety engineering. IEEE, pp 380–384
7. Exurville I, Zussa L, Rigaud JB et al (2015) Resilient hardware Trojans detection based on path delay measurements. In: 2015 IEEE international symposium on hardware oriented security and trust (HOST). IEEE, pp 151–156
8. Karimian N, Tehranipoor F, Rahman MT et al (2015) Genetic algorithm for hardware Trojan detection with ring oscillator network (RON). In: 2015 IEEE international symposium on technologies for homeland security (HST). IEEE, pp 1–6
9. Tehranipoor M, Salmani H, Zhang X (2014) Integrated circuit authentication: hardware Trojans and counterfeit detection. Springer International Publishing, Switzerland, pp 79–80
10. Qu K, Wu L, Zhang X (2015) A novel detection algorithm for ring oscillator network based hardware Trojan detection with tactful FPGA implementation. In: 2015 11th international conference on computational intelligence and security (CIS). IEEE, pp 299–302

Wireless Communication Intelligent Voice Height Measurement System

Danfeng Zhao and Peidong Zhuang
College of Electronics Engineering, Heilongjiang University, Harbin 150080, Heilongjiang, People’s Republic of China
[email protected], [email protected]

Abstract. The traditional mechanical and electromechanical hybrid height measuring instruments that dominate the height measurement market have some insurmountable disadvantages, such as a bulky body, a large footprint, high energy consumption and difficulty in maintaining measurement accuracy. Sophisticated large-scale height measurement equipment, on the other hand, is expensive and requires frequent maintenance, so it rarely enters ordinary homes, which makes it hard for people to obtain accurate height measurements conveniently and in real time. In recent years, electronic height measuring instruments have gradually appeared, but their level of intelligence is still low and their price is hard for ordinary families to accept. Facing this situation, this work designs a wireless communication intelligent voice height measurement system, aiming to contribute to the Internet of Everything in contemporary society. Compared with the mainstream height measuring instruments on the market, this measurement system not only has an LCD display, but its biggest highlight is WIFI wireless transmission: height measurement can be triggered remotely through mobile phone software, with the result fed back immediately. The design can also interact with a computer, uploading the height information for data analysis and storage. Another highlight is the voice broadcast function, which provides voice prompts and announces the height value, greatly improving the degree of intelligence. In addition, the system combines temperature compensation with data error correction, greatly improving the height measurement accuracy and meeting the needs of daily life.

Keywords: AT89C55WD · Temperature compensation · LCD screen display · ESP8266 WIFI · ISD4004 voice broadcast

1 Overall Design of Height Measuring Instrument

With the improvement of living standards and quality of life, people are more and more concerned about changes in their bodies. For children and adolescents in the growth and development period, height, as one of the important indicators of healthy growth and development, is particularly valued. Similarly, for girls pursuing a


healthy figure, height is something they always care about; and for the whole society, height is an important assessment standard reflecting national physical quality. Besides, the design can not only measure the height of the human body but also works well for general object height detection. This paper designs a wireless communication intelligent voice height measurement system based on a single-chip microcomputer. The design is small and light, and its height measurement is accurate and intuitive. Compared with mechanical contact height measuring instruments, it has the advantages of wireless transmission control, non-contact operation, strong adaptability to the environment, resistance to wear and corrosion, and preserved precision, while also offering data recording, uploading and interconnection, which traditional height measurement does not have. It can show its unique performance advantages when handling mass measurements such as a large-scale height census. Moreover, the product is low-priced, suitable for large-scale mass production and not easily damaged, so it is very likely to enter ordinary families. As a necessary product for home height measurement or other height inspection, it is designed to meet modern people's needs for an intelligent home life. The height measurement error of this system is within 1 cm; temperature compensation is carried out by a DS18B20 temperature measurement module; measurement results are displayed on an LCD screen in real time; an ISD4004 voice module is used for voice broadcast; and data upload and recording are achieved through an ESP8266 WIFI module. This work is committed to realizing an intelligent home life and meeting people's yearning for the Internet of Everything era.

2 Principle of Ultrasonic Ranging

Ultrasonic ranging is a non-contact ranging technology inspired by the way animals such as bats use ultrasound. When measuring a distance, the ultrasonic transmitter sends out ultrasonic waves and the microcontroller starts a timer at the same time. The ultrasonic waves propagate in the medium, are reflected after encountering an obstacle, and are received by the ultrasonic detector, at which point the microcontroller stops the timer. The distance between the measured object and the ultrasonic probe can then be determined from the measured time and the ultrasonic propagation velocity in the medium [1]:

D = (1/2) · c · t    (1)

where D is the measured distance (m), c is the propagation velocity of the ultrasonic wave (m/s), and t/2 is the one-way propagation time (s), t being the round-trip time recorded by the timer.


It can be seen that the accuracy of the measured distance mainly depends on the timing accuracy and the propagation speed. The timing precision is determined by the MCU timer: the timed interval is the product of the machine cycle and the count value. This design uses an 11.0592 MHz crystal oscillator, so one machine cycle is 12/11.0592 MHz ≈ 1.09 µs and the timing resolution is about 1 µs, which is entirely sufficient for height measurement. The velocity of the ultrasonic wave is not fixed; it is affected by air temperature, density and gas composition. If the ambient humidity varies randomly between 10 and 90% RH, at 20 °C it influences the speed by about 0.15%, which leads to an error of roughly 0.2 cm over a 30 m range. Therefore, under normal conditions no humidity correction is needed. The sound speed is

c = sqrt(γ·R·T/M) = c_0 · sqrt(1 + t/273)    (2)

where γ is the ratio of the air's heat capacity at constant pressure to its heat capacity at constant volume, equal to 1.40 for air; R is the universal gas constant, 8.314 J/(mol·K); T is the thermodynamic temperature of the gas, related to the Celsius temperature t by T = 273 K + t; c_0 is the acoustic velocity at 0 °C, 331.4 m/s; and M is the relative molecular mass of the gas, 28.8 × 10⁻³ kg/mol for air. The Taylor series expansion of Eq. (2) is:

c(t) = Σ_{n=0}^{∞} c^(n)(0)/n! · t^n = c(0) + c'(0)·t + (1/2!)·c''(0)·t² + … + (1/n!)·c^(n)(0)·t^n + (1/(n+1)!)·c^(n+1)(δ)·t^(n+1)    (3)

where 0 < δ < 1. Omitting the higher-order terms, the ultrasonic velocity can be approximated as:

c(t) ≈ 331.5 + 0.607·t (m/s)    (4)

From this expression it can be seen that the ultrasonic velocity changes by about 0.607 m/s per 1 °C of temperature change. When the ambient temperature varies between 0 and 40 °C, the sound velocity error can reach 6.8% at most. The above formulas show that temperature has the largest effect on ultrasonic propagation in air, and the relationship between temperature and wave velocity can be calculated from the expression: the higher the temperature, the faster the propagation speed, and there is a large difference in propagation speed between different temperatures. Therefore, temperature compensation is the most effective and necessary measure when high-precision measurement is needed [2].


In practical application, the measured distance is obtained from Eqs. (1) and (2) as:

D = (1/2)·c·t_1 = (1/2)·t_1·c_0·sqrt((273 + t_2)/273) = (1/2)·331.4·t_1·sqrt(1 + t_2/273)    (5)–(7)

where t_1 is the counting time of the MCU (s) and t_2 is the temperature measured by the temperature measurement module (°C).
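As an illustration of Eqs. (5)–(7), the sketch below is a minimal MATLAB implementation of the temperature-compensated distance calculation; the echo time and temperature in the usage comment are made-up inputs, not measurements from this paper.

% Minimal sketch of the temperature-compensated ultrasonic distance, Eqs. (5)-(7).
% t1: round-trip echo time measured by the MCU timer (s)
% t2: air temperature from the DS18B20 (deg C)
function D = ultrasonic_distance(t1, t2)
    c0 = 331.4;                     % sound speed at 0 deg C (m/s)
    c  = c0 * sqrt(1 + t2/273);     % temperature-corrected sound speed
    D  = 0.5 * c * t1;              % one-way distance (m)
end
% Example call with hypothetical values: ultrasonic_distance(6e-3, 25) is about 1.04 m.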

2.1 Frequency Characteristics

When an ultrasonic wave propagates in air, the sound intensity gradually decreases as the propagation distance increases. The main cause is the loss of energy due to the spreading of the sound itself, reflection, scattering and other factors during propagation. As the propagation distance increases, the signal weakens exponentially. It should be noted that the attenuation during ultrasonic propagation (including beam diffusion and other factors) is related to the frequency: the higher the frequency, the faster the attenuation and the shorter the propagation range.

2.2 Ultrasonic Detection Error

The ultrasonic transducer receives the ultrasonic signal and transforms it into an electrical signal, which is then amplified, rectified and filtered. The ultrasonic signal is judged to have reached the receiving node, and the timer is stopped, when the voltage exceeds a preset threshold. Ultrasonic amplification and detection inevitably introduce a delay, and the transducer needs time to start vibrating after receiving the sound wave. It takes about 112.3 µs from the start of vibration to the voltage level jump output by the threshold comparator, which causes a ranging error of about 3.8 cm, and this time interval may vary with the measured distance.

2.3 Least Square Fitting

To correct the ranging error caused by other parameters affecting the ultrasonic propagation speed, the least squares regression method is applied to the temperature-corrected distance measurements.


The mathematical model of the least squares regression method is:

D_n(x) = a_n·x^n + a_{n-1}·x^{n-1} + … + a_1·x + a_0    (8)

Define the data set {(x_i, D_n(x_i)) | i = 1, 2, …, m} to estimate the true measurement distances {(x_i, y_i) | i = 1, 2, …, m}, where x_i is the ranging value after velocity correction, y_i is the true distance, and n ≤ m. In this paper, first-order least squares regression is adopted to keep the computational complexity of the nodes low, and the coefficients a_1, a_0 are determined so as to minimize the sum of squared errors E. Taking the partial derivatives of E with respect to a_1 and a_0 and setting them to zero gives:

∂E/∂a_1 = −2·Σ_{i=1}^{m} x_i·(y_i − a_1·x_i − a_0) = 0
∂E/∂a_0 = −2·Σ_{i=1}^{m} (y_i − a_1·x_i − a_0) = 0    (9)

The result is:

a_0 = (Σ x_i² · Σ y_i − Σ x_i · Σ x_i·y_i) / (m·Σ x_i² − (Σ x_i)²)    (10)

a_1 = (m·Σ x_i·y_i − Σ x_i · Σ y_i) / (m·Σ x_i² − (Σ x_i)²)    (11)

where all sums run over i = 1, …, m. Here x_i is the mean value of the measurements at each point (measured every 1 m, with 50 sets of data per measurement). Because only estimated distances within 15 m are analyzed, m = 15. By fitting the velocity-corrected measured data, the expression of the linear regression model is:

D(x) = 0.9921·x + 0.348    (12)
The linear fitting of the measured data shows that the average error is only 3 cm after velocity correction. From the results and the coefficient a_1 < 1 of the linear regression model, the absolute ranging error grows with distance. The main reason is that the distance estimate is generally slightly small, but as the distance increases, ultrasonic detection errors and other factors increase the time delay, which to some extent compensates for the underestimate caused by the sensor hardware and other factors. Therefore, the measurement results show that the absolute error decreases with increasing distance.
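The closed-form coefficients in Eqs. (10) and (11) are simply the ordinary first-order least squares solution; the short MATLAB sketch below illustrates the fit, using made-up calibration pairs (x_i, y_i) in place of the paper's measured data.

% Minimal sketch of the first-order least squares fit of Eqs. (9)-(11).
% x: velocity-corrected range estimates, y: true distances.
% The numbers are synthetic placeholders, not the paper's measurements.
x = 1:15;                                 % mean corrected estimate at each metre
y = 0.9921*x + 0.348 + 0.02*randn(1,15);  % pretend ground truth with small noise
m = numel(x);

a1 = (m*sum(x.*y) - sum(x)*sum(y)) / (m*sum(x.^2) - sum(x)^2);            % Eq. (11)
a0 = (sum(x.^2)*sum(y) - sum(x)*sum(x.*y)) / (m*sum(x.^2) - sum(x)^2);    % Eq. (10)

p = polyfit(x, y, 1);   % built-in check: polyfit returns [a1 a0]
fprintf('a1 = %.4f, a0 = %.4f (polyfit: %.4f, %.4f)\n', a1, a0, p(1), p(2));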

2.4 Least Square Correction

The range correction model is:

D = 0.9921·d* + 0.348    (13)

where D is the corrected final distance and d* is the distance estimate after velocity correction [3].

3 Height Measuring Instrument Module Design

The core component of the height measuring instrument is the AT89C55WD minimum MCU system. The hardware includes the power circuit, reset circuit, LCD circuit, ultrasonic ranging circuit, temperature compensation circuit, voice broadcast circuit, wireless transmission circuit, serial communication circuit, buzzer circuit and function button circuit.

3.1 Overall Hardware Circuit Schematic Diagram

All hardware circuit design drawings are designed with Altium Designer [4] (Fig. 1).

Fig. 1. Hardware circuit design drawing

3.2 Ultrasonic Ranging Module

This design adopts the HC-SR04 ultrasonic module to measure the height of the human body [5] (Fig. 2).

Fig. 2. HC-SR04 ultrasonic module circuit diagram

3.3 WIFI Data Transmission Module

Connect the ESP8266 WIFI module to the MCU ranging system and power it up. The mobile phone then searches for the module's WIFI and establishes a connection. Next, log in to port 192.168.4.1 on the phone, input the WIFI name and password, and copy the device number. Then open the mobile control terminal (the APP matching the module) and input the customized device name and corresponding device number to establish WIFI wireless communication between the system and the phone. Through this WIFI module, the phone can remotely control the entire ranging system, realizing man-machine interaction for remotely operated height measurement (Fig. 3).


Fig. 3. Mobile terminal interface

4 Results Display

4.1 System Function Design Drawing

See Fig. 4.


Fig. 4. System design block diagram

4.2 Test Outcome

See Table 1.

Table 1. Measurement results of the measurement system (values in cm)

Number  A    B    C    D    Error
1       180  180  164  164   0
2       180  180  166  166   0
3       181  182  165  166  −1
4       182  182  167  167   0
5       184  184  167  167   0
6       184  184  170  170   0
7       186  186  170  170   0
8       187  186  174  174   0
9       188  188  174  174   0
10      188  188  175  175   0

A—the measured distance to the ground, B—the actual distance to the ground, C—the measured height, D—the actual height

5 Conclusion

From the measurement results, the design meets the required accuracy for both short-distance and long-distance measurement, and its error is controlled within one centimeter. The main functions meet the requirements, the overall design is complete and stable, and the product achieves the established goal. In the future, better performance and more intelligence can be achieved by adding functions such as weight, body fat and temperature measurement.


References

1. Yang X (2007) High precision ultrasonic height measurement system design. Electron World
2. Hu K (2011) Design of human height measuring instrument based on ultrasonic technology. Sensor Microsyst
3. Hu Y (2015) Ultrasonic ranging error analysis and correction research. Comput Measur Control
4. Wang J (2018) Human body mass measurement instrument based on MSP430. Innovation Pract
5. Jin Y (1999) The application of multitasking mechanism in single-chip computer system. Wuhan Univ J Nat Sci

Design of Intelligent Classification Waste Bin with Detection Technology in Fog and Haze Weather

Ailing Zhang, Peidong Zhuang, Yuehua Shi, and Danfeng Zhao
College of Electronics Engineering, Heilongjiang University, Harbin 150080, Heilongjiang, People’s Republic of China
{1806218065,1196890833,2481121503}@qq.com, [email protected]

Abstract. In recent years, concern about air pollution, especially suspended particulate matter, has gradually increased. Suspended particles are solid particles in the atmosphere with diameters of less than 100 µm, such as PM1, PM2.5 and PM10. This design takes the principle of Mie scattering as the theoretical basis for measuring particle concentration by the laser scattering method. Based on Mie scattering, the light scattering characteristics of suspended particles are analyzed theoretically; the scattered light intensity distribution curves and extinction coefficient curves of single suspended particles under different characteristic parameters are discussed, and the scattered light intensity distribution and its relation to the incident light are obtained. The design also has an automatic garbage sorting capability: when people need to throw garbage it automatically opens the cover, when the garbage bin is full it automatically raises an alarm, and it automatically distinguishes metal from non-metal and classifies the waste so as to recycle metal resources and reduce waste. Meanwhile, solar panels can be used to supply power, saving energy and protecting the environment. To further improve the intelligence of the overall design, the product is equipped with luminescent material and an LCD on the outer wall of the garbage can, which displays the current status and time of the garbage bin, the temperature and the air quality, and changes its color according to the air quality. Moreover, a wireless communication function is designed: remote human-computer interaction is realized through the mobile terminal, and the air quality information and garbage classification status are automatically uploaded to it.

Keywords: ADC0832 · Mie scattering · GP2Y0A02 · Wireless communication · GP2Y1010AU

1 Overall Design

1.1 The Overall Design of Haze Detection

At present, there are three methods for measuring suspended particles at home and abroad: the micro-oscillating balance method, the β-ray absorption method, and the weighing method. The principle of light scattering by Mie particles is the theoretical basis for


particle detection by light scattering. When a light beam is incident on particulate matter, light scattering occurs. For given particle properties, the intensity of the scattered light is linear in the mass concentration of the particles, so by measuring the scattered light intensity the mass concentration can be obtained from the conversion formula. The analysis shows that the light scattering method has a wide measuring range, wide adaptability, little influence from polarized light, and a high degree of intelligent miniaturization; the principle of continuous real-time measurement is simple and practical. In this paper, based on the Mie light scattering of suspended particles in the beam, an AT89C55WD is used to detect particles in the air. The system combines sensor technology with AT89C55WD control and A/D conversion to collect the air particle concentration: the analog quantity is converted into a digital quantity and, after being processed by the AT89C55WD, is finally displayed on the LCD screen. Practice has proved that the system makes it convenient for people to monitor the surrounding air quality in real time, which can effectively improve health and the surrounding environment. It has important practical significance: the operation is simple, the integration is high, the operation is stable, and it has high test accuracy and practical value, so the market application prospects are extensive [1].

1.2 Intelligent Waste Bin Overall Design

Due to the late start of technologies such as smart garbage disposal and the status quo of China's waste recycling network, smart waste classification is still in its infancy and urgently needs a breakthrough. Based on surveys and analysis of garbage classification trends at home and abroad, the intelligent sorting bin designed around the AT89C55WD breaks through the traditional classification method: it recovers harmful metals through metal/non-metal classification, improves the metal recovery and reuse rate, and protects the environment. The intelligent classified garbage bin transforms traditional manual sorting into an intelligent process, which can greatly reduce the manpower and material resources required for manual garbage sorting while recycling recyclable garbage to avoid waste. Besides, artificial intelligence technology is used to detect harmful metals, reducing environmental pollution, controlling environmental degradation, and helping to improve, beautify and protect the environment so that it can better adapt to human life and work needs [2].

2 Principle of Air Quality Measurement

Scattering occurs when a beam passes through an inhomogeneous medium. When the size of the scattering particles is much smaller than the wavelength of the light, the process is Rayleigh scattering. When the particle size is equal to or greater than the wavelength of the incident radiation, the intensity of the scattered light is related to the scattering angle, does not depend simply on the incident light, and


belongs to Mie scattering. The beam produced by the laser generator is passed through a focusing lens and a collimating lens to obtain an approximately ideal beam. Light scattering occurs when the light hits suspended particles. According to the Mie scattering principle, the mass concentration of the suspended particles satisfies the following relationship with the scattered light intensity [3]:

Mv = (3·λ²·v·S) / (4·π³·r²·ρ) · (I/I0) · Σ_i [I1(α, m, θ) + I2(α, m, θ)]·n_r(D_i)·ΔD_i·D_i³    (1)

where r is the distance between the particle and the light intensity detector; ρ is the relative density of the suspended particles; I is the intensity of the scattered light; I0 is the incident light intensity; λ is the wavelength of the incident light; v is the detected gas flow rate; S is the cross-sectional area of the gas flow pipe; I1 and I2 are the vertical and horizontal components of the scattered light intensity, which are related to the particle size parameter α, the particle refractive index m, and the scattering angle θ; n_r(D_i) is the size distribution function of the suspended particles; and D is the particle diameter. It can be seen from Eq. (1) that, provided the wavelength of the incident light and the particle characteristics are constant, the mass concentration of the suspended particles is linearly related to the intensity of the scattered light. For a specific structure, the wavelength of the incident light is known; by selecting an appropriate scattering angle, the mass concentration of the suspended particles can be calculated from the measured scattered light intensity.

2.1 Single Particle Scattering Intensity Distribution Characteristics

According to the basic theory of Mie scattering, the scattered light intensity and its vertical and horizontal components can be expressed through the scattering amplitude functions, defined with respect to the scattering plane determined by the incident and scattered light:

I = λ²/(8·π²·r²) · (I1 + I2) · I0    (2)

I1(α, m, θ) = |S1|²    (3)

I2(α, m, θ) = |S2|²    (4)

The scattering amplitude functions S1 and S2 are:

S1 = Σ_{n=1}^{∞} (2n+1)/(n(n+1)) · [a_n·π_n(cos θ) + b_n·τ_n(cos θ)]    (5)

S2 = Σ_{n=1}^{∞} (2n+1)/(n(n+1)) · [a_n·τ_n(cos θ) + b_n·π_n(cos θ)]    (6)


The parameters a_n and b_n are defined as:

a_n = [ψ_n(x)·ψ'_n(mx) − m·ψ_n(mx)·ψ'_n(x)] / [ξ_n(x)·ψ'_n(mx) − m·ψ_n(mx)·ξ'_n(x)]    (7)

b_n = [m·ψ_n(x)·ψ'_n(mx) − ψ_n(mx)·ψ'_n(x)] / [m·ξ_n(x)·ψ'_n(mx) − ψ_n(mx)·ξ'_n(x)]    (8)

π_n(cos θ) = P_n^(1)(cos θ) / sin θ    (9)

τ_n(cos θ) = dP_n^(1)(cos θ) / dθ    (10)

where x = π·a/λ; a_n and b_n are the Mie coefficients, which are functions related to the first-kind half-integer-order Bessel functions J_{n+1/2}(z) and the second-kind half-integer-order Hankel functions H_{n+1/2}(z); and π_n(cos θ) and τ_n(cos θ) are Legendre-type angular functions relating only to the scattering angle θ. It can be seen from Eqs. (2)–(10) that the scattered light intensity distribution of the suspended particles is related to the incident light wavelength λ, the particle size parameter α, and the particle refractive index m. To analyze the influence of the suspended particles on the scattered light intensity more clearly, the incident light intensity I0 is taken as unit intensity, and the relevant parameters are selected for a MATLAB theoretical simulation to obtain the scattered light intensity of the corresponding parameters at different scattering angles [4].
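The following is a minimal sketch of the kind of MATLAB simulation described above: it plots |S1|² and |S2|² versus scattering angle for a single sphere. It uses the standard Bohren–Huffman form of the Mie coefficients (which may differ in sign convention from Eqs. (7)–(8) as printed), and the wavelength, diameter and refractive index are illustrative placeholders, not the paper's parameters.

% Minimal Mie scattering sketch: i1 = |S1|^2 and i2 = |S2|^2 versus angle
% for one spherical particle. All numeric parameters are assumed values.
lambda = 0.65e-6;          % laser wavelength (m), assumed
d      = 2.5e-6;           % particle diameter (m), assumed
m      = 1.5;              % relative refractive index, assumed
x      = pi*d/lambda;      % size parameter
theta  = (0:0.5:180)*pi/180;
mu     = cos(theta);

N  = ceil(x + 4*x^(1/3) + 2);            % series truncation
n  = 1:N;
rb = @(z) sqrt(pi*z/2);                  % Riccati-Bessel prefactor
psi = rb(x) .* besselj(n+0.5, x);        % psi_n(x)
chi = -rb(x) .* bessely(n+0.5, x);       % chi_n(x)
xi  = psi - 1i*chi;                      % xi_n = psi_n - i*chi_n
dpsi = [sin(x), psi(1:end-1)] - n./x .* psi;         % f_n' = f_{n-1} - n*f_n/x
dxi  = [sin(x)-1i*cos(x), xi(1:end-1)] - n./x .* xi;
mx    = m*x;
psim  = rb(mx) .* besselj(n+0.5, mx);
dpsim = [sin(mx), psim(1:end-1)] - n./mx .* psim;

an = (m*psim.*dpsi - psi.*dpsim) ./ (m*psim.*dxi - xi.*dpsim);
bn = (psim.*dpsi - m*psi.*dpsim) ./ (psim.*dxi - m*xi.*dpsim);

S1 = zeros(size(theta)); S2 = S1;
for k = 1:numel(theta)
    piv = zeros(1, N); tav = zeros(1, N);            % angular functions pi_n, tau_n
    piv(1) = 1; tav(1) = mu(k);
    if N > 1, piv(2) = 3*mu(k); tav(2) = 2*mu(k)*piv(2) - 3*piv(1); end
    for nn = 3:N
        piv(nn) = ((2*nn-1)*mu(k)*piv(nn-1) - nn*piv(nn-2))/(nn-1);
        tav(nn) = nn*mu(k)*piv(nn) - (nn+1)*piv(nn-1);
    end
    w = (2*n+1)./(n.*(n+1));
    S1(k) = sum(w.*(an.*piv + bn.*tav));             % Eq. (5)
    S2(k) = sum(w.*(an.*tav + bn.*piv));             % Eq. (6)
end
semilogy(theta*180/pi, abs(S1).^2, theta*180/pi, abs(S2).^2);
xlabel('scattering angle (deg)'); ylabel('scattered intensity components');
legend('i_1 (perpendicular)', 'i_2 (parallel)');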

2.2 Effects on Scattered Light in Different Situations

The effect of the particle refractive index on the intensity of scattered light. According to Mie theory, the refractive index of the suspended particles is an important quantity in the calculation and an important factor affecting the computed scattered light intensity. For particles of the same size, when the imaginary part of the refractive index is fixed, the larger the real part, the more obvious the peak variation of the extinction curve, but the convergence speed does not change.

The effect of the wavelength of the incident light on the intensity of the scattered light. When the particle size is fixed, as the wavelength of the incident light increases, the scattered light intensity gradually shifts from being concentrated at small forward angles to slightly larger forward angles. If the wavelength is increased further, the scattered light becomes strongly concentrated at larger angles. Ideally, to obtain all the information about the particles, the scattered light intensity should be detected over the range 0° to 180°. Considering that it is difficult to measure all of it in practice, it


is necessary to detect the concentrated scattered light intensity information instead of all of it. From the standpoint of theoretical derivation, the more concentrated the scattered light distribution, the clearer the scattering effect, the stronger the collected signal, and the higher the measurement accuracy [5].

The effect of suspended particle size on the distribution of scattered light. The larger the particle size, the larger the scattered light intensity and the more it is concentrated at small forward angles. As the particle size changes, the vertical and horizontal components of the scattered light change regularly.

3 Infrared Sensor Ranging Principle

The infrared distance measuring sensor measures the distance to an obstacle using the principle of infrared reflection. When infrared rays encounter an obstacle, reflection occurs, and the strength of the reflected signal represents the distance between the sensor and the obstacle. The infrared ranging sensor has a pair of infrared transmitting and receiving diodes: the transmitting tube emits the signal, the infrared signal is reflected by the obstacle, the receiving tube receives the reflected signal, and an analog voltage is output according to the signal strength. This design uses the GP2Y0A02 sensor, which is built around the optical triangulation principle, so that the material of the measured object, the ambient temperature and the measurement time have little effect on the measurement accuracy. The output voltage of the sensor has a mapping relationship with the measured distance, so the distance can be obtained by measuring the voltage. The GP2Y0A02 uses the triangulation method, and Fig. 1 shows its principle.

Fig. 1. Triangulation principle

From the geometric relationship in the figure:

L = A·f / X    (11)


where f is the focal length of the receiving lens, A is the center distance between the two lenses, X is the distance of the spot focused on the PSD surface from the center of the receiving lens, and L is the distance to be measured. In this paper, the experimental data fitting is done by the least squares method, and the nonlinear ranging model is assumed as:

S(x) = a_0·φ_0(x) + a_1·φ_1(x) + … + a_n·φ_n(x)    (12)

According to the inner product definition, the corresponding weighted inner products are introduced, with ω(x) as the weight coefficient:

⟨φ_j, φ_k⟩ = Σ_{i=0}^{m} ω(x_i)·φ_j(x_i)·φ_k(x_i)    (13)

⟨f, φ_k⟩ = Σ_{i=0}^{m} ω(x_i)·f(x_i)·φ_k(x_i) = d_k    (14)

The sum of squared errors is:

δ = Σ_{i=0}^{m} ω(x_i)·[S(x_i) − f(x_i)]²    (15)

4 System Function Design

See Fig. 2.

Fig. 2. System design block diagram



5 Haze Detector and Smart Bin Module Design

The core of this design is the AT89C55WD microcontroller. The design mainly includes the following modules: the A/D conversion module (ADC0832) for converting the analog signal into a digital one; the dust sensor acquisition module for collecting dust in the surrounding environment; the microprocessor control module, which acts as the core controller; the display module for presenting information; the servo drive module, which sends pulse signals of different widths to the steering gear to achieve the control action; the metal detection module, which judges whether the garbage is metal according to the high or low level fed back to the AT89C55WD; the infrared ranging module, which detects whether a person appears within a certain range and whether the bin is full; and the buzzer module, which raises an alarm when the bin is full. The system detects dust particles in the air through the sensor circuit and, after processing by the AT89C55WD and A/D conversion, displays the result on the LCD screen.

5.1 The Overall Hardware Circuit Schematic

All hardware circuit design diagrams of this system are designed using Altium Designer software (Figs. 3 and 4).

Fig. 3. Hardware circuit design


Fig. 4. Mobile interface

5.2 WIFI Data Transmission Module

The ESP8266 WIFI module is connected to the AT89C55WD ranging system and powered on. The mobile phone searches for the WIFI signal of the module, establishes a connection, logs into port 192.168.4.1, enters the router WIFI name and password, and copies the device number. Then the mobile phone control APP matched with the module is opened, and by inputting the customized device name and the corresponding device number, WIFI wireless communication between the ranging system and the mobile phone is established. The control APP can be further optimized; through the WIFI module, the mobile terminal realizes remote wireless control of the entire ranging system and the human-computer interaction of remotely operated height measurement.

5.3 LJA30A3-15-Z/BX Metal Detection Module

The metal sensor has a built-in anti-interference chip and measures accurately; its working voltage is 6–36 V, output current 300 mA, response frequency 0.4 kHz, and effective detection distance 15 mm. The output mode is DC three-wire NPN: when no metal is detected it outputs a low level, and when metal is detected the output is high. To adjust the sensitivity, there is a screw-like adjuster on the back or side of the sensor; turning it clockwise increases the detection distance (sensitivity) and turning it counterclockwise decreases it.

6 Conclusion

In this paper, methods of haze detection and the influence of different conditions on the scattered light intensity are analyzed theoretically. An air quality detection system is designed and combined with an infrared-sensor-based intelligent classification garbage bin to realize the combination of detection and protection in one piece of environmental protection equipment. In practice, the product functions normally and is convenient to use. It has a good promotion effect on greening the environment and conforms to the modern social concept of green environmental protection. If popularized in the future, it will greatly increase the degree of intelligent socialization and promote a beautiful ecological environment.

References

1. Wen B (2018) Design of air quality detector based on single chip microcomputer. Innovation Appl Sci Technol
2. Lu D (2017) Design and research of an air quality detector. Smart City
3. Zhang H (2011) Design of fumigation temperature control system based on single-chip microcontroller. Procedia Eng
4. Santos JL, Antunes F, Chehab A, Cruz C (2005) A maximum power point tracker for PV systems using a high performance boost converter. Solar Energy
5. Airoldi A (2004) A chip removal facility for indium bump bonded pixel detectors. Nucl Inst Methods Phys Res

A False-Target Jamming Method for the Phase Array Multibeam Radar Network

Liu Tao, Zong Siguang, Tian Shusen, and Peng Pei
Naval University of Engineering, Jiefang Road, No. 717, Wuhan, China
[email protected]

Abstract. For false-target jamming of networked radar, the data processing flow is analyzed. On this basis, the characteristics and weak links of the data processing algorithm of the networked radar system are examined, and a false-target jamming method based on phased-array multibeam transmission is proposed for the networked radar system. The paper analyzes its mathematical model, interference performance and applicable conditions, and provides corresponding technical solutions for the development of the simulation system.

Keywords: Network radar · False target · Multibeam · Jamming

1 Introduction

There are two types of false-target interference for networked radars. The first is cluttered false targets, whose main role is to delay information processing by increasing the radar's computational load. The second is correlated false targets, whose principle is to degrade the tracking performance of the networked radar through deception jamming and generate false tracks [1, 2]. Consistency of the spoofing effect is the first problem to be solved when deceiving different radars, and the jamming signals sent by the jammers must be synchronized in time and space. This paper uses transmit adaptive beamforming technology and analyzes its mathematical model, interference performance and practical conditions. This technology can effectively interfere with different radars; at the same time, the jammer realizes cross-interference suppression by forming different beams and nulls [3].

2 Analysis of False Target Interference

The networked radar system has centralized and distributed fusion structures. From the point of view of data processing, the biggest difference between them is that the fusion center of a centralized networking system centrally processes the original target information reported by the radars of each substation, while the fusion center of a distributed networking system only deals with the track information reported by the radars of each substation [4, 5]. The data fusion processing procedures are similar. Therefore, false-target interference should be generated against each single station, mainly in the two stages of data preprocessing and data fusion processing.


During system data fusion processing, track generation includes track initiation, track maintenance and track termination. A temporary track is judged to be either a reliable track or one that needs to be canceled, in which case it is stored as clutter. These processes are implemented using associated plots, so only false-track deception jamming can be used to create high-fidelity false trajectories. The process of forming the desired nulls by beamforming is to adaptively adjust the antenna beam shape according to the environment. Controlling the amplitude weighting coefficients of the array elements is one of the more common methods for controlling the antenna beam, but it is relatively costly. Another method is to form a null in a given direction by changing only the phase of each element. The best method for transmit adaptive beamforming is the beam nulling method, which changes the phase of each transmitted signal without changing its amplitude or power. This method ensures full use of the microwave power, realizes null steering, and can form deep nulls in multiple directions, so as to achieve targeted suppression of cross-interference.

3 The Establishment of the Interference Model

With the leftmost array element as the phase reference center, the parameters of the antenna array are assumed as follows: the number of array elements is L and the element spacing is d. With the signal sources in space mutually independent and M narrowband signal sources present, the received signal of array element k is [6, 7]:

x_k(t) = Σ_{m=1}^{M} s_m(t)·e^{j(k−1)·q·d·sin θ_m} + n_k(t)    (1)

where q = 2π/λ, λ is the wavelength of the received signal, θ_m is the azimuth of signal source m relative to the array normal, and n_k(t) is the measurement noise.

In matrix form, X(t) = A·S(t) + N(t), where

X(t) = [x_1(t), x_2(t), …, x_L(t)]^T    (2)

S(t) = [s_1(t), s_2(t), …, s_M(t)]^T    (3)

A = [a(θ_1), a(θ_2), …, a(θ_M)]    (4)

a(θ_m) = [1, e^{j·q·d·sin θ_m}, …, e^{j·(L−1)·q·d·sin θ_m}]^T    (5)


The weighted sum of the observed signals from the array elements is called the array output. If the weight of array element k is w_k and W = [w_1, w_2, …, w_L]^T is the weight vector, the array output is Y(t) = W^H·X(t), and the receiving array pattern can be written as F(θ) = W^H·a(θ). The total output power of the array is E{Y(t)·Y^H(t)} = W^H·A·R_s·A^H·W + W^H·R_n·W, where the signal covariance matrix is R_s = E{S(t)·S^H(t)} and the noise covariance matrix is R_n = E{N(t)·N^H(t)}. The array performance measure is the first issue to consider when selecting the weight vector, because the purpose of beamforming is to optimize array performance. The optimal Wiener solution is closely related to the optimal weight vector under many common performance measures; however, when determining the adaptive weights under a main-lobe constraint, the noise variance performance measure that includes the measurement noise is a better choice. Studying the adaptive optimal weight vector with specified nulls makes it possible to form nulls simultaneously in the interference directions and in the specified directions [8]. Solving for the optimal weight, i.e., minimizing the total output power of the array, requires attention not only to the main-lobe constraint but also to the specified null constraints. The specified null constraint can be described as:

W^H·a_{g,i} = 0,  i = 1, …, N_g    (6)

where a_{g,i} is the steering vector of the i-th specified null direction and N_g is the number of specified nulls. The problem is then transformed into solving for the optimal weight: minimize the total output power of the array while satisfying both the main-lobe constraint and the specified null conditions. Define

C = [a_q, a_{g,1}, …, a_{g,Ng}],  b = [1, 0, …, 0]    (7)

W^H·C = b    (8)

where a_q, the first column of C, is the steering vector of the main-lobe (desired) direction.

For the above solution to exist, we need N_g ≤ N − 1. The Lagrange function is used to solve this problem and is defined as:

L(w) = 0.5·w^H·R_x·w + β·(b − w^H·C)    (9)

∇_w L(w) = R_x·w − C·β = 0    (10)

w_opt = R_x^{−1}·C·β    (11)


At the same time, the optimal weight must satisfy the specified null constraint, which gives:

β = (C^H·R_x^{−1}·C)^{−1}·b^H    (12)

w_opt = R_x^{−1}·C·(C^H·R_x^{−1}·C)^{−1}·b^H    (13)

According to the obtained transmit beam weights, the radiation pattern forms nulls simultaneously in the directions of the unknown interference and in the specified directions. In practice, the adaptive beamforming algorithm adjusts the weights using the array output information. The choice of the adaptive algorithm is the most important part of the entire process, because it determines the complexity of the beamforming implementation.
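The sketch below is a minimal MATLAB illustration of the null-constrained weight of Eq. (13) for a uniform linear array; the element count, directions and interference-to-noise ratio are assumed values chosen only for the example, not parameters taken from the paper.

% Minimal sketch of the null-constrained optimal weight, Eqs. (6)-(13).
% ULA with half-wavelength spacing; all numeric values are illustrative.
L      = 16;                         % number of array elements (assumed)
d_lam  = 0.5;                        % element spacing in wavelengths
th0    = 0;                          % main-lobe direction (deg)
thNull = [-60 -20 20];               % specified null directions (deg)

steer = @(th) exp(1j*2*pi*d_lam*(0:L-1).'*sind(th));  % steering vectors, L x numel(th)

% Covariance of interference-plus-noise (20 dB per interferer over unit noise)
Rx = eye(L);
for th = thNull
    a  = steer(th);
    Rx = Rx + 100*(a*a');
end

% Constraint matrix: unit response at th0, zero response at the null directions
C = [steer(th0), steer(thNull)];
b = [1; zeros(numel(thNull), 1)];

w = (Rx\C) / (C'*(Rx\C)) * b;        % w_opt = Rx^-1 C (C^H Rx^-1 C)^-1 b, Eq. (13)

% Plot the resulting normalized pattern
th = -90:0.1:90;
F  = w' * steer(th);                 % pattern F(theta) = w^H a(theta)
plot(th, 20*log10(abs(F)/max(abs(F))));
xlabel('angle (deg)'); ylabel('normalized pattern (dB)'); grid on;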

4 Simulation Experiment and Analysis

The following simulation experiments analyze the beamforming in the desired direction of the phased-array jamming technique and the null-steering performance towards the other radar stations. At the same time, simulation experiments analyze the effect of the false-target jamming on the radar network in forming trailing interference. Assume that the radar network contains three radars and that the jammer's transmit array is uniform with 16 elements. The element spacing is set to half a wavelength, the jamming-to-noise ratio is set to 20 dB, and the number of snapshots is 512. The transmit beamforming main-lobe direction is 0°, and the transmit beamforming effects under different null directions are analyzed.

Fig. 1. Interference zeroing direction (−60, −20, 20)

From the simulation results in Figs. 1 and 2, it can be seen that the transmit adaptive beamforming algorithm forms deep nulls simultaneously in the specified directions and in the interference directions, with null depths above 60 dB.


Fig. 2. Interference zeroing direction (−20, −10, 10)

This method can effectively set nulls, thereby suppressing cross-interference and providing targeted jamming against the deception targets. If a window that reduces the side lobes (such as a Hamming window) is used, the effect is even better (Figs. 3 and 4).

Fig. 3. False target disturbs interference emission (polar transmit patterns towards Radar1, Radar2 and Radar3)

Fig. 4. False target disturbs interference effect diagram

5 Conclusion

Simulation experiments were performed to analyze the beamforming in the desired direction of the phased-array jamming technique and the null-steering performance towards the other radar stations, as well as the effect of the false-target jamming on the radar network in forming trailing interference. The radar network contains three radars, and the jammer's transmit array is uniform with 16 elements; the element spacing is half a wavelength, the jamming-to-noise ratio is 20 dB, and the number of snapshots is 512. The transmit beamforming main-lobe direction is 0°, and the transmit beamforming effects under different null directions were analyzed.

References

1. Poirot J (2005) Application of linear statistical models to radar location techniques. IEEE AES 10(6):124–127
2. Zhao Y, Chen Y, Meng J et al (2011) A data processing method against multi-false-target deception jamming for distributed radar network. Electron Optics Control 18(3):25–30
3. Brits R, Engelbrecht AP (2008) Locating multiple optimization using particle swarm optimization. 38(5):1270–1272
4. Praveen K, Sanghamitra B, Sankar K (2007) Multi-objective particle swarm optimization with time variant inertia and acceleration coefficients. Inf Sci 177(2):503–512
5. Jiang Q (2010) Netted radar countermeasures system introduction. National Defense Industry Press, Beijing
6. Blackman S (2006) Multiple-target tracking with radar application. Artech House, Boston
7. Li N (1998) ECCM efficacy assessment in surveillance radar analysis and simulation. Int Radar 98(3):1415–1419
8. Prather D (1981) Estimation of radar range biases at multiple sites. Naval Research Laboratory 66(5):124–127

Analysis of TDOA Location Algorithm Based on Ultra-Wideband

Wenquan Li (1) and Bing Zhao (2)
1 College of Electronic Engineering, Heilongjiang University, Harbin 150080, China
2 Heilongjiang University, Harbin, China
[email protected]

Abstract. Ultra-wideband has great advantages in indoor positioning. This paper introduces the TDOA technique commonly used in positioning systems and focuses on two algorithms based on it: the Chan and Taylor algorithms are described through mathematical modeling and then simulated and compared. The localization results of the algorithms, evaluated by root mean square error, are given.

Keywords: Ultra-wideband · TDOA · Chan · Taylor

1 Introduction

With the development of information technology and the demand for spatial location information services, people's requirements on positioning accuracy are becoming higher and higher. Researchers need to develop indoor positioning systems that meet the accuracy requirements at low cost [1]. Among existing wireless communication technologies, ultra-wideband (UWB) signals have the advantages of high transmission rate, low power consumption and strong anti-interference, so when applied to a wireless positioning system they offer a positioning accuracy that other technologies cannot match, which makes UWB a reliable choice for short-range wireless positioning [2]. This paper introduces two classical position estimation algorithms based on TDOA technology, the Chan and Taylor algorithms, and compares their positioning accuracy through simulation based on TDOA.

2 TDOA Positioning Algorithm Description

The TDOA positioning algorithm performs positioning based on time differences, and is also called hyperbolic positioning [3]. Suppose there are N base stations in a two-dimensional space, the position of the target node M to be located is (x, y), and the base station positions are (x_i, y_i). Then the hyperbolic mathematical model is:


r_{i,1} = sqrt((x_i − x)² + (y_i − y)²) − sqrt((x_1 − x)² + (y_1 − y)²)    (2.1)

where r_{i,1} is the difference between the distances from the target node to the i-th and the first base station.

2.1 Based on the Chan Algorithm

The Chan algorithm is a non-recursive position estimation algorithm that solves the TDOA hyperbolic equations with a closed-form analytical solution [4]. Its main features are that when the noise follows a Gaussian distribution its positioning accuracy is high, that the accuracy can be improved by increasing the number of base stations, and that the amount of calculation is relatively small. Assume the base station positions B_i are known as (x_1, y_1), (x_2, y_2), (x_3, y_3), …, where the i-th base station position is (x_i, y_i); the position of the moving target M to be located is unknown, with coordinates (x, y). Then the distance between M and the i-th base station is r_i:

r_i = sqrt((x_i − x)² + (y_i − y)²)    (2.2)

From Eqs. (2.1) and (2.2):

r_i² = (r_{i,1} + r_1)²    (2.3)

Substituting Eq. (2.1) into Eq. (2.3) gives:

r_{i,1}² + 2·r_{i,1}·r_1 = K_i − K_1 − 2·x_{i,1}·x − 2·y_{i,1}·y    (2.4)

where K_i = x_i² + y_i² and x_{i,1} = x_i − x_1 (similarly y_{i,1} = y_i − y_1). Equation (2.4) is the key step: it eliminates the squared terms of the unknowns, leaving only linear terms, and yields a set of linear equations. For example, with three base stations (i = 1, 2, 3), we have:

r22;1 þ 2r2;1 r1 ¼ ðK 2  K 1 Þ  2x2;1 x  2y2;1 y \!endarray [ r23;1 þ 2r3;1 r1 ¼ ðK 3  K 1 Þ  2x3;1 x  2y3;1 y

ð2:5Þ

In Eq. (2.5), we must first understand that ri;1 ; K i , and xi;1 are known, and r1 , ðx; yÞ are unknown. However, ri can be obtained by the formula (2.2), so the coordinates ðx; yÞ of the moving target M can be obtained. 2.2
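A minimal numerical sketch of this linearized solution for the three-base-station case is given below (an illustrative Python snippet, not part of the original paper): the two linear equations of Eq. (2.5) express (x, y) as an affine function of r_1, which is substituted into Eq. (2.2) for i = 1 to give a quadratic in r_1. The function name and the example coordinates are hypothetical.

import numpy as np

def chan_3bs(bs, d21, d31):
    # Closed-form Chan-style TDOA solution for three base stations in 2-D.
    # bs: (3, 2) base-station coordinates, bs[0] is the reference station.
    # d21, d31: measured range differences r_{2,1} and r_{3,1}.
    x1, y1 = bs[0]
    K = np.sum(bs**2, axis=1)                        # K_i = x_i^2 + y_i^2
    A = 2.0 * np.array([[bs[1, 0] - x1, bs[1, 1] - y1],
                        [bs[2, 0] - x1, bs[2, 1] - y1]])
    b0 = np.array([K[1] - K[0] - d21**2, K[2] - K[0] - d31**2])
    b1 = np.array([-2.0 * d21, -2.0 * d31])
    p = np.linalg.solve(A, b0)                       # (x, y) = p + q * r1, from Eq. (2.5)
    q = np.linalg.solve(A, b1)
    dx, dy = p[0] - x1, p[1] - y1                    # substitute into Eq. (2.2) for i = 1
    a = q[0]**2 + q[1]**2 - 1.0
    b = 2.0 * (dx * q[0] + dy * q[1])
    c = dx**2 + dy**2
    roots = np.roots([a, b, c])                      # quadratic in r1; keep the positive root
    r1 = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    return p + q * r1

bs = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
target = np.array([40.0, 25.0])
r = np.linalg.norm(bs - target, axis=1)
print(chan_3bs(bs, r[1] - r[0], r[2] - r[0]))        # recovers approximately [40, 25]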

2.2 Taylor Series Expansion Positioning Algorithm

The Taylor series expansion algorithm is a recursive algorithm that requires an initial coordinate value for the moving station (MS). It repeatedly solves a local least squares (LS) problem for the TDOA measurement error through recursive iterations [5], and the LS solution refines the position estimate of the MS. Assume the initial coordinate values are (x_0, y_0), with x = x_0 + \Delta x and y = y_0 + \Delta y, and let \beta be a preset threshold with |\Delta x| + |\Delta y| < \beta as the stopping condition. Equation (2.1) is expanded as a Taylor series at (x_0, y_0); because the terms of order higher than one are very small, they can be ignored. The expansion can then be written in matrix form as

h_i = G_i \delta + \varepsilon    (2.6)

where \delta = [\Delta x, \Delta y]^T, \varepsilon is the residual error, and

h_i = [\, r_{2,1} - (r_2 - r_1),\ r_{3,1} - (r_3 - r_1),\ \ldots,\ r_{i,1} - (r_i - r_1) \,]^T

G_i = \begin{bmatrix} \frac{x_1 - x}{r_1} - \frac{x_2 - x}{r_2} & \frac{y_1 - y}{r_1} - \frac{y_2 - y}{r_2} \\ \frac{x_1 - x}{r_1} - \frac{x_3 - x}{r_3} & \frac{y_1 - y}{r_1} - \frac{y_3 - y}{r_3} \\ \vdots & \vdots \\ \frac{x_1 - x}{r_1} - \frac{x_i - x}{r_i} & \frac{y_1 - y}{r_1} - \frac{y_i - y}{r_i} \end{bmatrix}

with all distances evaluated at (x_0, y_0). The weighted least squares solution of Eq. (2.6) is:

\delta = (G_i^T Q^{-1} G_i)^{-1} G_i^T Q^{-1} h_i    (2.7)

where Q is the covariance matrix of the TDOA measurements. If \delta meets the threshold \beta, the stopping condition of the algorithm is reached and the position coordinates of the unknown target node can be obtained; otherwise, (x_0, y_0) is updated with \delta and the iteration is repeated.
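The iteration of Eqs. (2.6)-(2.7) can be sketched as follows (illustrative Python, not from the paper); the measurement covariance Q defaults to the identity matrix, and the stopping rule corresponds to |Δx| + |Δy| < β.

import numpy as np

def taylor_tdoa(bs, d, x0, Q=None, beta=1e-4, max_iter=20):
    # Taylor-series (Gauss-Newton) refinement of a TDOA fix, Eqs. (2.6)-(2.7).
    # bs: (N, 2) base-station coordinates, bs[0] is the reference station.
    # d : (N-1,) measured range differences r_{i,1}, i = 2..N.
    # x0: initial position estimate, e.g. taken from the Chan solution.
    x = np.asarray(x0, dtype=float)
    Q = np.eye(len(bs) - 1) if Q is None else Q      # TDOA covariance matrix
    Qi = np.linalg.inv(Q)
    for _ in range(max_iter):
        r = np.linalg.norm(bs - x, axis=1)           # distances to all base stations
        h = d - (r[1:] - r[0])                       # residual vector h_i
        G = ((bs[0] - x) / r[0])[None, :] - (bs[1:] - x) / r[1:, None]  # matrix G_i
        delta = np.linalg.solve(G.T @ Qi @ G, G.T @ Qi @ h)             # WLS step (2.7)
        x = x + delta
        if np.abs(delta).sum() < beta:               # stopping rule |dx| + |dy| < beta
            break
    return x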

3 Algorithm Analysis and Comparison

The positioning performance of the two algorithms is analysed and compared, and the analysis is verified by simulation in MATLAB. The positioning area is assumed to be a 100 × 100 two-dimensional space, and the number of reference base stations is 3, 4, 5, 6 and 7, respectively. The positioning performance is measured by the root mean square error, and each result is obtained from 1000 runs. It can be seen from Fig. 1 that for the Chan algorithm, in a Gaussian noise environment, increasing the number of base stations improves the positioning accuracy and optimizes the positioning performance. It can be seen from Fig. 2 that the performance of the Taylor positioning algorithm is hardly affected by the number of base stations; when the number of base stations exceeds 4 its positioning performance is at its best, but it remains second to the Chan algorithm.

Fig. 1. Comparison of the Chan algorithm under different numbers of base stations (RMSE/cm versus TDOA error standard deviation/cm, for 3-7 BS)

Fig. 2. Comparison of the Taylor algorithm under different numbers of base stations (RMSE/cm versus TDOA error standard deviation/cm, for 3-7 BS)

4 Conclusion

Based on TDOA technology, this paper simulates the Taylor and Chan algorithms and analyses the influence of the number of positioning base stations on the positioning accuracy of the two. It is concluded that the Chan algorithm is better than the Taylor algorithm under ideal conditions.

Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grant 61801173.


References
1. Yang B (2016) Based on ultra-wideband positioning technology research and application. University of Electronic Science and Technology
2. Segura M, Mut V, Sisterna C (2012) Ultra wideband indoor navigation system. IET Radar Sonar Navig 6(5):402–411
3. Luo P, Xiang F, Mao J et al (2014) Research on TDOA co-localization algorithm based on linear decrement weight PSO and Taylor method based on natural selection. J Comput Appl 31(4):1144–1150
4. Lu Y, Wang B, Qiu G (2015) The positioning of CHAN algorithm in LOS and NLOS environment. Comput Technol Dev:61–65
5. Li XL (2016) Research on mobile location algorithm based on UWB. Inner Mongolia University, Huhhot

Algorithm Design of Combined Gaussian Pulse

Xunchen Jia1 and Bing Zhao2

1 College of Electronic Engineering, Heilongjiang University, Harbin 150080, China
2 Heilongjiang University, Harbin, China
[email protected]

Abstract. In order to obtain better spectrum utilization under the radiation-mask constraints outside the 3.1–10.6 GHz band, this paper studies the random selection algorithm and the least mean square error criterion algorithm commonly used in designing combined Gaussian pulses. MATLAB is used to simulate the power spectra, which are compared and analysed to choose the algorithm that better meets the requirements under various radiation masks. Keywords: Ultra-wideband · Combined Gaussian pulse · Random number algorithm

1 Introduction

Research on UWB pulses has never stopped; domestic research is extensive but not yet deep enough. Nevertheless, pulse design offers a distinctive way to eliminate the mutual interference between UWB and narrowband wireless communication systems and to realize their coexistence [1]. Regarding the optimal design of UWB pulse waveforms, experts and scholars have proposed many design methods, most of which only meet the FCC radiation-mask requirements and cannot meet other requirements, lacking flexibility. This paper addresses both the radiation mask set by the FCC and the Chinese radiation mask defined by the communication industry standard YD/T 2237-2011. Two algorithms are used, the random selection algorithm and the least square error algorithm [2], to study the optimization of combined Gaussian pulse waveforms.

2 Combined Algorithm Design One of the important steps in ultra-wideband communication is the generation of pulse waveforms [3]. The commonly used pulse waveform is a narrow pulse of ns level, and its signal model can be expressed by the following formula:

f(t) = \sum_{i=-\infty}^{+\infty} a_i p(t - t_i)    (2.1)

In Eq. (2.1), a_i and t_i represent the amplitude and delay of the single-cycle pulse transmitted by the system, respectively. This paper studies Gaussian pulse signals, which are easy to generate and combine and are infinitely differentiable. A Gaussian function can be represented by the following formula:

f(t) = \pm\frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{t^2}{2\sigma^2}} = \pm\frac{\sqrt{2}}{a} e^{-\frac{2\pi t^2}{a^2}}    (2.2)

In Eq. (2.2), a is the pulse-shaping factor. Successive differentiation of the Gaussian function yields an infinite family of pulses, and the k-th order derivative can be obtained by the recursion

f^{(k)}(t) = -\frac{4\pi t}{a^2} f^{(k-1)}(t) - \frac{4\pi (k-1)}{a^2} f^{(k-2)}(t)    (2.3)

In Eq. (2.3), k is the order of the derivative and f^{(k)}(t) is the k-th order Gaussian derivative function. Changing the differentiation order of the Gaussian derivative affects its ESD, peak frequency and pulse bandwidth. Both differentiating the Gaussian pulse and changing the pulse-shaping factor can change the ESD of Gaussian pulses, and both can be used to design ultra-wideband signal waveforms.
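As an illustration (not part of the original paper), the recursion in Eq. (2.3) can be implemented directly. The sketch below uses the unnormalized Gaussian e^{-2πt²/a²}, dropping the ±√2/a amplitude factor; the function name, time axis and shaping factor in the example are assumptions for demonstration only.

import numpy as np

def gaussian_derivatives(t, a, k_max):
    # Gaussian derivative pulses f^(k)(t), k = 0..k_max, for f(t) = exp(-2*pi*t^2/a^2),
    # generated with the recursion f^(k) = -(4*pi/a^2) * (t*f^(k-1) + (k-1)*f^(k-2)).
    f = np.zeros((k_max + 1, len(t)))
    f[0] = np.exp(-2.0 * np.pi * t**2 / a**2)
    if k_max >= 1:
        f[1] = -(4.0 * np.pi * t / a**2) * f[0]
    for k in range(2, k_max + 1):
        f[k] = -(4.0 * np.pi / a**2) * (t * f[k - 1] + (k - 1) * f[k - 2])
    return f

a = 0.714e-9                        # pulse-shaping factor, 0.714 ns
t = np.linspace(-2e-9, 2e-9, 2048)  # arbitrary time axis for the example
pulses = gaussian_derivatives(t, a, 15)
psd = np.abs(np.fft.rfft(pulses[5]))**2   # e.g. PSD of the 5th derivative (arbitrary scale)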

2.1 Random Selection Algorithm

This is a classic method of designing a combined waveform that complies with a radiation mask. A linear combination of Gaussian derivatives is used, each derivative characterized by a given shaping factor; the derivatives can be regarded as independent basis functions of an N-dimensional space. The choice of the weighting coefficients of the linear combination depends on the design requirements [4]; in this paper, the FCC radiation-mask requirements are to be met. The selection of the weighting coefficients can be divided into the following steps:

(1) Select a set of basis functions.
(2) Randomly generate a set of weight coefficients and verify whether the power spectral density (PSD) of the weighted linear combination satisfies the radiation mask. If it does and it is the first set of coefficients to satisfy the condition, take it as the initial best set. If it does but a best set has already been recorded, compare the two: if the PSD of the new waveform is closer to the radiation mask than that of the recorded waveform, replace the recorded set with the new one.


(3) Repeat the second step until the distance between the power spectral density of the generated waveform and the radiation mask falls below a fixed threshold.

The first to fifteenth derivative functions of the Gaussian pulse are selected as the basis functions.
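A minimal sketch of this random search is given below (illustrative Python, not the paper's MATLAB code). It assumes the basis pulses and the radiation mask are supplied as arrays on a common normalized dB scale, and it interprets "closer to the radiation mask" as the smallest average gap below the mask among the weight sets that satisfy it, which is one possible reading of step (2).

import numpy as np

def random_selection(basis, mask_db, n_trials=5000, seed=0):
    # basis   : (K, L) array, each row one basis pulse sampled in time
    # mask_db : radiation mask in dB at the rfft frequency bins (L//2 + 1 values)
    rng = np.random.default_rng(seed)
    best_w, best_dist = None, np.inf
    for _ in range(n_trials):
        w = rng.uniform(-1.0, 1.0, basis.shape[0])       # random weight coefficients
        psd = np.abs(np.fft.rfft(w @ basis)) ** 2
        psd_db = 10.0 * np.log10(psd / psd.max() + 1e-300)  # peak normalized to 0 dB
        excess = np.max(psd_db - mask_db)                # worst-case violation of the mask
        if excess <= 0.0:                                # waveform satisfies the mask
            dist = np.mean(mask_db - psd_db)             # average gap below the mask
            if dist < best_dist:                         # closer to the mask is better
                best_w, best_dist = w, dist
    return best_w, best_dist

The basis array could be filled, for instance, with the first to fifteenth Gaussian derivative pulses discussed above.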

2.2 Least Square Error Algorithm

The random selection algorithm is just one way to set the linear combination coefficients [5]. The least square error (minimum mean square error criterion) algorithm, LSE, is a more systematic method of selecting these coefficients; it follows the principle of minimizing the following error function:

e = \int_{-\infty}^{+\infty} |R_M(t) - R_F(t)|^2 dt = \int_{-\infty}^{+\infty} \left| R_M(t) - \left[ \sum_{k=1}^{N} a_k \int_{-\infty}^{+\infty} f_k(\xi) f_k(\xi + t)\, d\xi \right] \right|^2 dt    (4)

3 Simulation Comparison Analysis

According to the above methods, two sets of data are selected for simulation experiments. In the first group, the pulse-shaping factor of the first-order derivative is 1.5 ns and the other Gaussian derivative functions have a shaping factor of 0.314 ns; in the second group, the shaping factor of the first to fifteenth derivative functions is 0.714 ns (Figs. 1 and 2).

Fig. 1. First derivative function a = 1.5 ns, the rest a = 0.314 ns (combined waveform PSD [dBm/MHz] against the FCC radiation mask, versus frequency [MHz])

Fig. 2. All a = 0.714 ns (combined waveform PSD [dBm/MHz] against the FCC radiation mask, versus frequency [MHz])

Two conclusions can be drawn from the analysis of the graphs. The first is that, for the same algorithm, changing the value of a affects the waveform. The second is that, with the pulse-shaping factor fixed, comparing the combined waveforms produced by the two algorithms shows that the minimum mean square error criterion algorithm has a defect: it only follows the principle of minimizing the absolute error between the obtained power spectral density and the radiation mask, and it does not guarantee that the PSD of the combined waveform stays below the radiation mask.

4 Conclusion

This paper introduces the basic principles of the random selection algorithm and the minimum mean square error algorithm, and uses the two algorithms to design combined waveforms from the Gaussian pulse and its derivative pulses. Simulations in MATLAB verify whether the two pulse combinations satisfy the FCC radiation-mask standard. The combined waveform generated by the random selection algorithm stays under the FCC radiation mask, whereas the minimum mean square error criterion algorithm cannot guarantee that the PSD of the resulting combined waveform is lower than the radiation mask; the random selection algorithm is therefore better.

Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grant 61801173.

References
1. Zhang X, Bao W (2000) Communication signal processing. National Defense Industry Press
2. Yue G, Ge L (2000) Overview of ultra-wideband radio. J PLA Univ Sci Technol: Nat Sci Ed
3. Xu X, Wang C (2009) Coexistence of ultra-wideband radio signals with other wireless communication systems. International Information Technology and Applications Forum
4. Lin Z, Wei P (2005) A UWB communication pulse design method for FCC radiation masking. In: Proceedings of the 2005 national conference on ultra-wideband wireless communication technology
5. Liu Y, Ye J, Qian X (2008) An ultra-wideband pulse waveform design method based on FCC radiation masking. In: 2008 Communication theory and technology new development, proceedings of the 13th national youth communication conference (Part II)

A Network Adapter for Computing Process Node in Decentralized Building Automation System

Liang Zhao1, Zexin Zhang1, Tianyi Zhao2, and Jili Zhang2

1 Key Laboratory of Intelligent Control and Optimization for Industrial Equipment of Ministry of Education, School of Control Science and Engineering, Dalian University of Technology, Dalian 116024, China
[email protected]
2 Institute of Building Energy, Dalian University of Technology, Dalian 116024, China

Abstract. Based on the STM32 processor, a network adapter is designed that realizes data communication and protocol conversion between sensors and the computing process node (CPN) in group intelligent projects. The adapter first needs to be configured so that it knows the communication parameters of the sensors and the corresponding storage locations in the CPN. After that, the adapter collects data, converts the message format, and sends the data to the CPN periodically. Results from a practical project show that the proposed adapter is reliable and stable. Keywords: Computing process node · Network adapter · Protocol conversion

1 Introduction

The operation level of most building automation systems is generally low, and they do not achieve truly automatic control and intelligent management because of the incompatibility of communication protocols [1]. To solve this problem, the Building Energy Conservation Research Center of Tsinghua University has proposed, after several years of research, a group intelligent building control and management system, which is a centerless, self-organizing and self-knowing network and control mode [2, 3]. The system is practical and shows good versatility in real applications. The CPN (Computing Process Node) is the core device of the proposed system; it integrates WiFi and Ethernet communication interfaces and can be embedded in building spaces and large electromechanical equipment. However, most of the sensors and controllers in a building space unit cannot communicate directly with the CPN because they only have an RS485 interface and support the Modbus RTU communication protocol [4]. To this end, this paper develops a CPN network adapter through which the sensors can communicate with the CPN, enabling rapid deployment of group intelligent projects.


2 System Structure

As shown in Fig. 1, the adapter works as a bridge between the sensors and the CPN. At the perception layer, sensors and instruments that comply with the standard Modbus RTU communication protocol are connected to the adapter via a wired RS485 bus. At the network layer, the adapter acts as a TCP client and initiates a communication connection via WiFi to the CPN (TCP server). Once connected, the adapter converts the collected data into the CPN message format and sends them to the CPN.

Fig. 1. Network structure of adapter and CPN

3 System Design

3.1 Communication Protocol

The network adapter is the bridge between the sensors and the CPN, and the communication protocol is the common language between the sensors, the network adapter and the CPN. The communication protocol has two message formats, the read data message and the write data message, as shown in Table 1.

Table 1. Communication protocol format (field name, length in bytes)

Write/Read request: ID (6) | Function (1) | Type (1) | Start (2) | Length (1) | CRC (2)
Positive response:  ID (6) | Function (1) | Type (1) | Start (1) | Length (1) | Data (2*length) | CRC (2)
Negative response:  ID (6) | Function (1) | Type (1) | Error (1) | CRC (2)


The network adapter ID takes up 6 bytes. The function code is either 03, for a read message, or 04, for a write message. The type code has 4 kinds: serial port configuration (1), acquisition module configuration (2), real-time data configuration (3) and adapter configuration (4).
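As an illustration, a read-request frame following the field layout reconstructed in Table 1 could be packed as below (a hypothetical Python sketch, not the authors' firmware). The CRC algorithm (CRC-16/Modbus), the big-endian order of the Start field and the low-byte-first CRC order are assumptions, since the paper does not specify them.

import struct

def crc16_modbus(data: bytes) -> int:
    # Standard CRC-16/Modbus, assumed here for the frame check field.
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def build_read_frame(adapter_id: bytes, msg_type: int, start: int, length: int) -> bytes:
    # Read-request layout of Table 1: ID(6) + Function(1) + Type(1) + Start(2) + Length(1) + CRC(2).
    assert len(adapter_id) == 6
    body = adapter_id + struct.pack(">BBHB", 0x03, msg_type, start, length)
    return body + struct.pack("<H", crc16_modbus(body))

# Example: read 4 registers of real-time data (type code 3) starting at address 0
frame = build_read_frame(b"\x01\x02\x03\x04\x05\x06", 3, 0, 4)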

3.2 Hardware Design

As shown in Fig. 2, the hardware circuit of the network adapter can be divided into a power management module, a processor module, a data acquisition module, a network transmission module and other auxiliary modules. The processor is an STM32F103C8T6 with a Cortex-M3 core, equipped with 20 KB of SRAM and 64 KB of Flash memory, and provides three serial communication interfaces. The RS485 acquisition circuit is based on the MAX13487, which integrates automatic direction control, so the programmer does not need to handle the transmit/receive switching. The WiFi communication module is an M0E10XPX, which provides transparent transmission between its TTL serial port and the MCU. In addition, three LEDs indicate the working status of the adapter: configuration, acquisition and transmission.

Fig. 2. Hardware circuit structure diagram

3.3 Software Design

Figure 3 shows the flow chart of the adapter. After initialization, the processor checks whether the adapter has already been configured. If it has, the sensor data are collected, formatted and sent to the CPN according to the configuration information; otherwise, the adapter waits to receive the configuration message from the host computer and parses the configuration.


The contents of the configuration message include the serial communication parameters, the sensor address, the data starting address, the data length and the register address in the CPN. If multiple sensors are configured, the adapter polls the sensors in the configured order and packs the data of each sensor independently.
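The polling behaviour described above can be sketched as follows (illustrative Python pseudocode of the adapter logic, not the STM32 firmware). The SensorConfig fields mirror the configuration items listed above, and the Modbus transaction is replaced by a stub.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SensorConfig:
    # One entry parsed from the configuration message (field names are illustrative).
    modbus_addr: int      # sensor address on the RS485 bus
    start_reg: int        # data starting address in the sensor
    length: int           # number of registers to read
    cpn_reg: int          # destination register address in the CPN

def poll_cycle(configs: List[SensorConfig],
               read_sensor: Callable[[int, int, int], List[int]]) -> List[tuple]:
    # One acquisition cycle: poll each configured sensor in order and return
    # (cpn_register, values) pairs ready to be packed and sent to the CPN.
    updates = []
    for cfg in configs:
        values = read_sensor(cfg.modbus_addr, cfg.start_reg, cfg.length)
        updates.append((cfg.cpn_reg, values))
    return updates

# Example with a stub in place of the real Modbus RTU transaction
fake_read = lambda addr, start, n: [addr * 100 + i for i in range(n)]
cfgs = [SensorConfig(1, 0, 2, 0x10), SensorConfig(2, 0, 4, 0x20)]
print(poll_cycle(cfgs, fake_read))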

Fig. 3. Flow chart of network adapter

4 Conclusion

In this paper, a network adapter for group intelligent projects is designed and developed to serve as a bridge between sensors and the CPN. The adapter provides protocol conversion, data acquisition, data processing and network transmission. Experiments in a decentralized control project show that its performance is stable and efficient and that the communication is reliable. At present, the uplink transmission from sensor to CPN is realized; the next stage will study the downlink communication function for PLCs and other field controllers. The proposed network adapter will find wide application as group intelligent projects are deployed in buildings.


Acknowledgements. This work is supported by the National Key Research and Development Project of China No. 2017YFC0704100 (entitled New generation intelligent building platform techniques), by the National Natural Science Foundation of China (61803067) and by the Dalian High-level Talent Innovation Support Program (Youth Technology Star) (Grant No. 2017RQ099).

References
1. Yunchuang D, Ziyan J, Shen Q (2015) A decentralized algorithm for optimal distribution in HVAC systems. Build Environ 95:21–31
2. Ziyan J, Yunchuang D (2017) A decentralized, flat-structured building automation system. Energy Procedia 122:68–73
3. Qianchuan Z, Ziyan J (2018) Insect intelligent building (I2B): a new architecture of building control systems based on internet of things (IoT). In: International conference on smart city and intelligent building, vol 890, pp 457–466
4. Liang Z, Jili Z, Ruobing L (2013) Development of an energy monitoring system for large public buildings. Energy Build 66:41–48

Model Reference Adaptive Control Application in Optical Path Scanning Control System

Lanjie Guo, Hao Wang, Wenpo Ma, and Chun Wang

Beijing Institute of Space Mechanics & Electricity, Beijing 100094, China
[email protected]

Abstract. Improving the optical path scanning range of the Fourier transform spectrometer (FTS) interferometer is necessary for obtaining fine spectra. Translating optical path scanning is used to achieve a large optical path difference (OPD). A permanent magnet linear synchronous motor (PMLSM) is used as the driver, and the position and speed are selected as the state variables to establish the state space model. This paper presents a model reference adaptive control (MRAC) algorithm in which a second-order system of the same order as the controlled object is designed in advance as the ideal model. When the system is disturbed, the control quantity is adaptively adjusted to eliminate the error between the actual system and the reference and to achieve high tracking performance. With interference added to the system, the simulation results show that there is a delay of 0.1 s when tracking the reference position, the velocity has an overshoot of 3.6%, and the velocity stability in the constant-velocity range is 99.237%, which satisfies the requirement, indicating that the control strategy is effective. Keywords: Interferometer control system · Speed stability · State space model · Model reference adaptive control

1 Introduction

For the time-modulated FTS, the moving mirror scanning mechanism and its control system in the interferometer are among the core components that determine the overall performance of the spectrometer [1]. The effective scanning range of the moving mirror, that is, the maximum OPD, determines the spectral resolution of the spectrometer. The motion of the moving mirror in the interferometer produces the change of the OPD, and the stability of the moving velocity directly affects the signal-to-noise ratio after the Fourier transform. Literature at home and abroad and engineering experience show that, to achieve satisfactory spectrometer performance, the scanning velocity stability must reach more than 99%. At present, the on-board interferometer moving mirror mechanisms and control systems working in orbit at home and abroad are represented by the GF-5 satellite [2]. Its optical path scanning adopts a swing arm, so friction effects need not be considered, and the use of a voice coil motor drive makes the system less nonlinear [3]. As the swing range is limited, a translating optical path scanning scheme is presented here to achieve low speed, high stationarity and a long scanning path.


The scanning control method mainly adopts the traditional methods of PID, pole placement and feedforward. Considering the unpredictability of the spaceborne environment, it is necessary to improve the robustness of the scanning system. This paper proposes the MRAC [4] to improve the system's ability to reject perturbations. Firstly, the translating scanning structure diagram and the OPD generation mechanism are given. Then the mathematical model and state space expression of the PMLSM are established. The Lyapunov stability theorem is applied to analyze the system stability, and the control requirements are met by designing the damping ratio and natural angular frequency of the reference model. The position, velocity and acceleration curves of the moving mirror motion are planned as reference inputs, and the effectiveness of the control strategy is verified by MATLAB numerical simulation.

2 Optical Path Scanning System Analysis

2.1 Composition of Optical Path Scanning System

The translational interferometer needs to complete a high-stationarity reciprocating motion. A PMLSM is used as the scanning actuator. The corner mirror base is directly mounted on the iron core of the linear motor; driven by the motor current, it moves back and forth along the linear guide rail, and the OPD modulation is completed by means of the roof mirrors at both ends of the guide rail. The PMLSM has the advantages of zero cogging effect, good low-speed stability and high dynamic performance. The system composition and optical path diagram are shown in Fig. 1. The combination of the reciprocating corner mirror and the reflection of the roof mirror forms a 4:1 relationship between the maximum optical path and the mechanical path. When the corner mirror moves a distance l, the OPD is expressed as

OPD = 4l    (1.1)

Fig. 1. The translating optical path scanning control system (measurement laser and laser detector, linear grating, incident and outgoing beams, roof mirrors, corner mirror, and permanent magnet synchronous linear motor)

2.2 State Space Model of the Controlled Object

The PMLSM is derived from a rotating electric machine, and its mathematical model is established in the d-q axes. In the stator coordinate system, the three-phase current is decomposed into orthogonal two-phase currents by the 3/2 transformation, and the two-phase orthogonal currents in the stator coordinate system are converted into the two-phase orthogonal currents (i_d, i_q) in the rotor coordinate system by the PARK transform. Finally, the d-q axis mathematical model of the PMLSM is obtained, and the vector control strategy i_d = 0 is adopted to obtain the voltage balance equation of the motor,

L_q \frac{di_q(t)}{dt} + R i_q(t) = u(t) - K_e \frac{dl(t)}{dt}    (2.1)

The motion equation is represented as

m \frac{d^2 l(t)}{dt^2} = K_t i_q(t) - T_L(t) - B_b \frac{dl(t)}{dt} = T_m(t) - T_L(t) - B_b \frac{dl(t)}{dt}    (2.2)

where u(t), i_q(t), L_q and R are the motor voltage, motor current, motor inductance and resistance, respectively; K_t and K_e are the force constant and back-EMF constant; m, l(t) and B_b are the load mass, moving distance and viscous friction coefficient; T_m(t) = K_t i_q(t) is the motor force and T_L(t) is the load resistance. Let v = dl(t)/dt be the load velocity and a = d^2 l(t)/dt^2 the load acceleration; then Eq. (2.2) can be rewritten as

m a = K_t i_q(t) - T_L(t) - B_b v    (2.3)

Considering that the response time of the current loop is much faster than those of the speed and position loops, the current loop is simplified to a proportional element in the theoretical analysis and the motor inductance is ignored, so the controlled plant is treated as a second-order system. Let x_1(t) = l(t) and x_2(t) = v(t) be the state variables. The state-space expression is

\dot{x}_1 = x_2
\dot{x}_2 = -\frac{K_t K_e}{R m} x_2 + \frac{K_t}{R m}\left( u - \frac{R}{K_t} T_L - \frac{R B_b}{K_t} x_2 \right)    (2.4)
y = x_1

Let x(t) = [x_1(t), x_2(t)]^T \in R^2 be the state vector and \dot{x}(t) its first-order derivative; A = \begin{bmatrix} 0 & 1 \\ 0 & -K_t K_e/(R m) \end{bmatrix} \in R^{2\times 2} is the system matrix; B = [0, K_t/(R m)]^T \in R^2 is the control vector; C = [1, 0]^T \in R^2 is the output vector; D = 0 is the direct transfer term; and f(t) = -\frac{R}{K_t} T_L - \frac{R B_b}{K_t} x_2(t) is the unknown perturbation. Therefore, Eq. (2.4) can be written in the following compact form,


\dot{x}(t) = A x(t) + B (u(t) + f(t))    (2.5)

y(t) = C^T x(t)    (2.6)

where (A, B) is controllable.

3 Model Reference Adaptive Control Stability Analysis

This paper designs an MRAC method based on the position loop to realize the adaptive adjustment of the control quantity and reduce the impact of perturbations on the system. The MRAC schematic is shown in Fig. 2.

Fig. 2. Diagram of the model reference adaptive control framework

For Eq. (2.5), a state-feedback adaptive law is designed to make the system state x(t) globally uniformly asymptotically track the reference state x_{ref}(t) \in R^2. The reference model is

\dot{x}_{ref}(t) = A_{ref} x_{ref}(t) + B_{ref} r(t)    (3.1)

where A_{ref} \in R^{2\times 2} is a Hurwitz matrix and r(t) \in R is a bounded position command. Assume that there exist ideal control gains K_r \in R and K_x \in R^{1\times 2} satisfying A_{ref} = A + B K_x and B_{ref} = B K_r, and that the unknown perturbation can be written as f(t) = \theta^T \varphi(x(t)), f(t) \in R, where \theta \in R^m is an unknown constant weight vector and \varphi(x(t)) = [\varphi_1(x), \ldots, \varphi_m(x)]^T \in R^m is a known regression vector. The control objective is to design a state-feedback adaptive law so that the system state x(t) globally asymptotically tracks the reference state x_{ref}(t), while ensuring that all signals of the closed-loop system remain bounded during tracking. Therefore, for any bounded reference r(t), the control input u must make the tracking error e(t) = x(t) - x_{ref}(t) tend to zero globally asymptotically, that is, lim_{t\to\infty} ||x(t) - x_{ref}(t)|| = 0.

Assuming that the matrix A is known, consider the gain-adjustable control input

u(t) = u_0(t) + u_a(t)    (3.2)

where u_0(t) is the nominal control law and u_a(t) is the adaptive control law, that is,

u_0(t) = K_x x(t) + K_r r(t)    (3.3)

u_a(t) = -\hat{\theta}^T(t) \varphi(x(t))    (3.4)

where \hat{\theta}(t) is the estimate of the weight vector \theta. Its update law is defined as \dot{\hat{\theta}}(t) = \Gamma \varphi(x) e^T P B, where \Gamma = \Gamma^T > 0 is the adaptive gain, and P = P^T > 0 and Q = Q^T > 0 satisfy the algebraic Lyapunov equation P A_{ref} + A_{ref}^T P = -Q. Substituting Eqs. (3.2)-(3.4) into Eq. (2.5) and letting \Delta\theta(t) = \hat{\theta}(t) - \theta, we can get

\dot{x}(t) = A x(t) + B (K_x x(t) + K_r r(t) - \hat{\theta}^T(t)\varphi(x(t))) + B \theta^T \varphi(x(t))
           = A x(t) + B (K_x x(t) + K_r r(t)) - B \Delta\theta^T(t) \varphi(x(t))    (3.5)

Subtracting Eq. (3.1) from Eq. (3.5), the error dynamics are expressed as

\dot{e}(t) = \dot{x}(t) - \dot{x}_{ref}(t) = A_{ref} (x(t) - x_{ref}(t)) - B \Delta\theta^T(t) \varphi(x(t)) = A_{ref} e(t) - B \Delta\theta^T(t) \varphi(x(t))    (3.6)

Consider the following Lyapunov function:

V(e(t), \Delta\theta) = e^T(t) P e(t) + \Delta\theta^T \Gamma^{-1} \Delta\theta    (3.7)

where P = P^T > 0. The time derivative of V(e(t), \Delta\theta), evaluated along the trajectories of (3.6), is

\dot{V} = 2 e^T(t) P \dot{e}(t) + 2 \Delta\theta^T \Gamma^{-1} \dot{\Delta\theta}
        = 2 e^T(t) P (A_{ref} e(t) - B \Delta\theta^T(t)\varphi(x(t))) + 2 \Delta\theta^T \Gamma^{-1} \Gamma \varphi(x) e^T(t) P B
        = e^T(t) P A_{ref} e(t) + e^T(t) A_{ref}^T P e(t)    (3.8)

so that \dot{V} = -e^T Q e \leq 0. Therefore, the tracking error e(t) and the parameter estimation error \Delta\theta are uniformly bounded. Because r(t) is bounded and A_{ref} is a Hurwitz matrix, x_{ref}(t) and \dot{x}_{ref}(t) are bounded; hence the system state x(t) is uniformly bounded. As u(t) is also uniformly bounded, \dot{x}(t) and \dot{e}(t) are bounded. Furthermore, the second derivative \ddot{V} = -2 e^T Q \dot{e} is bounded, so \dot{V} is uniformly continuous. Applying Barbalat's lemma [5], we obtain lim_{t\to\infty} \dot{V}(t) = 0, which further proves that lim_{t\to\infty} ||x(t) - x_{ref}(t)|| = 0.


4 Matlab Simulation and Results

The parameters of the motor are R = 3.05 Ω, L = 6.5 × 10^{-4} H, K_t = 11.8794 N/A and K_e = 7.9097 V/(m/s); then we have

\dot{x}(t) = \begin{bmatrix} 0 & 1 \\ 0 & -6.1615 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 0.5187 \end{bmatrix} (u(t) + f(t))

Let the initial state be x_1(t) = 0, x_2(t) = 0. The parametric unknown perturbation is

f(t) = \theta^T \varphi(x(t)) = 0.2314 x_1 + 0.7878 x_2 - 0.624 |x_1| x_2

The reference model is G_{ref} = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}, where the natural angular frequency is \omega_n = 8 and the damping ratio is \zeta = 0.88; then we have

A_{ref} = A + B K_x = \begin{bmatrix} 0 & 1 \\ -\omega_n^2 & -2\zeta\omega_n \end{bmatrix}

So the feedback gain is K_x = [-123.3928, -15.2670] and the feed-forward gain is K_r = 123.3928. Let the adaptive gain be \Gamma = 100 and Q = \begin{bmatrix} 5 & 2 \\ 2 & 5 \end{bmatrix}. In one scanning period, the corner mirror moves at a velocity of 0.010625 m/s for 4.8 s, followed by a sinusoidal commutation of 0.5 s, and then moves for 4.8 s at a velocity of 0.010625 m/s in the opposite direction; the whole scanning period is 11.6 s. Applying the MRAC, the displacement output, velocity output and velocity error are depicted in Figs. 3, 4 and 5, respectively. We can then draw the following conclusions.

• As shown in Fig. 3, the displacement output of the MRAC system tracks the reference r(t) with high accuracy.
• As shown in Figs. 4 and 5, the peak velocity oscillation during commutation is 0.000390093 m/s, corresponding to an overshoot of 3.6%. The ratio of the standard deviation to the average value of the velocity is 0.002195, that is, the velocity stability in the constant-velocity section is 99.78% and the relative deviation is less than 1%. This shows that the control strategy is feasible.
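A minimal forward-Euler sketch of this closed loop is given below (illustrative Python, not the authors' MATLAB program). It uses the plant matrices, reference model, gains, adaptive gain and perturbation quoted above; the position command is simplified to a constant-velocity ramp rather than the full scanning profile with sinusoidal commutation, and SciPy is used to solve the Lyapunov equation for P.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [0.0, -6.1615]])
B = np.array([0.0, 0.5187])
wn, zeta = 8.0, 0.88
Aref = np.array([[0.0, 1.0], [-wn**2, -2.0*zeta*wn]])
Bref = np.array([0.0, wn**2])
Kx = np.array([-123.3928, -15.2670])
Kr = 123.3928
Gamma = 100.0
Q = np.array([[5.0, 2.0], [2.0, 5.0]])
P = solve_continuous_lyapunov(Aref.T, -Q)          # P*Aref + Aref'*P = -Q

phi = lambda x: np.array([x[0], x[1], abs(x[0]) * x[1]])   # regressor of f(x)
theta = np.array([0.2314, 0.7878, -0.624])                 # true perturbation weights

dt, T = 1e-4, 11.6
t = np.arange(0.0, T, dt)
r = 0.010625 * t            # placeholder ramp position command (assumption)

x = np.zeros(2); xref = np.zeros(2); theta_hat = np.zeros(3)
for k in range(len(t)):
    px = phi(x)
    e = x - xref
    u = Kx @ x + Kr * r[k] - theta_hat @ px                 # control laws (3.2)-(3.4)
    f = theta @ px                                          # matched perturbation
    theta_hat = theta_hat + dt * Gamma * px * (e @ P @ B)   # adaptive update law
    x = x + dt * (A @ x + B * (u + f))                      # plant (2.5), Euler step
    xref = xref + dt * (Aref @ xref + Bref * r[k])          # reference model (3.1)
print("final tracking error:", np.linalg.norm(x - xref))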

Fig. 3. Displacement output of the MRAC system versus t (reference Dref and output Dout)

Fig. 4. Velocity output of the MRAC system versus t (reference Vref and output Vout)

Fig. 5. Velocity error of the MRAC system versus t

5 Conclusions

This paper establishes a translational interferometer control system and gives a state space model of the controlled object. The MRAC method is introduced, the Lyapunov stability theorem is employed to obtain the system stability conditions, and finally the controller parameters are obtained. The MATLAB numerical simulation results show that the control strategy is effective.

Acknowledgements. This work was supported by the National Key Research and Development Program of China (No. 2016YFB0500702).

References
1. Griffiths PR, James AH (2007) Fourier transform infrared spectrometry, 2nd edn. Wiley-Interscience, New Jersey
2. Fan B, Chen X, Li B et al (2017) Technical innovation of optical remote sensing payloads on GF-5 satellite. Infrared Laser Eng 46(1):8–14
3. Nian W, Liu ZJ, Lin Z, Kang JB (2012) Arm scanning control of Michelson interferometer. Sci Technol Eng 12(33):1617–1815
4. Dey R, Jain SK, Padhy PK (2016) Robust closed loop reference MRAC with PI compensator. IET Control Theory Appl 10(8):2378–2386
5. Sun MA (2009) Barbalat-like lemma with its application to learning control. IEEE Trans Autom Control 54(9):2222–2225

UAV Path Planning Design Based on Deep Learning

Song Chang, Ziyan Jia, Yang Yu, Weige Tao, and Xiaojie Liu

College of Electrical and Information Engineering, Jiangsu University of Technology, Changzhou, Jiangsu 213001, China
[email protected]

Abstract. UAVs are widely used in different fields of social life. With the increasing use of UAVs, the main direction of future UAV technology development is intelligence. Facing abundant information sources, large amounts of information, the interaction of various equipment systems and the stability of communication signals, a UAV cannot rely on manual operation alone, so it is important to strengthen its ability to process data. Deep learning is currently one of the hottest topics in science and technology, and more and more research is based on deep learning theory, which provides a way to realize artificial intelligence. Based on deep learning technology, this paper builds a neural network framework to study UAV path planning. Compared with traditional UAV path algorithms, the neural network model is smaller and recognition is faster. A detection and recognition method suitable for the network is proposed and applied to the design of a UAV obstacle avoidance system, enabling the UAV to recognize its surrounding environment, avoid obstacles and ensure flight safety. Keywords: UAV · Deep learning · Neural networks · Path planning

1 Introduction

In recent years, with the development of UAVs and the substantial growth of their share in the global market, UAVs are widely used in commerce, government and consumer applications. At present, in China and around the world, many fields have shown the vigorous development momentum of "UAV + industry applications" [1], and UAV development has entered a golden era. To apply UAVs better in all walks of life, it is becoming important to develop better UAV systems.

The earth has vast forest areas, and many forest fires occur every year. To prevent forest fires, UAVs can be used for early fire detection and prediction. UAVs use heat-source sensing for fire detection, but this can be disturbed by certain substances at high altitude, so the UAV has to fly at low altitude. However, there are many trees in the forest, so it is difficult to control the UAV manually. This requires the UAV to plan its own path, avoid obstacles and ensure flight safety.


At present, the algorithms used in path planning include the A* algorithm, the ant colony algorithm, the particle swarm optimization algorithm and so on [2]. Different algorithms place different loads on the UAV's MCU, which affects UAV performance. Here, the neural network can be tailored and optimized to obtain a network that can run on the MCU, realizing the research value of UAV autonomous management and path planning technology. In this paper, the characteristics of complex systems are captured by using neural networks from deep learning technology, and UAV path planning is realized with the neural network. Obstacle identification and path analysis are carried out through the convolutional, pooling and fully-connected layers of the neural network model. Compared with existing path algorithms, the proposed approach addresses UAV obstacle avoidance in scenarios with dense obstacles.

2 Scheme Design

The particle swarm optimization (PSO) algorithm [3] is used as the data provider, and a convolutional neural network (CNN) is used as the training model. Finally, the learned parameters are obtained and the expected path is planned.

2.1 Source of Data

PSO is a swarm-intelligence guided optimization search generated by cooperation and competition between particles in a group. It has the advantages of easy implementation, high precision and convenient parameter adjustment. In PSO, each particle has its own position and velocity, and a fitness value determined by the optimized function; the best position found so far and the current position are also maintained. PSO can calculate the optimal path for the UAV.

PSO initializes a group of random particles and finds the optimal solution by iteration. In each iteration, a particle updates itself by tracking two extremes: the first is the best solution found by the particle itself, called the individual extreme value; the other is the best solution found by the whole population, the global extreme value. In a d-dimensional search space, n particles form a swarm; each particle i has a d-dimensional position vector X_i and a d-dimensional velocity vector V_i. The best position found so far by particle i is the individual extreme value P_i, and the best position found so far by the whole swarm is the global extreme value G_i. Once the two extremes are found, the particle updates its velocity and position in each dimension as

v_{id}(k+1) = w v_{id}(k) + c_1 r_1 (p_{id}(k) - x_{id}(k)) + c_2 r_2 (g_{id}(k) - x_{id}(k))    (1)

x_{id}(k+1) = x_{id}(k) + v_{id}(k+1)    (2)

where w is the inertia weight, c_1 and c_2 are acceleration factors, and r_1, r_2 are uniform random numbers in [0, 1].
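A generic sketch of these update rules is shown below (illustrative Python, not the paper's implementation); the cost function in the example is a hypothetical path-length objective without threat terms, used only to demonstrate Eqs. (1) and (2).

import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions X_i
    v = np.zeros((n_particles, dim))                     # velocities V_i
    p_best = x.copy()                                    # individual extremes P_i
    p_cost = np.array([cost(xi) for xi in x])
    g_best = p_best[np.argmin(p_cost)].copy()            # global extreme G_i
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w*v + c1*r1*(p_best - x) + c2*r2*(g_best - x)   # Eq. (1)
        x = np.clip(x + v, lo, hi)                           # Eq. (2)
        c = np.array([cost(xi) for xi in x])
        improved = c < p_cost
        p_best[improved], p_cost[improved] = x[improved], c[improved]
        g_best = p_best[np.argmin(p_cost)].copy()
    return g_best, p_cost.min()

# Hypothetical cost: total length of a path from a fixed start to a fixed goal
# through dim/2 intermediate 2-D waypoints.
def path_length(flat_pts, start=(0, 0), goal=(9, 9)):
    pts = np.vstack([start, flat_pts.reshape(-1, 2), goal])
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

best, val = pso_minimize(path_length, dim=6)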


The path parameters obtained by particle swarm optimization provide data support for the CNN. In this paper, the neural network is used to train on these data and obtain the expected results.

2.2 Neural Network Model

In this paper, a convolutional neural network is used as the training model for the data [4]. Convolutional neural networks consist of one or more convolutional layers with fully-connected layers at the top, together with associated weights and pooling layers. Compared with other deep learning structures, a CNN can be trained with back propagation in addition to forward propagation and uses fewer parameters to achieve higher performance.

The first layer of a CNN is the convolutional layer. Its main function is to transform the input matrix: by initializing the weights, defining the weight matrices and filtering specific information in the data, one weight combination may extract edge information, another may extract a specific feature, and another may blur unneeded data. What requires attention in the convolution calculation is the setting of the stride and the boundaries: the stride determines the completeness of the information, and boundaries often contain important feature information [5]. Faced with a large number of parameters, the pooling layer reduces the number of training parameters through weight sharing and matrix calculation without reducing the accuracy of the features. After training is completed, the fully-connected layer generates the expected data output.

In the calculation process of the neural network, corresponding variables must be set to support the calculation. Firstly, the weights and biases of each network layer are initialized, and then they are optimized by forward and backward propagation to obtain their ideal values. According to the number of neurons in each layer, the calculation process is as follows:

a_1^{(2)} = f(w_{11}^{(1)} x_1 + w_{12}^{(1)} x_2 + w_{13}^{(1)} x_3 + \cdots + w_{1n}^{(1)} x_n + b_1^{(1)})
a_2^{(2)} = f(w_{21}^{(1)} x_1 + w_{22}^{(1)} x_2 + w_{23}^{(1)} x_3 + \cdots + w_{2n}^{(1)} x_n + b_2^{(1)})
a_3^{(2)} = f(w_{31}^{(1)} x_1 + w_{32}^{(1)} x_2 + w_{33}^{(1)} x_3 + \cdots + w_{3n}^{(1)} x_n + b_3^{(1)})
...
a_m^{(2)} = f(w_{m1}^{(1)} x_1 + w_{m2}^{(1)} x_2 + w_{m3}^{(1)} x_3 + \cdots + w_{mn}^{(1)} x_n + b_m^{(1)})
h_{w,b}(x) = a_1^{(3)} = f(w_{11}^{(2)} a_1^{(2)} + w_{12}^{(2)} a_2^{(2)} + w_{13}^{(2)} a_3^{(2)} + \cdots + b_1^{(2)})    (3)

According to the number of hidden layers involved in the calculation, this can be extended to m layers and computed accordingly. After learning the weights, the loss function can be minimized, similar to a multilayer perceptron (MLP). Therefore, the parameters must be learned to extract information from the raw data, so as to help the network predict correctly. When there are multiple convolutional layers, the initial layers often extract more general features; as the network becomes deeper, the features extracted by the weight matrices become more and more complex and more and more specific to the problem at hand.

2.3 Training Parameters

The flight path of the UAV is a set of points, and connecting the training data yields a flight path [6]. The output initially obtained from the CNN is generated only by the randomly initialized network parameters and is not a safe path in the ideal sense. To obtain the ideal path, the data must be back-propagated so that the parameters of the neural network converge to their best values and the ideal output can be obtained from the input. The output of the convolutional neural network is fitted to the planned data by a regression function. The regression model establishes the relationship between the independent and dependent variables; this relationship includes two parts, a linear function of the independent variables and a residual error term. After multi-layer convolution and padding, the data are output [7]. A cross-entropy loss function in the output layer is used to calculate the prediction error. Once forward propagation is completed, back propagation updates the weights and biases to reduce the error and loss. Cross-entropy describes the difference between the actual output and the expected output: the smaller the cross-entropy, the closer the two probability distributions. The formula is

H(p, q) = -\sum_x p(x) \log q(x)

where p is the expected output, q is the actual output, and H(p, q) is the cross-entropy.
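For example, the cross-entropy between a one-hot expected output p and a softmax output q can be computed as follows (a small illustrative snippet, not from the paper):

import numpy as np

def cross_entropy(p, q, eps=1e-12):
    # H(p, q) = -sum_x p(x) * log q(x); p is the expected output, q the actual output.
    q = np.clip(q, eps, 1.0)
    return -np.sum(p * np.log(q))

p = np.array([0.0, 1.0, 0.0])        # one-hot expected output
q = np.array([0.1, 0.8, 0.1])        # softmax output of the network
print(cross_entropy(p, q))           # approximately 0.223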

3 Constraint Conditions of Flight

Given the flight environment of the UAV, ignoring the influence of terrain and the UAV's operational performance, and considering the threat areas of obstacles, the UAV range and so on, a path planning model for a single aircraft and a single target is established in the two-dimensional plane. Since the plane is a continuous set, to facilitate calculation we divide the region in which the UAV can move into a finite number of square cells, so that the infinite, continuous object of study is transformed into a finite, discrete one, which is convenient for calculation and research; this division can also ensure the accuracy of the calculation results [8].

According to the known flight environment, the threat areas of obstacles are divided. Each threat is modelled as a disc: the inside is called the absolute danger zone and the ring outside it the relative safety zone. The threat probability is defined as:

p(m) = \begin{cases} 1, & m \in \text{danger area} \\ e^{-\frac{r - R_{min}}{R_{max} - R_{min}}}, & m \in \text{relative safety area} \\ 0, & m \in \text{safety area} \end{cases}    (4)

where p(m) denotes the crash probability of the UAV at point m, R_{min} is the radius of the danger area, R_{max} is the radius of the relative safety area at a fixed height (which can be enlarged as compensation for threat range estimation), and r is the distance from the current location to the threat point. For the overlapping part of several threats, the threat probability of each threat to the UAV is calculated as p_i(m), i = 1, 2, 3, 4, ..., and the comprehensive threat probability is evaluated by the following formula:

p(m) = \begin{cases} \max(p_1(m), p_2(m), \ldots, p_n(m)), & m \in \text{a single threat area} \\ p_1(m) + p_2(m) + \cdots + p_n(m), & m \in \text{an overlapping area},\ p_1 + p_2 + \cdots + p_n < 1 \\ 1, & m \in \text{an overlapping area},\ p_1 + p_2 + \cdots + p_n > 1 \end{cases}    (5)

Fig. 1. Danger flight area model (danger area of radius R_min, relative safety area of radius R_max, and outer safety area)

The hazardous area model is shown in Fig. 1. The UAV avoids collision subject to a maximum turning angle in the horizontal direction and a maximum flight distance for avoiding obstacles.

(1) Maximum horizontal turning angle θ_max. The avoidance angle is designed at a fixed height in the two-dimensional plane. (x_j, y_j) and (x_i, y_i) are the coordinates of the UAV and of the centre of the dangerous area, respectively, and R is the maximum radius of the dangerous area. The steering angle of the UAV satisfies:

\arcsin\left(\frac{R}{\sqrt{(y_j - y_i)^2 + (x_j - x_i)^2}}\right) \leq \theta_{max} \leq 180^\circ - \arcsin\left(\frac{R}{\sqrt{(y_j - y_i)^2 + (x_j - x_i)^2}}\right)    (6)

(2) Safe distance. The distance between the UAV and an obstacle is r_max, and the maximum radius of the dangerous area satisfies R \leq r_max, so the UAV needs a sufficient safe distance for avoidance. The avoidance distance of the UAV is constrained by:

\sum_{i=0}^{n} \sqrt{(y_j - y_i)^2 + (x_j - x_i)^2} < len,

where len is the maximum avoidance distance for the UAV.
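The threat model of Eqs. (4)-(5) and the angle bound of Eq. (6) can be sketched as follows (illustrative Python; the capped-sum treatment of overlapping threats is one possible reading of Eq. (5), and the threat tuples in the example are hypothetical):

import numpy as np

def threat_probability(point, threats):
    # threats: list of (center, r_min, r_max) tuples (assumed structure).
    probs = []
    for center, r_min, r_max in threats:
        r = np.linalg.norm(np.asarray(point) - np.asarray(center))
        if r <= r_min:                       # absolute danger zone
            probs.append(1.0)
        elif r <= r_max:                     # relative safety zone, Eq. (4)
            probs.append(np.exp(-(r - r_min) / (r_max - r_min)))
        else:                                # safety zone
            probs.append(0.0)
    overlapping = [p for p in probs if p > 0.0]
    if len(overlapping) <= 1:
        return max(probs, default=0.0)
    return min(1.0, sum(overlapping))        # Eq. (5): capped sum in overlaps

def turning_angle_bounds(uav_xy, center_xy, R):
    # Admissible horizontal turning angle range of Eq. (6), in degrees.
    d = np.linalg.norm(np.asarray(uav_xy) - np.asarray(center_xy))
    half = np.degrees(np.arcsin(min(1.0, R / d)))
    return half, 180.0 - half

threats = [((5.0, 5.0), 1.0, 3.0), ((8.0, 4.0), 0.5, 2.0)]   # hypothetical threats
print(threat_probability((6.5, 4.5), threats))
print(turning_angle_bounds((0.0, 0.0), (5.0, 5.0), R=3.0))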

4 Expected Results

The process of path planning by the CNN is shown in Fig. 2. Firstly, the learning objects are imported; because of the large amount of data, they are read and trained in batches. When defining the parameters, the learning objects and the output objects need to be specified. Whether a path is optimal depends on whether the dangerous areas are avoided safely. In the known two-dimensional flight environment, the features are read and the safety zones and dangerous zones are distinguished by the neural network, and the trained parameters are used for path planning. Whether the planned path can safely pass through the dangerous area depends on whether the path intersects the dangerous area: if it is safe, the path planning is successful; if it is not safe, the neural network re-plans using the trained parameters. As can be seen from Fig. 3, there are two paths, drawn in green and yellow; analysing the two paths, the green one is better than the yellow one.

Fig. 2. Path planning flow chart (read data, set obstacle model, define training parameters, CNN module identifies learning data, path planning under the constraint conditions, output parameters; a safe path is output, an unsafe path is re-planned in the new environment)

Fig. 3. Path output

The main part of the model is the convolutional neural network, which determines whether the final output parameters are optimal. After the final parameters are obtained, they are tested with the neural network in a new flight environment. The model is used to test the data many times in order to obtain qualified data, and after many training tests the expected path is obtained: the shortest path between the starting point and the target point. After identifying the dangerous areas containing obstacles, path planning is carried out and a static path is obtained. As can be seen from Fig. 4, the path obtained by the neural network is basically the same as that planned by the PSO algorithm, which demonstrates the reliability of this method.


Fig. 4. PSO path and CNN path

5 Conclusion

Comparing the neural network with traditional algorithms shows the advantages of the neural network. In a narrow space, an avoidance motion must be computed in a very short time, and the operation mode of the neural network is well suited to this scenario. However, deep learning also brings technical demands: large amounts of training data, high system configuration requirements and sufficient storage space are demanding for a UAV flight control system. How to reduce the storage space of the neural network while still obtaining the best result, and how to reduce the hardware requirements for computation, remain to be studied further for UAV systems based on deep learning.

Acknowledgements. This work was supported in part by the Natural Science Foundation of Jiangsu under Grant BK20160294, and in part by the Graduate Innovation Program project of Jiangsu University of Technology under Grant 20820111839.

References
1. Basbous B (2018) 2D UAV path planning with radar threatening areas using simulated annealing algorithm for event detection. In: 2018 international conference on artificial intelligence and data processing (IDAP), pp 1–7
2. Xia C, Yudi A (2018) Multi-UAV path planning based on improved neural network. In: 2018 Chinese control and decision conference (CCDC), Shenyang, pp 354–359
3. Supakar N, Senthil A (2013) PSO obstacle avoidance algorithm for robot in unknown environment. In: 2013 international conference on communication and computer vision (ICCCV), Coimbatore, pp 1–7
4. Yanagisawa H, Yamashita T, Watanabe H (2018) A study on object detection method from manga images using CNN. In: 2018 international workshop on advanced image technology (IWAIT), Chiang Mai, pp 1–4
5. Kuang P, Cao W, Wu Q (2014) Preview on structures and algorithms of deep learning. In: 2014 11th international computer conference on wavelet active media technology and information processing (ICCWAMTIP), Chengdu, pp 176–179
6. Yan F, Zhuang Y, Xiao J (2012) 3D PRM based real-time path planning for UAV in complex environment. In: 2012 IEEE international conference on robotics and biomimetics (ROBIO), Guangzhou, pp 1135–1140
7. Wang Y (2017) Cognitive foundations of knowledge science and deep knowledge learning by cognitive robots. In: 2017 IEEE 16th international conference on cognitive informatics and cognitive computing (ICCI*CC), Oxford, pp 5–5
8. Kido S, Hirano Y, Hashimoto N (2018) Detection and classification of lung abnormalities by use of convolutional neural network (CNN) and regions with CNN features (R-CNN). In: 2018 international workshop on advanced image technology (IWAIT), Chiang Mai, pp 1–4

Research on Temperature and Infrared Characteristics of Space Target

Xiang Li and Jindong Li

Beijing Institute of Spacecraft System Engineering, Beijing, China
[email protected]

Abstract. In order to calculate the temperature and infrared radiation characteristics of a space target, it is necessary to master the distribution and variation of the orbital external thermal flux on the target. The energy exchange between the inside and outside of the space target in orbit is analysed, a numerical simulation method is used to simulate the temperature and infrared radiation of a cone-shaped space target, and the influence of various parameters on the calculation results is quantitatively analysed. The results show that the angle β between the sunlight and the orbital plane has a significant effect on the temperature and infrared variation of the space target, and the calculation results are more sensitive to changes of β when β is larger. When the variation of the Earth albedo with latitude is considered, the temperature and the infrared radiation intensity in the 8–14 μm band decrease by 1.73 °C and 0.27 W/sr, respectively; the variation of the albedo cannot be ignored when the characteristics of a space target are calculated accurately. Keywords: Space target · Orbital external thermal flux · β angle · Earth albedo

1 Introduction

Nowadays, space target detection technology has received more and more attention. Using infrared optical detectors for space target detection offers good concealment and all-day, all-night operation, playing an important role in space target detection and tracking tasks. Accurate target infrared characteristic data are of great significance for sensor design and for tracking and recognition algorithm research [1]. Extensive research on the infrared characteristics of space targets has been carried out at home and abroad [2], including the target surface temperature calculation model [3], the target's own infrared radiation calculation model [4], and the calculation models for the target's reflection of Earth and solar radiation [5, 6]. In the past, research on the factors influencing the target infrared characteristics focused on the target itself, such as geometric structure, material and surface coating properties, while ignoring the influence of the external environment of the target during space flight. The change of the radiation characteristics of the target is mainly caused by the change of the heat budget as the target moves in its orbit [7]. This paper takes a conical space target as the object to study the key parameters related to the orbital external thermal flux, such as the influence of the angle β between the sun and the orbital plane, the orbital inclination i, and the Earth albedo, on the target temperature and infrared radiation characteristics.

2 Orbit External Thermal Flux of Space Target

The external heat sources that affect the target temperature field are mainly the sun and the earth. In the illuminated state there are three parts: the solar radiation that directly irradiates the surface of the target, the radiation of the earth and its atmospheric system on the target surface, and the solar radiation reflected by the Earth. When the target is in the shadow state, the sunlight is blocked by the earth and only earth radiation exists, which is relatively simple; the illuminated condition is mainly discussed in the following. Solar radiation is related to the angle β between the sun and the orbital plane. Under different β angles, the incidence angle of the solar radiation and its variation during the target's motion are different, which in turn affects the temporal characteristics of the target temperature and infrared radiation. Earth albedo radiation is related to the relative positions of the Earth, the Sun and the target, and is also affected by β; in addition, it is closely related to the Earth's albedo. To simplify the calculation, the Earth's albedo is usually taken as the global average value of 0.3, but in fact the albedo varies greatly between regions. Studies have shown that the albedo varies only slightly with longitude, and the rotation of the Earth further weakens the influence of longitude [8], whereas the albedo at different latitudes varies widely, from 0.1 to 0.8 [9], as shown in Fig. 1. In order to accurately calculate the infrared characteristics of the target, it is necessary to consider the influence of the latitude dependence of the albedo on the calculation results.

Fig. 1. Variation of earth albedo with latitude


3 Space Target Temperature and Infrared Mathematical Model

The target temperature is calculated by the finite volume method. The heat transfer between the target outer surface unit i and the space environment is:

q_i = q_{sun,i} + q_{ref,i} + q_{earth,i} - q_{self,i}    (1)

Solar radiation:

q_{sun,i} = \alpha_S S_0 A_i \cos\beta_i    (2)

Earth emitted radiation:

q_{earth,i} = \alpha_{IR} \cdot E_0 \cdot A_i \cdot F_{e,i}    (3)

Earth albedo radiation:

q_{ref,i} = \alpha_S \cdot S_0 \cdot A_i \cdot \rho_E \cdot F_{r,i}    (4)

The radiant energy emitted by unit i:

q_{self,i} = \varepsilon_{IR} \sigma T^4 A_i    (5)

In formulas (2)–(5), \alpha_S is the absorptance for solar radiation; \alpha_{IR} is the absorptance for infrared radiation; \varepsilon_{IR} is the infrared emissivity; \rho_E is the Earth albedo; A_i is the area of unit i; \sigma is the Stefan–Boltzmann constant; S_0 is the solar constant; E_0 is the Earth constant; \beta_i is the included angle between the sunlight and the normal of the target surface element i; F_{e,i} is the angle coefficient of the Earth's radiation; F_{r,i} is the angle coefficient of the Earth albedo radiation.

The target radiation characteristics consist of two parts: the target's own infrared radiation and the reflection of the environmental radiation. The solar and Earth radiation spectra affect the spectral distribution of the target radiation characteristics. At a certain time, within the band \lambda_1–\lambda_2, the radiation intensity of the target surface element i in a given observation direction is:

I_i = \frac{A_i \cos\theta_i}{\pi} \int_{\lambda_1}^{\lambda_2} \left[ \varepsilon_{IR} M_{\lambda,i} + \rho_s S_{0\lambda} F_{s,i} + \rho_{IR} E_{0\lambda} F_{e,i} + \rho_s \rho_E S_{0\lambda} F_{r,i} \right] d\lambda    (6)

where \theta_i is the angle between the observation direction and the normal of element i; \rho_s is the reflectance for solar radiation; \rho_{IR} is the reflectance for infrared radiation; \rho_E is the Earth albedo; M_{\lambda,i} is the spectral radiance of the target surface element i; S_{0\lambda} and E_{0\lambda} are the spectral irradiance of the Sun and the Earth at wavelength \lambda. M_{\lambda,i}, E_{0\lambda} and S_{0\lambda} are given by Planck's law, assuming that the spectral distributions of the Sun and the Earth are the same as blackbodies at 6000 K and 254 K, respectively. Finally, the radiation characteristics of the target in the observation direction


can be obtained by summing the radiation of all visible surface elements in the observation direction.
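As a quick illustration of the heat-balance terms, the Python sketch below evaluates Eqs. (1)–(5) for a single surface element. The material constants follow Sect. 4.1, α_IR is assumed equal to the infrared emissivity, and the element area, sun angle and view factors are placeholder values introduced only for this example; none of them come from the paper's calculation.

```python
import numpy as np

# Heat-balance terms of Eqs. (1)-(5) for one surface element. Material
# constants follow Sect. 4.1; alpha_ir is assumed equal to the infrared
# emissivity, and the element area, sun angle and view factors below are
# placeholder values for demonstration only.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
S0 = 1367.0               # solar constant, W/m^2 (representative value)
E0 = 237.0                # Earth infrared constant, W/m^2 (representative value)
ALPHA_S, ALPHA_IR, EPS_IR = 0.54, 0.45, 0.45
RHO_E = 0.3               # global-average Earth albedo (Sect. 2)

def element_net_heat(A_i, beta_i, F_e, F_r, T_i, sunlit=True):
    """Net heat exchanged by surface element i, Eq. (1)."""
    q_sun = ALPHA_S * S0 * A_i * max(np.cos(beta_i), 0.0) if sunlit else 0.0   # Eq. (2)
    q_earth = ALPHA_IR * E0 * A_i * F_e                                        # Eq. (3)
    q_ref = ALPHA_S * S0 * A_i * RHO_E * F_r if sunlit else 0.0                # Eq. (4)
    q_self = EPS_IR * SIGMA * T_i ** 4 * A_i                                   # Eq. (5)
    return q_sun + q_ref + q_earth - q_self

# example: 0.01 m^2 element, 30 deg sun incidence, assumed view factors
print(element_net_heat(A_i=0.01, beta_i=np.radians(30.0), F_e=0.25, F_r=0.10, T_i=293.15))
```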

4 Analysis of Calculation Results

4.1 Calculation Model

The calculation model is a conical target with a height of 1800 mm, a cone-top radius of 30 mm and a bottom diameter of 540 mm. The shell is made of aluminium alloy, 5 mm thick, with a density of 2500 kg/m³, a specific heat of 860 J/(kg·K) and a thermal conductivity of 260 W/(m·K). The solar absorptivity is 0.54 and the infrared emissivity is 0.45. The initial target temperature is 20 °C, and the target runs on a 960 km circular orbit.

4.2 Effect of β on Target Temperature and Infrared Radiation Characteristics

Figure 2 shows the calculated average surface temperature of the target in the illuminated state when the Earth albedo is 0.3 and the β angle is varied from 0° to 90°. In the initial stage of the calculation, the smaller β is, the faster the target surface temperature rises; as time advances, the temperature rises more slowly and tends towards a maximum value, and when β = 90° the temperature curve approximates a straight line. Comparing the curves in the figure, the temperature differences grow as β becomes larger, which indicates that the calculation result is more sensitive to the change of β when β is larger. In general, over the calculated time span, the smaller β is, the higher the average temperature of the target.

Fig. 2. Comparison of temperature variations for different β


Fig. 3. Comparison of solar radiative flux variations for different β

Figure 3 compares the solar radiant heat flux on the target surface for different β. Due to the configuration of the target, in the first half of the flight the difference in solar radiation heat flux density between different β angles gradually decreases; when the target is directly below the Sun, the values for all β angles are the same. As the target continues to fly, the disparity gradually widens again. The smaller β is, the larger the range of the solar heat flux: at β = 0° the maximum and minimum heat flux densities differ by 284.2 W/m², while at β = 90° the solar radiant heat flux density remains unchanged.

Fig. 4. Infrared radiation intensity variations within 8–14 μm for different β


Fig. 5. Infrared radiation intensity variations within 3–5 μm for different β

Figures 4 and 5 show the infrared radiation intensities of the target observed from the nose aspect within 8–14 μm and 3–5 μm, respectively. Within the 8–14 μm band, the infrared radiation mainly comes from the target itself, so it is basically consistent with the change in temperature. The radiation intensity in the 3–5 μm band is more complex, and the influence of the reflected solar radiation spectrum cannot be ignored.

4.3 Influence of Earth Albedo on Temperature and Infrared Radiation Characteristics

Figure 6 shows the influence of the albedo and the orbit inclination i on the target surface temperature. After the variation of the albedo with latitude is taken into account, the average surface temperature of the target decreases, with a difference of 1.73 °C at the end of the calculation. This is probably because the calculated orbital segment lies within 45° of north and south latitude: the target mainly flies at low latitudes, where the albedo is lower than at high latitudes, so the target temperature is lower than the result calculated with a constant albedo. Figure 7 shows the effect of the albedo on the Earth albedo heat flux. After considering the change of the albedo, the Earth albedo heat flux is significantly reduced, with a maximum difference of 29.66 W/m² in the middle of the calculation.


Fig. 6. Influence of albedo and orbit inclination on temperature

Fig. 7. Influence of albedo and orbit inclination on albedo radiative flux

Different orbital inclinations i make the sub-satellite point traverse different latitudes. As can be seen from Fig. 7, the albedo radiation differs only slightly in the first half of the target flight but has a more complex influence on the latter part, which is related to the target configuration. However, comparing with Fig. 6, the albedo radiation variation caused by the change of the albedo has only a slight influence on the target surface temperature.


Fig. 8. Infrared radiation intensity variations within 8–14 μm for different albedo

Figure 8 shows the effect of the albedo change on the 8–14 μm infrared radiation intensity observed from the nose aspect. The difference is largest at the end of the calculation, reaching 0.27 W/sr.

5 Conclusion

Through numerical calculation of the temperature and infrared characteristics of a space target, the following main conclusions are obtained: (1) The angle β influences the incident angle of the solar radiation and the albedo radiation, and thereby affects the target temperature and infrared radiation characteristics. (2) The influence of β on the 8–14 μm infrared radiation intensity is the same as that on the temperature, while for the 3–5 μm infrared radiation intensity the influence of the solar spectrum cannot be ignored. (3) When the variation of the albedo with latitude is considered, the average surface temperature of the target is 1.73 °C lower and the 8–14 μm radiation intensity is 0.27 W/sr lower than the results obtained with a constant albedo. To calculate the target characteristics accurately, the variation of the albedo cannot be ignored.

References

1. Shi LC, Cai GB, Zhu DQ (2009) The simulation to infrared signatures of spacecraft. J Astronaut 30(1):229–234
2. Mill JD, Stair JT (1997) MSX design driven by targets and backgrounds. In: Aerospace sciences meeting and exhibit


3. Luo C, Sheng W (2014) Modeling of missile wall temperature field and its infrared radiation characteristics. J Air Force Early Warn Acad 28(4):254–257
4. Shen W, Zhu D, Cai G (2010) Calculation of temperature field and infrared radiation characteristics of midcourse ballistic target. J Astronaut 31(9):2210–2217
5. Wang X, Gao S, Li F (2017) Simulation and target analysis of dual-band infrared dynamic scene in deep space. Infrared Laser Eng 47(9):1123–1127
6. Wang H, Chen Y (2016) Modeling and simulation of infrared dynamic characteristics of space-based space targets. Infrared Laser Eng 45(5):1–7
7. Zhang H, Cao J, Wang J et al (2018) Analysis of satellite's extreme heat flow in inclined orbit. Aerosp Shanghai 35(1):36–42
8. Pan Q, Wang P, Bao Y et al (2012) On-orbit external heat flux calculation of spacecraft based on reverse Monte Carlo method. J Shanghai JiaoTong Univ 46(5):750–755
9. Min G, Guo S (1998) Satellite thermal control technology. The Science Publishing House, Beijing

A Multispectral Image Edge Detection Algorithm Based on Improved Canny Operator

Baoju Zhang, FengJuan Wang, Gang Li, CuiPing Zhang, and ChengCheng Zhang

Tianjin Normal University, Tianjin 300387, China
[email protected]

Abstract. The traditional Canny operator performs edge detection in a way that requires manual intervention in the choice of the Gaussian filtering variance, and this choice affects both edge retention and denoising: when the noise is filtered out, many edge details are lost. Aiming at the shortcomings of the traditional Canny algorithm, this paper proposes an improved Canny algorithm for edge detection. After Gaussian filtering of the multispectral image, a mixed enhancement operation is applied to it; this operation filters out noise while retaining important edge detail information. In addition, when the gradient amplitude image is computed, more edge information is obtained by changing the size of the Sobel operator. The edge details of the multispectral image processed in this way are more abundant and the detection is more accurate. The multispectral image is then subjected to non-maximum suppression and double-threshold processing. Experiments show that, compared with traditional Canny edge detection, the proposed algorithm greatly improves edge connection and pseudo-edge removal, and both the objective evaluation and the visual effect are considerably better than before.

Keywords: Sobel operator · Multispectral image · Canny edge detection · Mixed enhancement

1 Introduction

At present, the edge information of multispectral images plays an important role in the early detection of many diseases. Some clinical trials determine the disease status of patients by analysing the edge information of breast cancer lesions and decide whether to conduct further examination [1]. Moreover, skin-related diseases can be screened by detecting edge information in partial images of the corresponding tissues [2]. These methods eliminate irrelevant information by extracting the edge information of the multispectral image and retain its important feature attributes, which can greatly reduce the data volume and improve inspection efficiency. In previous studies, there were many algorithms for edge detection of multispectral images. Classic edge detection operators such as Prewitt, Sobel, Roberts, and Laplace


are simple and easy to implement, but they are sensitive to noise, have poor denoising ability, are prone to false edges, and have low detection accuracy. Compared with the above detection algorithms, the Canny algorithm has the advantages of a large signal-to-noise ratio and high accuracy, but it also has some drawbacks. In recent research, Xu et al. used the Hough transform instead of the traditional double-threshold method to detect connected edges and weak edges [3, 4]. Iqbal et al. used an adaptive median filter to denoise and used Otsu's method to calculate the high threshold of the image, which improved the edge connection problem [5, 6]. Hao et al. used statistical filtering denoising and a grey-level-based iterative method to calculate the threshold, improving Canny edge detection with more edge information [7, 8]. Based on the above situation, this paper proposes a Canny edge detection algorithm that enlarges the Sobel operator for edge detection of multispectral images. In the process of calculating the gradient amplitude image, a Laplacian and Sobel gradient operator hybrid enhancement is applied to the multispectral image to obtain more edge details, and the size of the Sobel operator is enlarged to obtain more accurate edge position information. The experimental results show that the Canny algorithm proposed in this paper has higher detection accuracy and better visual effect than the traditional detection algorithm.

2 Traditional Canny Edge Detection Algorithm

The Canny edge detection algorithm is generally divided into five steps: first, a Gaussian filter is used to smooth the image and filter out noise; then the Sobel operator is used to calculate the gradient intensity and direction of each pixel; non-maximum suppression is applied to the gradient image to eliminate spurious responses; double-threshold detection is then used to determine real and potential edges; and edge detection is finally completed by suppressing isolated weak edges. When smoothing the image, the traditional Canny algorithm convolves the original image with a one-dimensional zero-mean Gaussian filter. The filter function is

G(x) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2}{2\sigma^2}\right)    (2.1)

In the formula, \sigma is the standard deviation, which controls the degree of smoothing. As \sigma increases, the smoothing effect improves and the noise suppression capability is gradually enhanced, but the detected edge positions deviate from the actual positions and the edge positioning accuracy is lowered. Since the value has to be set manually, the extraction of edges from different images is greatly limited.
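For reference, the five steps above can be reproduced with a few OpenCV calls. The sketch below is only an illustration of the traditional pipeline; the file name, the hand-picked σ and the thresholds are placeholders and do not come from the paper.

```python
import cv2

# The five classical steps, reproduced with OpenCV. The file name, sigma
# and thresholds are placeholders; the hand-chosen sigma illustrates the
# limitation discussed above.
img = cv2.imread("spectral_band.png", cv2.IMREAD_GRAYSCALE)

# Step 1: Gaussian smoothing with a manually chosen standard deviation
smoothed = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)

# Steps 2-5: 3x3 Sobel gradient, non-maximum suppression and
# double-threshold hysteresis are all bundled inside cv2.Canny
edges = cv2.Canny(smoothed, threshold1=50, threshold2=150, apertureSize=3)
cv2.imwrite("edges_traditional.png", edges)
```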


3 Image Acquisition Experiment

Edge detection is applied to a single multispectral image obtained in our previous work [9]. Acquisition of the experimental data: in a closed experimental box with almost no external light and only transmitted light, a mixed solution of milk and water was placed in a rectangular parallelepiped container whose transparency was 96%. Then, according to the optical properties of different materials at different wavelengths, two pieces of suspended pork of different sizes were placed in the solution (the pork here does not involve 'live animals'; it is edible meat). The solution was irradiated with an LED light source, a multispectral image was acquired with a mobile phone, and a computer was connected to the phone to obtain more experimental data.

4 Improved Canny Edge Detection Algorithm

4.1 Laplacian and Sobel Operator Hybrid Enhancement Algorithm

After obtaining the multispectral image, the Laplacian operator and the Sobel gradient operator are combined to perform the hybrid enhancement operation on the multispectral image. First, the image is subjected to a Laplacian operation, and the processed image is added to the original grayscale image to obtain a Laplacian sharpened image. The effect image is shown in Fig. 1.

Fig. 1. Laplacian sharpening image

The Sobel gradient calculation is performed on the original multispectral image; the horizontal and vertical gradients are obtained separately and combined into the final gradient image, as shown in Fig. 2. Since the resulting Sobel gradient image still contains some noise, it is further smoothed by an averaging filter. The response of the smoothed gradient in regions of grey-level variation (grey slopes or steps) is stronger than the response of the Laplace operation, while its response to noise and small details is weaker than that of the


Fig. 2. Sobel operator for gradient image

Laplace operation. At this time, the smoothed gradient image can be regarded as a template image and multiplied by the Laplacian image. The product image retains the details of the strong gray-scale region, and also reduces the noise of the flat area of the grayscale change. The experimental image is shown in Fig. 3.

Fig. 3. Product image


The final sharpened image is obtained by adding the product image to the original grayscale image. At this point the multispectral image not only retains the important edge information but also has the corresponding noise removed.
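The following Python/OpenCV sketch is one possible reading of the hybrid enhancement described above (Laplacian sharpening, smoothed Sobel gradient as a template, product image, final sum); the file names, kernel sizes and the normalisation of the template are assumptions rather than the authors' implementation.

```python
import cv2
import numpy as np

# One reading of the Sect. 4.1 hybrid enhancement: Laplacian sharpening,
# a smoothed Sobel gradient used as a template, their product, and the
# final sum with the original grey image. Kernel sizes, the normalisation
# of the template and the file names are assumptions.
gray = cv2.imread("spectral_band.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)
lap_sharpened = gray + lap                   # Laplacian result added to the grey image (Fig. 1)

gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
grad = np.abs(gx) + np.abs(gy)               # Sobel gradient image (Fig. 2)
grad_smooth = cv2.blur(grad, (5, 5))         # averaging filter suppresses gradient noise

template = grad_smooth / (grad_smooth.max() + 1e-9)   # smoothed gradient as a template
product = template * lap_sharpened                    # product image (Fig. 3)

enhanced = np.clip(gray + product, 0, 255).astype(np.uint8)   # final sharpened image
cv2.imwrite("enhanced.png", enhanced)
```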

4.2 5 × 5 Size Sobel Operator to Calculate the Gradient Amplitude Image

The 3 × 3 operator template is derived from the Pascal triangle (Fig. 4).

Fig. 4. Pascal triangle

For the 3 × 3 Sobel operator, the operator templates in the x and y directions are obtained by convolving the derivative row [1, 0, −1] with the smoothing row [1, 2, 1] from the Pascal triangle (Fig. 5).

Fig. 5. Sobel operator model

Where the Sobel operator in the x direction is used to detect the edge in the y direction; the Sobel operator in the y direction is used to detect the edge in the x direction (the edge direction and the gradient direction are perpendicular). The edges in the sobel operator can point in all directions, and four operators can be used to detect horizontal, vertical, and diagonal edges in the image. The operator returns the first derivative value in the horizontal and vertical direction, thereby determining the gradient G and the direction h of each pixel. The gradient magnitude and direction formula are as follows. G¼

qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi G2x þ G2y 

h ¼ arctan

Gy Gx

ð4:1Þ

 ð4:2Þ


where G is the gradient strength, \theta is the gradient direction, and arctan is the inverse tangent function. The working principle of the Sobel operator is as follows: assume that a 3 × 3 window in the image is A and that e is the pixel whose gradient is to be calculated; after convolving with the Sobel operator, the gradient values of pixel e in the x and y directions are obtained (Fig. 6).

Fig. 6. Gradient convolution process

Here * is the convolution symbol and sum represents the summation of all the elements in the matrix. The gradient and direction of pixel e can then be calculated according to the formulas above. Based on the 3 × 3 Sobel operator template, this paper uses a 5 × 5 Sobel template, which is obtained by convolving the derivative row [1, 2, 0, −2, −1] with the smoothing row [1, 4, 6, 4, 1] from the Pascal triangle. The Sobel operator template is shown in Fig. 7.

Fig. 7. 5 × 5 Sobel operator model

The enhanced multispectral image is convolved with this Sobel operator template. The working principle is similar to that of the 3 × 3 template: the template is convolved with a window of the image and the results are summed to obtain the corresponding gradient image, as shown in Fig. 8.


Fig. 8. Gradient amplitude image
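A minimal sketch of this step is given below: the 5 × 5 kernels are built as outer products of the Pascal-triangle rows quoted above and applied to the enhanced image to obtain the gradient magnitude and direction of Eqs. (4.1)–(4.2). File names are placeholders.

```python
import cv2
import numpy as np

# 5x5 Sobel kernels as outer products of the Pascal-triangle rows quoted
# above, applied to the enhanced image; gradient magnitude and direction
# follow Eqs. (4.1)-(4.2). File names are placeholders.
deriv = np.array([1, 2, 0, -2, -1], dtype=np.float64)    # derivative row
smooth = np.array([1, 4, 6, 4, 1], dtype=np.float64)     # smoothing row
kx = np.outer(smooth, deriv)      # x-derivative kernel (responds to vertical edges)
ky = np.outer(deriv, smooth)      # y-derivative kernel (responds to horizontal edges)

enhanced = cv2.imread("enhanced.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
gx = cv2.filter2D(enhanced, cv2.CV_64F, kx)
gy = cv2.filter2D(enhanced, cv2.CV_64F, ky)

G = np.sqrt(gx ** 2 + gy ** 2)        # gradient magnitude, Eq. (4.1)
theta = np.arctan2(gy, gx)            # gradient direction, Eq. (4.2)
cv2.imwrite("gradient_5x5.png", np.clip(G / (G.max() + 1e-9) * 255, 0, 255).astype(np.uint8))
```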

4.3 Non-maximum Suppression

Non-maximum suppression is an edge-thinning technique. After the gradient calculation, the edges extracted directly from the gradient values are still not satisfactory. Non-maximum suppression sets the gradient values to zero everywhere except at local maxima, which eliminates a large proportion of the false edge points. When performing non-maximum suppression on an image, the angle is first quantised into four directions, 0° (horizontal), −45°, 90° (vertical) and +45°, and these four directions stand in for the gradient direction of each pixel. Each pixel is then compared with its two neighbours along that direction. According to the above theory, a non-maximum suppression image is obtained by the corresponding program (Fig. 9).

Fig. 9. Non-maximum suppression image
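The sketch below is a straightforward (unoptimised) implementation of the suppression rule just described, operating on the G and θ arrays from the gradient step; it is an illustration, not the authors' code.

```python
import numpy as np

# Straightforward (unoptimised) non-maximum suppression: the gradient
# direction is quantised to 0, 45, 90 or 135 degrees and a pixel is kept
# only if it is not smaller than its two neighbours along that direction.
def non_max_suppression(G, theta):
    H, W = G.shape
    out = np.zeros_like(G)
    angle = np.rad2deg(theta) % 180.0            # fold directions into [0, 180)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:           # ~0 deg: compare left / right
                n1, n2 = G[i, j - 1], G[i, j + 1]
            elif a < 67.5:                       # ~45 deg diagonal
                n1, n2 = G[i - 1, j + 1], G[i + 1, j - 1]
            elif a < 112.5:                      # ~90 deg: compare up / down
                n1, n2 = G[i - 1, j], G[i + 1, j]
            else:                                # ~135 deg diagonal
                n1, n2 = G[i - 1, j - 1], G[i + 1, j + 1]
            if G[i, j] >= n1 and G[i, j] >= n2:
                out[i, j] = G[i, j]
    return out
```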

4.4 Double Threshold Detection and Edge Connection

After non-maximum suppression of the multispectral image, a small number of non-edge pixels still remain in the output amplitude. Thresholds are therefore selected to make a trade-off. The Canny algorithm based on a double (fuzzy) threshold implements the edge extraction well. Assuming two thresholds, a high threshold TH and a low threshold TL, the double-threshold selection and edge connection method follows these principles:

a. Discard any edge pixel below TL.
b. Retain any edge pixel above TH.
c. For any edge pixel with a value between TL and TH, retain it if it can be connected through the edge to a pixel larger than TH while all pixels along that edge are larger than TL; otherwise, discard it.

The experimental results obtained with the corresponding threshold parameters are shown in Fig. 10.

Fig. 10. Comparison of experimental results
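Rules (a)–(c) amount to hysteresis thresholding; a compact sketch using connected-component labelling is shown below. TL and TH are left as parameters, and this is an illustration of the rule rather than the paper's implementation.

```python
import numpy as np
from scipy import ndimage

# Hysteresis version of rules (a)-(c): strong pixels (>= t_high) are kept,
# weak pixels (t_low..t_high) survive only if their 8-connected edge
# segment also contains a strong pixel, everything below t_low is dropped.
def hysteresis_threshold(nms, t_low, t_high):
    strong = nms >= t_high
    weak = (nms >= t_low) & ~strong
    labels, n = ndimage.label(strong | weak, structure=np.ones((3, 3)))
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True       # segments that touch a strong pixel
    keep[0] = False                              # background label stays off
    return keep[labels]
```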

5 Analysis of Experimental Results

To further illustrate the effectiveness of the improved algorithm, this paper uses the edge detection figure of merit PFOM proposed by Pratt to evaluate the proposed algorithm [10]. It is a commonly used edge detector performance evaluation index. PFOM combines three factors: missed detection of real edges, false detection of non-edges, and the positioning error of edges. It is defined as follows:

PFOM = \frac{1}{\max(N_e, N_d)} \sum_{k=1}^{N_d} \frac{1}{1 + \beta d(k)^2}    (5.1)


where N_e is the number of reference edge points, N_d is the number of edge points extracted by the algorithm, \beta is a scaling constant usually set to 1/9, and d(k) is the Euclidean distance between the kth true edge point and the detected edge point:

d(k) = \lVert N_e - N_d \rVert_{Euclid}    (5.2)

Pratt's figure of merit is a fidelity function ranging from 0 to 1: the greater the value, the better the detector performance, and PFOM = 1 when the detection is fully accurate. Finally, the peak signal-to-noise ratio (PSNR) is used as the quality evaluation index of the whole edge image. It is the most widely used objective image evaluation index; it is based on the error between corresponding pixels, i.e. on error sensitivity. Its mathematical formula is as follows:

PSNR = 10 \cdot \log_{10}\left(\frac{(2^n - 1)^2}{MSE}\right)    (5.3)

Among them, the unit of PSNR is dB, and the larger the value, the smaller the distortion. n is the number of bits per pixel, and MSE represents the mean square error, given by:

MSE = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} \left( X(i,j) - Y(i,j) \right)^2    (5.4)

where MSE represents the mean square error between the original image X and the target image Y, and H and W are the height and width of the image, respectively. The evaluation indices are compared in Table 1. According to the data in Table 1, the figure of merit of the improved Canny operator is higher, which shows that using the improved Canny algorithm to detect the edges of the multispectral image improves the edge accuracy of the detected image. The signal-to-noise ratio is also improved, indicating that the improved Canny algorithm improves the overall quality of the multispectral edge detection images.

Table 1. Index data

Method                        PFOM    PSNR
Traditional canny algorithm   0.4111  1.7960
Improved canny algorithm      0.4461  2.1350
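For completeness, the two indices can be computed as in the sketch below; the distance term of Eq. (5.1) is interpreted as the distance from each detected edge pixel to the nearest reference edge pixel, which is one common reading of Pratt's definition and is an assumption here.

```python
import numpy as np
from scipy import ndimage

# PFOM as in Eq. (5.1), with d(k) read as the distance from each detected
# edge pixel to the nearest reference edge pixel (a common interpretation,
# assumed here), and PSNR/MSE as in Eqs. (5.3)-(5.4).
def pfom(reference_edges, detected_edges, beta=1.0 / 9.0):
    ref = reference_edges.astype(bool)
    det = detected_edges.astype(bool)
    n_e, n_d = ref.sum(), det.sum()
    dist = ndimage.distance_transform_edt(~ref)   # distance to nearest reference edge pixel
    d = dist[det]
    return np.sum(1.0 / (1.0 + beta * d ** 2)) / max(n_e, n_d)

def psnr(original, target, bits=8):
    x = original.astype(np.float64)
    y = target.astype(np.float64)
    mse = np.mean((x - y) ** 2)                          # Eq. (5.4)
    return 10.0 * np.log10((2 ** bits - 1) ** 2 / mse)   # Eq. (5.3)
```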


6 Conclusion

Based on the traditional Canny algorithm, this paper uses the Laplacian operator and the Sobel gradient operator to enhance the multispectral image and obtain more edge position information. At the same time, the size of the Sobel operator template is changed and the convolution is performed on the multispectral grey image to obtain a gradient image with more detailed edge information. The gradient image is then subjected to non-maximum suppression and double-threshold detection to obtain a multispectral edge detection image with higher edge accuracy. From the comparison of the visual effect of the multispectral edge detection images and the analysis of the experimental data for the evaluation indices PFOM and PSNR, we can conclude that the improved Canny algorithm proposed in this paper achieves a certain improvement in the accuracy of the edge position information and in the overall quality of the multispectral edge detection image. Hence, the algorithm proposed in this paper is effective.

References

1. Liu Y, Gao D, Xu M (2019) Multispectral photoacoustic imaging of cancer with broadband CuS nanoparticles covering both near-infrared I and II biological windows. J Biophoton 12(3):e201800237
2. Romano RA, Rosa RG et al (2019) Label-free multispectral lifetime fluorescence to distinguish skin lesions. Label-free biomedical imaging and sensing (LBIS) 10890:108902L
3. Leling X, Shi H (2018) Adaptive double threshold modified edge detection algorithm for boot filtering. J Nanjing Univ Sci Technol 42(2):177–182
4. Qian H. Medical image edge detection algorithm based on improved canny operator. School of Optoelectronic Information and Computer Engineering, University of Shanghai for Science and Technology
5. Iqbal N, Ali S, Khan I (2019) Adaptive edge preserving weighted mean filter for removing random-valued impulse noise. Symmetry-Basel 11(3):395
6. Wang B, Zhao H, Liu C. Edge detection algorithm based on canny operator improvement. Jiamusi Power Supply Company, Northeastern Electric Power University School of Computer Science
7. Hao H, Liu M, Xiong P (2019) Multi-lead model-based ECG signal denoising by guided filter. Eng Appl Artif Intell 79:34–44
8. Duan J, Zhang B. Improved canny operator edge detection algorithm. Inner Mongolia University of Science and Technology
9. Zhang B, Zhang C, Li G (2019) Multispectral heterogeneity detection based on frame accumulation and deep learning. IEEE Access 7(1):29277–29284
10. Pratt WK, Deng L, Zhang Y (2005) Digital image processing, 3rd edn. Mechanical Industry Press, Beijing

A Green and High Efficient Architecture for Ground Information Port with SDN

Peng Qin, Jianming Li, Xiaohong Xue, Hongmei Zhang, Chang Jiang, and Yunlong Wang

Space Integrated Ground Network Company Limited of CETC, Beijing 100041, China
[email protected]

Abstract. The ground information port, composed of interconnected distributed cloud data centers, is the core of spatio-temporal data access, processing and distribution in the space-ground integrated information network, and can provide large-scale computation, storage and network forwarding resources. Improving the performance of the ground information port therefore not only faces the traditional resource scheduling and task allocation problems; as the scale of business grows, its power consumption also explodes, which has become a bottleneck restricting its sustainable development. By introducing Software-Defined Networking (SDN) into the ground information port, this paper proposes an overall SDN-based green and efficient architecture for it. An SDN controller that performs unified scheduling and allocation of the whole network's resources can not only maintain the efficient operation of the ground information port, but also effectively reduce its power consumption. This paper thus provides a feasible solution to the pain points in the industry.

Keywords: Ground information port architecture (GIP) · Data center network · SDN · Green · High efficiency

1 Introduction

The ground information port is an important part of the space-ground integrated information network and is the hub for the convergence of information and services. According to the requirements of the scientific and technological achievements "Eggs along the way", the space-ground integrated information network major project will take the lead in building the ground information port and opening up the entire link from data acquisition and data processing to application services. Relying on the ground information port to carry out spatio-temporal data operation services will play an important role in national disaster prevention and mitigation, environmental monitoring, and smart cities [1–4]. The ground information port, composed of interconnected distributed cloud data centers, is the core of spatio-temporal data access, processing and distribution and can provide large-scale computation, storage and network forwarding resources. The performance improvement of big data centers not only faces the traditional


resource scheduling and task allocation problems, but also the explosive growth of power consumption as the scale of business in the ground information port grows, which directly contributes to an average annual growth of 1.5% in global power consumption [5]. These challenges have become a bottleneck restricting the sustainable development of the ground information port. At the same time, Software-Defined Networking (SDN), as a new network architecture [6], can deploy network control functions on a controller via the standard OpenFlow protocol, thus decoupling the control functions from the data forwarding functions. The SDN controller has an innate global view of the resource distribution of the entire network. Therefore, by introducing SDN into the ground information port and unifying the scheduling and allocation of the entire network's resources through the SDN controller, we can not only realize the efficient operation of the data center, but also effectively reduce the power consumption. In view of the above requirements, this paper proposes an overall SDN-based green and efficient architecture for the ground information port, which provides a feasible solution to the pain points in the industry.

2 Background Information

2.1 Space-Ground Integrated Information Network

The space-ground integrated information network [1, 2, 7] is based on the ground network and expanded to the space-based network. It adopts a unified technical structure, a unified technical system, and a unified standard specification and is composed of the space-based information network, the Internet and the mobile communication network, which interconnect with each other. The space-based information network consists of a space-based backbone network, a space-based access network and a ground-based node network. The space-based backbone network is formed by several backbone nodes deployed in geosynchronous orbit, the space-based access network is composed of access nodes deployed in low Earth orbit, and the ground node network is composed of a number of interconnected ground backbone nodes or ground information ports.

2.2 Ground Information Port

The ground information port is an important carrier for application services in the space-ground integrated information network and is the hub of space-time information application services. Relying on the space-ground integrated information network, the ground information port will build a data storage and processing center of “Physical distribution and logical unity”, and access spatio-temporal resources to realize the aggregation of space-based distributed information to the ground. The data storage and processing center can provide reliable and efficient networking and communication, positioning and navigation timing, remote sensing and geographic information services for users such as industries, enterprises and the public, which will ultimately form a radial spatio-temporal information service system.

2.3 Remote Sensing Data Open Policy

"The Interim Measures for the Management of Major Specialized Satellite Remote Sensing Data for High-Resolution Earth Observation Systems" [8] stipulates: encourage and support high-resolution data application technology research, application development, value-added services and industrial applications, and strengthen the establishment of market-oriented mechanisms and commercial service models to promote the breadth and depth of high-resolution data resource applications. Among them, Clause 21 encourages the development of industry applications, regional applications, and education and research applications based on high-resolution data, supports various social organizations such as associations and societies, and calls for organizing high-resolution data quality evaluation, application evaluation, and promotion of achievements in the data domain.

2.4 SDN

The concept of SDN [7, 9] was first proposed by Nick McKeown et al. at Stanford University. In an article published at the ACM SIGCOMM conference in 2008, the concept of SDN was introduced in detail for the first time: the data plane and the control plane of traditional network devices are separated from each other, and centralized controllers with standardized interfaces are used to manage and configure all kinds of devices. Different from traditional network technology, SDN has three characteristics: separation of control and forwarding, logically centralized control, and open network programming APIs, which make the network programmable and provide more possibilities for resource management and use. SDN technology is therefore a major innovation and is listed as one of the top ten technologies in the IT field. In 2012, Google announced that it had successfully applied SDN technology on its internal backbone network B4, which marked the commercialization of SDN technology. This successful experience has also made the data center the preferred scenario for promoting SDN applications in both academia and industry.

3 An Overall SDN-Based Green and Efficient Architecture for Ground Information Port

As shown in Fig. 1, the SDN-based green and efficient architecture for the ground information port can be divided into three levels following the concept of "firm sharing, integrated universality, and open application": the shared infrastructure (sharing), the software-defined service platform (universal), and the business application system (application), with the security operation and standardization systems running through all three levels.


Fig. 1. Overall SDN-based green and efficient architecture for ground information port

The shared infrastructure consists of three parts: the physical layer, the resource layer, and the cloud infrastructure layer. The physical layer mainly includes physical entities such as the space-based backbone network, the space-based access network, and the ground node network (ground information port). The resource layer is composed of computing resources, storage resources, network resources and data resources; through software definition and virtualization technology it forms a virtualized resource pool that provides basic support for the on-demand calls of the cloud infrastructure. The cloud infrastructure layer calls the virtualized resources downwards by providing a distributed cloud environment, offering unified computing, storage, transmission and exchange, distributed database and file system services. Global scheduling of the virtual resources in the information port can be achieved by using SDN technology. The software-defined service platform consists of a data management module, a service management module and a service integration module. The data management module provides unified data access, data storage and data processing. For spatio-temporal data with different sources, with the help of the SDN controller's global view, the data management module can output a unified view and support integrated and efficient querying and retrieval of data. In addition, the data management module can carry out efficient and secure data sharing and distribution according to network conditions and user requirements. The service management module is the standardized encapsulation of common and


supportive components often used in various business applications, aiming to separate services from applications. Based on the unified storage and management of data, and through its service interface, the service management module provides upper-level business services with service encapsulation, business management, visual management and so on, to further support unified and efficient services for data and applications. The service integration module provides network communication services, remote sensing information services, PNT services, geographic information services and so on by invoking the service management module. The business application system provides customized application services for industries or specific users, such as marine ranching, disaster prevention and mitigation, precision agriculture, and intelligent tourism. By adopting SDN technology, the OpenFlow controller unifies the control of the software-defined service platform and of the shared infrastructure's resource layer and cloud infrastructure layer. Utilizing the global view of the OpenFlow controller, the ground information port can achieve global task scheduling, unified allocation of resources, dynamic performance optimization and high-quality QoS guarantees, which eventually reduces system energy consumption and achieves the green and highly efficient operation of the ground information port.

4 Efficiency Analysis

4.1 Green Evaluation: Energy Consumption Reduction


Fig. 2. The big data transmission analysis of ground information port [10]

As shown in Fig. 2, the ground information port network consists of 12 servers and 12 OpenFlow switches. The assumptions are as follows:


(1) The fixed energy consumption of each OpenFlow switch is denoted by F, D denotes the energy consumption of each port, each link bandwidth is denoted by B, and A_i denotes the incoming rate of data flows at OpenFlow switch i.
(2) The ports at both ends of a link sleep (i.e. their energy consumption is 0) when no data flows through the link, and an OpenFlow switch sleeps when all of its ports sleep.

With the default routing adopted in a traditional big data center, the whole energy consumption can be calculated as

E_{default} = \frac{F}{B}(5A_1 + 7A_2 + 5A_3) + \frac{D}{B}(12A_1 + 14A_2 + 10A_3).

After adopting SDN, for the same amount of transmitted data, the energy consumption can be reduced to

E_{SDN} = \frac{F}{B}(5A_1 + 5A_2 + 5A_3) + \frac{D}{B}(10A_1 + 10A_2 + 10A_3)

by relying on the routes optimized by the OpenFlow controller. The energy saving is therefore

\Delta E = \frac{F}{B} \cdot 2A_2 + \frac{D}{B}(2A_1 + 4A_2).

As can be seen, the whole energy consumption and the carbon emission of the ground information port can be effectively reduced by dynamically adjusting the task transmission paths and utilizing the global view of OpenFlow, which achieves the green operation of the ground information port.
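A small numerical sketch of this comparison is given below; F, D, B and the rates A1–A3 are arbitrary example values, used only to check that the difference between the two expressions matches the saving formula.

```python
# Numerical check of the energy comparison; F, D, B and the rates A1-A3
# are arbitrary example values, not measurements from the paper.
def energy(switch_coeffs, port_coeffs, F, D, B, rates):
    """(F/B) * sum(switch coeff * rate) + (D/B) * sum(port coeff * rate)."""
    return (F / B) * sum(c * a for c, a in zip(switch_coeffs, rates)) \
         + (D / B) * sum(c * a for c, a in zip(port_coeffs, rates))

F, D, B = 100.0, 10.0, 1000.0        # example switch power, port power, link bandwidth
A = (200.0, 300.0, 150.0)            # example flow rates A1, A2, A3

e_default = energy((5, 7, 5), (12, 14, 10), F, D, B, A)   # default routing
e_sdn = energy((5, 5, 5), (10, 10, 10), F, D, B, A)       # OpenFlow-optimised routing
saving = (F / B) * 2 * A[1] + (D / B) * (2 * A[0] + 4 * A[1])
assert abs((e_default - e_sdn) - saving) < 1e-9            # matches the saving formula
print(e_default, e_sdn, saving)
```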

4.2 High Efficiency: Efficiency Improvement

Take Hadoop, the big data processing system adopted in data center networks, as an example. Hadoop processes big data in two stages, Map and Reduce, and usually divides a Job into Tasks that are assigned to different servers and processed in parallel; the Job is done after all Tasks are processed. Hence, the processing time of a Job always depends on the Task that ends last. Because each server runs multiple Tasks at the same time, for Tasks belonging to the same Job some servers finish early while others finish late. In the extreme case, one server has not yet finished its Tasks while all Tasks on the other servers are done, so this last Task becomes the bottleneck of Hadoop's big data processing. Since the OpenFlow controller has a global view of the ground information port, it can reassign Tasks that have become the bottleneck of Job processing to idle servers, thereby greatly improving the efficiency of the ground information port.



Fig. 3. The big data processing task analysis of ground information port

We use an example to illustrate the big data processing of the ground information port. As shown in Fig. 3, each server holds several Task data blocks and spends 9 s processing a single Task. Without scheduling by the OpenFlow controller, the Job ends at 39 s; the green segment of 5 s denotes the time needed to transmit Task data block 9 to server 4. If instead Task data block 9 is processed on server 3, relying on the global view of the OpenFlow controller, the end time of the Job can be reduced to at most 38 s [11]. Therefore, the SDN-based, green and highly efficient ground information port architecture proposed here not only improves system performance but also effectively reduces energy consumption.
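The bottleneck-reassignment idea can be sketched in a few lines; the server availability times, task counts and transfer delay below are made-up example values (they do not reproduce the exact layout of Fig. 3), chosen only to show how moving one straggler task shortens the Job.

```python
# Toy model of the straggler reassignment: a Job ends when its last server
# finishes, so the controller moves one task from the bottleneck server to
# the server that becomes idle first, paying an assumed transfer delay for
# shipping the data block. All numbers are made-up example values.
TASK_TIME = 9          # seconds per Task (as in the example above)
TRANSFER_DELAY = 5     # assumed time to ship one data block to another server

initial = [3, 7, 2, 5]     # example server initial available times
tasks = [4, 2, 2, 2]       # example number of Task blocks queued per server

finish = [t0 + n * TASK_TIME for t0, n in zip(initial, tasks)]
print("job end without rescheduling:", max(finish))

src = finish.index(max(finish))      # bottleneck server
dst = finish.index(min(finish))      # earliest idle server
tasks[src] -= 1                      # move one straggler task away from the bottleneck
new_finish = [t0 + n * TASK_TIME for t0, n in zip(initial, tasks)]
rescheduled = max(max(new_finish), finish[dst] + TRANSFER_DELAY + TASK_TIME)
print("job end with rescheduling:", rescheduled)
```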

5 Ground Information Port Development Roadmap

As an important move in putting the "Eggs along the way" achievements into practice, the space-ground integrated information network will adopt a three-step strategy for future development. The first step is "building prototypes and verifying functions", which opens up the procedure from data retrieval to application services. The second step is "constructing pilots and typical demonstrations": based on the prototype verification of the green and efficient ground information port architecture, several cities are selected to build ground information port business systems. The last step is "overall layout and building the system": the ground information port will be constructed systematically, taking geographic layout and network conditions into consideration, eventually forming a radial spatio-temporal information service system.


6 Conclusions

As the core of spatio-temporal data access, processing, management and distribution in the space-ground integrated information network, the ground information port is confronted with serious problems such as unbalanced resource scheduling and task allocation and the explosive growth of power consumption. This paper proposes an SDN-based ground information port architecture. Relying on the SDN controller to uniformly allocate and schedule the whole network's resources, the architecture not only achieves the highly efficient operation of the ground information port but also effectively reduces energy consumption, pointing to an innovative way of solving the pain points that hinder the sustainable development of the big data industry.

Acknowledgements. This work was supported by National 863 Program of China (No. 2015AA015701) and NCFC (No. 91338201).

References

1. Wu W, Qin P, Feng X et al (2017) Reflections on the development and construction of space ground integration information network. Telecommun Sci 12:3–9
2. Wu M, Wu W et al (2016) Design of the overall architecture for space ground integrated information network. Satell Netw 3:30–36
3. Shen R (2006) Some thoughts of chinese integrated space-ground network system. Eng Sci 10:19–30
4. Zhang N, Zhao K, Liu G (2015) Thought on constructing the integrated space-terrestrial information network. J CAEIT 10:223–230
5. Gao PX, Curtis AR, Wong B, Keshav S (2012) It's not easy being green. SIGCOMM Comput Commun Rev
6. McKeown N, Anderson T, Balakrishnan H, Parulkar G et al (2008) OpenFlow: enabling innovation in campus networks. SIGCOMM CCR 38(2):69–74
7. Qin P, Zhou L, Huang Z et al (2016) The SoS capability model of space-ground integrated network. J CAEIT 6:629–635
8. The Interim Measures for the Management of Major Specialized Satellite Remote Sensing Data for High-Resolution Earth Observation Systems (2015)
9. Qin P, Zhou L et al (2015) Design of ground system for SCN driven by concurrent multitasking. J CAEIT 5:492–496
10. Dai B, Guan X, Huang B, Qin P et al (2017) Enabling network innovation in data center networks with software defined networking: a survey. J Netw Comput Appl 94:33–49
11. Qin P, Dai B, Huang B, Xu G (2015) Bandwidth-aware scheduling with SDN in Hadoop: a new trend for big data. IEEE Syst J 10

Marked Watershed Algorithm Combined with Morphological Preprocessing Based Segmentation of Adherent Spores

Jiaying Wang¹, Yaochi Zhao¹, Yu Wang¹, Wei Chen¹, Hui Li¹, Yugui Han², and Zhuhua Hu¹

¹ School of Information and Communication Engineering, School of Computer Science & Cyberspace Security, Hainan University, No. 58, Renmin Avenue, Haikou 570228, Hainan, China
[email protected]
² College of Intelligence and Computing, Tianjin University, No. 135 Yaguan Road, Tianjin 300350, China

Abstract. Anthracnose is one of the most serious diseases during the growth period of mango. In order to take preventive measures in time, it is indispensable to obtain accurate statistics on the distribution density of anthracnose spores on the farm, which poses challenges for the accurate instance segmentation of adherent spores. Based on the traditional watershed algorithm, which treats the image as a morphological topography and segments it by finding the lowest and highest points of that topography, we propose a marked watershed algorithm combined with morphological preprocessing to realize the segmentation of adherent spores. Firstly, the spore images are preprocessed with morphological techniques. Then the gradient values of the spore images are calculated, and the spores are segmented in the gradient image by the watershed algorithm with foreground and background marks. The experimental results show that our proposal has better segmentation performance for adherent spores than the morphological method and level set evolution.

Keywords: Pests detection · Adherent spores segmentation · Watershed algorithm · Level set evolution

1 Introduction

The traditional method of mango disease detection, which mainly relies on the visual observation of professionals, is not only labor-intensive but also places high demands on the observer's professional skills and experience, whereas computer vision based automatic detection of mango disease is efficient and objective. In recent years, computer vision based segmentation of crop disease spore images has been studied extensively. For example, Li et al. [1] realized the automatic counting and labeling of wheat stripe rust spores by using the K-means clustering algorithm, morphological modification and the watershed algorithm. The information of the lesion images was used to improve the energy function of the level set algorithm by Hu et al. [2]; they segmented images of plant lesions with complex


background, and finally obtained segmentation results with good applicability. Camargo et al. [3] achieved accurate segmentation of banana leaf lesion images by an optimal threshold method. However, since the spore images captured in the field may actually contain spores that adhere to each other, it is difficult to separate the adherent spores with the existing methods, which are based on independent spores and aim only to divide the images into spore regions and background regions. For this reason, the counting accuracy is reduced. To address these problems, we propose a marked watershed algorithm combined with morphological preprocessing. Firstly, the gradient values of the preprocessed images are calculated. Then the nearly adherent spores are initially separated by a morphological-erosion-based foreground mark on the gradient images, which preserves the important contours of the image regions and alleviates over-segmentation to some extent. After that, the background is marked by the distance transform and the watershed transform. In addition, the image gradient is modified by imposing the foreground and background marks as local minima, which further eliminates the over-segmentation phenomenon and enhances the robustness of the algorithm. Finally, we apply the watershed transform to the above results to achieve accurate segmentation.

2 Materials and Methods

2.1 Data Acquisition

The anthracnose spores in this paper were collected with culture dishes from the mango plantation at the high-quality and high-yield mango demonstration base in Baoping Village, Shiyuetian Town, Changjiang County, Hainan Province. After collection, the spores were cultured for 2–3 days in the laboratory and then observed and photographed under a microscope (model SDPTOP E5) to obtain the experimental images of anthracnose spores. The acquisition devices and the microscopic images obtained are shown in Figs. 1 and 2, respectively.

(a) Spore culture vessel with medium

(b) Microscope for observation and photographing

Fig. 1. Acquisition devices for experimental data


Fig. 2. Microscopic image of the spores of mango anthracnose

2.2 The Watershed Algorithm Combined with Morphology Algorithm

2.2.1 The Brief Framework of the Proposed Method

The concrete process of the proposed method is shown in Table 1.

Table 1. The marked watershed algorithm combined with pretreatment

Input: image X, structuring element B
(1) gray scale transformation: 0.299·R + 0.587·G + 0.114·B; the Otsu threshold method: t = Max[w_0(t)·(u_0(t) − u)^2 + w_1(t)·(u_1(t) − u)^2]; Gaussian filtering: G(u, v) = \frac{1}{2\pi\sigma^2} \exp(−(u^2 + v^2)/(2\sigma^2)); bottom-hat transformation: Bhat(f) = (f \bullet b) − f;
(2) gradient computation: G(x, y) = dx(i, j) + dy(i, j), dx(i, j) = I(i + 1, j) − I(i, j), dy(i, j) = I(i, j + 1) − I(i, j);
(3) morphological erosion: X \ominus B = \{x : B + x \subseteq X\};
(4) distance transform: D[i][j] = min\{Distance[(i, j), (x, y)]; (x, y) \in B\}, Distance[(i, j), (x, y)] = \sqrt{(i − x)^2 + (j − y)^2}; watershed transform: C_h(M) = \{p \in C(M) \mid f(p) \le h\} = C(M) \cap T_{t \le h}(f);
(5) gradient image modification: f_{min}(x, y) = g(x, y) \cdot f_m(x, y);
(6) perform watershed on the gradient image
Output: the resulting images

The algorithm flow shown in Table 1 is as follows: (1) pre-treat the preliminary images with morphological operations; (2) transform the binary images with the gradient transform [4]; (3) mark the foreground of the binary image by morphological erosion [4, 5], and accordingly define a height threshold, thereby marking the local minimum areas of the foreground pixels in the binary image; (4) mark the background of the binary images: perform the watershed transform and the distance transform on the binary images, then find the ridge lines between the foreground-marked areas, which serve as the background mark; (5) modify the gradient images by minima imposition; (6) segment the modified gradient images with the watershed algorithm.

2.2.2 The Concrete Description of the Proposed Method

(1) Morphological preprocessing

The first step transforms the color images to gray scale images using the weighted average method [4]. In the second step, binary images are obtained by the OTSU (maximization of inter-class variance) algorithm [6], which simplifies the later analysis of the images and improves the processing speed. In the third step, we smooth the images with Gaussian filtering, which removes noise. In the fourth step, we reduce non-uniform illumination by the bottom-hat transformation [7]. The effects of the preprocessing are shown in Fig. 3.

(a) the gray scale image of spores (b) the binary image of spores (c) the image after Gaussian filtering (d) the image after bottom-hat transformation

Fig. 3. Preprocessing results
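A short OpenCV sketch of these four preprocessing operations, reproducing the four results shown in Fig. 3, is given below; the file name and kernel sizes are placeholders, not values from the paper.

```python
import cv2

# Sketch reproducing the four preprocessing results shown in Fig. 3
# (grey-scale image, Otsu binary image, Gaussian-filtered image and
# bottom-hat transformed image); the file name and kernel sizes are
# placeholders, not values from the paper.
bgr = cv2.imread("spore_microscope.png")

gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)                       # (a) 0.299R + 0.587G + 0.114B
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY | cv2.THRESH_OTSU)     # (b) Otsu binarisation
smoothed = cv2.GaussianBlur(gray, (5, 5), 0)                       # (c) Gaussian de-noising
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (31, 31))
bottom_hat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)    # (d) closing(f) - f, evens out
                                                                   #     non-uniform illumination
for name, img in [("gray", gray), ("binary", binary),
                  ("gaussian", smoothed), ("bottom_hat", bottom_hat)]:
    cv2.imwrite(f"spores_{name}.png", img)
```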

(2) The marked watershed algorithm combined with pretreatment

The watershed algorithm is an image segmentation algorithm of mathematical morphology based on topological theory. However, the result of the classical watershed algorithm is often over-segmented because of noise and the multiple local minima that may exist. Aiming at this limitation of the classical watershed algorithm, we propose the marked watershed algorithm, which can segment the adherent spores effectively. Because the watershed algorithm depends strongly on the image gradient, we compute the gradient image from the preprocessed image and then perform threshold processing on the gradient image, which effectively reduces over-segmentation. The marked watershed algorithm combined with pretreatment performs this threshold processing by marking the foreground and the background. First, the "local minimum areas" are extended by a threshold, which draws additional pixels similar to the minimum areas into the extended "local minimum


areas", and these extended local minimum areas serve as the foreground marker. Then the background must be marked. Ideally, the background marker should not be too close to the target objects, so we obtain the dividing lines between adjacent regions by applying the distance transform and the watershed transform to the binary images and use them as the background marker. To further restrain over-segmentation, this paper uses minima imposition [8–10] to modify the gradient images with the marked images. Finally, we apply the watershed transform to the modified gradient images. The foreground is marked by morphological erosion: this paper erodes the binary spore images with a disk structuring element. After morphological erosion, the shape of each spore is preserved, the nearly adherent spores are effectively separated and part of the noise is removed. The background is marked by combining the Euclidean distance transform with the watershed transform: the separating lines obtained from the distance transform and the watershed transform are taken as the background mark, while the foreground mark is obtained by the morphological operations. Minima imposition is then performed on the gradient image g(x, y) with the marked image f_m(x, y): f_min(x, y) is 0 where (x, y) is marked, and otherwise equals the value of the gradient function g(x, y). Finally, after the watershed transform, we obtain the segmented image.
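A simplified OpenCV sketch of this marker-based segmentation is given below. It follows the spirit of the description (erosion for the foreground markers, a dilation-based sure-background region instead of the paper's distance-transform ridge, and cv2.watershed flooding from the markers, which plays the role of the minima imposition); the file names, kernel size and iteration counts are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

# Simplified marker-based watershed: erosion gives the foreground markers
# (as in the paper), a dilation-based sure-background region replaces the
# paper's distance-transform ridge, and cv2.watershed floods from the
# markers, which plays the role of the minima imposition. File name,
# kernel size and iteration counts are assumptions.
src = cv2.imread("spores_binary.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(src, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
sure_fg = cv2.erode(binary, kernel, iterations=3)      # foreground markers separate adherent spores
sure_bg = cv2.dilate(binary, kernel, iterations=3)     # everything outside this is sure background
unknown = cv2.subtract(sure_bg, sure_fg)               # region the flooding has to decide

_, markers = cv2.connectedComponents(sure_fg)          # one integer label per spore marker
markers = markers + 1                                  # reserve label 1 for the background marker
markers[unknown == 255] = 0                            # 0 = unlabelled, to be flooded

color = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)       # cv2.watershed expects a 3-channel image
labels = cv2.watershed(color, markers)                 # watershed ridge pixels are labelled -1
segmented = (labels > 1).astype(np.uint8) * 255
cv2.imwrite("spores_segmented.png", segmented)
```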

3 Experimental Results and Analysis

3.1 Experimental Results

Figure 4 shows the intermediate images produced during the processing of the improved watershed algorithm proposed in this paper. From Fig. 5 we can see that the adherent spores in the parts marked by circles have been precisely segmented.

(a) preprocessed spore image (b) original image gradient (c) foreground markers (d) background markers (e) modified image gradient (f) image segmented by watershed algorithm

Fig. 4. Segmentation result by marked watershed algorithm


Fig. 5. The accurate segmentation results of marked approximately adherent spores

3.2 Result Analysis

Table 2 compares the segmentation effect of the marked watershed algorithm with morphological segmentation and level-set segmentation. The proposed algorithm takes the image edge and detail information into account and preliminarily separates the approximately adherent spores by morphological processing, retaining the important contours of each region while removing fine details and noise. The comparison in Table 2 clearly shows that the marker-based watershed algorithm proposed in this paper segments approximately adherent spores more accurately.

Table 2. The comparison of segmentation effect between the watershed algorithm based on markers and other algorithms

The image segmented by the morphological method: the approximately adherent spores are not segmented accurately
The image segmented by the level set: the approximately adherent spores are not segmented accurately
The image segmented by the marked watershed algorithm: the approximately adherent spores are segmented accurately

4 Conclusion

In practice, the spores collected in the field may come into contact with each other, and such spores are difficult to segment with common methods. To solve this problem, this paper studies the segmentation of mango-disease spores in depth and proposes an improved watershed algorithm combined with morphological erosion. The experimental results show that our algorithm can accurately extract spore edges, achieve accurate segmentation of approximately adherent spores, and has a strong noise-suppression ability, which weakens the over-segmentation phenomenon. Finally, we compared the proposed marked watershed algorithm with other algorithms in terms of segmentation effect and verified that it performs well on adherent spores.

Acknowledgements. This research was supported by Hainan Province Natural Science Foundation, China (619QN195, 618QN218), the National Natural Science Foundation of China (61963012), Key R&D Project of Hainan Province, China (ZDYF2018015), and Collaborative Innovation Fund Project of Tianjin University-Hainan University (HDTDU201907).


References 1. Li XL, Ma ZH, Sun ZY et al (2013) Automatic counting for trapped urediospores of Puccinia striiformis f. sp. tritici based on image processing. Trans Chin Soc Agric Eng 29 (2):199–206 2. Hu QX, Tian J, He DJ, Ning JF (2012) Segmentation of plant lesion image using improved C–V model. Trans Chin Soc Agric Machin 43(05):157–161 3. Camargo A, Smith JS (2009) An image-processing based algorithm to automatically identify plant disease visual symptoms. Biosyst Eng 102:9–21 4. Hu ZH, Zhao YC (2019) The application and research of computer vision and machine learning in wisdom agriculture. Harbin Institute of Technology Press 5. Acharjya PP, Ghoshal D (2013) The role of structuring element in morphological image segmentation. Int J Sci Eng Res 4(7):2560–2569 6. Chen YZ, Huang YF (2009) Survey of Ncut in application of image segmentation. Comput Technol Dev 19(1):228–230 7. Zhao W, Wang XC, Li XH (2010) Image segmentation method based on top-hat transformation and FCM clustering. Comput Technol Dev 20(8):52–55 8. Wang PW, Wu XQ, Zhang MC (2006) Watershed segmentation based on multiscale morphological fusion. J Data Acquisit Process 21(4):398–402 9. Masood S, Sharif M, Masood A (2015) A survey on medical image segmentation. Curr Med Imag Rev 11(1):13–14 10. Bhargavi K, Jyothi S (2014) A survey on threshold based segmentation technique in image processing. Int J Innov Res Dev 3(12):234–239

Data Storage Method for Fast Retrieval in IoT

Juan Chen1,2(B), Lihua Yin1, Tianle Zhang1, Yan Liu3, and Zhian Deng4

1 Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China {chenjuan,yinlh,tlezhang}@gzhu.edu.cn
2 Pengcheng Laboratory, Cyberspace Security Research Center, Shenzhen 518000, China
3 Suzhou Institute of Biomedical Engineering and Technology (CAS), Suzhou 215163, China [email protected]
4 College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China [email protected]

Abstract. Nowadays, data query in the Internet of Things (IoT) suffers from high latency as increasing amounts of data are collected by devices every day. In this paper, we focus on data retrieval and present a novel storage method for fast retrieval by managing hot data separately. Experimental results show that our storage method reduces the query latency by at least one order of magnitude compared with an existing typical method.

Keywords: IoT · Data storage · Fast retrieval

1 Introduction

The Internet of Things (IoT) connects real-world objects to the Internet, allowing objects to collect, process and communicate data without human intervention [1]. The IoT's vision is to create a better world for humans, where objects (physical objects, devices, things) around us comprehend our preferences and likes and act appropriately without explicit instructions [2]. With recent advances in low-cost IoT devices, IoT technologies have gained momentum in connecting objects together. Devices such as sensors and RFID tags are expected to be attached to objects around us, which results in a vast amount of usable data. The data people are interested in carries great value about the monitored environment. Data retrieval is an excellent way to


find such valuable data. Barnaghi and Sheth [3] also emphasized the importance of search and discovery techniques in IoT. To accelerate data queries, various storage, indexing, and ranking techniques have been presented to optimize query performance in large-scale and dynamic IoT networks. Nevertheless, given the increasing number of resource-limited devices, one challenging issue remains unsolved: how to reduce the high query latency caused by the massive amount of data in IoT? To address this issue, we present a novel data storage method to speed up queries in IoT. The method divides the storage space into a general area for all data and a hot area for frequently requested data. The query latency for accessing hot data is significantly reduced if the hot data is stored efficiently. As a result, a low average query latency is achieved, since most queries request data in the hot area.

2 Storage Method for Fast Retrieval

The idea of the data storage method consists of two parts. First, the method identifies frequently requested data, also known as hot data. Then, it manages the hot data separately by dividing the storage space into a general area and a hot area; the hot data is stored in the hot area. Since the hot data is relatively small and requested frequently, the query response time over the hot data is much lower than over the rest of the data, so the average query latency is largely reduced. Specifically, for each query we first check whether it is high-frequency; a high-frequency query is answered from the hot storage area, and other queries are answered from the general area. However, one challenging issue needs to be solved: how to find the hot data. Hot data is the data that is frequently requested, so we first find the high-frequency queries by statistics and then store the data requested by these queries in the hot area. A naive approach would count queries directly, but this may overlook some high-frequency queries. We observe that data is stored according to its attributes, because the attributes, rather than the concrete semantics, are the critical characteristics of a query; if queries with the same attributes but different semantics are counted separately, some high-frequency queries may be omitted. To address this issue, we abstract attributes from the queries to support the high-frequency query statistics and generate query-patterns for the queries.
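As an illustration of the two-area idea (not the authors' implementation), a toy Python sketch might look as follows; all class and method names are ours.

```python
class TieredStore:
    """Sketch of the two-area storage idea: frequently requested ("hot") data
    is kept in a small, fast area and served first; everything else falls back
    to the general area. All names here are illustrative."""

    def __init__(self, hot_patterns):
        self.hot_patterns = set(hot_patterns)  # high-frequency query-patterns
        self.hot_area = {}                     # small, fast storage (e.g. memory)
        self.general_area = {}                 # full data set (e.g. disk index)

    def insert(self, pattern, key, record):
        self.general_area[key] = record
        if pattern in self.hot_patterns:       # replicate hot data separately
            self.hot_area[key] = record

    def query(self, pattern, key):
        # High-frequency queries are answered from the hot area when possible.
        if pattern in self.hot_patterns and key in self.hot_area:
            return self.hot_area[key]
        return self.general_area.get(key)
```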

3 High-Frequency Queries Statistics

A query-pattern takes the form 'constraint attributes → request attributes', as a query generally expresses what is requested under certain constraints. Take the query 'The quiet restaurant?' for example: what people request is a 'restaurant' under one constraint, 'quiet'. Specifically, the request attribute is 'location' and the constraint attribute is 'sound'. The 'constraint attributes' are the attributes related to the constraints, and the 'request attributes' are the attributes related to the request. We use the following steps to generate query-patterns.


– We split each query sentence into words or phrases using typical methods [4], obtaining a query set {word1/phrase1, word2/phrase2, ...}.
– We construct a request attribute table and a constraint attribute table from the statistics of a large number of queries. The former is used to find whether a word/phrase in the query set is related to a request attribute, and the latter whether it is related to a constraint attribute. For example, 'quiet' is related to a constraint attribute if it is found in the constraint attribute table, and 'restaurant' is related to a request attribute if it is found in the request attribute table.
– We generate the query-pattern for each query in the form 'constraint attribute(s) → request attribute(s)'. Specifically, we use a lowercase letter to represent a constraint attribute and an uppercase letter to represent a request attribute.
– We find the high-frequency queries by statistics over the query-patterns. For each query-pattern, we count the number of queries it represents as its appearance frequency and then select the top β query-patterns as the high-frequency query-patterns.
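A minimal Python sketch of this pattern-extraction and counting procedure is given below; the attribute tables, the word splitter and the pattern syntax are simplified illustrations.

```python
from collections import Counter

# Illustrative attribute tables; in practice they are built from query statistics.
REQUEST_ATTRS = {"restaurant": "Location"}
CONSTRAINT_ATTRS = {"quiet": "sound"}

def query_pattern(query):
    """Map a query to a pattern 'constraint attribute(s) -> REQUEST ATTRIBUTE(S)'."""
    words = query.lower().strip("?").split()   # stand-in for a real splitter [4]
    constraints = sorted({CONSTRAINT_ATTRS[w] for w in words if w in CONSTRAINT_ATTRS})
    requests = sorted({REQUEST_ATTRS[w] for w in words if w in REQUEST_ATTRS})
    return ",".join(constraints) + " -> " + ",".join(requests)

def top_patterns(queries, beta):
    """Count query-patterns and keep the top-beta ones as high-frequency."""
    counts = Counter(query_pattern(q) for q in queries)
    return [p for p, _ in counts.most_common(beta)]

# Example: 'The quiet restaurant?' maps to the pattern 'sound -> Location'.
print(query_pattern("The quiet restaurant?"))
```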

4 Experimental Results


Figure 1 shows the average query latency (in ms) for 500 queries on the 2D dataset. As can be seen, the query latencies of both methods grow as the number of records increases. We note that DSMFR outperforms PH-tree; more importantly, DSMFR is about 20x faster than PH-tree.

Fig. 1. Query performance on 2D data (query latency in ms versus number of records ×10^4; curves: PH-tree and DSMFR).

Figure 2 studies the query performance on real-world 3D traffic data. Similar to Fig. 1, the query latencies of both methods grow with an increasing number of records, but DSMFR has a much smaller query latency than PH-tree. Overall, we note that by combining FR, the query performance of PH-tree is significantly improved on both the 2D and 3D datasets.

Fig. 2. Query performance on 3D data (query latency in ms versus number of records ×10^4; curves: PH-tree and DSMFR).

5 Conclusion

In this paper, we focus on reducing query latency for data retrieval in IoT and present a storage method to speed up data retrieval. Our method can be used in combination with existing indexing and ranking techniques to reduce their query latency. Experiments have verified the feasibility and performance of our storage method. Acknowledgements. This work was supported by the National Key research and Development Plan (No. 2018YFB0803504) and the National Natural Science Foundation of China (No. 61871140, No. 61572153, No. 61702220 and No. 61702223).

References
1. Kizilkaya B, Caglar M, Al-Turjman F, Ever E (2019) Binary search tree based hierarchical placement algorithm for IoT based smart parking applications. Internet Things 5:71–83
2. Ramachandran N, Perumal V, Gopinath S, Jothi M (2018) Sensor search using clustering technique in a massive IoT environment. In: Industry interactive innovations in science, engineering and technology. Springer, pp 271–281
3. Barnaghi P, Sheth A (2016) On searching the internet of things: requirements and challenges. IEEE Intell Syst 31(6):71–75
4. Goh C-L, Sumita E (2011) Splitting long input sentences for phrase-based statistical machine translation. In: Proceedings of the 17th annual meeting of the association for natural language processing, pp 802–805

Equivalence Checking Between System-Level Descriptions by Identifying Potential Cut-Points

Jian Hu1(B), Guanwu Wang1, Guilin Chen1, Yun Kang1, Long Wang1, and Jian Ouyang2

1 The 63rd Research Institute, National University of Defense Technology, Nanjing 210000, China [email protected]
2 The School of Communication and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210000, China

Abstract. Symbolic simulation is one of the most important equivalence checking methods, but it cannot deal with large designs due to the limited capacity of BDD and SAT/SMT solvers. The cut-points technique is used together with symbolic simulation to verify large designs. However, the existing approaches need mapping information, which is not easily obtained in the verification process. This paper presents a new equivalence checking method for system-level descriptions that works without variable mapping information. Our method first randomly simulates the designs under verification to generate a set of potential cut-points; if some output variables do not belong to this set, it returns not equivalent. Second, it selects and removes a potential cut-point pair from the set and slices the programs accordingly. Third, it symbolically simulates the slices and compares the results; if the corresponding potential cut-points are really equivalent, they are put into the equivalent set. Finally, the process is repeated until the potential cut-point set is empty or some outputs are found not equivalent. Our method can check the equivalence of designs without mapping information and decreases the verification size. The experimental results show the effectiveness and efficiency of the proposed method.

Keywords: Symbolic simulation · Equivalence checking · Formal verification · Cut-point

1 Introduction

In recent years, electronic system-level design has been extensively adopted in VLSI and SoC (System-on-a-Chip) designs. Electronic system-level design offers several advantages, such as:


– Faster simulation
– Faster implementation and architecture exploration
– Faster verification.

System-level design allows designers to develop VLSI or SoC designs more efficiently, reducing design cycles and increasing productivity. In general, verification and debugging occupy 50–80% of the entire design period. In addition, functional verification in system design is important because designers sometimes need to return to the system level when a functional error is discovered at a later design stage. This means that effective verification methods for system-level designs can significantly increase design productivity by detecting more errors at the high level. There are a few related works addressing the equivalence checking of high-level designs. A simulation-based equivalence checking method is proposed in [1], where properties and mutations are used as measurements of the stimuli for equivalence checking. Since there is no coverage analysis in the checking process, the stimuli cannot be guaranteed to completely validate the designs. To solve this problem, a coverage-directed simulation equivalence checking method is proposed in [2]; this technique exploits the similarities between high-level designs to reuse much of the verification effort. To accelerate the simulation process, a multi-threaded equivalence checking method is proposed in [3]; however, since there is no good way to select the variables to compare, the applicability of this method is limited. Simulation-based methods are easy to use and can quickly spot easy-to-find bugs, but they are far from covering a reasonable number of states within feasible execution time. Among formal-technique-based methods, [4] proposed a rule-based equivalence checking method for system-level design descriptions. Rule-based equivalence checking proves the equivalence of two design descriptions by applying a set of local equivalence rules in a bottom-up manner. By regarding each equivalence rule as a pattern in a graph, the rule-based equivalence checking can be viewed as a graph matching problem. Using the extended system dependence graph ExSDG [5] as an internal representation, an efficient implementation of the equivalence checking method is achieved. Matsumoto et al. [6] proposed an equivalence checking method between C descriptions: they compare the C codes textually and verify the differences by symbolic simulation, but the verification efficiency depends on the similarity of the codes. Compared with simulation-based methods, the previously proposed symbolic simulation methods can verify designs completely, but they need structural mapping information and cannot verify complex designs due to state explosion and the capacity limitations of solvers. To overcome these limitations, we introduce another method utilizing random simulation and cut-points techniques. The contributions of this paper are twofold. First, random simulation is used to find the potential cut-points, which divide the verification problem and decrease the verification scale. Second, our method can verify designs without mapping information, using the simulation results.


The remainder of this paper is organized as follows. Section 2 gives the preliminary of our method. The equivalence checking algorithm is described in Sect. 3. Section 4 gives the experimental results. Section 5 presents our conclusions.

2 Preliminary

2.1 Symbolic Simulation

Symbolic simulation has become one of the most common techniques in hardware verification. Since variables in the descriptions are treated as word-level symbols rather than bits, symbolic simulation can verify large descriptions more efficiently than traditional logic simulation. Symbolic simulation comes in two flavors: (i) all result expressions are represented in terms of the inputs and the symbolic expressions are verified by BDDs; (ii) new variables are introduced and the results are represented as a conjunctive normal form, validated by SAT or SMT. In this paper, the symbolic simulation results are expressed in conjunctive normal form (CNF). A single symbolic simulation run can express multiple traditional simulation runs. For example, the symbolic simulation result of E1 = A + B, E2 = C + D, E3 = E1 + E2 is (E1 = A + B) ∧ (E2 = C + D) ∧ (E3 = E1 + E2). If A, B, C, D all have n bits, one symbolic simulation represents 2^{4n} traditional simulations. If the resulting variable is related to n assignment statements, the symbolic simulation produces the CNF expression Sim_exp = ∧_{i=1}^{n} s_i, where s_i is the i-th assignment statement of the outcome variable.

2.2 Program Slice

Program slicing [7] extracts the statements that affect the value of a variable at a particular location. A slice is computed with respect to a slicing criterion, which consists of a variable name and the code location of that variable [8]: if the variable x is on the i-th line, <i, x> is the slicing criterion. This paper uses slices to extract the statements related to potential cut-points. Program slicing comes in two forms, backward slicing and forward slicing; our approach is based on backward slicing.

Definition 1. Backward slice: the backward slice S of <i, x> in program P consists of the statements that affect the value of variable x at line i.

Since the backward slice extracts the statements relevant to the variable, the equivalence of a potential cut-point can be verified on the symbolic representation of the sliced code, which decreases the verification scale.
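To illustrate the idea, the following toy Python sketch computes a backward slice over straight-line code using only data dependence; it is not the Frama-C-based slicer used later in the paper.

```python
def backward_slice(stmts, criterion):
    """Toy backward slice over straight-line code.

    `stmts` is a list of (target, used_vars) tuples in program order and
    `criterion` is (line_index, variable) -- the slicing criterion <i, x>.
    Control dependence is ignored; this only follows data dependence."""
    line, var = criterion
    needed = {var}
    selected = []
    for i in range(line, -1, -1):          # walk backwards from the criterion
        target, used = stmts[i]
        if target in needed:               # statement defines a needed variable
            selected.append(i)
            needed.discard(target)
            needed.update(used)            # its operands become needed as well
    return sorted(selected)

# y = a + b; t = 7; z = y * c  -> the slice for <2, 'z'> keeps lines 0 and 2 only.
prog = [("y", {"a", "b"}), ("t", set()), ("z", {"y", "c"})]
print(backward_slice(prog, (2, "z")))      # [0, 2]
```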

2.3 Program Dependence Graph

Program dependence graph (PDG) [9] in computer science is a representation, using graph notation that makes data dependencies and control dependencies explicit. Program slicing technique is mainly based on the PDG. The nodes of


the PDG represent program statements, and the edges represent dependencies in the program (control dependence and data dependence). Figure 1 shows an example of the program dependence graph for a C program (blue edges for data dependence and black edges for control dependence).

Fig. 1. Program dependence graph of C code

Algorithm 1. Equivalence Checking Algorithm
1: Input: programs p0, p1; number of tests n
2: Output: Equivalent or Not equivalent
3: Peqset ← Randomsim(p0, p1, n)
4: n ← n − 1
5: while n > 0 do
6:   Randomsim(p0, p1, n)
7:   delete unequal potential equivalent points in Peqset
8:   n ← n − 1
9: end while
10: Eqset ← ∅
11: if any output not in Peqset then
12:   return Not equivalent
13: end if
14: while Peqset ≠ ∅ do
15:   (p1, p2) ← select and remove the potential cut-point pair nearest to the inputs in Peqset
16:   c1 ← Proslicing(p1)
17:   c2 ← Proslicing(p2)
18:   s1 ← Sym_simulate(p1, c1, Eqset)
19:   s2 ← Sym_simulate(p2, c2, Eqset)
20:   if Compare(s1, s2) then
21:     insert the equivalent points into Eqset
22:   else
23:     if p1 and p2 in O then
24:       return Not equivalent
25:     end if
26:   end if
27: end while
28: return Equivalent

3 Equivalence Checking Algorithm

Definition 2. Potential cut-point: <x, i> and <y, j> are the variables at locations i and j, and o(x, i, v_k) and o(y, j, v_k) are the values of x and y under the input v_k. If o(x, i, v_k) = o(y, j, v_k), then (<x, i>, <y, j>) is a pair of potential cut-points.

Theorem 1. Let O1 and O2 be the output sets of programs P1 and P2, and let y1 ∈ O1 and y2 ∈ O2 be corresponding output variables. If y1 = y2 for every input v_r ∈ I, then P1 ≡ P2.

Theorem 2. Let O1 and O2 be the output sets of programs P1 and P2, let y1 and y2 be corresponding outputs, and let Peqset be the collection of potential cut-points. If (<y1, i>, <y2, j>) ∉ Peqset, then P1 ≢ P2.

Proof. Let I be the input set, y1 and y2 the corresponding output variables, and P1, P2 the program slices of y1 and y2. If (<y1, i>, <y2, j>) ∉ Peqset, then ∃y1 ∈ O1, ∃y2 ∈ O2, ∃v_r ∈ I such that o(y1, i, v_r) ≠ o(y2, j, v_r); hence, by Theorem 1, P1 ≢ P2.

The equivalence checking algorithm is presented in Algorithm 1. The algorithm takes two designs P0 and P1 as inputs, representing the design before and after scheduling, respectively, and examines whether the two designs are equivalent. The details of the algorithm are described in the following subsections.

3.1 Generation of Potential Cut-Points

Since the designs under test have no variable name mapping, the procedure Randomsim() uses random simulation to generate the potential cut-point set Peqset. However, insufficient simulation may cause problems: (i) wrong cut-points, and (ii) too many cut-points with identical values. To solve these problems, we partition the input space and randomly select an input in each partition to simulate the designs under test. Only the variables that produce equal values in every simulation are put into the set of potential cut-points. This greatly reduces the possibility of generating falsely equivalent potential cut-points. If there is an output that does not belong to the potential cut-point set, the program returns not equivalent (Theorem 2).
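A simplified Python sketch of this random-simulation step is shown below; the trace representation and partitioning scheme are illustrative assumptions.

```python
import random

def potential_cut_points(run_p0, run_p1, n_tests, input_ranges):
    """Sketch of Randomsim(): execute both designs on random inputs drawn from
    each partition of the input space and keep variable pairs whose observed
    values agree on every test. run_p0/run_p1 return {variable: value} traces;
    all names are illustrative."""
    candidates = None
    for lo, hi in input_ranges:                      # one partition at a time
        for _ in range(n_tests):
            v = random.randint(lo, hi)
            trace0, trace1 = run_p0(v), run_p1(v)
            matches = {(a, b) for a, x in trace0.items()
                              for b, y in trace1.items() if x == y}
            candidates = matches if candidates is None else candidates & matches
    return candidates or set()
```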

3.2 Selection of Potential Cut-Points and Program Slicing

After the potential cut-points are generated, they need to be verified to determine whether they are really equivalent. Because of the data dependencies, the nodes close to the inputs must be verified first. Therefore, according to the topological order of the PDG, the potential cut-points closest to the inputs are selected and removed. Then, the selected potential cut-points are sliced using the function Proslicing().

3.3 Symbolic Simulation

After obtaining the slices of the cut-points, the Sym_simulate() function symbolically simulates the slices. Symbolic simulation generates a CNF over the SSA [10] form of the potential cut-points, and the CNF expression is then converted to SMT format. The Compare() function uses the SMT solver to validate these symbolic expressions: if the solver returns "sat", the corresponding potential cut-points are not equivalent; if it returns "unsat", they are equivalent. The equivalent cut-points are put into Eqset, and the checking process repeats until either an output is found not equivalent (return 'Not equivalent') or Peqset is empty (return 'Equivalent').
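The following Python sketch illustrates the Compare() step with the Z3 SMT solver on two hand-written slice expressions; in the actual flow the expressions come from the SSA form of the sliced programs.

```python
from z3 import BitVec, Solver, unsat

def compare(slice0, slice1, width=32):
    """Sketch of Compare(): two slices are symbolically simulated into
    word-level expressions over the same inputs; the pair of potential
    cut-points is equivalent iff 'outputs differ' is unsatisfiable."""
    a, b = BitVec('a', width), BitVec('b', width)

    # Symbolic simulation results of the two slices (illustrative expressions).
    out0 = slice0(a, b)
    out1 = slice1(a, b)

    s = Solver()
    s.add(out0 != out1)
    return s.check() == unsat        # unsat -> the cut-points are equivalent

# (a + b) * 2 is equivalent to (a << 1) + (b << 1) for 32-bit words.
print(compare(lambda a, b: (a + b) * 2, lambda a, b: (a << 1) + (b << 1)))
```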

4 Experiment Results

Our equivalence checking algorithm has been implemented in Python, and all the experiments have been conducted on a 2.5 GHz Intel i5 workstation with 8 GB RAM. The symbolic simulator and FSMD extractor are implemented based on Pycparser [11]. Frama-C [12] is used to slice the programs and construct the PDG, and Z3 [13] is used as our SMT solver. Some of the benchmarks are from the publicly available SPARK example suite, such as CONTINUE and MINSORT; some are from high-level synthesis benchmarks [14], such as GCD and FIR; the rest are from the SystemC 2.3.0 source code, like FFT. The benchmarks include not only control-intensive designs, such as GCD and MINSORT, but also data-intensive designs, such as FFT. The system-level descriptions of these benchmarks are implemented in C. We use the high-level synthesis tool SPARK [15] to optimize the code and produce an optimized C code without variable name mapping.

Table 1. Characteristics of the designs

Design | Lineori | Lineopt | Inputori | Inputopt | Outputori | Outputopt
FFT | 161 | 182 | 2 | 2 | 2 | 2
FIR | 86 | 78 | 1 | 1 | 1 | 1
GCD | 72 | 91 | 2 | 2 | 1 | 1
CONTINUE | 95 | 128 | 1 | 1 | 1 | 1
MINSORT | 176 | 209 | 1 | 1 | 1 | 1

Table 1 shows the characteristics of the designs under test. The first column is the name of the design. The second and third columns are the numbers of lines of code before and after scheduling, the fourth and fifth columns are the numbers of input variables before and after scheduling, and the sixth and seventh columns are the numbers of output variables before and after scheduling.


Table 2. Experimental designs

Design | Clausesori | Clausesopt | Cut-points | Basic (ms) | Our method (ms) | Equivalent
FFT | 35 | 11 | 20 | 1043 | 503 | Eqv
FIR | 65 | 5 | 32 | 1571 | 413 | Eqv
GCD | 41 | 9 | 8 | 817 | 461 | Not Eqv
CONTINUE | 19 | 7 | 5 | 885 | 426 | Eqv
MINSORT | 11 | 3 | 5 | 629 | 372 | Eqv

Table 2 shows the comparison between our method and the traditional symbolic simulation method. The first column is the name of the design under test. The second and third columns are the maximum numbers of clauses produced by the traditional symbolic simulation method and by our method, respectively. The fourth column is the number of cut-points in the program, the fifth column is the verification time required by traditional symbolic simulation, and the sixth column is the verification time of our method. The last column is the result of the equivalence checking: Eqv indicates that the checked designs are equivalent, and Not Eqv indicates that they are not. From the second and third columns of Table 2, our method generates fewer clauses during verification thanks to the cut-points, which makes it more scalable than traditional symbolic simulation. At the same time, the potential cut-points provide mapping information between variables, which makes our method more efficient than traditional symbolic simulation by avoiding blind comparisons.

5 Conclusion

In this paper, a new system-level equivalence checking method based on identifying potential cut-points is proposed. The method can effectively divide and verify designs without mapping information: random simulation is used to generate the potential cut-points, and symbolic simulation is used to check the designs under test. The experimental results confirm the efficiency and effectiveness of our method. In the near future, we intend to apply our method to real industrial designs to demonstrate its scalability.

Acknowledgements. This work was supported by the National Natural Science Foundation of China (61902421).


References 1. Bombieri N, Fummi F, Pravadelli G (2008) RTL-TLM equivalence checking based on simulation. In: Proceedings of design and test symposium (EWDTS), pp 214– 217 2. Hu J, Li T, Li S (2015) Equivalence checking between SLM and TLM using coverage directed simulation. Front Comput Sci (FCS) 9(6):934–943 3. Groβe D, Groβ M, Khne U, Drechsler R (2011) Simulation-based equivalence checking between System C models at different levels of abstraction. In: Great lakes symposium on VLSI, pp 223–228 4. Shankar S, Fujita M (2008) Rule-based approaches for equivalence checking of SpecC programs. In: ACM/IEEE international conference on formal methods and models for codesign, pp 39–48 5. Nishihara T, Ando D, Matsumoto T, Fujita M (2007) ExSDG: unified dependence graph representation of hardware design from system level down to RTL for formal analysis and verification. In: Proceedings international workshop of logic and synthesis, pp 83–90 6. Matsumoto T, Nishihara T, Kojima Y, Fujita M (2009) Equivalence checking of high-level designs based on symbolic simulation. In: International conference on communications, circuits and systems, pp 1129–1133 7. Weiser M (1982) Programmers use slices when debugging. Commun ACM 25(7):446–452 8. Tip F (1994) A survey of program slicing techniques, CWI (Centre for Mathematics and Computer Science) 9. Horwitz S, Reps T, Binkley D (1990) Interprocedural slicing using dependence graphs. ACM Trans Programm Lang Syst 12(1):26–60 10. Cytron R, Ferrante J, Rosen BK, Wegman MN, Kenneth Zadeck F (1991) Efficiently computing static single assignment form and the control dependence graph. ACM Trans Program Lang Syst 13(4):451–490 11. https://pypi.python.org/pypi/pycparser 12. http://frama-c.com 13. de Moura L, Bjorner N (2008) Z3: an efficient SMT solver, international conference on Tools and algorithms for the construction and analysis of systems, pp 337–340 14. http://computing.ece.vt.edu/mhsiao/hlsyn.html 15. Gupta S, Dutt N, Gupta R, Nicolau A (2003) Spark: a high-level synthesis framework for applying parallelizing compiler transformations. In: International conference on VLSI design, pp 461–466

An Improved Adversarial Neural Network Encryption Algorithm Against the Chosen-Cipher Text Attack (CCA)

Yingli Wang, Haiting Liu, Hongbin Ma(&), and Wei Zhuang

Electronic Engineering College, Heilongjiang University, No. 74 Xuefu Road, Harbin, People's Republic of China [email protected]

Abstract. Password attacks are classified into four types: known-plain text attack, cipher text attack, chosen-plain text attack, and chosen-cipher text attack, of which the chosen-cipher text attack is the strongest. In order to resist this most aggressive attack mode, this paper proposes an improved adversarial neural network encryption method, which we call the CCA-ANC encryption algorithm. It changes the attacker's attack mode to the chosen-cipher text attack (CCA), which gives the attacker a stronger cracking capability and thereby forces both the encryption and decryption sides to learn a better encryption system, generating a more secure encryption algorithm.

Keywords: Chosen-cipher text attack · Generative adversarial network · Adversarial neural network · Encryption and decryption system



1 Introduction

With the advancement of deep learning technology, research on generative adversarial networks (GAN) has developed rapidly. GAN is a generative model proposed by Goodfellow in 2014; it consists of a generator G and a discriminator D, and it optimizes the ability of G to generate samples by establishing a mini-max game between D and G [1]. In theory, the generator G obtained by GAN training can gradually approximate the real sample data distribution. Recently, some researchers have tried to introduce GAN into the field of information security. In the field of cryptography, Abadi and Andersen showed for the first time that neural networks can learn how to perform encryption and decryption to achieve secure communication by using the idea of adversarial games [2]. They designed a network-based encryption system that includes neural networks called Alice and Bob, whose goal is to prevent a third neural network, called Eve, from eavesdropping on the communication between them. These neural networks do not implement a prescribed encryption algorithm; instead, they are trained end-to-end in an adversarial manner. Password attacks are classified into four types: known-plain text attack, cipher text attack, chosen-plain text attack, and chosen-cipher text attack. The chosen-cipher text


attack is the strongest of the four attack modes: the attacker can select any cipher text he wants and use the decryption system to obtain the corresponding plain text. In order to resist this most aggressive attack mode, this paper proposes an improved adversarial neural network encryption method, which we call the CCA-ANC encryption algorithm. The algorithm changes the attacker's attack mode to the chosen-cipher text attack (CCA), which gives the attacker a stronger cracking capability and thereby forces the encryption and decryption parties to learn a better encryption system, generating a more secure encryption algorithm.

2 Improved Adversarial Neural Network Encryption Algorithm Based on CCA (CCA-ANC)

2.1 Algorithm Principle

The chosen-cipher text attack (CCA) is an attack model used in cryptanalysis in which the cryptanalyst, without knowing the key, obtains decryption information for cipher texts of his own choosing [3]. In this type of attack, the attacker can observe the plain texts produced for one or more chosen cipher texts fed into the system and attempt to recover the secret key used for decryption. The original adversarial network encryption method cannot resist the chosen-cipher text attack, so this paper improves the ANC algorithm model to solve this problem and establish a more reliable secure communication model for the encryption and decryption parties [4]. The system allows the attacker to use the chosen-cipher text attack to decipher the key; if the attacker can recover the key, he can also decipher the cipher text. The improved encryption method uses a neural network for the deciphering training. Consequently, the only way for Alice and Bob to ensure secure communication is to find a solution that is more secure and that Eve is unable to crack.

2.2 Model Structure

The improved Eve is a classifier that receives the two keys K0 and K1 as well as the cipher text C as input. The final fully connected layer of the network feeds the hidden-layer variables into the classification logic through a softmax layer. Alice randomly selects one of the two keys K0 and K1, encrypts the plain text with it through her neural network, and obtains the cipher text C, which is sent to Eve and Bob. Bob uses C and the key to decrypt the information through his neural network. Eve does not attack C directly but uses her neural network to determine whether C was encrypted with K0 or with K1. This model is called CCA-ANC and is shown in Fig. 1. Under this model, Alice and Bob must find a better encryption system to communicate securely.


Fig. 1. Symmetric encryption adversarial network model based on CCA-ANC

2.3 Adversarial Neural Network Architecture

The encryption and decryption networks start with a fully connected layer whose input dimension equals its output dimension; its input is the plain text P and the key K. The fully connected layer is followed by four 1-D convolutional layers with window size, input depth and output depth of [1, 2, 4], [2, 4], [1, 4] and [1, 4], respectively, and strides of 1, 2, 1 and 1; the length of the output vector of the last layer equals the length of P. The activation function of the first layers (the fully connected layer and the first three convolutional layers) is the sigmoid, while the activation function of the last convolutional layer is the tanh [5], so that the output values lie in the range (−1, 1). Figure 2 shows the architecture of Alice's neural network. Bob has a similar architecture to Alice, except that Bob's inputs are C and K.

Fig. 2. Alice (Bob) network architecture

Since Eve's input is only C, her network architecture is slightly different from Alice's and Bob's: it adds one more fully connected layer, as shown in Fig. 3.

2.4 Improvement of Network Structure and Loss Function Design

The Eve structure we designed receives the two keys K0, K1 and the cipher text C, and the last fully connected layer of the network feeds the hidden-layer variables into the classification logic through a softmax layer. Thus, for a cipher text C we obtain a probability k0(c) that it was encrypted with K0 and a probability k1(c) that it was encrypted with K1. Finally, the network outputs 1 if k0(c) is greater than k1(c) and 0 otherwise. The network structure is shown in Fig. 4.


Fig. 3. Eve network architecture

As the model changes, Eve's loss function needs to be redefined to adapt the optimization problem to the new adversarial network model [6]. To this end, given a mini-batch of M ciphertexts, we define Eve's loss as the cross-entropy

L_E = −(1/M) Σ_{i=0}^{M−1} Σ_{j=0}^{1} t_j^{(i)} log p_j^{(i)}    (1)

Fig. 4. Improved Eve neural network


where t^{(i)} = 1 if C^{(i)} is generated with the key k_1 and t^{(i)} = 0 if it is generated with the key k_0. Eve learns by minimizing L_E, while Alice and Bob learn by minimizing the loss L given by

L = L_AB − γ · min(Err, 0.5)    (2)

where Err is the classification error of Eve, γ is a hyper-parameter, and L_AB is given by Eq. (3):

L_AB = (1/M) Σ_{i=0}^{M−1} d( P^{(i)}, nn_B( W_B, nn_A( W_A, P^{(i)}, K^{(i)} ), K^{(i)} ) )    (3)

where d(·, ·) is the reconstruction distance between the plain text P^{(i)} and Bob's decryption of Alice's output.
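A plain-NumPy sketch of the two objectives is given below for illustration; taking d(·, ·) as the L1 distance and averaging per mini-batch are our assumptions, not details stated in the paper.

```python
import numpy as np

def eve_loss(t, p):
    """Cross-entropy of Eq. (1): t, p are (M, 2) arrays of one-hot key labels
    and Eve's softmax outputs for a mini-batch of M ciphertexts."""
    return -np.mean(np.sum(t * np.log(p + 1e-12), axis=1))

def alice_bob_loss(P, P_bob, eve_err, gamma=7.0):
    """Eqs. (2)-(3): L = L_AB - gamma * min(Err, 0.5), where L_AB is the mean
    reconstruction distance between the plaintext P and Bob's output P_bob."""
    l_ab = np.mean(np.abs(P - P_bob))          # d(.,.) taken as L1 distance here
    return l_ab - gamma * min(eve_err, 0.5)
```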

3 Model Experiment Simulation and Safety Analysis

3.1 Model Experiment Simulation

This article uses the machine learning framework TensorFlow in Python to train the model. To train the network, we used mini-batches with M = 4096, and in formula (2) γ = 7 is used [7]. In addition, the L2 regularization method is used in the experiments, with hyper-parameter α = 0.1 for key lengths of 4 bits (n = 4) and 8 bits (n = 8) and α = 0.015 for a key length of 16 bits (n = 16). For the Eve network, we define the number of hidden-layer neurons as R = 4n. This number determines how many linear combinations of features Eve can analyze simultaneously; as the key size increases, Eve needs more of them. We therefore choose these parameters empirically and proportional to the number of key bits, increasing the number of linear combinations so that Eve has a strong ability to crack the ciphers that Alice and Bob learn through their neural networks [8]. Training alternates between Alice and Bob's neural networks and Eve's neural network. To give Eve a large computational advantage, Alice and Bob train for three mini-batches while Eve trains for 60 mini-batches.

3.2 Model Safety Analysis

In each trial, if Alice and Bob were able to communicate without errors while executing the algorithm, the trained model was considered successful; otherwise it was considered to have failed [9]. If Eve cannot extract any information from C, the trained adversarial encryption network can be considered secure. Tables 1 and 2 show the results of testing 20 networks with the ANC algorithm and with CCA-ANC. In our experiments, almost all the trained models successfully learn a secure encryption network; only one training run failed, in which Alice and Bob could not communicate successfully [10]. Comparing the results of Tables 1 and 2, it can be observed that the number of secure trials has increased. A reasonable explanation can be obtained from the principle of the encryption algorithm and the


Table 1. Learning and testing 20 networks using the ANC algorithm for different key sizes

Key length | Test network number | Number of successful communications | Encryption algorithm security number
4-bit | 20 | 20 | 2
8-bit | 20 | 20 | 7
16-bit | 20 | 20 | 12

Table 2. Learning and testing 20 networks using CCA-ANC for different key sizes

Key length | Test network number | Number of successful communications | Encryption algorithm security number
4-bit | 20 | 19 | 19
8-bit | 20 | 20 | 20
16-bit | 20 | 20 | 20

experimental analysis. In the original ANC encryption method, Eve is an attacker with weak deciphering ability: she obtains little information and has to decipher the pure cipher text C alone, which is difficult. Because Alice and Bob have many applicable encryption solutions and too large a degree of freedom in choosing them, ANC trains a more complex objective function with many local solutions [11]. When training and deciphering with the CCA-ANC encryption method, the only way for Alice and Bob to ensure secure communication is to find a solution that is more secure and that Eve cannot crack. Therefore, the objective function behaves better than in the original ANC model, as shown in Figs. 5 and 6.

Fig. 5. 16-bit ANC training results


Fig. 6. 16-bit CCA-ANC training results

Eve tries to maximize her classification accuracy, while Alice and Bob try to minimize Eve's classification accuracy and Bob's decryption error rate. In the left graph of Fig. 6, Bob's decryption error rate decreases continuously over time; similarly, in the right graph of Fig. 6, Eve's classification error rate increases continuously over time [12]. When Alice and Bob have learned a secure encryption network structure, Eve's classification accuracy is no better than random guessing. Compared with the ANC training results in Fig. 5, the convergence speed of CCA-ANC is improved because the attacker's task changes from discriminating multiple bits to a two-class problem. In the ANC algorithm the attacker uses a pure ciphertext attack, so decryption is harder for the attacker and a secure model


cannot be obtained. In CCA-ANC, the attacker uses the chosen-cipher text attack to obtain more information, which makes the encryption algorithm learned by the legitimate communication parties more secure and reliable than that of ANC.

4 Conclusion

In this paper, it has been shown that neural networks can learn a secure encryption system in an appropriate environment. Through the security analysis, we found that the original ANC method is not sufficient to achieve the goal of designing a secure encryption system. The new CCA-ANC encryption method proposed in this paper aims to generate a secure encryption algorithm by improving the objective function and the learning model. Experiments show that the improved algorithm makes the encryption system more secure.

Acknowledgements. This work is supported by Heilongjiang Provincial Education Department Project (SJGY20180390), Heilongjiang University Project (2018B14), and Heilongjiang University Graduate Innovation Competition (20170160903).

References 1. Goodfellow I, Mirza M, Xu B (2014) Generative adversarial nets. In: Advances in neural information processing systems, pp 2672–2680 2. Abadi M, David G (2018) Learning to protect communications with adversarial neural cryptography. 12 April 2018. https://arxiv.org/abs/1610.06918.pdf 3. LeChun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521:436–444 4. Ratliff LJ, Burden SA, Shankar Sastry S (2013) Characterization and computation of local Nash equilibria in continuous games. In 51st annual Allerton conference on communication, control, and computing (Allerton), 2013, pp 917–924. IEEE 5. Williams RJ (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach Learn, 229–256 6. Bengio Y (2009) Learning deep architectures for AI. Found Trends Mach Learn 2(1):1–127 7. Salimans T, Goodfellow I, Zaremba W, Cheung V, Radford A, Chen X (2016) Improved techniques for training gans. In: Advances in neural information processing systems, pp 2234–2242 8. Hinton GE, Osindero S, The YW (2006) A fast learning algorithm for deep belief nets. Neural Comp 18:1527–1554 9. Ho J, Ermon S Generative adversarial imitation learning. In: Advances in Neural Information Processing Systems, pp 4565–4573 10. Yu L, Zhang W, Wang J, Yu Y (2017) Seqgan: sequence generative adversarial nets with policy gradient, pp 2852–2858 11. Hinton GE, Osindero S, The Y-W (2006) A fast learning algorithm for deep belief nets. Neural Comput 18(7):1527–1554 12. Kulkarni TD, Whitney WF, Kohli P, Tenenbaum J (2015) Deep convolutional inverse graphics network. In: NIPS, pp 2530–2538

Hardware Implementation Based on Contact IC Card Scalar Multiplication

Feng Liang1, Yanzhao Yin2, Zhenhui Zhang1(&), Liji Wu2(&), and Xiangmin Zhang2

1 Heilongjiang University, 74 Xuefu Road, Harbin, Heilongjiang Province, China [email protected], [email protected]
2 Tsinghua University, Haidian District, Beijing, China [email protected], {lijiwu,zhxm}@tsinghua.edu.cn

Abstract. Based on a Montgomery modular multiplication structure, this paper proposes and realizes the core module of the SM2 algorithm for IC cards, that is, a scalar multiplication hardware structure for elliptic curves over a prime field. The modular addition operation is improved to obtain faster operation and anti-SPA capability. The point addition and point doubling algorithms in Jacobi projective coordinates and the Montgomery modular inversion algorithm are adopted in the structure. Experimental results indicate that the structure has good performance and meets the specific design indicators of IC cards. A contact IC card communication protocol module (ISO7816 protocol) was designed and, serving as the interface, was tested with a contact IC card reader. Previous scalar multiplication designs tended to focus on improving particular aspects of scalar multiplication performance rather than on a concrete product. This paper designs a scalar multiplication encryption module specifically for the interface of the contact IC cards widely used in daily life, so as to make IC cards work more effectively.

Keywords: Prime field · FPGA · SM2 algorithm · Scalar multiplication · IC card · Communication protocol

1 Introduction

Public key cryptography is a significant guarantee for secure information transmission. Based on elliptic curve public key cryptography (ECC), the Chinese SM2 algorithm has been applied to IC card encryption because of its advantages of high security strength, savings in storage space and communication bandwidth, fast computation and short key length. The SM2 algorithm was released by the State Cryptography Administration of China on December 17, 2010 to guarantee the security of cryptographic applications in crucial economic systems [1]. Contact IC cards communicate through a card reading device. Therefore, this paper designs a communication protocol module (ISO7816 protocol) and adopts the ISO7816 protocol to communicate


with universal card readers. On the PC, the Windows system's own winscard.dll dynamic link library was utilized to write the test program. In terms of security, smart IC cards generally use cryptographic chips containing a variety of cryptographic algorithms, and a certain degree of protection can be achieved through information encryption. Although smart IC cards have superior performance, their design faces many challenges due to the limited chip resources, mainly the need to balance four aspects: area, power consumption, speed, and security. In terms of area, high integration is required and a large number of modules must be concentrated in a limited space to reduce cost; therefore, the hardware area should be reduced as much as possible while still satisfying the functional requirements [2].

2 Scalar Multiplication Module

In this chapter, we design the scalar multiplication operation using the Jacobi projective coordinate system and Montgomery modular multiplication, and give the hardware microcode form of point addition and point doubling.

2.1 Scalar Multiplication Theory

The core operation in elliptic curve cryptography is the scalar multiplication "kP", in which k is a large integer, P is a point on the curve over the prime field GF(p) (p is a prime satisfying p > 3), and the scalar multiplication is the sum of k copies of P. The elliptic curve over GF(p) is defined by the following equation [1]:

y^2 = x^3 + ax + b

(1)

In the equation, a, b ∈ F_p and (4a^3 + 27b^2) mod p ≠ 0; the curve consists of the points satisfying the equation together with the point at infinity. The function prototype of scalar multiplication is:

(x_N, y_N) = [k](x_M, y_M)    (2)

2.2 Jacobi Projective Coordinate System

This paper adopts fast point addition and point doubling in the Jacobi projective coordinate system. A point (X, Y, Z) in Jacobi projective coordinates corresponds to the point (X/Z^2, Y/Z^3) in the affine coordinate system [1]. The curve equation becomes:

Y^2 = X^3 + aXZ^4 + bZ^6    (3)

Microcodes are used for the hardware implementation of point addition and point doubling, as shown in Table 1.


Table 1. Hardware microcodes of point addition and point doubling

Elliptic curve adding:
A = Z1^2, B = Z1·A, C = X2·A, D = Y2·B, E = C − X1, F = D − Y1,
G = E^2, H = G·E, I = X1·G,
X3 = F^2 − (H + 2I), Y3 = F·(I − X3) − Y1·H, Z3 = Z1·E

Elliptic curve doubling:
A = Y1^2, B = 4X1·A, C = 8A^2, D = 3X1^2 + aZ1^4,
Z3 = 2Y1·Z1, X3 = D^2 − 2B, Y3 = D·(B − X3) − C
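For reference, the microcode of Table 1 translates directly into the following Python sketch over integers modulo p; the exceptional cases (point at infinity, equal inputs) are omitted. A scalar multiplication kP can then be built with a standard double-and-add loop over these two routines.

```python
def point_double(X1, Y1, Z1, a, p):
    """Jacobian doubling, following the Table 1 microcode."""
    A = (Y1 * Y1) % p
    B = (4 * X1 * A) % p
    C = (8 * A * A) % p
    D = (3 * X1 * X1 + a * pow(Z1, 4, p)) % p
    Z3 = (2 * Y1 * Z1) % p
    X3 = (D * D - 2 * B) % p
    Y3 = (D * (B - X3) - C) % p
    return X3, Y3, Z3

def point_add(X1, Y1, Z1, X2, Y2, p):
    """Jacobian addition of (X1, Y1, Z1) and the affine point (X2, Y2),
    following the Table 1 microcode (regular case only)."""
    A = (Z1 * Z1) % p
    B = (Z1 * A) % p
    C = (X2 * A) % p
    D = (Y2 * B) % p
    E = (C - X1) % p
    F = (D - Y1) % p
    G = (E * E) % p
    H = (G * E) % p
    I = (X1 * G) % p
    X3 = (F * F - (H + 2 * I)) % p
    Y3 = (F * (I - X3) - Y1 * H) % p
    Z3 = (Z1 * E) % p
    return X3, Y3, Z3
```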

2.3 Montgomery Modular Multiplication

The Montgomery modular multiplication algorithm was proposed by the mathematician Montgomery to avoid the costly division in modular operations. The divisions are replaced by divisions by powers of the radix b, which can be realized with shift operations, so the computation cost is greatly reduced. The specific steps are shown in Table 2.

Table 2. Montgomery modular multiplication algorithm

Algorithm (ZN-MontRed)
Input: a base-b unsigned integer t, 0 ≤ t ≤ N·ρ − 1
Output: a base-b unsigned integer r = t·ρ^{-1} (mod N)
1. r ← t
2. for i = 0 up to l_N − 1, step +1 do
     u ← r_i · ω (mod b)
     r ← r + (u · N · b^i)
   end
3. r ← r / b^{l_N}
4. if r ≥ N then r ← r − N end
5. return r

Algorithm (ZN-MontMul)
Input: two base-b unsigned integers x, y with 0 ≤ x, y < N
Output: a base-b unsigned integer r = x·y·ρ^{-1} (mod N)
1. r ← 0
2. for i = 0 up to l_N − 1, step +1 do
     u ← (r_0 + y_i · x_0) · ω (mod b)
     r ← (r + y_i·x + u·N) / b
   end
3. if r ≥ N then r ← r − N end
4. return r


Prior to the calculation, the Montgomery reduction algorithm (left) is used to map all operands into the Montgomery domain, after which the Montgomery modular multiplication algorithm can be applied. In the algorithms, b is the radix of the representation (so division by b is a shift), l_N is the number of base-b digits of N, ρ = b^{l_N}, and ω = −N^{−1} mod b.
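The word-serial ZN-MontMul routine of Table 2 can be sketched in Python as follows; the radix is assumed to satisfy gcd(N, b) = 1, and the self-check at the end uses an arbitrary prime, not the SM2 modulus.

```python
def mont_mul(x, y, N, b, l_N):
    """Word-serial Montgomery multiplication (ZN-MontMul from Table 2):
    returns x*y*rho^{-1} mod N with rho = b**l_N.
    Assumes gcd(N, b) = 1 and x, y < N < b**l_N."""
    omega = pow((-N) % b, -1, b)        # omega = -N^{-1} mod b (Python 3.8+)
    x0 = x % b                          # least significant base-b digit of x
    yd = [(y // b**i) % b for i in range(l_N)]
    r = 0
    for i in range(l_N):
        u = ((r % b) + yd[i] * x0) * omega % b
        r = (r + yd[i] * x + u * N) // b   # exact: the numerator is divisible by b
    return r - N if r >= N else r

# Sanity check against plain modular arithmetic (arbitrary Mersenne prime).
N, b, l = 2**61 - 1, 2**16, 4
rho = b**l
x, y = 123456789, 987654321
assert mont_mul(x * rho % N, y * rho % N, N, b, l) == x * y * rho % N
```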

3 Design of Scalar Multiplication

This chapter first improves the underlying modular addition operation, then designs the hardware of the scalar multiplication module, and finally gives the synthesis and simulation results.

3.1 Design of Modular Addition Module

The algorithms of the low-level modules play a crucial role in scalar multiplication, so improving them can significantly increase the computation speed of scalar multiplication. Addition, squaring, multiplication and inversion are the major low-level operations. Among them, addition is relatively simple and tends to be overlooked; however, this design modifies the addition algorithm to increase its speed and to give it a basic capability to resist simple power analysis (SPA). A sign bit is added to decide the reduction modulo p: "1" represents a negative intermediate result and "0" a positive one, so the modular addition can be completed in one clock cycle. Figure 1 shows the flow diagram of the modular addition operation before and after the modification.

Fig. 1. Flow diagram of modular addition before and after modification (original flow: compute C0 = a + b, then subtract p if C0 ≥ p; modified flow: compute C0 = a + b − p in parallel and select C = a + b if the MSB of C0 is 1, otherwise C = C0)
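The modified flow amounts to computing a + b − p unconditionally and selecting the result by its sign, as in this small sketch; in hardware both sums are formed in the same cycle and a multiplexer selects the output, which is what yields the single-cycle, SPA-resistant behavior.

```python
def mod_add(a, b, p):
    """Single-pass modular addition sketch for 0 <= a, b < p: compute
    a + b - p once and use its sign (the MSB of the extended result in
    hardware) to select the final value, as in the modified flow of Fig. 1."""
    c0 = a + b - p
    msb_is_one = c0 < 0            # hardware: test the sign/borrow bit
    return a + b if msb_is_one else c0
```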

3.2 Design of Scalar Multiplication Module

The design is divided into three main parts: main control module, ISO7816 communication module, and elliptic curve scalar multiplication computation module. Scalar multiplication module is complex, in which one secondary controller controls other basic computation units to complete the entire elliptic curve scalar multiplication module. This design adopted Jacobi projective coordinate system to simplify point addition and point multiple operation, and utilized Montgomery modular multiplication algorithm to simplify and accelerate modular multiplication algorithm (Fig. 2).


Fig. 2. Design of scalar multiplication module

3.3 Simulation Results

The hardware structure of elliptic curve scalar multiplication over the prime field GF(p256) was designed, described in Verilog HDL and synthesized for the Xilinx XC7Z020 target device. The experimental evaluation was conducted with the Vivado tools, which also report the synthesis results. As shown in Table 3, 10,279 lookup tables and 6383 registers were occupied; the 64-bit multiplier occupied 16 DSP units, and the 256-bit-wide RAM occupied 4 Block RAM units. In the experiments, we used the recommended-curve standard test data for digital signature and verification of the SM2 algorithm [1].

Table 3. Synthesis result with Xilinx XC7Z020 serving as target device

Site type | Used | Available | Util (%)
Slice LUT as logic | 10,279 | 53,200 | 19.32
Slice registers | 6383 | 106,400 | 6.00
Block RAM tile | 4 | 140 | 2.86
DSPs | 16 | 220 | 7.27


Simulation Diagram of Scalar Multiplication Module See Fig. 3.

Fig. 3. Simulation diagram of scalar multiplication

4 Introduction to the ISO7816 Communication Protocol

ISO7816 is an international communication protocol for contact smart cards [3]. The protocol transmits over a single data ("io") line, and receiving and sending are time-shared; developers must ensure that the card and the terminal never send data simultaneously during normal use of the card. The T=0 protocol is adopted in this design: after a successful reset response, the terminal sends user instructions to the card, and the card performs the corresponding operations and responses according to the protocol [4]. The card reader provides the power supply and clock for the contact IC card, performs the reset operation, and communicates over a bidirectional IO line. Data is communicated in bytes; in addition to the 8 data bits, each byte carries a flag (start) bit and a check bit. After the IC card is reset, it must send four bytes of reset-response data to the reader within the specified time to indicate the communication parameters adopted by the card. After receiving a correct reset response, the card reader communicates according to the parameters specified in it. Each communication is initiated by the upper computer (the card reader): the upper computer first sends a five-byte command header, then the upper computer and the card exchange data in turn according to the protocol, and the transaction ends when the card finishes sending.

System Module Design

The system module consists of a master state machine, an ISO7816 transceiver interface, a 256-bit-wide RAM, and the scalar multiplication unit. The communication module sends and receives data in bytes, passes the received data to the main control module, and sends data to the upper computer under the control of the main control module. The master state machine serves as the main control module. The computation module starts to compute the elliptic curve scalar multiplication after receiving the start signal from the main control module, and issues a completion signal when the computation finishes (Fig. 4).

Fig. 4. System module design

4.2 System Simulation of Communication Modules

The system simulation of the communication module is shown in Fig. 5:

1. After reset, the card sends the reset reply: 3B 60 00 00
2. The terminal sends a Set Param command that specifies the number to set
3. The card sends process bytes to verify the correctness of the command
4. The terminal receives a byte of validation success and sends the data
5. After the card receives the data, it sends back the success flag: 90 00.


Fig. 5. System simulation of communication modules

4.3 Hardware and Software Co-simulation Screenshot

See Fig. 6.

Fig. 6. Hardware and software co-simulation

5 FPGA Validation

The test software is written using the Windows Smartcard interface, so the card reader communication interface can be called directly to communicate with the circuit under test. The software sends instructions and data to the FPGA, and receives the computation result for validation. The ISO7816 communication module worked normally after debugging, and the test result was consistent with the simulation result (Fig. 7).


Fig. 7. Test platform photos (FPGA chip model: Xilinx Zynq 7020)

6 Conclusions

This paper adopted a Montgomery modular multiplication structure in the Jacobi projective coordinate system, and proposed and realized the core module of the SM2 algorithm for IC cards, namely a hardware structure for elliptic curve scalar multiplication over a prime field. The modular addition operation was improved to gain faster operation and anti-SPA capability. At the same time, the contact IC card communication protocol module (ISO7816 protocol) [4] was designed, and the ISO7816 hardware circuit design, hardware-software co-simulation, and FPGA board-level validation were accomplished. Serving as the interface, this module was further tested with a contact IC card reader and accomplished the intended functions. The experimental results indicate that the structure performs well and meets the specific design indicators for IC cards.

References

1. GM/T 0003-2012, SM2 elliptic curve public key cryptographic algorithms. Chinese Cryptography Administration, China
2. Wang Z, Wei S (2005) Research and progress of low power design in SOC era. Microelectronics 35(2):174–179
3. ISO/IEC 7816 International Standard: Cards with contacts, electrical interface and transmission protocols, 3rd edn, 2006-11-01, pp 4–10
4. Chen C, Research and implementation of ISO/IEC 7816-12 intelligent card interface [master's thesis]. University of Chinese Academy of Sciences (School of Engineering Management and Information Technology)

Tiered Spectrum Allocation for General Heterogeneous Cellular Networks

Haichao Wei1(B), Anliang Liu1, and Na Deng2,3

1 School of Information Science Technology, Dalian Maritime University, Dalian, Liaoning 116026, China
[email protected]
2 School of Information and Communication Engineering, Dalian University of Technology, Dalian, Liaoning 116024, China
3 National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China

Abstract. In this paper, we establish a general heterogeneous cellular network (HCN) model based on stochastic geometry and employ the accurate signal-to-interference-ratio (SIR) approximation of the ASAPPP (approximate SIR analysis based on the Poisson point process) technique to investigate spectrum allocation problems for general HCNs. Firstly, a spectrum partitioning problem where each tier uses a dedicated band is formulated to maximize the area spectral efficiency. The solution is obtained by approximating the problem with the ASAPPP method, deriving the optimal SIR thresholds, and solving a simple linear program to give the optimal spectrum allocation. Without a spectral efficiency constraint for each tier, the area spectral efficiency is maximized if the entire bandwidth is allocated to a single tier. Secondly it is shown that the proposed algorithm can also be used to solve the spectrum sharing optimization problem, where bands can be shared between tiers. Keywords: Heterogeneous cellular networks · ASAPPP · Area spectral efficiency · Spectrum allocation · Poisson point process

1 Introduction

Heterogeneous cellular networks (HCNs) have gained much momentum in the wireless industry and research communities. In the context of HCNs, the area spectral efficiency (ASE) or network utility is preferred for evaluating overall network performance, and spectrum allocation plays an important role in these performance metrics [1]. However, the increasing spatial irregularity of radio access systems makes the impact of spectrum allocation (e.g., spectrum sharing and partitioning) on these performance metrics challenging to analyze. Hence, for general HCNs, a tractable analysis is of great importance to explore the operating regimes of different spectrum allocation schemes.

Recently, stochastic geometry has become one of the most widespread tools to model HCNs and analyze the corresponding key performance indicators [2]. The articles [3–5] investigate spectrum allocation in HCNs for maximization of the ASE or network utility, where the different types of base stations (BSs) are assumed to follow independent Poisson point processes (PPPs) or a hexagonal lattice, and the corresponding performance metrics strongly depend on the signal-to-interference-ratio (SIR) distribution. However, both models deviate from the real spatial characteristics of radio access systems, and only the PPP model yields highly tractable results for the SIR distribution. To capture realistic spatial characteristics, [6, 7] proposed non-Poisson models, such as the perturbed lattice and the β-Ginibre point process (β-GPP); however, the corresponding SIR distributions are difficult (usually impossible) to derive. As a result, the non-Poisson models hinder further analysis and comparison of spectrum allocation schemes. To simplify and generalize the analysis of non-Poisson deployments, the ASAPPP (approximate SIR analysis based on the PPP) method was proposed, and it yields a simple yet accurate approximation of the SIR distribution [8]. Therefore, the ASAPPP method lends itself to further analysis of network-level performance metrics depending on the SIR, and hence permits the analysis and design of spectrum allocation schemes in general cellular networks.

In this paper, we propose a novel framework to investigate spectrum allocation schemes for general HCNs, assisted by the approximate SIR distribution from the ASAPPP method. We first formulate an ASE optimization problem for the spectrum partitioning case under different spectral efficiency (SE) constraints for the users of each tier. The ASE maximization problem is solved by decomposing it into two subproblems: the first obtains the optimal SIR thresholds corresponding to the maximal spectral efficiency through the ASAPPP method, and the second gives the optimal spectrum allocation based on those optimal SIR thresholds. Using the effective-gain ASAPPP method for general HCNs, we show that the spectrum sharing optimization problem can be formulated similarly to the spectrum partitioning case, and thus the proposed framework is also applicable to the sharing problem.

2 System Model

We consider a K-tier heterogeneous network where the BSs in the kth tier are distributed as an independent point process Φ_k, k = 1, 2, ..., K, and G_k is the corresponding MISR-based (mean interference-to-signal ratio based) gain [9]. Denote by μ_k and λ_k the transmit power and node density of tier k, respectively. We assume each user is associated with the BS that offers the strongest average received power, and the users are correspondingly divided into K types according to the serving tier. Each BS is assumed to serve its associated users in a time-division fashion. We suppose that the total bandwidth is partitioned into K parts, where the kth tier uses the fractional bandwidth b_k. We assume all BSs are fully loaded and compute the ASE metric by multiplying the coverage probability, Shannon's formula, and the density of tier k. A standard power path loss law $\ell(x) = |x|^{-\alpha}$ is assumed, and the small-scale fading is independent Rayleigh fading with unit mean, i.e., E(h) = 1. Due to the stationarity of all Φ_k, the received SIR for the kth type user located at the origin is

$$\mathrm{SIR}_k \triangleq \frac{S_k}{I_k} = \frac{\mu_k \ell(x_0) h_{x_0}}{\sum_{x\in\Phi_k\setminus\{x_0\}} \mu_k \ell(x) h_x}, \qquad (1)$$

where x_0 is the serving BS of tier k and h_x is the power fading coefficient associated with x. Since there is no inter-tier interference, the SIR distribution of each tier can directly use the ASAPPP method. Letting $\delta = 2/\alpha$ and $\bar{F}_{\mathrm{SIR}}(\theta) \triangleq \mathbb{P}(\mathrm{SIR} > \theta)$, the complementary cumulative distribution function $\bar{F}^{\mathrm{PPP}}_{\mathrm{SIR}}(\theta)$ of the SIR in Poisson networks is given by [8]

$$\bar{F}^{\mathrm{PPP}}_{\mathrm{SIR}}(\theta) = \frac{1}{{}_2F_1(1,-\delta;1-\delta;-\theta)}, \qquad (2)$$

where θ is the SIR threshold and ${}_2F_1$ is the Gaussian hypergeometric function [10, Eq. 15.3.1]. From the ASAPPP method, the coverage probability $P_k(\theta_k) \triangleq \bar{F}_{\mathrm{SIR}_k}(\theta_k)$ of tier k is approximated by $P_k(\theta_k) \approx \bar{F}^{\mathrm{PPP}}_{\mathrm{SIR}}(\theta_k/G_k)$.
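As a quick numerical illustration of Eq. (2) and the ASAPPP threshold shift, the sketch below evaluates the PPP coverage probability with SciPy's hypergeometric function and applies the MISR-based gain; the parameter values are examples, not results from the paper.

```python
# Coverage probability of a tier via the ASAPPP approximation of Eq. (2).
from scipy.special import hyp2f1

def coverage_ppp(theta, alpha):
    """P(SIR > theta) for the Poisson baseline, Eq. (2)."""
    delta = 2.0 / alpha
    return 1.0 / hyp2f1(1.0, -delta, 1.0 - delta, -theta)

def coverage_tier(theta_k, alpha, G_k):
    """ASAPPP approximation: shift the SIR threshold by the gain G_k."""
    return coverage_ppp(theta_k / G_k, alpha)

# Example: a repulsive tier with G_k = 1.5, a 0 dB threshold, alpha = 4.
print(coverage_tier(1.0, 4.0, 1.5))
```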

3 Area Spectral Efficiency Optimization

3.1 Problem Formulation for Spectrum Partitioning

Under fixed-rate transmission based on the SIR threshold, the overall area spectral efficiency is given as

$$\mathrm{ASE}(\boldsymbol{\theta},\mathbf{b}) = \sum_{k\in[K]} \lambda_k b_k P_k(\theta_k)\log_2(1+\theta_k), \qquad (3)$$

where [K] = {1, 2, ..., K}, b = (b_1, b_2, ..., b_K), and θ = (θ_1, θ_2, ..., θ_K). The problem is to determine the optimal bandwidth partitioning b and SIR thresholds θ, given as

$$\max_{\mathbf{b},\boldsymbol{\theta}} \ \mathrm{ASE}(\boldsymbol{\theta},\mathbf{b}) \quad \text{subject to} \quad \sum_{k\in[K]} b_k = 1,\ b_k \ge 0,\ k\in[K]. \qquad (4)$$

For the user served by tier k, denote by c_k(θ_k) = P_k(θ_k) log_2(1+θ_k) the achievable spectral efficiency, and let θ* be the vector of SIR thresholds that maximize c_k(θ_k), k ∈ [K]. Inspecting Problem (4), it can be seen that

$$\mathrm{ASE}(\boldsymbol{\theta},\mathbf{b}) = \sum_{k\in[K]} \lambda_k b_k c_k(\theta_k) \le \sum_{k\in[K]} \lambda_k b_k c_k(\theta_k^*) = \mathrm{ASE}(\boldsymbol{\theta}^*,\mathbf{b}), \quad \forall\,\boldsymbol{\theta},\mathbf{b}. \qquad (5)$$

Therefore, maximizing ASE(θ, b) is equivalent to maximizing ASE(θ*, b), and the optimal SIR thresholds that maximize the overall ASE are the same as the ones that maximize c_k(θ_k), k ∈ [K]. As a result, the optimization problem is solved in two consecutive steps, first over θ and then over b, and thus Problem (4) is decomposed into the following two subproblems.

Subproblem 1:

$$c_k^* = \max_{\theta_k} P_k(\theta_k)\log_2(1+\theta_k) \approx \frac{1}{\ln 2}\max_{\theta_k}\frac{\ln(1+\theta_k)}{{}_2F_1(1,-\delta;1-\delta;-\theta_k/G_k)}. \qquad (6)$$

Subproblem 2:

$$\max_{\mathbf{b}} \ \sum_{k\in[K]} \lambda_k b_k c_k^* \quad \text{subject to} \quad \sum_{k\in[K]} b_k = 1,\ b_k \ge 0,\ k\in[K]. \qquad (7)$$

For Subproblem 1, each tier can be treated individually, and the goal is to find the optimal SIR threshold that maximizes the spectral efficiency c_k(θ_k). It is difficult to obtain the maximizing θ_k by a derivative-based method because of the complexity of the Gaussian hypergeometric function. Therefore, we first bound θ_k* with the following lemma, and then obtain the optimal solution using the fminbnd function in Matlab.

Lemma 1. Let W(x) be the Lambert W function satisfying W(x)e^{W(x)} = x, and define

$$z_k(\theta) \triangleq \frac{\ln(1+\theta)}{{}_2F_1(1,-\delta;1-\delta;-\theta/G_k)}, \qquad (8)$$

$$\zeta_k \triangleq \exp\!\Big(\mathcal{W}\big(e^{1-\alpha}\big((\alpha/2-1)G_k + 1 - \alpha\big)\big) + \alpha - 1\Big) - 1. \qquad (9)$$

Then the maximum of z_k(θ) is achieved in (0, ζ_k).

Proof. The derivative of z_k(θ) can be written as z_k'(θ) = κ(θ)s(θ), where κ(θ) > 0 for all θ > 0 and

$$s(\theta) = G_k(1-\delta)\,{}_2F_1(1,-\delta;1-\delta;-\theta/G_k) - \delta(1+\theta)\ln(1+\theta)\,{}_2F_1(2,1-\delta;2-\delta;-\theta/G_k). \qquad (10)$$

Hence sign(z_k'(θ)) = sign(s(θ)) for all θ > 0. From the hypergeometric differential equation [10, Eq. 15.5.1] and [10, Eq. 15.2.2], we have

$${}_2F_1(1,-\delta;1-\delta;-\theta/G_k) = \frac{-2\theta(1+\theta/G_k)}{G_k(2-\delta)}\,{}_2F_1(3,2-\delta;3-\delta;-\theta/G_k) + \frac{1-\delta+(2-\delta)\theta/G_k}{1-\delta}\,{}_2F_1(2,1-\delta;2-\delta;-\theta/G_k). \qquad (11)$$

Thus, letting $\eta_k(\theta) = (1-\delta)G_k + (2-\delta)\theta - \delta(1+\theta)\ln(1+\theta)$, we have

$$s(\theta) = \frac{-2\theta(G_k+\theta)(1-\delta)}{G_k(2-\delta)}\,{}_2F_1(3,2-\delta;3-\delta;-\theta/G_k) + {}_2F_1(2,1-\delta;2-\delta;-\theta/G_k)\,\eta_k(\theta). \qquad (12)$$

When θ ≥ 0, we have ₂F₁(·) > 0 in (12) due to [10, Eq. 15.3.1]. Therefore, for η_k(θ) ≤ 0 we have s(θ) < 0. It is easy to prove that η_k(ζ_k) = 0 has a unique root ζ_k in [0, ∞), and that for θ ≥ ζ_k we have η_k(θ) < 0 and s(θ) < 0. Hence the global maximum of z_k(θ) is achieved in (0, ζ_k). The root of η_k(ζ_k) = 0 satisfies

$$e^{\rho_k}\rho_k = e^{1-\alpha}\big((\alpha/2-1)G_k + 1 - \alpha\big), \qquad (13)$$

where ρ_k = ln(1+ζ_k) + 1 − α. Finally, the closed-form expression (9) is obtained using the Lambert W function.
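The bound of Lemma 1 can be used directly in a one-dimensional search, as in the sketch below, which mirrors the paper's fminbnd step with scipy.optimize.minimize_scalar as a stand-in; the inputs are illustrative.

```python
# Subproblem 1: bound theta_k* by zeta_k from Eq. (9), then search the interval.
import numpy as np
from scipy.special import hyp2f1, lambertw
from scipy.optimize import minimize_scalar

def optimal_threshold(alpha, G_k):
    delta = 2.0 / alpha
    # Upper bound zeta_k from Eq. (9)
    arg = np.exp(1.0 - alpha) * ((alpha / 2.0 - 1.0) * G_k + 1.0 - alpha)
    zeta = np.exp(np.real(lambertw(arg)) + alpha - 1.0) - 1.0
    # Spectral efficiency c_k(theta) from Eq. (6); minimize its negative
    def neg_ck(theta):
        return -np.log2(1.0 + theta) / hyp2f1(1.0, -delta, 1.0 - delta, -theta / G_k)
    res = minimize_scalar(neg_ck, bounds=(1e-9, zeta), method='bounded')
    return res.x, -res.fun        # optimal SIR threshold and c_k*

theta_star, c_star = optimal_threshold(alpha=4.0, G_k=1.5)
```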

(Plot omitted: the maximum spectral efficiency c_k versus the path loss exponent α for G = 1, 1.25 and 1.5, computed per fminbnd, together with the approximation (14).)

Fig. 1. The maximum spectral efficiency c∗k and its approximation (14).

Figure 1 gives the optimum spectral efficiency for different α and G, together with a further approximation of the maximum spectral efficiency c_k*, given as

$$c_k^* \approx a_1 + a_2(\alpha-2) + a_3(G-1)\sqrt{\alpha-2}, \qquad (14)$$

for which we obtain a_1 = 0.06, a_2 = 0.34, and a_3 = 0.22 using the nlinfit function in Matlab. As shown in Fig. 1, (14) is quite accurate for different G and thus greatly accelerates the solution of the problem.

Subproblem 2 is an easily solved linear program based on the optimal SIR thresholds. In order to maximize the overall ASE, the optimal spectrum allocation is to allocate the total spectrum resource to the tier with the largest λ_j c_j*, i.e.,

$$b_k = \mathbf{1}\!\left(k = \arg\max_{j\in[K]} \lambda_j c_j^*\right), \qquad (15)$$

where 1(·) is the indicator function. Hence, if the goal is ASE maximization, the entire bandwidth is allocated to a single tier, i.e., the HCN degenerates to a single-tier network. This changes if we consider other extra constraints, such as deployment cost, user distribution, and fairness; the case without constraints serves as a baseline value for the ASE.

Next, we add a minimum SE constraint to Problem (4) to guarantee a minimum user performance in each tier, given as b_k c_k > R_k, where R_k is the minimum SE requirement of tier k. The revised problem can also be solved in two consecutive steps, first over θ and then over b. Therefore, two similar subproblems are obtained: the first one for θ* is the same as (6), and the second one follows from (7) with the added minimum SE constraints b_k c_k* ≥ R_k. For the bandwidth allocation, we first assign the minimum bandwidth fraction $b_k^0 = R_k/c_k^*$, k ∈ [K], that satisfies the minimum SE constraint, and then check whether $\sum_{k\in[K]} b_k^0 \le 1$ is satisfied. If satisfied, the remaining bandwidth is allocated to the tier with maximal λ_k c_k*, i.e.,

$$b_k = 1 - \sum_{j\in[K]\setminus\{k\}} b_j^0, \quad \text{for } k = \arg\max_{j\in[K]} \lambda_j c_j^*, \qquad (16)$$

and the maximal overall ASE of the HCN is achieved. If not, the revised problem is infeasible.

We also consider the fairness constraint that users of each tier should achieve equal SE, i.e., b_k c_k = c, where c is the maximum common SE achievable. For each tier we have b_k c_k* = c, and with the sum constraint $\sum_{k\in[K]} b_k = 1$ we obtain $c = 1\big/\sum_{k\in[K]} 1/c_k^*$ and $b_k = c/c_k^*$.
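The allocation rules above reduce to a few lines of code; the following sketch (with illustrative function names and example inputs) implements the minimum-SE allocation of (16) and the fairness allocation.

```python
# Bandwidth allocation under minimum-SE constraints (Eq. (16)) and under fairness.
def allocate_bandwidth(lam, c_star, R):
    """lam, c_star, R: per-tier densities, optimal SEs and minimum SE targets."""
    b0 = [Rk / ck for Rk, ck in zip(R, c_star)]          # minimum fractions R_k / c_k*
    if sum(b0) > 1.0:
        return None                                      # revised problem infeasible
    best = max(range(len(lam)), key=lambda j: lam[j] * c_star[j])
    b = list(b0)
    b[best] = 1.0 - sum(b0[j] for j in range(len(b0)) if j != best)
    return b

def allocate_fair(c_star):
    """Fairness: b_k = c / c_k* with c = 1 / sum(1 / c_k*)."""
    c = 1.0 / sum(1.0 / ck for ck in c_star)
    return [c / ck for ck in c_star]
```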

(Plot omitted: area spectral efficiency in bps/Hz/m² versus the density ratio λ2/λ1, comparing spectrum sharing with spectrum partitioning under no SE constraints, minimum SE constraints, and fairness constraints.)

Fig. 2. The optimal ASE versus the density ratio λ2/λ1 with α = 4, λ1 = 10⁻⁵, λ2 = λ3, μ1 = 1, μ2 = 50μ1, μ3 = 25μ1 and R1 = R2 = R3 = 0.1.

3.2 Spectrum Sharing

If spectrum sharing between tiers is allowed, the proposed algorithm can still be used, thanks to the effective-gain ASAPPP method for HCNs [9]. When some tiers share the bandwidth resource, these tiers can be considered to form a single tier, say tier S, with an effective gain G_eff given by

$$G_{\mathrm{eff}} = 1 + \sum_{i\in\mathcal{I}} w_i^2 (G_i - 1), \quad \text{where } w_i = \lambda_i \mu_i^{\delta} \Big/ \sum_{i\in\mathcal{I}} \lambda_i \mu_i^{\delta}, \qquad (17)$$

and I is the set of tiers that share the same band. In contrast to the partitioning case, here the relative transmit powers matter. The density of the new composite tier equals the sum of the densities of these tiers, $\lambda_S = \sum_{i\in\mathcal{I}} \lambda_i$, and the corresponding user minimum SE requirement (if any) is $R_{\mathcal{I}} = \max_{i\in\mathcal{I}} R_i$. The coverage probability of the users served by the new composite tier can be approximated by scaling the SIR threshold with G_eff, i.e., $P_S(\theta_S) \approx \bar{F}^{\mathrm{PPP}}_{\mathrm{SIR}}(\theta_S/G_{\mathrm{eff}})$, and thus the spectrum allocation problem can be formulated similarly to (4) by replacing the sum ASE of the tiers sharing the same band with the ASE of tier S. Consequently, the proposed algorithm is also valid in the spectrum sharing case.

We present numerical results concerning the spectrum allocation in a three-tier HCN, composed of β-GPPs with β = 1 and 0.5 for tiers 1 and 2, respectively, and a PPP for tier 3, so that G1 = 1.5, G2 = 1.25 and G3 = 1 [9]. For spectrum sharing, we consider sharing between tiers 1 and 2. Figure 2 shows how the optimal ASE varies with the density ratio λ2/λ1 for different spectrum allocation schemes under different constraints. In the case with no SE constraints, the ASE for spectrum sharing first decreases and then increases as λ2/λ1 grows. The initial decline is due to the SIR performance degradation caused by the inter-tier interference between tiers 1 and 2; the increasing density of tier 2 then compensates for the SIR degradation, resulting in an ASE improvement. The ASE for spectrum partitioning stays constant at first and then increases with λ2/λ1, because the entire bandwidth is allocated to tier 2 rather than tier 1 once λ2/λ1 exceeds a certain value (about 1.1). Note that spectrum sharing is not always better than spectrum partitioning. In the case with minimum SE constraints, the ASE for spectrum sharing first decreases and then increases, while the ASE for spectrum partitioning increases slowly at the beginning and then faster with λ2/λ1. In the case of fairness constraints, both ASEs increase linearly with λ2/λ1. We also observe that the ASE without SE constraints outperforms that with SE constraints.
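For reference, a minimal sketch of the effective-gain computation in Eq. (17) is given below; the tier parameters are those of the numerical example and the function name is illustrative.

```python
# Effective gain of a composite tier formed by the tiers sharing one band, Eq. (17).
def effective_gain(G, lam, mu, alpha):
    delta = 2.0 / alpha
    weights = [l * (m ** delta) for l, m in zip(lam, mu)]   # lambda_i * mu_i^delta
    total = sum(weights)
    w = [x / total for x in weights]
    return 1.0 + sum(wi ** 2 * (Gi - 1.0) for wi, Gi in zip(w, G))

# Tiers 1 and 2 of the numerical example: beta-GPPs with G = 1.5 and 1.25.
G_eff = effective_gain([1.5, 1.25], lam=[1e-5, 1e-5], mu=[1.0, 50.0], alpha=4.0)
```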

4 Conclusion

In this paper, a novel framework was proposed to investigate spectrum allocation for general HCNs based on the ASAPPP approximation of the SIR distribution. We considered ASE optimization problems for both the spectrum partitioning and the spectrum sharing cases, and showed that both can be solved using the same approach. The optimal solution that maximizes the ASE first obtains the optimal SIR thresholds through the ASAPPP method and then gives the optimal spectrum allocation based on those thresholds. The proposed framework easily finds the optimum operating regime for spectrum sharing and partitioning under different scenarios, which gives key insights to guide the design of HCNs.

Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grant 61701071, by the China Postdoctoral Science Foundations (2017M621129, 2019M651095 and 2019T120204), by the open research fund of the National Mobile Communications Research Laboratory, Southeast University (No. 2019D03), and by the Fundamental Research Funds for the Central Universities under Grants DUT19RC(4)014, 3132019210 and 3132019220.

References

1. Peng M, Wang C, Li J, Xiang H, Lau V (2015) Recent advances in underlay heterogeneous networks: interference control, resource allocation, and self-organization. IEEE Commun Surv Tutor 17(2):700–729
2. ElSawy H, Hossain E, Haenggi M (2013) Stochastic geometry for modeling, analysis, and design of multi-tier and cognitive cellular wireless networks: a survey. IEEE Commun Surv Tutor 15(3):996–1019
3. Cheung WC, Quek TQ, Kountouris M (2012) Throughput optimization, spectrum allocation, and access control in two-tier femtocell networks. IEEE J Sel Areas Commun 30(3):561–574
4. Mankar PD, Das G, Pathak SS (2015) A novel proportionally fair spectrum allocation in two tiered cellular networks. IEEE Commun Lett 19(4):629–632
5. Lin Y, Bao W, Yu W, Liang B (2015) Optimizing user association and spectrum allocation in hetnets: a utility perspective. IEEE J Sel Areas Commun 33(6):1025–1039
6. Guo A, Haenggi M (2013) Spatial stochastic models and metrics for the structure of base stations in cellular networks. IEEE Trans Wirel Commun 12(11):5800–5812
7. Deng N, Zhou W, Haenggi M (2015) The Ginibre point process as a model for wireless networks with repulsion. IEEE Trans Wirel Commun 14(1):107–121
8. Ganti RK, Haenggi M (2016) Asymptotics and approximation of the SIR distribution in general cellular networks. IEEE Trans Wirel Commun 15(3):2130–2143
9. Wei H, Deng N, Zhou W, Haenggi M (2016) Approximate SIR analysis in general heterogeneous cellular networks. IEEE Trans Commun 64(3):1259–1273
10. Abramowitz M, Stegun IA, Romain JE (1964) Handbook of mathematical functions, with formulas, graphs, and mathematical tables. Courier Corporation

Human Action Recognition Algorithm Based on 3D DenseNet-BC

Yujiao Cui, Yong Zhu(&), Jun Li, Luguang Wang, and Chuanbo Wang

Electronic and Communication Engineering, Heilongjiang University, No. 74, Xuefu Road, Harbin, People's Republic of China
[email protected]

Abstract. A video human action recognition algorithm based on 3D DenseNet-BC is proposed. Three-dimensional convolution is used to acquire the characteristics of human action in video, and the layers are connected following the DenseNet-BC connection mode to obtain high-dimensional features, thus constructing a 3D DenseNet-BC for human action recognition in video. Tests were carried out on the data sets KTH and UCF-101. The experimental results show that the constructed network structure achieves a good recognition effect on the video action recognition task.

Keywords: Human action recognition · Convolutional neural network · DenseNet · 3D DenseNet-BC

1 Introduction

Human action recognition in video is a challenging task; it has been studied for many years and significant progress has been made. Most early mainstream video recognition algorithms extract artificially designed video features such as contour, HOG/HOF, MHI and their extensions to three dimensions [1]. Although the traditional algorithms perform well, they are designed for specific purposes and are therefore difficult to adapt to the many scenes found in reality; they are also affected by human prior knowledge and are relatively complicated. Since the convolutional neural network (CNN) [2] achieved excellent results in the 2012 ImageNet Challenge, CNNs have been widely used in the image processing field. From AlexNet [3] to VGG [4], GoogleNet [5], BN-Inception [6] and ResNet [7], improvements of the convolutional neural network structure promoted the rapid development of image processing, and people began to design effective deep convolutional neural networks for action recognition in video. Du designed a three-dimensional convolutional C3D network containing the time dimension, which obtains spatiotemporal features in the video through three-dimensional convolution operations [8]. C3D features are highly representative in a variety of tasks, including action recognition, action detection, video captioning, and gesture detection. Huang designed a densely connected network, DenseNet, by connecting each layer directly to the preceding layers [9]. Compared with earlier networks, this structure is simpler, its parameters are used more efficiently, features are transmitted densely, and it has achieved the best performance in image classification tasks so far. Therefore, we build a dense network based on DenseNet's hierarchical connection method, and use 3D convolution to exploit the motion information in the video, obtaining the 3D DenseNet-BC network model for human action recognition in video.

2 3D DenseNet-BC Construction

2.1 3D-CNN

The input of a 3D convolutional neural network is a cube formed by stacking multiple consecutive frames, so features can be extracted on three scales at the same time. Through the three-dimensional convolution kernel, features are extracted from multiple consecutive frames, and each feature cube is connected to multiple consecutive frames in the upper layer, so as to capture the motion information within a period of time. The convolution kernel in a 3D convolutional neural network is a three-dimensional cube. In the network, each feature cube in a convolutional layer is connected with multiple adjacent consecutive frames of the previous layer for the convolution operation, and the value at a particular position of a feature cube is sensed locally by convolving multiple consecutive frames at the same position in the upper layer. The value at position (x, y, z) of the kth feature cube in the ith layer is calculated as

$$v_{ik}^{xyz} = f\Big(b_{ik} + \sum_{m}\sum_{p=0}^{P_i-1}\sum_{q=0}^{Q_i-1}\sum_{r=0}^{R_i-1} w_{ikm}^{pqr}\, u_{(i-1)m}^{(x+p)(y+q)(z+r)}\Big), \qquad (1)$$

where $v_{ik}^{xyz}$ is the output at position (x, y, z) of the kth feature map of the ith layer, f(·) is the activation function, $b_{ik}$ is the shared bias of the feature map, $P_i \times Q_i \times R_i$ is the height, width and temporal length of the three-dimensional convolution kernel, $w_{ikm}^{pqr}$ is the weight at position (p, q, r) connecting the kth feature map of the ith layer to the mth feature map of the (i−1)th layer, and u is the input from the (i−1)th layer to the ith layer (Fig. 1).
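To make the indexing in Eq. (1) explicit, the following loop-based sketch computes one output feature cube directly from the definition; it is written for clarity rather than efficiency, and the ReLU activation is only an illustrative choice.

```python
# Direct evaluation of Eq. (1) for one output feature cube (no padding, stride 1).
import numpy as np

def conv3d_feature_cube(inputs, weights, bias):
    """inputs:  list of M input cubes u_(i-1)m, each of shape (T, H, W)
       weights: array of shape (M, R, P, Q) holding the kernels w_ikm^{pqr}
       bias:    scalar b_ik
       returns: one output cube v_ik of shape (T-R+1, H-P+1, W-Q+1)."""
    M, R, P, Q = weights.shape
    T, H, W = inputs[0].shape
    out = np.full((T - R + 1, H - P + 1, W - Q + 1), float(bias))
    for m in range(M):
        for r in range(R):
            for p in range(P):
                for q in range(Q):
                    out += weights[m, r, p, q] * inputs[m][r:r + T - R + 1,
                                                           p:p + H - P + 1,
                                                           q:q + W - Q + 1]
    return np.maximum(out, 0.0)   # f(.) taken here as ReLU
```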

(Diagram omitted: Input → convolution layers C1–C5 with 16, 32, 64, 128 and 256 kernels of size 3×3×3, each followed by a down-sampling layer S1–S5 of size 1×2×2 or 2×2×2, then fully connected layers D1:256 and D2 (output).)

Fig. 1. 3D CNN network structure


The first layer is the input layer. It consists of 15 consecutive grayscale images of adjacent video frames, each of size 120 × 160. C1–C5 are convolution layers; the convolution kernel of each layer is 3 × 3 × 3, and the number of kernels increases successively from 16 to 256, so as to generate more types of high-level features from combinations of low-level features. S1–S5 are down-sampling layers; max pooling is used to decrease the resolution of the feature maps, narrow their size, reduce the amount of calculation and improve the tolerance to input image distortion. In layers S2 and S4, a window of 2 × 2 × 2 is used to down-sample both the time dimension and the space dimensions; in the other layers, a window of 1 × 2 × 2 is used to down-sample only in the spatial dimensions. The D1 layer is a fully connected layer containing 256 neurons; the output feature cubes of layer S5 are connected to the 256 neurons of layer D1. Layer D2 is the second fully connected layer as well as the output layer, and the number of neurons is 6, the same as the number of target categories. Each neuron in layer D2 is fully connected to the 256 neurons in layer D1, and the output is finally classified by a softmax classifier to obtain the action category.

2.2 DenseNet-BC

DenseNet is a neural network with dense connections: in this network, there is a direct connection between any two layers. By reusing features, the transmission of features is strengthened, so the number of parameters and the amount of computation required by DenseNet are reduced, and the vanishing-gradient problem is effectively alleviated. DenseNet is built by connecting multiple Dense Blocks; the network structure of a Dense Block is shown in Fig. 2.

Fig. 2. The network structure of DenseNet block

2.3 3D DenseNet-BC

DenseNet was originally applied to static image processing and can only extract the spatial information of an image, which is not enough to capture the temporal and spatial information of human action in video. In this paper, DenseNet is extended with three-dimensional convolution: 3D convolution is used to extract the spatiotemporal information of the action in the video, and DenseNet's hierarchical connection method is used to construct a densely connected three-dimensional convolutional network, the 3D DenseNet-BC, to complete the video recognition task. Similar to the 2D DenseNet-BC connection, the 3D convolutional layers in the network constructed in this paper are also densely connected. The size of a three-dimensional convolution kernel is (d × s × s), where d is the time depth and s is the spatial size of the kernel. Specifically, to construct the 3D DenseNet-BC network, we first expand all convolution kernels in DenseNet from d × d to 3 × d × d. For example, the convolution kernel size of Conv1 is 3 × 7 × 7, which integrates the initial information of the video frames, and the size of the remaining convolution kernels is 3 × 3 × 3. Secondly, the convolution layers are densely connected following the DenseNet hierarchical connection method, so that the feature maps output by each layer can be reused, which enhances the transmission of features and reduces the amount of calculation. In order to reduce the memory required for calculation, only two convolution layers are used in each Dense Block, and batch normalization is added to reduce over-fitting; a "transition" layer with a compression ratio of 0.5 is added after each dense block; average pooling is used after the convolution to perform down-sampling and reduce redundant features of the image; finally, the softmax classifier classifies the output to mark the action category. The resulting 3D DenseNet-BC network model is thus obtained, and its structure parameters are shown in Table 1.

Table 1. The structure parameters of the 3D DenseNet-BC network

Layer                 Output size   Network parameters
Convolution           8 × 56 × 56   3 × 7 × 7 conv, stride 2
Dense block (1)       8 × 56 × 56   [1 × 1 × 1 conv, 3 × 3 × 3 conv] × 6
Transition layer (1)  4 × 28 × 28   [1 × 1 × 1 conv, 3 × 2 × 2 pool]
Dense block (2)       4 × 28 × 28   [1 × 1 × 1 conv, 3 × 3 × 3 conv] × 12
Transition layer (2)  2 × 14 × 14   [1 × 1 × 1 conv, 3 × 2 × 2 pool]
Dense block (3)       2 × 14 × 14   [1 × 1 × 1 conv, 3 × 3 × 3 conv] × 8
Transition layer (3)  1 × 7 × 7     [1 × 1 × 1 conv, 3 × 2 × 2 pool]
Classification layer  1 × 1 × 1     7 × 7 global average pool, FC

The network input size is set to (16, 112, 112), i.e., a video frame sequence with a temporal length of 16 frames and a spatial size of 112 × 112; the output size corresponds to the size of the 3D feature output by each convolutional layer. The final network features over all channels are connected to a fully connected layer whose size equals the number of video categories.
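A possible tf.keras reconstruction of one bottlenecked 3D dense block and its transition layer, following the pattern of Table 1, is sketched below; the growth rate, the number of initial filters and other hyper-parameters are assumptions, not the authors' original configuration.

```python
# Sketch of a 3D dense block with bottleneck (1x1x1) convolutions and a transition
# layer, matching the layer pattern of Table 1 (illustrative, not the original code).
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, growth_rate):
    # BN -> ReLU -> 1x1x1 conv (bottleneck) -> BN -> ReLU -> 3x3x3 conv
    y = layers.BatchNormalization()(x)
    y = layers.Activation('relu')(y)
    y = layers.Conv3D(4 * growth_rate, kernel_size=1, padding='same')(y)
    y = layers.BatchNormalization()(y)
    y = layers.Activation('relu')(y)
    y = layers.Conv3D(growth_rate, kernel_size=3, padding='same')(y)
    return layers.Concatenate()([x, y])        # dense connection: reuse features

def dense_block(x, num_layers, growth_rate=32):   # growth rate is an assumption
    for _ in range(num_layers):
        x = conv_block(x, growth_rate)
    return x

def transition(x, compression=0.5):
    channels = int(x.shape[-1] * compression)      # compression ratio 0.5
    x = layers.Conv3D(channels, kernel_size=1, padding='same')(x)
    return layers.AveragePooling3D(pool_size=(3, 2, 2), strides=(2, 2, 2),
                                   padding='same')(x)

inputs = tf.keras.Input(shape=(16, 112, 112, 3))   # 16-frame clip, 112 x 112
x = layers.Conv3D(64, kernel_size=(3, 7, 7), strides=2, padding='same')(inputs)
x = dense_block(x, 6)                              # first dense block of Table 1
x = transition(x)
```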


3 Experimental Results and Analysis

In order to verify the performance of the 3D DenseNet-BC network structure, we test the video action recognition rate on the data sets KTH and UCF-101, and compare it with the C3D network.

3.1 Data Sets

The data set UCF-101 consists of real videos from YouTube; it contains 13,320 videos covering 101 different human action classes. UCF-101 offers the greatest diversity in action classes and has large variations in camera motion, object appearance and posture, object scale, viewpoint, cluttered background, lighting conditions, etc. It is one of the most challenging video data sets to date. The entire data set has three training/testing splits, each of which contains more than 8,000 training videos with the remaining videos as the test set; we evaluate only the first split. The KTH data set consists of 6 types of actions performed by 25 people in 4 different scenes, for a total of 600 videos. The videos contain scale changes, clothing changes and changes in illumination, but the background is simple and the camera is fixed. In this paper, the videos of 16 people in each class of the data set are selected as training samples, and the videos of the remaining 9 people are used as test samples.

3.2 Experimental Environment Settings

The experiment was implemented using the open-source TensorFlow framework; the experimental environment is Ubuntu 18.04 with an NVIDIA 1080Ti GPU; the network input is a continuous 16-frame video sequence, and the video sampling rate is set to 2.

3.3 Experimental Results and Analysis

In this paper, the video action recognition rate of the two networks is first tested on the small data set KTH, with the same training parameters for both. During training, the video frame size of the training samples in the data set KTH is set to 80 × 100, the batch size is 32, the initial learning rate is 0.01, and the number of iterations is 10,000. The recognition rates of the 3D DenseNet-BC network and the C3D network on the KTH test samples are shown in Table 2.

Table 2. Recognition rate comparison

Model           Dataset  Recognition rate (%)
3D CNN          KTH      88.73
3D DenseNet-BC  KTH      94.50


It can be seen from Table 2 that the 3D DenseNet-BC network achieves a recognition rate 5.77% higher than that of the C3D network, although both networks learn with three-dimensional convolution. The 3D DenseNet-BC network achieves the better recognition effect, which shows that dense connections improve the 3D convolutional network structure and raise the recognition rate of human action in video; the action recognition performance of the 3D DenseNet-BC structure is therefore better than that of the C3D network (Table 3).

Table 3. Recognition results on the data set KTH (number of recognitions)

Action category  Jogging  Running  Boxing  Waving  Clapping  Walking
Jogging          98       1        0       0       0         1
Running          4        94       0       0       0         2
Boxing           0        0        98      2       0         0
Waving           0        0        0       97      3         0
Clapping         0        0        1       2       97        0
Walking          3        0        0       0       0         97

The model has a high recognition rate for actions such as boxing, clapping, waving and walking, but makes more errors in the jogging and running categories. Analysis of the original videos shows that this is mainly due to the low resolution of the jogging and running actions: distinguishing them is inherently difficult, so mistakes are easy to make. In general, the 3D DenseNet-BC network model has a good recognition effect on the data set KTH, reaching a recognition rate of 96.83% on the entire data set (including the training and test groups). We also test the recognition effect of the 3D DenseNet-BC network model on the video data set UCF-101. The experiment tests the recognition rate for video sequences 16 frames long. The size of the input video frames is 128 × 171, the batch size is 32, and the initial learning rate is 0.01, with training ending at 100,000 iterations. The recognition rates of the two networks on the large data set UCF-101 are shown in Table 4. The recognition rate is not based on recognizing an entire video, but on recognizing continuous 16-frame video sequences.

Table 4. Recognition rate of the two networks on data set UCF-101

Model           Dataset   Recognition rate (%)
3D CNN          UCF-101   43.68
3D DenseNet-BC  UCF-101   49.60


It can be seen that the action recognition rate of the 3D DenseNet-BC network is 5.92% higher than that of the C3D network, which shows that the recognition of the 3D DenseNet-BC network on this more complex data set is better than that of the C3D network and that the proposed structure outperforms the C3D structure in the video human action recognition task.

4 Conclusion

We build a densely connected three-dimensional convolutional network based on 3D convolution and DenseNet-BC. Due to the dense connection characteristics of DenseNet-BC, the features of each layer can be reused, which reduces the computation of the constructed network. By using 3D convolution to extract the spatio-temporal features of video, the approach avoids the complicated data preprocessing and hand-crafted feature extraction of traditional methods and makes full use of the original video information. We test the action recognition performance on the data sets KTH and UCF-101. The results show that the 3D DenseNet-BC network achieves a better recognition effect than the C3D network model built with the same three-dimensional convolution, obtaining the best video human action recognition performance among the compared network structures.

References

1. Wei Z, Quanqi C, Yujin Z (2014) Deep learning and its new progress in target and action recognition. J Image Graph 19(2):175–184
2. Russakovsky O, Deng J, Su H et al (2015) ImageNet large scale visual recognition challenge. Int J Comput Vision 115(3):211–252
3. Lecun Y, Bottou L, Bengio Y et al (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324
4. Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: NIPS'12 proceedings of the 25th international conference on neural information processing systems, pp 1097–1105
5. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. Comput Sci
6. Szegedy C, Liu W, Jia Y et al (2015) Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 1–9
7. He K, Zhang X, Ren S et al (2016) Deep residual learning for image recognition. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR), pp 770–778
8. Du T, Ray J, Shou Z et al (2017) ConvNet architecture search for spatiotemporal feature learning. Comput Vision Pattern Recogn 17(8):65–77
9. Huang G, Liu Z, Maaten LVD et al (2017) Densely connected convolutional networks. In: IEEE conference on computer vision and pattern recognition, Honolulu, Hawaii, USA. IEEE Computer Society, pp 2261–2269

Color Image Encryption Based on Principal Component Analysis

Xin Huang, Xinyue Tang, and Qun Ding(&)

Heilongjiang University, 74 Xuefu, Harbin, China
[email protected]

Abstract. For an M × N color image, at least M × N × 3 data values must be processed during encryption; the amount of data is large, which places strict requirements on the operating system. Therefore, it is necessary to propose an algorithm that encrypts a relatively small amount of data while still obtaining a good encryption effect, and that can use that data to reconstruct the original image. An image encryption algorithm based on PCA (principal component analysis) is proposed in this paper, which removes the dimensions with small variance and then encrypts the resulting data. The chaotic sequence generator uses the 2D-Logistic map; the matrices generated from the chaotic sequences are combined with the reduced-dimensional data, so as to obtain a good encryption effect.

Keywords: PCA · Image encryption · 2D-Logistic · Image correlation

1 Introduction

With the development of science and technology, digital images have become a main medium of information transmission [1, 2]. The works in [3–7] construct chaotic image encryption algorithms, where the pseudo-random sequence is composed from multiple plaintext pixel values and combined with a two-level chaotic mapping algorithm; therefore, the pseudo-random sequence used to encrypt the image differs for different images, giving high sensitivity to the plaintext. Loose-coupling problems related to implicit strategy substitution and confusion are addressed in [3–13]. However, due to the large amount of digital image data and its high redundancy, digital image encryption algorithms increase the complexity of system model analysis to a certain extent. In the process of digital image modeling, many variables often depend on and correlate with each other. Analyzing this feature can therefore reduce the number of variables, further improve the operating efficiency of the algorithm, and reduce the requirements on the system hardware configuration.

Principal component analysis (PCA) is a commonly used dimensionality-reduction method in image processing, because it can extract color, texture, SIFT, SURF and other related features from the image. However, there are many such feature points in an image, so dimensionality reduction is needed to improve efficiency. This paper proposes a chaotic color image encryption algorithm based on PCA. The design idea of the algorithm is as follows: firstly, the color image is decomposed into its three RGB components, and the principal components R′G′B′ are extracted by the PCA algorithm; then the scrambling and confusion matrices are generated by the 2D-Logistic chaotic system; and finally the ciphertext is generated by matrix operations to achieve the encryption effect.

2 2D-Logistic Chaos System and PCA

2.1 2D-Logistic Chaos System

Chaos theory, relativity and quantum theory are three major discoveries of 20th-century science. Scientists and researchers from various disciplines have conducted in-depth research on chaos, especially in the fields of economics, artificial intelligence and encryption algorithms. The study of chaos theory and its basic dynamic characteristics has broad prospects and significance for the application of chaos and for interdisciplinary integration. By analyzing and studying the basic dynamics of chaos, including dissipation, symmetry and disturbance characteristics, chaotic systems can be improved according to actual needs and practical application requirements. Therefore, an in-depth study of the basic theory of chaos and its dynamics provides support for future research and applications. This paper uses the 2D-Logistic chaotic system, whose formula is defined as Eq. (1):

$$\begin{cases} x_1(n+1) = \mu_1 x_1(n)\,(1 - x_1(n)) + \gamma_1 x_2^2(n) \\ x_2(n+1) = \mu_2 x_2(n)\,(1 - x_2(n)) + \gamma_2\,(x_1(n) + x_1(n)x_2(n)) \end{cases} \qquad (1)$$

When 2.75 < μ1 ≤ 3.4, 2.7 < μ2 ≤ 3.45, 0.15 < γ1 ≤ 0.21, 0.13 < γ2 ≤ 0.15 and x1(n), x2(n) ∈ (0, 1), the map is chaotic.
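A minimal sketch of iterating the 2D-Logistic map of Eq. (1) is given below; the parameter values are the ones used later in the paper's experiments, and the function name is illustrative.

```python
# Generate two chaotic sequences by iterating the 2D-Logistic map of Eq. (1).
def logistic2d(x1, x2, mu1, mu2, g1, g2, n):
    seq1, seq2 = [], []
    for _ in range(n):
        # Simultaneous update: the right-hand side uses the previous x1, x2.
        x1, x2 = (mu1 * x1 * (1 - x1) + g1 * x2 * x2,
                  mu2 * x2 * (1 - x2) + g2 * (x1 + x1 * x2))
        seq1.append(x1)
        seq2.append(x2)
    return seq1, seq2

s1, s2 = logistic2d(0.7812, 0.5478, 3.144, 3.258, 0.1678, 0.1496, n=1000)
```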

2.2 PCA (Principal Component Analysis)

Principal Component Analysis (PCA) is a common method of data analysis. For a set of data that may have linear correlations between different dimensions, PCA can transform the data, by an orthogonal transformation, into data that are linearly independent across dimensions. The relationships between samples in PCA-processed data are generally more intuitive, and PCA is a very common tool for data analysis and preprocessing. The data processed by PCA are linearly independent in all dimensions, and the purpose of dimensionality reduction is achieved by eliminating the dimensions with smaller variance. If there is a strong linear correlation between certain dimensions of the data, the information provided by a sample in these dimensions is to some extent repeated, so those dimensions are redundant. In addition, in order to reduce the computation of data processing or to suppress noise, some minor (small-variance) dimensions of the data set are eliminated.

Let X = [X_1, X_2, ..., X_p]^T be a p-dimensional random vector, and denote μ = E(X) and Σ = D(X). Let the eigenvectors corresponding to the p eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_p of Σ be t_1, t_2, ..., t_p, with Σ t_i = λ_i t_i, t_i^T t_i = 1, and t_i^T t_j = 0 (i ≠ j; i, j = 1, 2, ..., p). The linear transformation is as follows:

$$\begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix} = \begin{bmatrix} L_{11} & \cdots & L_{1p} \\ \vdots & \ddots & \vdots \\ L_{n1} & \cdots & L_{np} \end{bmatrix}\begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_p \end{bmatrix} = \begin{bmatrix} L_1^T \\ L_2^T \\ \vdots \\ L_n^T \end{bmatrix} X \quad (n \le p). \qquad (2)$$

So if Y = [Y_1, Y_2, ..., Y_n]^T is used to describe X = [X_1, X_2, ..., X_p]^T, then Y is required to reflect the information of X as much as possible: the larger the variance D(Y_i) = L_i^T Σ L_i of Y_i, the more information of X is described. In addition, in order to express the information most effectively, Y_i and Y_j must not contain duplicated content of the original data, that is, cov(Y_i, Y_j) = L_i^T Σ L_j = 0. It can be proved that when L_i = t_i, D(Y_i) attains its maximum value, which equals λ_i, and Y_i and Y_j satisfy the orthogonality condition.

PCA-based image processing is a dimensionality-reduction method. By reducing the dimensionality of the high-dimensional image block vector space, the multi-variable image block data are optimally simplified and a few principal components are derived, preserving the original image information at a certain ratio while maintaining the decorrelation between image blocks, so that the reconstructed image is essentially indistinguishable from the original to the naked eye. The number m of principal components directly affects the image reconstruction effect, so the value of m is especially critical.
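The projection described above can be sketched with a covariance eigendecomposition as follows; this is a generic PCA sketch in NumPy, not the paper's exact implementation, and the block layout of the image data is left to the caller.

```python
# PCA by eigendecomposition of the sample covariance: keep the m directions with
# the largest eigenvalues (L_i = t_i) and project onto them.
import numpy as np

def pca_reduce(samples, m):
    """samples: array of shape (num_samples, p); returns (Y, components, mean)."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    cov = np.cov(centered, rowvar=False)           # estimate of Sigma
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:m]          # top-m principal directions
    T = eigvecs[:, order]                          # columns are t_1 .. t_m
    Y = centered @ T                               # principal components Y_i
    return Y, T, mean

def pca_reconstruct(Y, T, mean):
    return Y @ T.T + mean                          # approximate the original data
```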

3 The Scheme of Image Encryption and Decryption

3.1 Image Encryption Algorithm Structure

Images are of two types: grayscale images and color images. A grayscale image is a matrix of pixels with integer values between 0 and 255, and a color image is composed of the three primary color component matrices R, G and B. In this paper, a color image is taken as the example for image encryption. First, the color image is converted into its three color component matrices, and then the encryption operation is performed. The steps of the encryption operation are as follows:

1. Find the pixel sums of the three RGB components and record them as d1, d2, d3. The value of the principal component is calculated according to the following equation:

$$P_{avr_i} = \Big(\Big\lfloor \frac{d_i + r_1 + r_2 + r_3 + r_4}{MN} \Big\rfloor \bmod 256\Big) + 50, \quad i = 1, 2, 3, \qquad (3)$$

where r1, r2, r3 and r4 are 8-bit random integers in the range [0, 255], and $P_{avr_i}$ (i = 1, 2, 3) is the main component of R, G and B, respectively.

2. According to the main components $P_{avr_i}$ (i = 1, 2, 3) obtained in step 1, the R, G and B components are respectively reduced in dimension, yielding R′, G′ and B′.

3. The key parameters x1, x2, μ1, μ2, γ1, γ2 are chosen within the allowed parameter ranges of the 2D-Logistic map. Two real chaotic sequences {x1(i)}, {x2(i)}, i = 1, 2, ..., MN, are then constructed according to Eq. (1). The 14th digit to the right of the decimal point of {x1(i)} and {x2(i)} is taken and reduced modulo 256 to quantize the sequences into two matrices X and Y of size M × N.

4. The three matrices R′, G′ and B′ are diffused according to Eqs. (4) and (5) (see the sketch after this list):

$$A(1,1) = (P(1,1) + X(1,1) + r_1 + r_2) \bmod 256, \qquad (4)$$

$$A(i,j) = (P(i,j) + X(i,j) + A(i,j-1)) \bmod 256, \qquad (5)$$

where r1 and r2 are constant parameters introduced by the encryption, i = 1, ..., M, j = 2, ..., N + 1, and the matrix P(i, j) stands for R′, G′ or B′.

5. Pixel A(i, j), i = 1, 2, ..., M, j = 1, 2, ..., N, is swapped with A(m, n), where m and n are calculated by Eqs. (6) and (7):

$$m = \big(X(i,j) + \mathrm{sum}\big(A(X(i,j),\ 1\ \mathrm{to}\ N)\big)\big) \bmod M + 1, \qquad (6)$$

$$n = \big(Y(i,j) + \mathrm{sum}\big(A(1\ \mathrm{to}\ M,\ X(i,j))\big)\big) \bmod N + 1. \qquad (7)$$

When m ≠ i or n ≠ j, the positions of A(i, j) and A(m, n) are exchanged; otherwise A(i, j) and A(m, n) are left unchanged. The image B is obtained by changing the positions of the pixels, which eliminates the correlation of adjacent pixels in the original image.

6. The second round of confusion differs from the first round: it is calculated starting from the last pixel of the image, according to Eqs. (8) and (9):

$$C(M,N) = (B(M,N) + Y(M,N) + r_3 + r_4) \bmod 256, \qquad (8)$$

$$C(i,j) = (B(i,j) + Y(i,j) + C(i+1,j) + C(i,j+1)) \bmod 256, \qquad (9)$$

where r3 and r4 are constant parameters introduced by the encryption, j = N − 1, N − 2, ..., 1 and i = M − 1, M − 2, ..., 1. Matrix B is changed into matrix C by means of the pseudo-random matrix Y. The matrix C(i, j) stands for R″, G″ or B″.

7. Finally, the three matrices R″, G″ and B″ obtained by the encryption operation are combined into a three-dimensional array, which is the encrypted image.

In summary, the encryption algorithm first preprocesses the color image to obtain three two-dimensional grayscale images, extracts the key information by principal component analysis to reduce the dimension of the image, performs a diffusion operation to destroy the connection between adjacent pixels, and then applies image scrambling and a secondary diffusion. The decryption process is the reverse of the process described above. It can be seen from the equations above that the encryption operation is non-linear.
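As a small illustration of the first diffusion round (Eqs. (4) and (5)), the sketch below applies the recursion to one reduced component. Note that the paper only defines the in-row recursion, so carrying the last value of the previous row into the next row is an assumption made here for completeness, and 0-based indexing is used.

```python
# First diffusion round applied to one component P (values 0-255) with the
# chaotic matrix X; r1, r2 are the key constants of Eq. (4).
import numpy as np

def diffuse(P, X, r1, r2):
    M, N = P.shape
    A = np.zeros((M, N), dtype=np.int64)
    A[0, 0] = (int(P[0, 0]) + int(X[0, 0]) + r1 + r2) % 256     # Eq. (4)
    for i in range(M):
        for j in range(N):
            if i == 0 and j == 0:
                continue
            # Previous pixel in scan order; the cross-row carry is an assumption.
            prev = A[i, j - 1] if j > 0 else A[i - 1, N - 1]
            A[i, j] = (int(P[i, j]) + int(X[i, j]) + prev) % 256  # Eq. (5)
    return A.astype(np.uint8)
```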

3.2 Encryption Result

Take the color Lena image of size 512 × 512 as an example. The results of encryption and decryption are shown in Fig. 1.


(a) Plain Image

(b) Encrypted Result

(c) Decrypted Result

Fig. 1. Encryption and decryption results of plaintext association algorithms

According to the encryption result, the ciphertext image obtained by encrypting the plain original image is visually unrecognizable, while the decrypted image is identical to the original image.

4 Security Analysis and Experimental Results

4.1 Key Space

The key of the proposed digital image encryption system comprises the chaotic initial values, the 2D-Logistic system parameters and four random numbers: K = {x1, x2, μ1, μ2, γ1, γ2, r1, r2, r3, r4}, where x1(n), x2(n) ∈ (0, 1), μ1 ∈ (2.75, 3.4], μ2 ∈ (2.7, 3.45], γ1 ∈ (0.15, 0.21] and γ2 ∈ (0.13, 0.15], each with a step size of 10⁻¹⁴, and r1, r2, r3, r4 are 8-bit random integers in [0, 255] with a step size of 1. Therefore, the key entropy is about 300 bits, indicating that the key space is large enough to resist exhaustive attacks.

Histogram Analysis

This paper is based on the 2D Logistic system encryption algorithm. Figure 2 also compares the statistical properties between the plaintext and its corresponding ciphertext. The key is set to K = {0.7812, 0.5478, 3.144, 3.258, 0.1678, 0.1496, 69, 73, 80, 226} without loss of generality. As shown in Fig. 2, The ciphertext image has a flat histogram, indicating that the pixel points on each gray value are approximately evenly distributed. According to the calculation method of the correlation coefficient, the correlation coefficients of the original image and the three sequence encrypted images are respectively obtained. Since the color image is used, the correlation coefficient is calculated from the three-color component matrix, respectively. It can be seen from Table 1 that the correlation coefficient between adjacent pixels in the plain text image is close to 1, and the correlation of adjacent points in the ciphertext image approaches 0, that is, the plaintext image has strong correlation, while the ciphertext is almost no correlation.

Color Image Encryption Based on Principal Component Analysis

(a) R

(b) G

(c) B

(d) R’

(e) G’

(f) B’

1373

Fig. 2. Histogram of the image Table 1. The correlation coefficients of plaintext and ciphertext Image Lena-R Lena-G Lena-B

4.3

Plaintext Cipher Plaintext Cipher Plaintext Cipher

Horizontal

Vertical

Diagonal

Opposite diagonal

0.988752 −0.000754 0.980022 0.012799 0.956705 −0.020979

0.980894 −0.029733 0.967897 0.030182 0.935796 −0.006489

0.967766 −0.000551 0.959593 0.012829 0.917701 −0.001404

0.974616 0.025477 0.965035 0.021864 0.931084 0.001485

Sensitivity Analysis

Key sensitivity is an important indicator of security analysis of encryption algorithms. The higher the key sensitivity, the higher the security of the encryption algorithm. Key sensitivity means that the key has a very small change, which can lead to the failure of decryption. The initial values of the encrypted sequences used in Fig. 3b herein are {0.7812, 0.5478, 3.144, 3.258, 0.1678, 0.1496, 69, 73, 80, 226}. To verify the key sensitivity of the algorithm, only a very small change to the initial value is made, and the value of x1 is changed to 0.7812001, that is, the key of the entire encryption algorithm becomes {0.7512001, 0.7878, 3.134, 3.256, 0.1678, 0.1496, 78, 234, 89, 26}, the encryption result is decrypted with the correct key, and the encrypted result is decrypted with the changed key, and the obtained result is shown in Fig. 3c.

1374

X. Huang et al.

(a) The result of encrypted x1=0.7812

(b) The correct key to decrypt results

(c) The result of decrypted x1=0.7812001

Fig. 3. The test results of key sensitivity

According to the test results of key sensitivity shown in Fig. 3, when the key is changed very little, we can’t obtain the correct decryption result and the decryption result is far from the original image. It shows that the algorithm is sensitive to key sensitivity, ensuring the security of its encryption result.

5 Conclusion In this paper, a color image encryption algorithm based on principal component analysis is proposed, which completely reduces the correlation between RGB components in color images. Firstly, the RGB image information values are extracted by principal component analysis, and then the color images are operated by the pseudorandom property of the chaotic sequence and the two-dimensional high complexity to change the relationship between the original image pixels. Theoretical analysis and experiments show that the algorithm has a large key space, and the plaintext data is fully encrypted and diffused, and the encryption effect is better, which can effectively resist detailed attacks and known plaintext attacks. Therefore, the algorithm is more secure.

References 1. Cao W, Zhou Y, Chen CLP et al (2017) Medical image encryption using edge maps. Sig Process 132:96–109 2. Wang X, Liu C (2017) A novel and effective image encryption algorithm based on chaos and DNA encoding. Multimedia Tools Appl 76(5):6229–6245 3. Feng W, He YG (2018) Cryptanalysis and improvement of the hyper-chaotic image encryption scheme based on DNA encoding and scrambling. IEEE Photon J 10(6):1–15 4. Li C, Luo G, Qin K et al (2017) An image encryption scheme based on chaotic tent map. Nonlinear Dyn 87(1):127–133 5. Suryanto Y, Ramli K (2017) A new image encryption using color scrambling based on chaotic permutation multiple circular shrinking and expanding. Multimedia Tools Appl 76 (15):16831–16854

Color Image Encryption Based on Principal Component Analysis

1375

6. Chai X, Gan Z, Zhang M (2017) A fast chaos-based image encryption scheme with a novel plain image-related swapping block permutation and block diffusion. Multimedia Tools Appl 76(14):15561–15585 7. Ping P, Fan J, Mao Y et al (2018) A chaos based image encryption scheme using digit-level permutation and block diffusion. IEEE Access 6:67581–67593 8. Guanrong C, Yaobin M, Chui CK (2004) A symmetric image encryption scheme based on 3D chaotic cat maps. Chaos, Solitons Fractals 21(3):749–761 9. Mao Y, Chen G, Lian S (2004) A novel fast image encryption scheme based on 3D chaotic baker maps. Int J Bifurc Chaos 14(10):3613–3624 10. Parvaz R, Zarebnia M (2018) A combination chaotic system and application in color image encryption. Opt Laser Technol 101:30–41 11. Wu X, Zhu B, Hu Y et al (2017) A novel color image encryption scheme using rectangular transform-enhanced chaotic tent maps. IEEE Access 99:1 12. Zhang Y, Xiao D (2014) An image encryption scheme based on rotation matrix bit-level permutation and block diffusion. Commun Nonlinear Sci Numer Simul 19(1):74–82 13. Wang X, Zhu X, Zhang Y (2018) An image encryption algorithm based on Josephus traversing and mixed chaotic map. IEEE Access 6:23733–23746

Research on Transmitter of the Somatosensory Hand Gesture Recognition System Fei Gao1(&), Jiyou Fei2, Hua Li2, Xiaodong Liu2, and Ti Han3 1

2

3

School of Mechanical Engineering, Dalian Jiaotong University, 116000 Dalian, Liaoning, China [email protected] School of Locomotive and Vehicle Engineering, Dalian Jiaotong University, 116000 Dalian, Liaoning, China [email protected] School of Intelligence & Electronic Engineering, Dalian Neusoft University of Information, Dalian, China

Abstract. A high-precision sensitive and portable “transmitter” of the hand gesture recognition system is designed in this paper. The MPU6050 sensor is used to collect the hand gesture data in real time. And the quaternion is obtained by the digital motion processing (DMP) unit of the sensor, and the Euler angle is obtained by the mathematical formulas operation. After that the complementary filtering correction algorithm is used to calculate the human hand gesture. Wireless modules are used to send data to the intelligent terminals and realize the function of wireless control of the intelligent terminals by hand gesture. Finally, the effectiveness and reliability of the scheme are verified by experiments. The experimental results show that compared with Euler angle method and direction cosine method, the attitude calculation method based on quaternion method and adding correction algorithm proposed by this scheme has the characteristics of less calculation, fast calculation speed and less drift error. Keywords: Somatosensory  Hand gesture recognition Complementary filtering correction

 Attitude algorithm 

1 Introduction

In recent years, somatosensory technology has been applied to 3D virtual reality, somatosensory games, health care and other fields, and wireless somatosensory devices have therefore become a hot research direction. However, there are still problems: existing somatosensory devices are not sensitive enough, their large size makes them inconvenient to wear, and once the running time exceeds a few hours the drift error becomes relatively large and the device needs to be restarted. Capturing the user's body movements accurately and in real time is the focus and the difficulty of somatosensory technology. This paper focuses on the somatosensory hand gesture recognition [1] system that can be used in such devices, and introduces the design and implementation of the data acquisition and processing functions of the
"transmitter" of the system. The transmitter of the hand gesture recognition system collects the human hand gesture data and transmits the detected hand motion to an intelligent terminal, such as a PC (personal computer), thereby realizing wireless control of the intelligent terminal by hand gesture.

2 Overall Design of Hand Gesture Recognition System

The hand gesture recognition system is divided into two parts: the "transmitter" and the "receiver". The STM32F103C8T6 is used as the microprocessor chip of the transmitter, which collects the MPU6050 sensor data, completes the hand gesture analysis [2], and sends the data to the receiver through the nRF24L01 wireless module. The receiver is composed of an STM32F103ZET6 microprocessor, a USB-to-serial chip and a wireless module. The wireless data is received and processed by the receiver microprocessor and then sent to the intelligent terminal through the USB protocol, realizing a human-computer interaction interface instead of a mouse or touch screen. The overall structure block diagram is shown in Fig. 1.

Fig. 1. The overall structure block diagram of hand gesture recognition system

This paper mainly introduces the implementation of the information acquisition, processing and wireless data transmission functions of the "transmitter" of the hand gesture recognition system, covering three aspects: circuit hardware design, software design and functional verification experiments.

3 Hand Gesture Recognition Attitude Algorithm Principle

Through comprehensive analysis of the three types of somatosensory sensing modes (inertial sensing, optical sensing, and joint inertial and optical sensing) and of hand gesture recognition functions and applications, the inertial sensing method is selected in this design, and the microprocessor is programmed to complete the attitude algorithm.

3.1 Attitude Algorithm of the Six-Axis Sensor MPU6050

In this design, the Euler angles are obtained from the quaternion, and a correction algorithm is added to complete the hand attitude calculation. The MPU6050 is a six-axis sensor [3] with an integrated gyroscope and accelerometer; it has a small package, a low cost, and is widely used. In this design, the quaternion is calculated directly by the integrated digital motion processing unit of the MPU6050, and the rotation angles about the x, y and z axes of the Euler angle representation, namely the roll angle, pitch angle and yaw angle, are then obtained by the rotation transformation formulas below.

3.2 Transformation from Quaternion to Euler Angle

The rotation in three-dimensional space can be described by the complex (quaternion) form as in (1):

$$Q = q_0 + q_1 i + q_2 j + q_3 k \tag{1}$$

The navigation coordinate system is the n system and the carrier body-axis coordinate system is the b system. The direction cosine matrix described by the Euler angles can be expressed in terms of the quaternion as in (2):

$$C_n^b = \begin{bmatrix} q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2 \end{bmatrix} \tag{2}$$

If the carrier's roll angle is $\gamma$, the pitch angle is $\theta$, and the yaw angle is $\psi$, then the direction cosine matrix of the three basic rotations is given in (3):

$$C_n^b = \begin{bmatrix} \sin\psi \sin\theta \sin\gamma + \cos\psi \cos\gamma & \cos\psi \sin\theta \sin\gamma - \sin\psi \cos\gamma & -\cos\theta \sin\gamma \\ \sin\psi \cos\theta & \cos\psi \cos\theta & \sin\theta \\ \cos\psi \sin\gamma - \sin\psi \sin\theta \cos\gamma & -\cos\psi \sin\theta \cos\gamma - \sin\psi \sin\gamma & \cos\theta \cos\gamma \end{bmatrix} \tag{3}$$

Record

$$C_b^n = \begin{bmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{bmatrix}$$

and then the Euler angles are recovered from the direction cosine matrix according to formulas (4), (5) and (6):

$$\gamma = \tan^{-1}\left(-\frac{T_{31}}{T_{33}}\right) \tag{4}$$

$$\theta = \sin^{-1}(T_{32}) \tag{5}$$

$$\psi = \tan^{-1}\left(\frac{T_{12}}{T_{22}}\right) \tag{6}$$
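As an illustration of Eqs. (2) and (4)–(6), the following is a minimal NumPy sketch (not the authors' firmware) that builds the direction cosine matrix from a quaternion and extracts the three Euler angles; np.arctan2 is used in place of a plain arctangent to keep the correct quadrant, and the function name is chosen here only for illustration.

```python
import numpy as np

def quaternion_to_euler(q0, q1, q2, q3):
    """Convert a unit quaternion to (roll, pitch, yaw) via Eqs. (2), (4)-(6)."""
    # Direction cosine matrix C_n^b written in terms of the quaternion, Eq. (2)
    Cnb = np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0**2 - q1**2 + q2**2 - q3**2, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0**2 - q1**2 - q2**2 + q3**2],
    ])
    T = Cnb.T                                # T = C_b^n as recorded above
    roll  = np.arctan2(-T[2, 0], T[2, 2])    # Eq. (4): gamma = atan(-T31 / T33)
    pitch = np.arcsin(T[2, 1])               # Eq. (5): theta = asin(T32)
    yaw   = np.arctan2(T[0, 1], T[1, 1])     # Eq. (6): psi = atan(T12 / T22)
    return np.degrees([roll, pitch, yaw])

# Example: the identity quaternion gives zero roll, pitch and yaw
print(quaternion_to_euler(1.0, 0.0, 0.0, 0.0))
```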

3.3 Complementary Filter Correction Algorithm

The slight deviation and drift of the sensor will cause a certain integral error, which leads to drift in the Euler angle data [4]. Therefore, the inclination angle cannot be obtained using the MPU6050 accelerometer or gyroscope alone, and complementary correction is needed. A complementary filter is built according to the sensor characteristics: a low-pass filter is used to eliminate the high-frequency noise of the accelerometer, a high-pass filter is used to filter out the low-frequency noise of the gyroscope, and a more accurate angle value is output after the fusion. The algorithm formula is given in (7):

$$a_n = K_1 \cdot a + K_2 (a_{n-1} + \omega \cdot dt) \tag{7}$$

Here, $a_n$ and $a_{n-1}$ are the filtered angles at samples $n$ and $(n-1)$ respectively, $a$ and $\omega$ are the angle value obtained from the acceleration measurement and the angular velocity value obtained in the current acquisition, $K_1$ is the low-pass filter coefficient, $K_2$ is the high-pass filter coefficient, and $dt$ is the sampling time. The two coefficients are adjusted to their optimal values during program debugging; after testing and comparison, $K_1$ and $K_2$ were finally set to 0.02 and 0.98, respectively. The three inclination angles, i.e., the roll angle, the pitch angle and the yaw angle, are all corrected using the complementary filtering algorithm to reduce the integral error and hence the drift error.
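A minimal sketch of the filter in Eq. (7), assuming the accelerometer-derived angle and the gyroscope rate are already available in degrees and degrees per second; the coefficients 0.02 and 0.98 and the form of the update follow the description above, while the example values are made up.

```python
def complementary_filter(angle_prev, accel_angle, gyro_rate, dt, k1=0.02, k2=0.98):
    """One step of Eq. (7): a_n = K1*a + K2*(a_{n-1} + omega*dt)."""
    return k1 * accel_angle + k2 * (angle_prev + gyro_rate * dt)

# Example: fuse a noisy accelerometer angle with the integrated gyroscope rate
angle = 0.0
for accel_angle, gyro_rate in [(1.2, 10.0), (0.8, 9.5), (1.1, 10.2)]:
    angle = complementary_filter(angle, accel_angle, gyro_rate, dt=0.01)
    print(round(angle, 4))
```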

4 Data Acquisition Process

The function of the transmitter is mainly to realize data acquisition and processing for the MPU6050 sensor and wireless data transmission. The STM32 is programmed with a software I2C (Inter-Integrated Circuit) communication protocol to configure and read the registers of the sensor. The gyroscope range of the MPU6050 is set to ±2000°/s with a sensitivity of 16.4 LSB/(°/s), and the acceleration range is set to ±2 g with a sensitivity of 16,384 LSB/g. When the transmitter of the hand gesture recognition system is held vertically in the air, so that the y-axis points along gravity, the acceleration on the y-axis is 1 g while the acceleration on the other two axes is close to 0, and the raw reading of the y-axis is 16,384 under these circumstances. The other axes can be tested in the same way [4, 5]. The quaternion and Euler angles are obtained using the DMP and the mathematical formulas above, the hand gesture is calculated from these parameters, and the gesture data is corrected by the complementary filtering algorithm and then sent to the wireless module. The flow chart of the data acquisition program is shown in Fig. 2.
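A small sketch of the register-to-physical-unit conversion described above, assuming raw 16-bit readings have already been obtained over I2C (the sample raw values here are hypothetical); the scale factors 16.4 LSB/(°/s) and 16,384 LSB/g come from the stated ±2000°/s and ±2 g ranges.

```python
GYRO_SENS = 16.4       # LSB per (deg/s) at the +/-2000 deg/s range
ACCEL_SENS = 16384.0   # LSB per g at the +/-2 g range

def scale_mpu6050(raw_gyro, raw_accel):
    """Convert raw MPU6050 register values to deg/s and g."""
    gyro_dps = [v / GYRO_SENS for v in raw_gyro]
    accel_g = [v / ACCEL_SENS for v in raw_accel]
    return gyro_dps, accel_g

# Example: a stationary, vertically held transmitter reads about 16384 on the y accelerometer axis
print(scale_mpu6050([-43, -105, 21], [0, 16384, 0]))
```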

Fig. 2. Flow chart of the sensor data acquisition program

5 Functional Verification Experiments

In order to verify the function of the hand gesture recognition system, a functional verification experiment is designed. At present, gesture recognition verification experiments are mostly performed over an entire spatial region as a whole [6]. In order to traverse each possible spatial region more thoroughly, the verification region in this paper is segmented according to the ranges of the three Euler angles. In this experiment, the roll angle is divided into two regions (−180° to 0° and 0° to +180°), the pitch angle is divided into two regions (−90° to 0° and 0° to +90°), and the yaw angle is divided into two regions (−180° to 0° and 0° to +180°), giving 8 verification regions in total. Four rounds of experiments are performed in each region and 10 Euler angle readings are recorded per round, so 32 rounds of experiments are performed over the 8 regions. In this paper, the region with roll angle −180° to 0°, pitch angle 0° to +90° and yaw angle 0° to +180° is selected as an example. The specific experimental process is as follows:

1. The receiver is connected to the PC, and the data waveform received by the receiver is observed with the host computer software "four-axis", as shown in Fig. 3. The readings of the x, y and z axes of the gyroscope are shown by the ordinates of the three waveforms and are denoted GYROx, GYROy and GYROz. The abscissa represents the successive data acquisitions over time, and the waveform shows the data changes caused by each movement of the transmitter. The data of the x, y and z axes fluctuate every time the hand moves, and the peak or trough value is the measured data of the three gyroscope axes in units of °/s.


Fig. 3. Hand movement data waveform in the air

2. In the current experimental spatial region, the position of the hand wearing the transmitter is changed, and then the hand is kept stationary and stable for 10 min immediately after the first movement; the experiment is repeated for 4 rounds. The values ①, ②, ③ and ④ in Fig. 3 are the measured x, y and z gyroscope data of the first hand movement of each round of the four-round experiment, and they are filled into the "Gyroscope data (x, y, z) (°/s)" row of Table 1.

3. The 3D models of the hand's spatial motion are observed in real time through another host computer software, "flying state monitoring", and the Euler angles are recorded in units of °. As shown in Fig. 4, the Euler angle values in the four screenshots ①, ②, ③ and ④ correspond to the four gyroscope readings ①, ②, ③ and ④ in Fig. 3; they are the measured Euler angles of the first hand movement of each round. After that, the transmitter is kept stationary, the Euler angle value is measured every 1 min for 10 measurements, and the values are filled into Table 1 to observe the drift of the Euler angles.

4. Error analysis is performed on the 4 rounds of measurement data and drift error curves are drawn. The Euler angle data of each hand movement is used as the initial data, and the initial data is subtracted from every measurement made in the following minutes, which gives the drift error per minute. In each round, 9 sets of drift error data are obtained for the three Euler angle variables. The three variables are analyzed separately: the 9 sets of data of each round are averaged to obtain 9 data points, which are plotted with the number of measurements as the horizontal axis and the Euler angle as the vertical axis in Fig. 5, so the average drift trend of the roll angle, pitch angle and yaw angle can be observed.

Table 1. Motion data table of the receiver of hand gesture recognition system

Experiment round                       ①                       ②                      ③                      ④
Gyroscope data (x, y, z) (°/s)         (−43, −105, 21)         (265, −10, 130)        (230, 113, 97)         (−225, −197, −110)
Euler angle (roll, pitch, yaw) (°)
  1st                                  (−12.7, 3.02, 47)       (−9.73, 2.59, 47.3)    (−17.5, 2.49, 46.5)    (−10.25, 7.44, 48.6)
  2nd                                  (−12.7, 3.02, 47.01)    (−9.73, 2.59, 47.31)   (−17.5, 2.49, 46.51)   (−10.25, 7.45, 48.62)
  …                                    …                       …                      …                      …
  9th                                  (−12.73, 3.03, 47.09)   (−9.74, 2.61, 47.39)   (−17.5, 2.49, 46.58)   (−10.27, 7.47, 48.7)
  10th                                 (−12.74, 3.03, 47.1)    (−9.75, 2.62, 47.4)    (−17.51, 2.5, 46.6)    (−10.28, 7.49, 48.72)

Fig. 4. 3D model diagram and Euler angle calculation results

Fig. 5. Average data drift trend graph of roll angle, pitch angle and yaw angle

The data in Fig. 5 are analyzed to obtain the drift errors of the hand gesture recognition system in the current experimental region: the roll angle drift error is less than 0.04° with an average drift error of less than 0.025°; the pitch angle drift error is less than 0.05° with an average drift error of less than 0.025°; the yaw angle drift error is less than 0.12° with an average drift error of less than 0.105°.

5. The above experimental steps are then repeated in the remaining 7 experimental regions in turn, and the maximum drift errors of the 8 regions are obtained by statistical analysis. The maximum drift error over the 8 regions is found in the 4th region (0.12° for the yaw angle) and in the 7th region (also 0.12° for the yaw angle).

6 Conclusion

This paper designs and implements a hand gesture recognition transmitter with high stability, high accuracy and easy operation. Compared with References [3, 6] and with optical hand gesture detection methods [1, 2], the proposed method requires less calculation and achieves faster speed, higher precision and smaller drift error. It can be applied in somatosensory devices for outdoor use, industrial field use, somatosensory games and other scenarios. Although the design has a certain degree of novelty and practical significance, there is still room for improvement: the low-power design needs to be optimized, the set of gestures can be further enriched, and the product enclosure design needs further work.

References

1. Meenaakumari M, Muthulakshmi M (2013) MEMS accelerometer based hand gesture recognition. Int J Adv Res Comput Eng Technol 2(5):1886–1892
2. Ruize X, Shengli Z, Wen JL (2012) MEMS accelerometer based nonspecific-user hand gesture recognition. IEEE Sens J 12(5):1166–1173
3. Liang Y, Mingzhi D, Ying D (2013) A high-performance training-free approach for hand gesture recognition with accelerometer. Multimedia Tools Appl 72(1):843–864
4. Zhenhuan W, Xijun C, Qingshuang Z (2013) Comparison of strapdown inertial navigation algorithm based on rotation vector and dual quaternion. Chin J Aeronaut 26(2):442–448
5. Xingcheng L, Shuangbiao Z (2017) A real-time structure of attitude algorithm for high dynamic bodies. J Contr Sci Eng. https://doi.org/10.1155/2017/9542423
6. Jingqiu W, Ting Z (2015) An ARM-based embedded gesture recognition system using a data glove. In: The 26th Chinese control and decision conference, vol 7(20), pp 3932–3937

Research on Image Retrieval Based on Wavelet Denoising in Visual Indoor Positioning Algorithm

Zhonghong Wang1, Guoqiang Wang2, and Guoying Zhang1

1 Heilongjiang University, Harbin, Heilongjiang, China
2 Key Lab of Electronic and Communication Engineering, Heilongjiang University, No. 74, Xuefu Road, Harbin, People's Republic of China
[email protected]

Abstract. For the image matching problem in visual indoor positioning research, image quality is degraded by the interference and influence of noise during image generation or transmission, which ultimately leads to a low image matching rate and low efficiency. Comparing existing traditional denoising algorithms and wavelet transform algorithms, we use MATLAB simulations to compare the effects of adding different noises and of denoising with different wavelet bases. The results show that wavelet image denoising improves on the shortcomings of the traditional denoising algorithms to a certain extent, but there is still room for improvement.

Keywords: Visual positioning · Image matching · Image denoising · Wavelet algorithm

1 Introduction

1.1 Visual Positioning Technology

Visual positioning technology can be divided into two methods: marker-based visual indoor positioning and fingerprint-based visual indoor positioning [1]. The marker-based method is suitable for environments where marker extraction is relatively simple and the geographical location of the marker is known; it is similar to positioning based on a radio fingerprint library. The fingerprint-based method searches for the database images that are closest to the image to be located, and the positioning result is a combination of the geographic locations where those images were taken. Analysis of indoor and outdoor positioning shows that, whether the method is marker-based or fingerprint-based, any image-based positioning method ultimately depends on image search performance over a large-scale database. In terms of performance, denoising preprocessing of images can greatly improve search efficiency and reduce time overhead.


1.2 Image Denoising

In the process of generating or transmitting images, image quality is often degraded by the interference and influence of various noises, which adversely affects subsequent image processing such as retrieval, segmentation and understanding. Therefore, image denoising is an important part of image processing. Image denoising methods fall into two categories: denoising the image in the spatial domain, and transforming the image into the frequency domain for denoising. The wavelet transform is a method of processing an image in the frequency domain; it can adjust the sampling length for different frequencies in the time domain and therefore has multi-resolution characteristics. In this paper, we mainly use salt-and-pepper noise and Gaussian noise for the noise processing. For these two different noises, we analyze the effects of traditional denoising algorithms, wavelet filtering and contour wave (contourlet) filtering on image denoising. The algorithms are analyzed and compared to obtain a suitable denoising algorithm for each type of noisy image, thereby reducing the time overhead, improving the image retrieval speed, and improving the performance of the positioning algorithm.

2 Traditional Denoising Algorithm

2.1 Spatial Domain Filtering

Spatial domain filtering performs neighborhood operations on the image by means of a template in image space: the value of each pixel of the processed image is calculated from the template and the pixel values in the corresponding neighborhood of the input pixel. Spatial domain filtering is mainly divided into the mean filtering method and the median filtering method. The principle of mean filtering is to replace the value of each pixel in the original image with the mean value of its neighborhood. Median filtering sorts the pixels in the neighborhood by gray level and then selects the middle value of the group as the output pixel value, so isolated points can be effectively removed (Fig. 1).

Fig. 1. 3 × 3 spatial filtering results
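A brief sketch of the two spatial-domain filters just described, using SciPy; the 3 × 3 window matches Fig. 1, while the random input array is only a stand-in for the test image used in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in noisy image

mean_denoised = uniform_filter(image, size=3)    # mean (neighborhood average) filtering
median_denoised = median_filter(image, size=3)   # median filtering removes isolated points

print(mean_denoised.shape, median_denoised.shape)
```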

2.2 Frequency Domain Filtering

The principle of frequency domain filtering is to transform the image from the original image space into another space by an image transform, conveniently process it there using the particular properties of that space, and finally convert it back to the original image space to obtain the processed image. Image noise is a high-frequency component, so denoising can be done with a low-pass filter that removes or greatly attenuates the high-frequency components of the image. The low-pass filters used here are primarily the ideal low-pass filter and the Butterworth filter (Fig. 2).

Fig. 2. Frequency domain filtering results

3 Wavelet Denoising Algorithm

In the wavelet transform processing of digital images, the orthogonality of the wavelet transform suppresses the correlation of the data well. In the wavelet domain [2], the image energy is concentrated in a few large wavelet coefficients, while the noise is spread over the whole wavelet domain. Therefore, by processing the wavelet coefficients in different ways, different wavelet denoising methods with different denoising effects are obtained. Wavelet denoising mainly includes modulus maxima denoising, denoising based on the correlation between wavelet coefficients, and wavelet threshold denoising; the third is the most classic.

3.1 Modulus Maxima Algorithm

The principle of wavelet transform modulus maxima is based on the different propagation characteristics of image and noise on each scale of wavelet transform. The modulus maxima mainly generated by noise is removed, the modulus maxima corresponding to the image is preserved, and the image is restored.

3.2 Correlated Denoising Algorithm

In the wavelet transform, the modulus maxima of noise and of non-noise coefficients have different propagation characteristics across scales: the modulus maxima produced by noise decrease as the scale becomes larger, while the non-noise modulus maxima become larger as the scale increases. Therefore, in order to enhance the useful information in the image, the wavelet coefficients of adjacent scales can be multiplied, which weakens the noise in the image, and the image is then restored by an estimation method.

3.3 Wavelet Threshold Denoising Algorithm

In the wavelet threshold denoising method, according to the characteristics of the image and noise wavelet coefficients at different scales, the wavelet coefficients are processed against a predetermined threshold [3]. A wavelet coefficient smaller than the threshold is considered to be caused by noise and is set to 0; a wavelet coefficient larger than the threshold is considered to be mainly caused by the image and is retained directly or shrunk. The original image is then reconstructed by applying the inverse wavelet transform to the estimated wavelet coefficients [4] (Fig. 3).

Fig. 3. Threshold denoising results
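A minimal sketch of wavelet threshold denoising using the PyWavelets package; the db4 wavelet, the two-level decomposition and the universal soft threshold used here are illustrative choices, not the settings used in the paper's experiments.

```python
import numpy as np
import pywt

def wavelet_threshold_denoise(image, wavelet="db4", level=2):
    """Soft-threshold the detail coefficients and reconstruct the image."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Universal threshold estimated from the finest diagonal detail band
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(image.size))
    new_coeffs = [coeffs[0]]                      # keep the approximation band
    for details in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, thresh, mode="soft") for d in details))
    return pywt.waverec2(new_coeffs, wavelet)

noisy = np.random.default_rng(0).normal(128, 20, size=(128, 128))
print(wavelet_threshold_denoise(noisy).shape)
```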

3.4 Selection of Wavelet Basis in Wavelet Threshold Denoising Algorithm

The diversity of wavelet basis selection is one of the important features of the wavelet transform. Different wavelet basis functions have different characteristics, such as orthogonality, compact support, and so on [5]. The different characteristics exhibited by each wavelet family play different roles in threshold denoising (Fig. 4).


Fig. 4. Threshold denoising results for different wavelets

3.5 Contour Wave Denoising

The contourlet transform is a two-dimensional image representation with multi-resolution, localization, multi-directionality, near-neighbor sampling and anisotropy. Because its basis functions are distributed over multiple scales and multiple directions, it can effectively capture important information such as edge contours in the image, and it is an improvement on the classical wavelet transform. The basic idea of the contour wave transform can be understood mathematically as approximating the original image with basis functions resembling line segments, thereby achieving a sparse representation of the image (Fig. 5).

Fig. 5. Contour wave denoising result

4 Algorithm Performance Analysis

In order to objectively evaluate the noise pollution of the noisy images and the effects of the various denoising methods, this paper uses the signal-to-noise ratio (SNR) for evaluation. The SNR is one of the most commonly used measures for evaluating images: the larger the SNR, the better the quality of the image (Table 1).

Table 1. SNR of the different denoising algorithms under salt-and-pepper and Gaussian noise

Noise type              Mean filtering   Median filtering   Ideal filter   Butterworth filter   Wavelet threshold
Salt and pepper noise   14.6279          21.5636            12.8538        13.8385              21.5285
Gaussian noise          14.6403          21.5050            3.2698         4.5828               21.5286

Image denoising in the wavelet domain improves quality and greatly raises the SNR only when the threshold is selected properly. Wavelet filtering is effective for removing Gaussian white noise, while its effect on salt-and-pepper noise is relatively poor. Since Gaussian noise remains Gaussian after the wavelet transform, while the energy of the signal is concentrated in a small number of coefficients, thresholding the coefficients of each decomposition level retains most of the signal coefficients and removes most of the Gaussian noise. The method is simple to implement, has a small amount of calculation, and is suitable for signals contaminated by white noise. With threshold denoising the noise is almost completely suppressed, and the characteristic spikes reflecting the original signal are well preserved. Soft-threshold denoising makes the denoised signal approximately the best estimate of the original signal, and the estimated signal is at least as smooth as the original without additional oscillation. The disadvantage of this method is that the denoising effect depends on the signal-to-noise ratio: it is especially suitable for signals with high SNR, while the denoising effect for low-SNR signals is not ideal. In addition, in some cases, for example at discontinuities of the signal, a pseudo-Gibbs phenomenon occurs after denoising.

5 Conclusion

Several algorithms for image denoising preprocessing are introduced, including spatial domain denoising, frequency domain denoising, wavelet denoising and contour wave denoising. Their advantages and disadvantages on noisy versions of the target image are compared and analyzed. Through the image denoising preprocessing, the template matching for subsequent visual positioning is well prepared, improving the matching rate and the utilization of image information.

References

1. Hao X (2016) Research on visual positioning algorithm based on polar geometry theory. Master's thesis, Harbin Institute of Technology
2. Zhang X, Li J, Xing J et al (2016) A particle swarm optimization technique-based parametric wavelet thresholding function for signal denoising. Circ Syst Sign Process 35(4):1–22
3. Li S, Zhou Y (2017) An adaptive wavelet shrinkage denoising algorithm for low altitude flying acoustic targets. J Vibr Shock 36(9):153–156
4. Huijuan Z (2019) Wavelet transform image denoising algorithm based on improved threshold function. Appl Res Comput 37(5)
5. Dongsheng L (2018) Wavelet basis selection in wavelet threshold image denoising. Comput Knowl Technol 14(30):245–246

Analysis of the Matching Pursuit Reconstruction Algorithm Based on Compression Sensing

Zhihong Wang, Hai Wang, Guiling Sun, and Yangyang Li

Electronic Information Laboratorial Teaching Center, College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, China
{wanghao801226,wanghai}@nankai.edu.cn

Abstract. We introduce the concepts of compression sensing and signal reconstruction, and then explain the minimum l0 norm and minimum l1 norm reconstruction algorithms. We study existing reconstruction algorithms extensively and combine their advantages to propose a new reconstruction algorithm. The normalized random Gaussian matrix is used as the measurement matrix. We choose three different sparse signals for comparison, namely the three classic CS inputs: the time-domain sparse signal, the frequency-domain compressible signal, and the frequency-domain compressible signal with unknown sparsity. Finally, a series of simulation results shows that the proposed algorithm can achieve signal reconstruction with high probability and high precision.

Keywords: Compression sensing · Matching pursuit · Signal reconstruction

In 2006, D. L. Donoho formally proposed the concept of compression sensing (CS) [1]. Compressed sensing samples and compresses simultaneously, taking advantage of signal sparsity or compressibility, and it effectively removes redundancy in the data, so CS can achieve signal reconstruction from a small number of samples [2–6]. The theoretical model of compression sensing is shown in Fig. 1, where $\Phi$ is the measurement matrix of size $M \times N$ and $\Psi$ is a sparse basis of size $N \times N$. According to Fig. 1, the mathematical formula of compression sensing is as follows:

$$y = \Phi x = \Phi \Psi s = \Theta s \tag{1}$$

where $x$ is the original signal ($x \in \mathbb{R}^{N \times 1}$), $y$ is the measurement vector ($y \in \mathbb{R}^{M \times 1}$), and $s$ is a sparse signal. At the receiving end, the $N$-dimensional sparse signal $s$ is reconstructed from the $M$-dimensional measurement vector $y$. Since $M \ll N$, the solution is generally not unique; but if the signal is sparse or compressible and the measurement matrix satisfies the RIP condition [7–9], the theory proves that the original signal can be recovered from the measurement vector.
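To illustrate the measurement model of Eq. (1), a minimal NumPy sketch follows; it assumes the signal is sparse directly in the time domain (so $\Psi = I$ and $\Theta = \Phi$), and the column-normalized Gaussian matrix mirrors the measurement matrix used in the later simulations. The sizes N = 100, M = 40, K = 10 are taken from the first experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 100, 40, 10                     # signal length, measurements, sparsity

# K-sparse signal s (Psi = I, so x = s and Theta = Phi)
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.normal(size=K)

Phi = rng.normal(size=(M, N))
Phi /= np.linalg.norm(Phi, axis=0)        # normalize the random Gaussian columns

y = Phi @ s                               # y = Phi x = Theta s, Eq. (1)
print(y.shape)                            # (40,)
```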



Fig. 1. Compressed sensing model

1 Signal Reconstruction

At the receiving end, the measurement vector $y$ and the sensing matrix $\Theta$ are known. The signal reconstruction algorithms fall into two categories: (1) minimum l0 norm reconstruction algorithms; (2) minimum l1 norm reconstruction algorithms.

(1) The minimum l0 norm reconstruction algorithms. To reconstruct the original signal from the measurement vector $y$ and the reconstruction matrix $\Theta$, it has been shown that, in general, we solve the optimization problem of Eq. (2):

$$\min \|s\|_0 \quad \text{s.t.} \quad \Theta s = y \tag{2}$$

Researchers have proposed suboptimal solutions in which a greedy iterative algorithm is used to approximate the solution. Commonly used greedy iterative algorithms include Matching Pursuit (MP) [10], Orthogonal Matching Pursuit (OMP) [11], Regularized OMP (ROMP) [12], Compressive Sampling Matching Pursuit (CoSaMP) [13], Subspace Pursuit (SP) [14], Sparsity Adaptive Matching Pursuit (SAMP) [15], etc.

(2) The minimum l1 norm reconstruction algorithms. Donoho and Chen have proved that when the RIP is satisfied, the l0 norm can be converted to the l1 norm to find the optimal solution. Then Eq. (2) can be converted as follows:

$$\min \|s\|_1 \quad \text{s.t.} \quad \Theta s = y \tag{3}$$

Equation (3) is a convex optimization problem that can be transformed into a linear program (LP) [16]. Convex optimization algorithms include Basis Pursuit (BP) [17], Gradient Projection for Sparse Reconstruction (GPSR) [18], the homotopy algorithm [19] and so on. Taking noise into account, (3) can be transformed into Eq. (4):

$$\min \|s\|_1 \quad \text{s.t.} \quad \|y - \Theta s\|_2^2 \le \varepsilon \tag{4}$$

where $\varepsilon$ is the noise level.

2 Matching Pursuit Algorithm and Its Improvement

(1) MP Algorithm. The algorithm flow is shown in Table 1.

Table 1. MP algorithm

Input: the maximum number of iterations K, the residual r_0 ← y, the initial value of the estimate x̂_0 ← 0
Output: the estimate x̂_K
1: for t ← 0 to K do
2:   solve (val, pos) ← arg max_{j = 1, 2, …, N} |⟨r_{t−1}, θ_j⟩|
3:   update the estimate and the residual: x̂_t(pos) ← val, r_t ← r_{t−1} − Θ x̂_t
4: end for

(2) OMP Algorithm. During the operation, the residual is updated after each iteration as $\hat{x}_t = (X' X)^{-1} X' y$ and $r_t = r_{t-1} - X \hat{x}_t$, where $X$ denotes the matrix of selected columns of the sensing matrix. This least-squares update keeps the residual orthogonal to the space spanned by the selected column vectors of the sensing matrix, and it reduces the number of iterations.
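The following is a compact NumPy sketch (not the paper's MATLAB code) of OMP with the least-squares residual update just described; the stopping rule of exactly K iterations is an assumption made here for illustration.

```python
import numpy as np

def omp(Theta, y, K):
    """Orthogonal matching pursuit: pick the most correlated column, then
    re-fit all selected columns by least squares and update the residual."""
    M, N = Theta.shape
    support, residual = [], y.copy()
    for _ in range(K):
        idx = int(np.argmax(np.abs(Theta.T @ residual)))   # most correlated atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        residual = y - Theta[:, support] @ coef            # orthogonal residual update
    s_hat = np.zeros(N)
    s_hat[support] = coef
    return s_hat

# Example with a synthetic K-sparse signal (Psi = I, so Theta = Phi)
rng = np.random.default_rng(1)
N, M, K = 100, 40, 10
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.normal(size=K)
Theta = rng.normal(size=(M, N))
Theta /= np.linalg.norm(Theta, axis=0)
print(np.linalg.norm(omp(Theta, Theta @ s, K) - s) / np.linalg.norm(s))
```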

(3) SAMP Algorithm. The algorithm flow is shown in Table 2.

Table 2. SAMP algorithm

Input: the residual r_0 ← y, t ← 1, the recovery matrix Θ, the step size st, the estimated sparsity K, the reconstructed signal x̂_0 ← 0, j ← 1, and the empty candidate and support sets (∅ denotes the empty set).
Output: the estimate x̂_t.
Each iteration selects the K atoms most correlated with the residual, S_t ← Max(|Θ^T r_{t−1}|, K); merges them with the previous support, C_t ← F_{t−1} ∪ S_t; keeps the K largest least-squares coefficients, F ← Max(|Θ_{C_t}^† y|, K); and computes the new residual r ← y − Θ_F Θ_F^† y.
If ‖x̂_t − x̂_{t−1}‖ ≤ ε (where ε is a small constant), the iteration stops; else if the residual keeps decreasing, the support and residual are accepted (F_t ← F, r_t ← r, t ← t + 1); otherwise the stage is advanced and the sparsity estimate is enlarged (K ← K + st, j ← j + 1, t ← t + 1).

(4) ITSAOMP Algorithm. To overcome the shortcomings of the existing algorithms, we propose a new algorithm named Iterative Thresholding Orthogonal Matching Pursuit Reconstruction Algorithm Based on Sparsity Adaptivity (ITSAOMP). The algorithm flow is shown in Table 3.

Table 3. ITSAOMP algorithm

Initialization: the number of iterations t ← 1, the iteration stage j ← 1, the residual r ← y, the iteration step length st, the initial sparsity estimate K ← st, and empty index sets.
1: S_t ← Max(|Θ^T r_{t−1}|, K), C_t ← F_{t−1} ∪ S_t, Λ ← Max(|Θ_{C_t}^† y|, K), r_t ← y − Θ_Λ Θ_Λ^† y; according to ŝ = arg min ‖y − Θ_Λ s‖_2, the sparse vector estimate ŝ is obtained by the least squares method.
2: Backtracking optimization by iterative thresholding begins: the estimate from step 1 is taken as the initial iterative value, the threshold iteration is completed, and the residual is recalculated as in OMP.
3: When ‖r_j‖ ≥ ‖r_{j−1}‖: if ‖ŝ_j − ŝ_{j−1}‖_2 ≤ ε, terminate all iteration stages and take the reconstructed sparse signal as ŝ = ŝ_j; otherwise update the sparsity estimate K ← j · st + st and return to step 1 for the next stage.
4: After the iteration process ends, obtain the reconstructed signal according to x̂ = Ψ ŝ.

3 Reconstruction Algorithm Performance Analysis

We choose three different sparse signals for testing: time-domain sparse signals with known sparsity, frequency-domain signals with known sparsity, and frequency-domain signals with unknown sparsity. All simulations were performed in MATLAB.


(1) Time-domain sparse signals with known sparsity. Specific experimental parameters are set as follows:

• Signal length N = 100, sparsity K = 10;
• The random Gaussian matrix is normalized and used as the measurement matrix;
• The number of rows in the sensing matrix is M (M = 40).

The original time domain sparse signal is shown in Fig. 2.

Fig. 2. The original time domain sparse signal

Figure 3 shows the reconstruction performance of four different reconstruction algorithms. The reconstruction error is shown in Fig. 4.

(2) Frequency domain compressible signal. Specific experimental parameters are set as follows:

• Signal length N = 256, sparsity K = [1, 50];
• The random Gaussian matrix is normalized and used as the measurement matrix;
• The step size is st = [1, 8];
• The number of rows in the sensing matrix is M = K · log2(N/K);
• The average of 50 independent experiments is taken as the final result to avoid randomness in the experiments.

In the simulation, we define the relative error $\eta$ to evaluate the reconstruction accuracy, as shown in Eq. (6):

$$\eta = \frac{\|\hat{x} - x\|_2}{\|x\|_2} = \frac{\sqrt{\sum_{n=0}^{N-1} (\hat{x}_n - x_n)^2}}{\sqrt{\sum_{n=0}^{N-1} x_n^2}} \tag{6}$$

where $x$ is the original signal and $\hat{x}$ is the reconstructed signal. Figure 5 shows the reconstruction error curves; as can be seen, the proposed ITSAOMP algorithm is significantly better than the other algorithms.


Fig. 3. Time domain sparse signal reconstruction effect

In Fig. 6, we provide the curves of the error reconstruction rate under different sparsity levels. The error reconstruction rate is defined as Eq. (7):

$$P_{er} = \frac{\mathrm{num}\left(l_1(\hat{x} - x) > l_1(x) \times 0.1\right)}{\mathrm{num}_{total}} \tag{7}$$

As can be seen from Fig. 6, the error reconstruction rate of MP and OMP algorithms is relatively high, which is more than 85%. In general, the ITSAOMP algorithm has a lower error reconstruction rate and is more stable for signal reconstruction with different sparsity.
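Both evaluation metrics are simple to compute. The sketch below (an illustration with made-up trial data, not the paper's experiment script) implements Eq. (6) and Eq. (7).

```python
import numpy as np

def relative_error(x, x_hat):
    """Eq. (6): eta = ||x_hat - x||_2 / ||x||_2."""
    return np.linalg.norm(x_hat - x) / np.linalg.norm(x)

def error_reconstruction_rate(x, x_hat_trials):
    """Eq. (7): fraction of trials whose l1 error exceeds 0.1 * l1(x)."""
    failures = [np.sum(np.abs(xh - x)) > 0.1 * np.sum(np.abs(x)) for xh in x_hat_trials]
    return float(np.mean(failures))

x = np.array([0.0, 1.0, 0.0, -2.0])
trials = [x + 0.01, x + 0.5]          # one good and one poor reconstruction
print(relative_error(x, trials[0]), error_reconstruction_rate(x, trials))
```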


Fig. 4. Reconstruction error

Fig. 5. Frequency domain compressible signal reconstruction error


Fig. 6. Frequency domain compressible signal error reconstruction rate

(3) Frequency domain compressible signal with unknown sparsity. Specific experimental parameters are set as follows:

• Signal length N = 256; the sparsity K is unknown, randomly distributed within [3, 50], and random noise is added to the signal;
• The random Gaussian matrix is normalized and used as the measurement matrix;
• The step size is st = [1, 8];
• The number of rows in the sensing matrix is M, with M/N between 0.05 and 0.4;
• The average of 100 independent experiments is taken as the final result to avoid randomness in the experiments.

It can be seen from Fig. 7 that the reconstruction error of the proposed algorithm is very low.

4 Conclusion

In this paper, we introduce sparse signal reconstruction and the classical MP, OMP and SAMP reconstruction algorithms. After a full study of compressed sensing theory, we propose the ITSAOMP algorithm to reconstruct sparse signals. The normalized random Gaussian matrix is used as the sensing matrix, and three different sparse signals are tested. The simulation results show that the ITSAOMP algorithm can achieve reconstruction under both known and unknown sparsity, and that the proposed algorithm has advantages in reconstruction error and error reconstruction rate.


Fig. 7. Frequency domain compressible signal reconstruction with unknown sparsity

References

1. Donoho DL (2006) Compressed sensing. IEEE Trans Inf Theory 52(4):1289–1306
2. Baraniuk RG (2007) Compressive sensing [lecture notes]. Sig Process Mag IEEE 24:118–121
3. Candes EJ, Romberg J (2006) Quantitative robust uncertainty principles and optimally sparse decompositions. Found Comput Mathem 6:227–254
4. Candès EJ, Romberg J, Tao T (2006) Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory 52:489–509
5. Candes EJ, Romberg JK, Tao T (2006) Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math 59:1207–1223
6. Candes EJ, Tao T (2006) Near-optimal signal recovery from random projections: universal encoding strategies. IEEE Trans Inf Theory 52:5406–5425
7. Candès EJ (2006) Compressive sampling. In: Proceedings of the international congress of mathematicians, pp 1433–1452
8. Sha W (2008) Compressive sensing. Hong Kong University
9. Donoho DL, Huo X (2001) Uncertainty principles and ideal atomic decomposition. IEEE Trans Inf Theory 47(7):2845–2862
10. Daubechies I (1992) Ten lectures on wavelets
11. Tropp J, Gilbert A (2007) Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans Inf Theory 53(12):4655–4666
12. Needell D, Vershynin R (2009) Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit. Found Comput Mathem 9(3):317–334
13. Needell D, Tropp JA (2009) CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl Comput Harmon Anal 26(3):301–321
14. Dai W, Milenkovic O (2009) Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans Inf Theory 55(5):2230–2249
15. Doy TT, Ganz L, Nguyeny N et al (2008) Sparsity adaptive matching pursuit algorithm for practical compressed sensing. In: 2008 42nd Asilomar conference on signals, systems and computers, Pacific Grove, CA, 26–29 Oct 2008, pp 581–587
16. Candes E, Tao T (2005) Decoding by linear programming. IEEE Trans Inf Theory 51(12):4203–4215
17. Chen SB, Donoho DL, Saunders MA (1998) Atomic decomposition by basis pursuit. SIAM J Sci Comput 20(1):33–61
18. Fiqueiredo MAT, Nowak RD, Wright SJ (1998) Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J Sel Top Sign Process 1(4):586–597
19. Donoho DL, Tsaig Y (2008) Fast solution of l1-norm minimization problems when the solution may be sparse. IEEE Trans Inf Theory 54(11):4789–4812

Super-Resolution Based and Topological Structure for Narrow Road Extraction from Remote Sensing Image

Guoying Zhang, Guoqiang Wang, and Zhonghong Wang

Key Lab of Electronic and Communication Engineering, Heilongjiang University, No. 74, Xuefu Road, Harbin, People's Republic of China
[email protected]

Abstract. It is difficult to extract a narrow road with a width of only a few pixels from a remote sensing image. In order to solve this problem, super-resolution processing of the remote sensing image is proposed. This paper recovers the details of narrow roads by using a deep convolutional neural network (DCNN) method. Next, noise points and wrongly extracted roads are processed using the topological structure. To verify the performance of the proposed method, experiments are carried out on an open remote sensing image data set, comparing the original image, the super-resolution image, and topology filtering. The experimental results demonstrate that the new method is more effective than extraction from the original remote sensing image.

Keywords: Remote sensing image · Super-resolution · Narrow road extraction · Topological structure

1 Introduction

Road extraction has important application value in emergency risk management, intelligent transportation systems, traffic analysis, GPS positioning, electronic maps and other areas [1]. Researchers began to study road extraction more than 40 years ago. At present, existing methods can be divided into several major categories: image segmentation and classification methods, active contour methods, mathematical morphology methods, and others [2]. With the advance of technology, the performance of these methods has steadily improved, and to some extent roads can be extracted more precisely in VHR images. In fact, however, as the resolution increases, road extraction is disturbed more seriously by noise points. Only the noise points need to be filtered; there is no need to process the other points. This not only effectively reduces the amount of calculation but also avoids the loss of information. Based on this consideration, the topological structure is proposed. It divides the filtering into two steps: in the first step, the characteristics of the noise points are used to identify them (the identification of the noise); the second step filters out the noise and recovers some road information.


The structure of this paper is as follows: Sect. 2 introduces the background of this study, and Sect. 3 introduces the procedure of narrow road extraction. Section 4 presents the experiments and evaluates the extracted roads. Section 5 concludes this study and provides perspectives for follow-up research.

2 Background

High-resolution optical images are formed from electromagnetic waves [3] with wavelengths ranging from 400 to 760 nm. The advantage of the optical image is that it directly reflects the true color of the object and suits the observation habits of the human eye; an RGB color image is essentially one form of optical image. Recently, the resolution of optical images has continued to increase thanks to the rapid development of sensor manufacturing technology. Optical remote sensing images capture a wide range of ground objects and contain more targets and object types than natural images, especially in special scenes such as ports and cities. Therefore, target extraction from optical remote sensing images also requires the algorithm to have a strong capability to resist complex background interference.

Target-model-based extraction methods were originally applied to natural image target extraction [4]. High-resolution remote sensing images are similar in structure to natural images, so scholars have applied target model methods to remote sensing target extraction and designed better extraction methods, among which the Bag-of-Words (BoW) model and the Part-Based Model (PBM) are the most common. BoW was first used for text categorization [5] and is widely used for image classification and content-based image retrieval because of its high degree of freedom in object description. Recently, BoW has also been used for the classification and calibration of remote sensing images, and some scholars have applied it to target extraction. However, the biggest problem with BoW is the lack of spatial information between local features, so the model cannot distinguish between differently structured targets. Although some scholars have proposed improvement strategies, they are still not well suited to remote sensing target extraction. Therefore, we propose to use neural networks to extract targets from super-resolution images.

According to the image prior, single-image super-resolution algorithms involve four types: patch-based methods, edge-based methods, predictive models, and image statistical methods. Yang et al. fully evaluated these methods through experiments [6]. Among them, the example-based approach achieves the most advanced performance [7, 8]. The example-based approach takes advantage of self-similarity and generates sample blocks from the input image [9], and it learns the mapping between low/high resolution pixel blocks from a data set [10]. The difference between these studies lies in how a compact dictionary or multiple spaces are learned to correlate low/high resolution pixel blocks [11]. Chang et al. introduced a variety of embedding techniques as an alternative to neural network algorithms, and Yang et al. developed this line of work into a more complex sparse coding formulation. The accuracy and speed of the mapping are improved by other mapping features, such as random forests and simple functions. Among these methods, the pixel block is the focus of optimization.

3 Procedure of Narrow Road Extraction

In the previous section we described the research background of super-resolution images and explained the feasibility of super-resolution for road extraction. Next we introduce the theory and process of the experiment, which is based on super-resolution images and topology.

3.1 Road Extraction

This paper uses the OCSVM to classify high-resolution remote sensing images. The classification step has two parts: (1) the image information is input and mapped to a high-dimensional feature space; (2) the maximum-margin hyperplane is found through the OCSVM, thereby separating the training data. In the first layer of the Super-Resolution Deep Convolutional Neural Network (SRDCNN), patches, i.e., image blocks extracted from the low-resolution image Y, are convolved with filters. In the convolution process, we introduce a nonlinear mapping through the formula:

$$F_1(Y) = \max\{0, W_1 * Y + A_1\} \tag{1}$$

where $W_1$ represents the filters and $A_1$ is the bias. The nonlinear mapping is introduced for the purpose of high-resolution image reconstruction, and in convolutional neural networks the weights and biases can be optimized.
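Eq. (1) is a convolution followed by a ReLU. The sketch below reproduces it for a single filter using SciPy; the 9 × 9 filter size, the random weights and the zero bias are placeholders for illustration, not the network's trained parameters.

```python
import numpy as np
from scipy.signal import convolve2d

def first_layer(Y, W1, A1):
    """Eq. (1): F1(Y) = max{0, W1 * Y + A1} for one filter (convolution then ReLU)."""
    return np.maximum(0.0, convolve2d(Y, W1, mode="same") + A1)

rng = np.random.default_rng(0)
Y = rng.random((32, 32))                   # stand-in low-resolution input patch
W1 = rng.normal(size=(9, 9)) * 0.01        # placeholder filter weights
print(first_layer(Y, W1, A1=0.0).shape)    # (32, 32)
```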

3.2 Remove Noise Points by Topological Structure

Effectively and accurately identifying the noise points is the key to the entire filtering process, and an effective method is given below. Impulse noise (positive or negative) usually appears as a very large or very small value, but a pixel with a very large or very small value is not necessarily a noise point. If every large or small value were considered noise, many false positives would result, so other characteristics of the noise must also be used. Considering that noise points tend to be isolated, i.e., it is rare for several impulse noise points, especially all positive or all negative ones, to appear together, the topological connectivity of the noise can be used to confirm the noise points. A global threshold is first used to decompose the image into a group of positive impulse noise candidates and a group of negative impulse noise candidates.

$$I = \{(i, j) \mid 1 \le i \le M,\; 1 \le j \le N,\; M, N \in \mathbb{Z}^{+}\} \tag{2}$$

For an image of size $M \times N$, the set $I$ represents all the pixels in the image.


$$S_{max} = \max\{x(i, j) \mid (i, j) \in I\} \tag{3}$$

$$S_{min} = \min\{x(i, j) \mid (i, j) \in I\} \tag{4}$$

where $S_{max}$ and $S_{min}$ represent the maximum and minimum values of the image pixels.

$$B_0^{+} = \{(i, j) \mid x(i, j) > S_{max} - T,\; (i, j) \in I\} \tag{5}$$

$$B_0^{-} = \{(i, j) \mid x(i, j) < S_{min} + T,\; (i, j) \in I\} \tag{6}$$

$$T = (S_{max} - S_{min}) \times p \tag{7}$$

where $B_0^{+}$ and $B_0^{-}$ are the sets of positive and negative impulse noise points after the initial identification, $T$ is the selected threshold, and $p$ is usually taken as 0.03–0.05. After the steps above, the initial noise points can be identified. In some cases, however, many false positives are generated and some non-noise points, mainly those in large bright or dark blocks or in linear regions, are also regarded as noise points. The topological connectivity of the noise is therefore examined to eliminate these misjudgments. The basic idea is to examine the connectivity of the elements in the positively identified point set and in the negative point set, respectively: if the number of mutually connected elements exceeds a certain value, those points are considered not to be noise points.
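A sketch of this two-step identification, assuming a grayscale image array: the global threshold of Eqs. (3)–(7) flags candidate bright and dark pixels, and connected groups larger than a chosen size (the value used here is an assumption) are treated as genuine image structure rather than noise. SciPy's label uses 4-connectivity by default.

```python
import numpy as np
from scipy.ndimage import label

def impulse_noise_mask(img, p=0.04, max_cluster=3):
    """Flag isolated extreme pixels; larger connected clusters are kept as image content."""
    s_max, s_min = img.max(), img.min()
    T = (s_max - s_min) * p                              # Eq. (7)
    candidates = (img > s_max - T) | (img < s_min + T)   # Eqs. (5)-(6)
    labels, n = label(candidates)                        # group connected candidate pixels
    mask = np.zeros_like(candidates)
    for k in range(1, n + 1):
        component = labels == k
        if component.sum() <= max_cluster:               # only small, isolated groups are noise
            mask |= component
    return mask

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
img[10, 10] = 255.0                                      # inject one positive impulse
print(impulse_noise_mask(img)[10, 10])
```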

4 Experiment Results and Analysis

In the experiments, the Wiedemann evaluation system is used to assess the integrity of the road extraction results. The evaluation formulas are as follows:

$$\mathrm{Completeness} = \frac{TP}{TP + FN} \tag{8}$$

$$\mathrm{Quality} = \frac{TP}{TP + FN + FP} \tag{9}$$

where TP is the correctly extracted road length, marked green in the images; FP is the wrongly extracted road length, marked blue; and FN, marked red, is the length of the road that was not extracted. For road extraction, two sets of experiments are used to compare the super-resolution image with the original image. Finally, road extraction is performed on the topologically filtered image and compared with the former two.
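The two measures of Eqs. (8) and (9) reduce to simple ratios of matched lengths; the sketch below is a direct transcription, with hypothetical pixel-length counts used only as an example.

```python
def completeness(tp, fn):
    """Eq. (8): correctly extracted length over reference road length."""
    return tp / (tp + fn)

def quality(tp, fn, fp):
    """Eq. (9): correctly extracted length over extracted-plus-missed length."""
    return tp / (tp + fn + fp)

# Example with hypothetical pixel-length counts
tp, fp, fn = 9000, 2000, 2500
print(f"completeness = {completeness(tp, fn):.2%}, quality = {quality(tp, fn, fp):.2%}")
```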


Although narrow roads can be extracted from the super-resolution images, the result contains many noise signals, because the noise is also amplified by super-resolution, and some noise blocks are so large that a median filter cannot remove them. The topological structure is therefore applied to solve this problem. Although the amount of noise is large, it is still much less than the road information and the noise regions do not connect to the roads, so we label each connected patch, as shown in Fig. 1.

Fig. 1. Labeled each connected patch: (a) labeled first image; (b) labeled second image

Different colors represent different labels. We set the threshold as mentioned above, and the results are as follows. Figures 2 and 3 are cropped to obtain Figs. 4 and 5; the narrow road in Figs. 4a and 5a is approximately 2 pixels wide. According to the two sets of experiments, the super-resolution image performs better than the original image: it can be seen from Figs. 2 and 3 that narrower roads can be extracted from the super-resolution image. Moreover, in terms of the completeness and accuracy of road extraction, the results for the super-resolution images are superior to those for the original images. The reason is that the super-resolution image enlarges the amount of information during the reconstruction process. Super-resolution images not only reduce the error rate when extracting roads but also connect isolated road points together, making the extracted roads more complete, which results in higher accuracy when extracting road spectral information.

Tables 1 and 2 are obtained according to Eqs. 8 and 9, and the original image, the super-resolution image and the topology-filtered image can be compared based on their data. They show that the super-resolution result still contains too many noise points. Figure 6 shows the advantage of the topological structure filter: it removes the noise points, and the final result approaches the ground truth. After evaluation by the Wiedemann system, the results after topological denoising are significantly improved in completeness and quality. In Table 1, although the correctly extracted road length after topology filtering is not the longest, the topology-filtered image has the smallest wrongly extracted road length, so the road extraction method based on topology filtering is more complete and effective.

Fig. 2. The first example of narrow road extraction: (a) original image; (b) super-resolution image; (c) road reference image; (d) extract road from the original image; (e) extract road from super-resolution image; (f) marked road extraction in the original image; (g) marked road extraction in the super-resolution image

Fig. 3. The second example of narrow road extraction: (a) original image; (b) super-resolution image; (c) road reference image; (d) extract road from the original image; (e) extract road from super-resolution image; (f) marked road extraction in the original image; (g) marked road extraction in the super-resolution image

Fig. 4. The patch of first image: (a) original image; (b) extract road from the original image; (c) extract road from super-resolution image

Fig. 5. The patch of second image: (a) original image; (b) extract road from the original image; (c) extract road from super-resolution image

Table 1. Quality assessment of road extraction from Fig. 1

                                 TP       FP     FN     Completeness   Quality
Original image                   11,279   6890   2326   62.08%         55.03%
Super-resolution image           8466     2266   2320   78.89%         64.76%
Topological structure filtered   8491     1542   2315   84.63%         68.76%

In Table 2, the correctly extracted road length of the topology-filtered image is longer than the extracted road lengths of the original image and the super-resolution image, which shows that the topology has obvious advantages in road extraction. The experimental results demonstrate the feasibility of road extraction based on super-resolution and topology.

Table 2. Quality assessment of road extraction from Fig. 2

                                 TP       FP     FN     Completeness   Quality
Original image                   6071     1856   7926   43.37%         38.30%
Super-resolution image           7875     2312   6122   56.26%         48.29%
Topological structure filtered   11,397   4797   2600   81.42%         60.64%

Fig. 6. The result of topological structure filter: (a) the result of first image; (b) marked result of first image; (c) the result of second image; (d) marked result of second image

5 Conclusion

Road extraction is of great significance for urban construction and target navigation, and many scholars have sought to extract roads accurately. However, because of interference from buildings, vehicles, roofs and other materials with similar colors, roads have not been extracted well. In this paper, from the perspective of inter-pixel correlation, the correlation between pixels is enhanced by super-resolution technology, and with the help of topology filtering the redundant information is removed in order to increase the accuracy of road extraction. This paper therefore proposes road extraction from remote sensing images based on super-resolution and topology. The experimental results were evaluated by completeness and quality and illustrate the excellent performance of topology filters on super-resolution images. However, this method still cannot solve the problem of obstacles blocking the road. In the future, we hope to combine scene recognition algorithms, using the surrounding background to determine whether an occluded part contains road information, and to analyze the material of the road through hyperspectral methods.

Acknowledgements. This work was supported by the National Natural Science Foundation of China (51607059) and the National Natural Science Foundation of Heilongjiang Province (QC2017059).



Evaluation on Learning Strategies for Multimodal Ground-Based Cloud Recognition

Shuang Liu(1,2), Mei Li(1,2), Zhong Zhang(1,2), and Xiaozhong Cao(3)

1 Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin, China {shuangliu.tjnu,limeitjnu,zhong.zhang8848}@gmail.com
2 College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin, China
3 Meteorological Observation Centre, China Meteorological Administration, Beijing, China [email protected]

Abstract. As a sign of atmospheric processes, clouds play a crucial role in regulating the earth's energy balance, redistributing surplus heat, and driving the hydrologic cycle. An appropriate recognition method is essential for accurate ground-based cloud classification. This paper evaluates three kinds of learning strategies, i.e., the end-to-end method, the k-nearest neighbor (KNN) classifier, and the support vector machine (SVM), for multimodal ground-based cloud recognition. The experimental results demonstrate that SVM is superior to the other methods for multimodal ground-based cloud recognition.

Keywords: Multimodal ground-based cloud recognition · Convolutional neural network · Deep learning

1 Introduction

Clouds are a dynamic meteorological phenomenon and play a critical role in numerical climate and forecast models. In particular, cloud type is an essential indicator of weather conditions. The existing cloud observation methods are generally categorized into satellite-based and ground-based observations. Compared with satellite-based cloud observations, which are expensive and inaccurate for predicting weather in local areas, ground-based cloud observations can provide enormous numbers of images for localized cloud analysis at a low cost. Unfortunately, the classification of cloud categories still relies on manual observation, which takes plenty of manpower and financial resources. Therefore, in recent years, automatic ground-based cloud classification has drawn increasing attention.


Since clouds exhibit natural texture, many researchers attempt to extract texture descriptors for cloud classification. Xiao et al. [1] introduced the Multiview CLOUD (mCLOUD) mechanism, which extracts texture, structure, and statistical color features simultaneously, encodes them with Fisher vectors, and then fuses them. Luo et al. [2] combined texture features and manifold features to improve the recognition performance for cloud types. To take full advantage of the local contrast information of cloud images, Liu et al. [3] proposed the weighted local binary patterns (WLBP), which use the local patch variance as an adaptive weight when accumulating the histogram. Considering that the distribution of features is actually reflected by the LBP-based histogram, Wang et al. [4] proposed measuring the Kullback-Leibler divergence between LBP histograms of the original and resized images; based on this difference, the resolution of the resized image was selected for cloud classification. Recently, convolutional neural networks (CNN) have achieved breakthroughs in many fields, such as object detection [5], person re-identification [6, 7], scene character recognition [8, 9], and so on, and their promising performance has inspired many researchers to apply them to ground-based cloud classification. Shi et al. [10] conducted pooling operations on each feature map of a deep convolutional layer to extract deep features for ground-based cloud classification. To extract cloud texture, structure, and shape features simultaneously, Zhang et al. [11] devised the CloudNet learning model for feature learning. Ye et al. [12] first explored deep features from multiple convolutional layers, and then employed pattern mining followed by Fisher vector encoding to generate the final cloud feature representations. In [13], the dual guided loss (DGL) was presented, which integrates the knowledge of different CNNs by adding a modulation term that forces the network to focus more on hard-classified samples, so that more discriminative cloud feature representations are learnt. However, cloud texture is extremely unpredictable under different atmospheric conditions. Meanwhile, cloud type is closely related to several natural elements, for instance, temperature, humidity, pressure, wind speed, and so on, which are referred to as multimodal information. Therefore, an appropriate combination of the cloud visual information and the multimodal information can yield more representative features for cloud classification. Liu et al. [14] integrated the visual features with multimodal information by a weighted strategy, where the visual features are obtained by applying a pooling operation to the entire feature maps in deep layers and then flattening the results. In [15], the joint fusion convolutional neural network (JFCNN) was designed to jointly learn the visual features and the multimodal features in a two-stream manner.


Fig. 1. The pipeline of classification task.

As shown in Fig. 1, the procedure of a classification task is composed of feature learning and classification, both of which are important. The k-nearest neighbor (KNN) classifier, decision tree, support vector machine (SVM), and Bayes classifier are among the most widely adopted classifiers. In this paper, we evaluate the classification performance of the KNN classifier, SVM, and the end-to-end method for multimodal ground-based cloud classification.

2 Method

Fig. 2. The architecture of deep learning model.

In this section, we present the pipeline of the deep learning model (see Fig. 2). The model takes the cloud images and the multimodal information (represented in orange in Fig. 2) as input and passes them to the global network, the local network, and the multimodal network to learn representative deep features. The global network utilizes ResNet-50 [16] as the backbone to learn the global cloud visual features. The local network is composed of several convolutional layers and an average pooling layer; it takes the selected salient patterns from the convolutional layer of the global network as input and outputs the local cloud visual features. The multimodal network is a multi-layer perceptron (MLP) which consists of four fully connected layers, and its output is used as the multimodal features. Afterwards, these features are fused at two fusion layers. Following the fusion layers, there are two fully connected layers, i.e., fc1 and fc2, whose outputs are each evaluated with the cross-entropy loss. Hence, the deep learning model can learn global visual features, local visual features, and multimodal features simultaneously and be optimized under one unified network. In this paper, we evaluate the classification performance of the KNN, SVM, and the end-to-end method.


The outputs of the fusion layers are treated as the feature representation of each cloud sample in the test set, which is fed into the KNN classifier or SVM to obtain the classification result. For the end-to-end method, the average result of fc1 and fc2 is regarded as the classification output.
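As a minimal illustration of the two classifier-based strategies, the scikit-learn sketch below classifies fused deep features with KNN and SVM. The feature dimension, sample counts, and hyperparameters are illustrative placeholders, not values from the paper; in practice the arrays would come from the fusion layers of the trained network.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder fused features (one vector per cloud sample) and 7-class labels.
rng = np.random.default_rng(0)
train_feats, train_labels = rng.normal(size=(1000, 128)), rng.integers(0, 7, 1000)
test_feats, test_labels = rng.normal(size=(1000, 128)), rng.integers(0, 7, 1000)

# Strategy 1: k-nearest neighbor classifier on the deep features.
knn = KNeighborsClassifier(n_neighbors=5).fit(train_feats, train_labels)
print("KNN accuracy:", knn.score(test_feats, test_labels))

# Strategy 2: support vector machine on the same features.
svm = SVC(kernel="rbf", C=1.0).fit(train_feats, train_labels)
print("SVM accuracy:", svm.score(test_feats, test_labels))

# Strategy 3 (end-to-end) would instead use the averaged fc1/fc2 outputs
# of the network itself as the prediction scores.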

3 Experiments

In this part, we start by introducing the multimodal ground-based cloud dataset (MGCD). Then, we detail the experimental settings. Finally, we discuss and analyze the experimental results.

3.1 Multimodal Ground-Based Cloud Dataset

Fig. 3. Some cloud images from each class.

Fig. 4. The corresponding multimodal information of cloud images of Fig. 3.

The multi-modal ground-based cloud dataset (MGCD) is collected in Tianjin, China, and consists of 8000 ground-based cloud samples. Each sample is made up of two components, i.e., one ground-based cloud image and a set of corresponding multi-modal information. The cloud images, with a resolution of 1024 × 1024, are obtained by a sky camera. The multi-modal information


is expressed as a vector with four elements, including temperature, humidity, pressure, and wind speed, and is gathered by a weather station. The World Meteorological Organization (WMO) divides sky conditions into 29 classes. Under this hypothesis, we combine some cloud classes and categorize the sky conditions into seven classes: (1) cumulus, (2) altocumulus and cirrocumulus, (3) cirrus and cirrostratus, (4) clear sky, (5) stratocumulus, stratus and altostratus, (6) cumulonimbus and nimbostratus, and (7) mixed cloud, where the mixed cloud refers to a cloud image with no less than two cloud types. We partition the dataset into two equal parts; one is used as the training set and the other as the test set. Figure 3 presents some examples from each class in MGCD, and Fig. 4 lists the corresponding multi-modal information.

3.2 Experiment Setup

The cloud images are first resized to 252 × 252, and then randomly cropped to 224 × 224 with random horizontal flipping. Afterwards, the mean RGB values are subtracted from each cloud image, and the multi-modal information is normalized to [0, 1]. The training set is shuffled before being fed into the deep learning model. During training, the main network is initialized with ResNet-50. The weights of the convolutional layers of the attentive network and of the fully connected layers (fc1 and fc2) are initialized with the method introduced in [17]. Stochastic gradient descent (SGD) [18] is employed to optimize the deep learning model. The model is trained for 50 epochs with 32 samples at each iteration. The weight decay and the momentum are set to 2 × 10−4 and 0.9, respectively. The learning rate is set to 3 × 10−4 for the first 30 epochs and 3 × 10−5 for the remaining epochs.
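A sketch of this preprocessing and optimization configuration in PyTorch follows. The torchvision transforms, the mean RGB values, and the placeholder model are assumptions for illustration only; the actual fusion network is not reproduced here.

import torch
from torchvision import transforms

# Preprocessing as described: resize to 252x252, random crop to 224x224,
# random horizontal flip, then subtract (placeholder) mean RGB values.
train_transform = transforms.Compose([
    transforms.Resize((252, 252)),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[1.0, 1.0, 1.0]),
])

# Placeholder standing in for the ResNet-50-based fusion model.
model = torch.nn.Linear(10, 7)

# SGD with momentum 0.9 and weight decay 2e-4; learning rate 3e-4 for the
# first 30 epochs and 3e-5 afterwards, as stated above.
optimizer = torch.optim.SGD(model.parameters(), lr=3e-4,
                            momentum=0.9, weight_decay=2e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30], gamma=0.1)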

Fig. 5. The classification accuracies of multimodal ground-based cloud dataset under the end-to-end method, k-nearest neighbor (KNN) classifier and the support vector machine (SVM).


3.3 Results and Analysis

The classification results of the KNN, SVM, and the end-to-end method are presented in Fig. 5. It can be observed that, among the three methods, SVM exceeds the others with a result of 84.63%. In contrast, the KNN classifier obtains a classification accuracy of 82.28%, which is 2.35% and 1.92% lower than SVM and the end-to-end method, respectively. The results indicate that choosing an appropriate classifier is critical to obtaining a relatively good classification result.

4 Conclusion

In this paper, we have presented an evaluation of learning strategies for multimodal ground-based cloud recognition. Three kinds of methods, i.e., the end-to-end method, the KNN classifier, and SVM, have been evaluated on MGCD. The experimental results validate that SVM is the optimal choice for the multimodal ground-based cloud recognition task.
Acknowledgements. This work was supported by National Natural Science Foundation of China under Grant No. 61501327 and No. 61711530240, Natural Science Foundation of Tianjin under Grant No. 17JCZDJC30600, the Fund of Tianjin Normal University under Grant No. 135202RC1703, the Open Projects Program of National Laboratory of Pattern Recognition under Grant No. 201800002, the Tianjin Higher Education Creative Team Funds Program, and the Postgraduate Research Practice Project of Tianjin Normal University under Grant No. YZ1260021938.

References
1. Xiao Y, Cao Z, Zhuo W, Ye L, Zhu L (2016) mCLOUD: a multiview visual feature extraction mechanism for ground-based cloud image categorization. J Atmos Ocean Technol 33:789–801
2. Luo Q, Meng Y, Liu L, Zhao X, Zhou Z (2018) Cloud classification of ground-based infrared images combining manifold and texture features. Atmos Meas Tech 11:5351–5361
3. Liu S, Zhang Z, Mei X (2015) Ground-based cloud classification using weighted local binary patterns. J Appl Remote Sens 9:095062
4. Wang Y, Wang C, Shi C, Xiao B (2019) A selection criterion for the optimal resolution of ground-based remote sensing cloud images for cloud classification. IEEE Trans Geosci Remote Sens 57:1358–1367
5. Ren S, He K, Girshick R, Sun J (2015) Faster R-CNN: towards real-time object detection with region proposal networks. Adv Neural Inf Process Syst 91–99
6. Zhang Z, Si T, Liu S (2018) Integration convolutional neural network for person re-identification in camera networks. IEEE Access 6:36887–36896
7. Zhang Z, Zhang H, Liu S (2019) Coarse-fine convolutional neural network for person re-identification in camera sensor networks. IEEE Access 7:65186–65194
8. Zhang Z, Wang H, Liu S, Xiao B (2018) Consecutive convolutional activations for scene character recognition. IEEE Access 6:35734–35742
9. Zhang Z, Wang H, Liu S, Xiao B (2018) Deep contextual stroke pooling for scene character recognition. IEEE Access 6:16454–16463


10. Shi C, Wang C, Wang Y, Xiao B (2017) Deep convolutional activations-based features for ground-based cloud classification. IEEE Geosci Remote Sens Lett 14:816–820
11. Zhang J, Liu P, Zhang F, Song Q (2018) CloudNet: ground-based cloud classification with deep convolutional neural network. Geophys Res Lett 45:8665–8672
12. Ye L, Cao Z, Xiao Y (2017) DeepCloud: ground-based cloud image categorization using deep convolutional features. IEEE Trans Geosci Remote Sens 55:5729–5740
13. Li M, Liu S, Zhang Z (2019) Dual guided loss for ground-based cloud classification in weather station networks. IEEE Access 7:63081–63088
14. Liu S, Li M (2018) Deep multimodal fusion for ground-based cloud classification in weather station networks. EURASIP J Wirel Commun Netw 48
15. Liu S, Li M, Zhang Z, Xiao B, Cao X (2018) Multimodal ground-based cloud classification using joint fusion convolutional neural network. Remote Sens 10:822
16. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: IEEE conference on computer vision and pattern recognition, pp 770–778
17. He K, Zhang X, Ren S, Sun J (2015) Delving deep into rectifiers: surpassing human-level performance on imagenet classification. In: IEEE international conference on computer vision, pp 1026–1034
18. Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. Adv Neural Inf Process Syst 1097–1105

SAR Load Comprehensive Testing Technology Based on Echo Simulator

Zhiya Hao, Zhongjiang Yu, Kui Peng, Linna Ni, and Yinhui Xu

China Academy of Space Technology (CAST), Beijing, China [email protected]

Abstract. This paper introduces SAR (Synthetic Aperture Radar) load comprehensive test and verification technology based on an echo simulator. Firstly, the working principle of the echo simulator for SAR load test verification is introduced. On this basis, the SAR load test modes, test items, and test methods are designed. The composition and design scheme of the SAR load comprehensive test setup, consisting of the echo simulator and the SAR quick-view equipment, are given. The technology has been applied in actual satellite testing.
Keywords: Echo simulator · SAR · Load comprehensive test

1 Introduction
The application of SAR payload satellites is becoming more and more extensive. High-orbit SAR can conduct long-term continuous observation of the land and its surrounding areas, with high revisit rates and wide coverage, including disaster characteristic monitoring and disaster assessment, agroforestry classification, ocean observation, etc., serving national economic and defense construction. According to the characteristics of the SAR load tasks and test requirements, and based on the working principle of the echo simulator, a variety of SAR load test modes and SAR load sub-system test items are designed. The SAR load test ground equipment designed and developed effectively supports the testing tasks at all stages of the whole-satellite development process. Test items and test methods targeting the SAR load characteristics were studied, designed, and applied during test implementation.

2 Working Principle of the SAR Load Echo Simulator
The spaceborne SAR echo simulator mainly consists of an RF subsystem, an intermediate frequency subsystem [1], a baseband echo generation system, a recording subsystem, and a master control subsystem. The block diagram of the system is shown in Fig. 1.


Fig. 1. Spaceborne SAR echo simulator block diagram

According to the specific use mode, the satellite SAR echo simulator has two configuration schemes: a line-fed (wired) mode and an air-fed (radiated) mode. In the line-fed mode, the echo simulator and the SAR load can be placed in the laboratory. The echo simulator receives the RF excitation signal of the SAR payload through an RF cable and feeds the simulated echo signal into the SAR payload through an RF cable. The line-fed mode mainly covers two interface states: coupling through the pre-amplifier RF wired interface, in which the distribution system in the SAR antenna is not yet installed, and coupling through the internal-calibration wired interface, in which the functional and performance testing of the load sub-system, with all on-board devices of the SAR sub-system, is completed [2] (Fig. 2).

Fig. 2. Schematic diagram of the line feed mode connection of the spaceborne SAR echo simulator


In the air-fed mode, the echo simulator connects a transmit/receive antenna to the SAR load placed in the microwave anechoic chamber. The echo simulator receives the satellite's transmitted signal through a horn antenna and radiates the echo signal back to the satellite SAR array antenna through the horn antenna to complete the echo simulation. This state mainly completes the reception of the FM signal in the wireless state, the transmission of the echo signal, and qualitative verification of whether the uplink is working normally (Fig. 3).

Fig. 3. Schematic diagram of the air-fed mode connection of the spaceborne SAR echo simulator

According to different test requirements, the spaceborne SAR echo simulator can operate in a real-time calculation mode or an echo playback mode. The real-time calculation mode is a closed-loop simulation mode: the spaceborne SAR echo simulator collects the excitation signal emitted by the radar in real time and convolves it with the point-target scene impulse response function to obtain the echo signal of the scene; after delayed playback control, the echo is transmitted to the radar receiver to complete the echo simulation. The echo playback mode is an open-loop simulation mode [3]: the spaceborne SAR echo simulator pre-stores the scene echo data simulated by the host computer in the scene echo storage unit, and during the echo simulation the trigger signal of the SAR payload is received as the synchronization signal and the stored echo data are played back to complete the echo simulation.
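The convolution step of the closed-loop mode can be illustrated with a short numpy sketch. The LFM chirp standing in for the collected excitation signal, the point-target delays and amplitudes, and all parameter values below are illustrative assumptions, not parameters of the actual simulator.

import numpy as np

fs = 100e6                 # sampling rate, illustrative
pulse_width = 10e-6        # chirp duration
bandwidth = 30e6           # chirp bandwidth
t = np.arange(0, pulse_width, 1 / fs)

# Stand-in for the collected radar excitation: a baseband LFM chirp.
excitation = np.exp(1j * np.pi * (bandwidth / pulse_width) * t ** 2)

# Point-target "scene impulse response": a delayed, scaled spike per target.
n_total = 4096
scene = np.zeros(n_total, dtype=complex)
for delay_s, amplitude in [(5e-6, 1.0), (12e-6, 0.5)]:
    scene[int(round(delay_s * fs))] = amplitude

# Convolving the excitation with the scene impulse response gives the echo,
# which the simulator would then delay-control and feed to the radar receiver.
echo = np.convolve(scene, excitation)[:n_total]
print(echo.shape, np.abs(echo).max())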


According to the form of the stored echo data, the echo playback mode is divided into a baseband echo data playback mode and an impulse response function playback mode. The baseband echo data playback mode stores the baseband echo data of each PRT simulated by the host computer in the scene echo storage unit; during the echo simulation process, the trigger signal is received and echo playback delay control is applied according to the playback delay information stored with the echo data. The impulse response function playback mode stores the scene impulse response function data of each PRT simulated by the host computer in the echo storage unit; during the echo simulation process, the echo simulator receives the transmit signal of the SAR payload and convolves it with the stored scene impulse response function to obtain a baseband echo sequence, which is fed into the SAR payload to complete the echo simulation (Fig. 4).

Fig. 4. Description of the working mode of the spaceborne SAR echo simulator

The external transmission interfaces of the echo simulator mainly include the following (Fig. 5):
1. Synchronous clock interface
2. Synchronous trigger interface
3. Interface with the internal calibrator
4. Interface with the preamplifier
5. Air interface with the SAR payload
6. Interface with the GPS simulator
7. Interface with the master control


Fig. 5. Description of the external interface of the spaceborne SAR echo simulator

3 SAR Load Test Mode, Test Project and Test Method Design
Combined with the working principle of the echo simulator described above, the SAR load test design is carried out, including the test modes, test projects, and test methods.

3.1 SAR Load Test Mode

3.1.1 Planar Near-Field Test Mode
In the planar near-field test mode, the satellite is located in the planar near-field microwave anechoic chamber and the ground equipment is located at the planar near-field test front end. This mode mainly contains two states: the pattern test state and the normal mode test state. In the pattern test state, the SAR payload is provided by the planar near-field test system with timing synchronization signals, wave-position coding, data-valid flags, RF test signals, etc., which control the ground wave control device, the drive amplifier, and the SAR antenna to work together and achieve coordinated testing in multi-wave-position, pulsed mode. The connection method is shown in Fig. 6.


Fig. 6. Receive pattern test block diagram

3.1.2 Full Power Test Mode
In the full-power test mode, the satellite is located in the AIT assembly hall, and measurement and control as well as digital transmission adopt the wireless mode. In this mode, the satellite SAR antenna is deployed on an air-floating platform while a microwave-absorbing wall is used; the connection relationship is shown in Fig. 7.

Fig. 7. Full power test mode connection diagram


3.1.3 SAR Load Mode Test
Take the SAR load modes of one satellite as an example. The satellite can work in the sea-land joint observation mode, the land observation mode, the ocean observation mode, and the left-view observation mode, and the corresponding SAR payload has 13 working modes. The digital transmission can work in real-time transmission, recording, playback, and side-by-side modes. In order to fully verify the correctness of the SAR load and whole-satellite working modes, the SAR load mode test design for the actual in-orbit working state of the satellite covers the entire left and right side swing, polarization modes, calibration modes, BAQ compression modes, SAR load working modes, and data transmission modes (Table 1).

Table 1. SAR load mode test design

Num | Mode name | SAR electronic equipment working mode | Data transmission mode
1 | Sea-land joint observation mode | Bunching mode | Punctuality
2 | Sea-land joint observation mode | Superfine strip | Record first, then play back
3 | Terrestrial observation mode | Fine strip 1 | Real-time transmission
4 | Terrestrial observation mode | Standard strip | Punctuality
5 | Terrestrial observation mode | Extended high angle of incidence strip | Real-time transmission
6 | Ocean observation mode | Narrow-four-polarized strip | Punctuality
7 | Ocean observation mode | Bunched-tetragonal strip | Record first, then play back
8 | Ocean observation mode | Bunching strip-wave mode | Punctuality
9 | Ocean observation mode | Wave imaging strip mode | Write side by side
10 | Ocean observation mode | Global observation scan imaging mode | Record first, then play back
11 | Left view mode | Bunching mode | Punctuality
12 | Left view mode | Standard strip | Record first, then play back
13 | Left view mode | Narrow scan | Real-time transmission

3.2 SAR Load Test Project

The main test items for SAR loads are as follows:
1. Complete the power supply interface check and the interface static impedance test of the on-board equipment
2. Complete the remote command and telemetry parameter checks
3. SAR power test
4. SAR frequency characteristic test
5. SAR PRF characteristic test
6. SAR performance indicator test (using the echo simulator):
• switching time between imaging modes within an instruction packet;
• equipment working time accuracy;
• instruction packet switching time;
• analysis of azimuth-ambiguity and range-ambiguity test data;
• side lobe performance test data analysis;
• radiation resolution analysis, including relative radiation accuracy and absolute radiation accuracy;
• output code rate test in each working mode.
7. Image quality inspection.

3.3 SAR Load Test Method

3.3.1 Power Interface Check
Check the SAR load subsystem power supply and command input impedance. Connect the transfer box into the power supply link and the remote command link, measure the impedance and voltage with a multimeter, and record the results.

3.3.2 Remote Command Check
Send the remote control commands related to the subsystem, and judge whether the subsystem responds to them correctly through telemetry or the actuators.

3.3.3 Telemetry Parameter Check
Power up the subsystem equipment and send the relevant remote control commands to determine whether the changes in the subsystem telemetry meet the requirements of the telemetry interpretation criteria.

3.3.4 SAR Load Sub-system Power Test
The power test objects are the power outputs of the SAR load drive amplifier and the internal calibration port. The power test measures whether the power amplifier devices meet the design specifications. The power meter used should meet the power level, frequency, and pulse power measurement requirements [4].

3.3.5 SAR Load Sub-system Frequency Test
The objects of the frequency characteristic test are the chirp signals of the drive amplifier and the internal calibration. The main frequency characteristics include the center frequency and bandwidth. The spectrum analyzer used in the test should meet the power level, frequency, and pulse frequency measurement requirements.

3.3.6 SAR Load Subsystem PRF Test
The object of the PRF test is the PRF scan signal output by the SAR load control unit.


The main test indicators are the sweep trace period, step, and duty cycle. An oscilloscope is used to complete this test item.

3.3.7 SAR Performance Index Test
The performance metrics to be tested include spatial resolution, radiation resolution, integrated side lobe ratio, and peak side lobe ratio. The method is to perform point-target simulation with the echo simulator; the digital transmission system receives the on-board data and sends it to the SAR quick-view ground equipment for processing. After the SAR quick-view device completes decompression and imaging, the imaging parameters in the auxiliary data are extracted, and the analysis results for spatial resolution, radiation resolution, integrated side lobe ratio, and peak side lobe ratio are obtained with analysis software (Fig. 8).

Fig. 8. Block diagram of the echo simulator and satellite connection

All the equipment of the digital transmission system operates; the ground equipment attenuates, down-converts, and IF-demodulates the digital-transmission RF modulated signal, which is then sent to the AOS special test equipment for de-formatting and to the decompression device for processing to restore the original observation data, which is finally sent to the SAR load quick-view equipment for image quality inspection.

3.3.8 Image Quality Inspection
All equipment of the digital transmission subsystem operates. The ground equipment attenuates, down-converts, and IF-demodulates the digital-transmission RF modulated signal; the result is sent to the AOS special test equipment for de-formatting and then to the decompression equipment to restore the original observation data, after which the SAR payload quick-view equipment performs the image quality inspection (Fig. 9).


Fig. 9. Block diagram of the image quality check satellite connection

The image quality inspection of the digital sub-system mainly uses the SAR load quick-view to observe whether the image data is deformed; whether the image data contains noise, speckles, or abnormalities such as bright lines or patterns; whether the auxiliary data is intact and not lost; and whether the auxiliary data content is correct.

4 SAR Load Sub-system Test Equipment
The SAR sub-system ground test equipment is mainly composed of the real-time echo simulator, the real-time imaging processing and analysis system (quick view), microwave components, and general test instruments such as a power meter and a frequency meter. The main task of the real-time echo simulator is to verify the on-board SAR load interfaces, functions, and main indicators, and to complete the payload test together with the digital transmission ground equipment and the quick-view ground equipment. The main functions of the echo simulator include receiving the signal transmitted by the on-board SAR payload transmitter and generating an echo signal, based on the transmitted signal parameters and various control signals, to be sent to the SAR payload receiver. The main task of the real-time imaging processing and analysis system is to receive the high-rate compressed data and auxiliary data sent by the digital-transmission integrated ground processing equipment,


distribute the image data to the decompression module in real time to complete the decompression of the compressed data, and distribute the auxiliary data to the real-time image display module. The display module processes the decompressed data into original SAR image data in real time according to the SAR imaging algorithm, displays the image data and the auxiliary data in real time, and can analyze and calculate the system indexes of the image data. The equipment is mainly used to complete the SAR load and digital transmission system joint test, the whole-satellite payload functional performance test, major system-level tests, and the launch site test [5].

5 SAR Load Satellite Test Application
The SAR load test technology based on the echo simulator has been applied in test tasks at various stages. For the multi-polarization, multi-operating-mode characteristics of the SAR load sub-system of a certain type of satellite, a SAR echo simulation scheme covering twelve SAR load working modes was designed on a single hardware platform. An image evaluation system capable of evaluating the integrated side lobe ratio and peak side lobe ratio was designed for the image evaluation requirements of the multi-polarization, multi-operating-mode satellite SAR payload subsystem. The scheme was successfully applied in engineering during the prototype, flight-model, and launch site tests.

6 Conclusion
This paper first introduces the working principle of the echo simulator and, combined with this working principle, designs the SAR load test modes, test items, and test methods. The composition and design scheme of the ground test equipment for the SAR load comprehensive test are given. The designed echo simulator-based SAR load test technology has been applied in test tasks at various stages and effectively supports the comprehensive test tasks of each stage of the SAR payload satellite. The results obtained can be applied to subsequent synthetic aperture radar imaging satellite load tests, and rich engineering experience has been accumulated.

References
1. Krieger G, Moreira A, Fiedler H et al (2007) TanDEM-X: a satellite formation for high-resolution SAR interferometry. IEEE Trans Geosci Remote Sens 45:3317–3341
2. Nord ME, Ainsworth TL, Lee J-S (2009) Comparison of compact polarimetric synthetic aperture radar modes. IEEE Trans Geosci Remote Sens 47:174–188
3. Horn R, Nottensteiner A, Reigber A et al (2009) F-SAR—DLR's new multifrequency polarimetric airborne SAR. In: Proceedings IGARSS'09, pp 902–905
4. Angelliaume S, Dubois-Fernandez P (2008) X-band RAMSES PolInSAR data calibration and validation. In: Proceedings of EUSAR 2008, Friedrichshafen, Germany
5. Cloude SR, Papathanassiou KP (1998) Polarimetric SAR interferometry. IEEE Trans Geosci Remote Sens 36(5):1551–1565

A New Traffic Priority Aware and Energy Efficient Protocol for WBANs

Wei Wang, Dunqiang Lu, Xin Zhou, Baoju Zhang, Jiasong Mu, and Yuanyuan Li

College of Physical and Electronic Information, Tianjin Normal University, Tianjin, China [email protected], {mujiasong,lyytuffo}@163.com

Abstract. WBANs emerged to provide long-term, real-time monitoring of human physiological parameters, effective treatment for patients, and reduced medical costs; the miniaturization of electronic devices has made WBANs practical. The network environment of a WBAN is heterogeneous, and node functions and data types vary; however, the typical SPIN protocol does not consider data priority. Limited resources are another challenge for WBANs, and whole-network energy balance is neglected by currently available protocols. Aiming to provide low-latency transmission for critical data and to extend network lifetime, we propose an optimized SPIN protocol in which data transmission and routing selection are based on data priority and residual energy. The simulation results show that the improved algorithm achieves lower latency and longer network lifetime.
Keywords: SPIN · WBAN · Routing · Priority · Energy

1 Introduction
Wireless Body Area Networks (WBANs) are one of the latest technologies in tele-healthcare diagnosis and management. Developments in sensor technologies, micro-electro-mechanical systems (MEMS), and wireless communications have motivated the development of WBANs and leverage the emerging standards specific to medical WBANs: IEEE 802.15.6 and IEEE 802.15.4 [1, 2]. Sensors in WBANs are heterogeneous; different types of nodes are attached to clothing or the body, or even embedded in the body. These nodes continuously sense and collect vital physiological signals, such as ECG, EEG, blood pressure, temperature, heart rate, glucose level, respiration rate, SpO2 levels, etc. [3]. Sensor nodes have the capability to process and transmit the collected data, transmitting it to a medical center database through a data aggregation gateway [4–6]. A WBAN is an out-of-the-ordinary type of Wireless Sensor Network (WSN): most traditional WSNs consist of homogeneous and static nodes, and proposed protocols only consider this network environment; in contrast, WBANs contain various heterogeneous nodes, and due to human body movement the network topology changes frequently. The moving distance and the propagation of the waves also differ between WSNs and WBANs [7, 8]. Briefly speaking, numerous challenges faced by WBANs are similar to


those faced by WSNs; nevertheless, there are a great number of essential differences between the two networks that require dedicated research on WBANs [9]. Therefore, WSN protocols cannot be directly used in WBANs and need to be optimized and improved. Data transmission reliability, low latency, and energy saving must be considered in WBAN algorithm design; these are vital problems and challenges [10]. The paper is organized as follows. In Sect. 2 we give background information on SPIN. Section 3 proposes the improved protocol, and the simulation result analysis is shown in Sect. 4. Conclusions are drawn in Sect. 5 (Figs. 1, 2).

Fig. 1. Wireless body area networks

2 Background and Motivation
SPIN (Sensor Protocol for Information via Negotiation) is the first data-centric routing protocol [11]. Nodes collect and propagate data through the network by means of negotiation: they negotiate with each other using meta-data that completely describes the data they have sensed. Nodes in SPIN communicate using three types of messages: ADV, REQ, and DATA. When a node has data to transmit, it broadcasts an ADV containing meta-data; if a node wants to receive the actual sensed data, it sends a REQ to the node that sent the ADV; the DATA message contains the complete, actual information [12, 13]. The routing process is shown in Fig. 3. SPIN is designed based on negotiation and energy adaptation and is used to address the flaws of flooding. Negotiation solves information implosion and overlapping and reduces redundant data; blind forwarding of resources is addressed by energy adaptation, so the network lifetime is extended and energy efficiency is increased [14, 15]. However, data in a WBAN have different priority levels and energy is limited, and SPIN has no measures for routing selection, so further improvement is necessary; a new critical-data- and energy-aware algorithm is described below.
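The ADV/REQ/DATA exchange can be illustrated with a minimal Python sketch. The Node class, message format, and example values are purely illustrative; they are not part of the SPIN specification.

class Node:
    # Minimal node that negotiates with meta-data before sending actual data.
    def __init__(self, name):
        self.name = name
        self.seen_meta = set()     # meta-data already received or held
        self.store = {}            # meta-data -> actual data

    def adv(self, meta, neighbors):
        # Advertise new data to neighbors; interested ones answer with a REQ.
        for nb in neighbors:
            if nb.on_adv(self.name, meta):
                self.send_data(nb, meta)

    def on_adv(self, sender, meta):
        # Request only data not already seen (avoids implosion and overlap).
        return meta not in self.seen_meta

    def send_data(self, receiver, meta):
        receiver.on_data(meta, self.store[meta])

    def on_data(self, meta, data):
        self.seen_meta.add(meta)
        self.store[meta] = data

# Example: node A has a new reading and negotiates with B and C.
a, b, c = Node("A"), Node("B"), Node("C")
a.store["temp@t0"] = 36.6
a.seen_meta.add("temp@t0")
a.adv("temp@t0", [b, c])
print(b.store, c.store)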


Fig. 2. WBANs architecture

3 An Improved Proposed Protocol
Unlike wireless sensor networks, WBAN routing has many strict restrictions and the working environment is heterogeneous; in particular, the collected data are of different types with different requirements, and limited energy calls for high energy efficiency. A WBAN is used for remote monitoring of human parameters to provide real-time and effective diagnosis for patients, so the transmission of important data requires low delay and high reliability. Based on these issues, an improved version of SPIN is proposed. Firstly, according to their importance for the diagnosis of patients, the collected data are classified into different priorities. Data are divided into three classes: EM (Emergency), DS (Delay Sensitive), and GM (General Monitoring) [16]. EM data are the most critical packets and should be transmitted without delay and dependably. DS has the second priority; this type of data is intended for non-medical applications or data not related to the patient's life safety. The lowest-priority data is GM, corresponding to common measurements of patient parameters that generally indicate ordinary values.


Fig. 3. SPIN routing process

The proposed routing protocol consists of three stages. In the Initial Phase, the source node broadcasts to neighbors within a certain range; in the Negotiation Phase, in-range nodes decide how to reply to the ADV; and in the Data Transmission Phase, the source chooses which node will be the forwarder and sends the data. The proposed protocol is described in Algorithm 1.

Initial Phase: The source node senses new data and broadcasts an ADV containing meta-data to other nodes within a certain radius; this message contains a flag marking the data priority and a flag marking the node energy level.
Negotiation Phase: Alive nodes within the prescribed range receive the message from the source node and, if their residual energy is sufficient to complete the reception and processing of this information, send a REQ message. When replying with a REQ, if only one ADV has been received it is answered; otherwise the priority flags are compared and the ADV with the highest-priority data is answered. If several messages with the same priority have been received, the first arrival is answered. In addition, the REQ message contains the residual energy value of the node that is requesting the DATA.
Transmission Phase: When the source node receives only one REQ, it sends the real data to that forwarder. If more than one REQ is received, the source node examines their residual energy levels, and the REQ sender with the most residual energy is selected as the next hop.


Algorithm 1

EADV: energy consumption for receiving an ADV
EREQ: energy consumption for sending a REQ
EDATA: energy consumption for receiving and sending DATA
Eth: energy threshold value, Eth = 1.5 * (EADV + EREQ + EDATA)
PM: EM data, with the highest priority
PD: DS data, medium priority
PG: GM data, with the lowest priority
d: distance between source node and next-hop node, d = sqrt((S(advertiser).x - S(j).x)^2 + (S(advertiser).y - S(j).y)^2)
rad: radius of transmission range
num_REQ: number of REQs received
S(i).E: node residual energy value

1. Broadcast ADV to nodes which are in range
2. Negotiation
3. if Enode > Eth && d < rad
4. 5. 6. 7.
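Algorithm 1 is only partially legible in this copy, so the following Python sketch fills in the negotiation and forwarder-selection logic from the textual description above (reply when energy exceeds Eth and the advertiser is within range; prefer the highest-priority ADV; forward to the REQ sender with the most residual energy). The 1.5x threshold and variable names follow the list above; all numeric values are illustrative.

import math

E_ADV, E_REQ, E_DATA = 0.2, 0.2, 1.0            # illustrative per-message energy costs
E_TH = 1.5 * (E_ADV + E_REQ + E_DATA)           # Eth as defined in the variable list
PRIORITY = {"EM": 3, "DS": 2, "GM": 1}          # EM > DS > GM

def choose_adv(pending_advs):
    # A replier answers the ADV carrying the highest-priority data
    # (max returns the first maximal element, so the first arrival wins ties).
    return max(pending_advs, key=lambda a: PRIORITY[a["cls"]])

def distance(a, b):
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

def select_forwarder(source, nodes, rad):
    # In-range nodes with residual energy above Eth answer with a REQ carrying
    # their residual energy; the source forwards to the one with the most energy.
    reqs = [n for n in nodes
            if n is not source and n["E"] > E_TH and distance(source, n) < rad]
    return max(reqs, key=lambda n: n["E"]) if reqs else None

nodes = [{"id": i, "x": 2.0 * i, "y": 0.0, "E": e}
         for i, e in enumerate([5.0, 2.0, 4.5, 0.5])]
print(choose_adv([{"cls": "GM"}, {"cls": "EM"}]))          # the EM advertisement
print(select_forwarder(nodes[0], nodes, rad=5.0)["id"])    # node 2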

C_i(x, y) = \beta \left\{ \log\left[\alpha S_i(x, y)\right] - \log\left[\sum_{i=1}^{N} S_i(x, y)\right] \right\}    (9)

In Eq. (9), the parameter β represents the gain, α represents the nonlinear intensity, and N represents the number of color channels of the original image. Compared with the MSR algorithm, the MSRCR algorithm can effectively avoid local color distortion in the image. However, because the MSRCR parameters are complicated to set and the computational efficiency is low, the algorithm has certain limitations and is difficult to popularize.
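A small numpy sketch of the color-restoration step in the form given above, applied to an MSR output, is shown below. The surround scales, the gain and alpha values (conventional MSRCR defaults), and the random test image are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def msr(img, sigmas=(15, 80, 250)):
    # Multi-scale Retinex: average of log(image) - log(Gaussian-blurred image).
    out = np.zeros_like(img)
    for sigma in sigmas:
        blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))
        out += (np.log1p(img) - np.log1p(blurred)) / len(sigmas)
    return out

def color_restoration(img, alpha=125.0, beta=46.0):
    # Color restoration factor from Eq. (9): beta * (log(alpha*S_i) - log(sum_i S_i)).
    channel_sum = img.sum(axis=2, keepdims=True)
    return beta * (np.log(alpha * img + 1.0) - np.log(channel_sum + 1.0))

def msrcr(img):
    return color_restoration(img) * msr(img)

test = np.random.default_rng(0).uniform(1, 255, size=(64, 64, 3))
print(msrcr(test).shape)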

3 Homomorphic Filtering

3.1 Homomorphic Filtering Principle

Homomorphic filtering is often used to process images with uneven illumination so that the brightness of the image can be evenly distributed. An image f(x, y) of size M × N is composed of the product of the illuminance component i(x, y) and the reflection component r(x, y), which is defined as follows [14]:

f(x, y) = i(x, y) \cdot r(x, y)    (10)

The specific homomorphic filtering process is shown in Fig. 2.


f(x, y) → ln → FFT → H(u, v) → (FFT)^{-1} → exp → g(x, y)

Fig. 2. Homomorphic filtering flow chart

H(u, v) represents the homomorphic filter transfer function; the traditional Gaussian homomorphic filter function is commonly used:

H(u, v) = (\gamma_H - \gamma_L)\left[1 - e^{-c\,D^2(u, v)/D_0^2}\right] + \gamma_L    (11)

where D_0 is the filter cutoff frequency, \gamma_L and \gamma_H are the minimum and maximum values of the filter, and the constant c is mainly used to control the slope of the filter transfer function. D(u, v) is the distance from the point (u, v) to the frequency origin, defined as follows:

D(u, v) = \left[\left(u - \frac{M}{2}\right)^2 + \left(v - \frac{N}{2}\right)^2\right]^{1/2}    (12)

3.2 Improved Gaussian Homomorphic Filtering

The traditional Gaussian homomorphic filter only distinguishes image pixels according to the cutoff frequency D_0, applying either low-frequency suppression or high-frequency enhancement. Consequently, this filtering method tends to produce large errors when the frequency distance D(u, v) is close to D_0. In view of this, the traditional Gaussian homomorphic filter is improved as follows:

H(u, v) = \gamma_L\, e^{-D_{0L}^2 / D^2(u, v)} - \gamma_H\, e^{-c\,D^2(u, v)/D_{0H}^2} + \gamma_H    (13)

Compared with the traditional Gaussian homomorphic filter, the improved homomorphic filter has the following characteristics: (1) it can effectively suppress the strong-light region where D < D_{0L}; (2) it slowly enhances the region where D_{0L} < D < D_{0H}, avoiding abrupt transitions between the strong-light and weak-light zones; (3) it applies high-frequency enhancement in the region where D > D_{0H} to highlight image details.


4 Proposed Algorithm

4.1 Algorithm Principle

The algorithm in this paper is a new defogging algorithm that combines the advantages of single-scale SSR and homomorphic filtering. The principle is as follows:
(1) Without considering detail enhancement, each color channel of the original image is processed by the single-scale SSR algorithm with a sufficiently large surround scale, which ensures a good color recovery effect in the output image;
(2) The output image of step 1 is filtered with the improved Gaussian homomorphic filter, which effectively removes the residual halo artifacts of the SSR algorithm so that the illumination of the output image is evenly distributed;
(3) The output image of step 2 is processed by the CLAHE algorithm to highlight image edge details;
(4) The color channels are recombined to obtain the final defogged image.
In order to verify the effectiveness and advantages of the proposed algorithm, this paper carries out a detailed simulation analysis on different types of haze images. The experimental results show that the proposed algorithm can effectively restore the color information of the original image, eliminate the residual halo problem of the SSR algorithm, and highlight the edge details of the image.
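A compact sketch of the four-step procedure is given below, using numpy for the SSR and Eq. (13) filtering steps and OpenCV for CLAHE. The surround scale, cutoff frequencies, gain values, and CLAHE settings are illustrative; this is only a sketch of the described pipeline, not the authors' implementation.

import numpy as np
import cv2
from scipy.ndimage import gaussian_filter

def ssr(channel, sigma=100):
    # Step 1: single-scale Retinex with a large surround scale for color fidelity.
    return np.log1p(channel) - np.log1p(gaussian_filter(channel, sigma))

def improved_homomorphic(channel, d0l=10.0, d0h=60.0, gl=0.3, gh=1.5, c=1.0):
    # Step 2: frequency-domain filtering with the improved transfer function of Eq. (13).
    m, n = channel.shape
    u = np.arange(m) - m / 2
    v = np.arange(n) - n / 2
    d2 = np.maximum(u[:, None] ** 2 + v[None, :] ** 2, 1e-6)
    h = gl * np.exp(-d0l ** 2 / d2) - gh * np.exp(-c * d2 / d0h ** 2) + gh
    spec = np.fft.fftshift(np.fft.fft2(channel)) * h
    return np.fft.ifft2(np.fft.ifftshift(spec)).real

def dehaze(img_bgr):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    channels = []
    for ch in cv2.split(img_bgr.astype(np.float64)):
        x = improved_homomorphic(ssr(ch))                       # steps 1 and 2
        x = cv2.normalize(x, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        channels.append(clahe.apply(x))                         # step 3: CLAHE
    return cv2.merge(channels)                                  # step 4: recombine

test = np.random.default_rng(2).integers(0, 256, (128, 128, 3), dtype=np.uint8)
result = dehaze(test)
print(result.shape, result.dtype)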

4.2 Objective Performance Indicators

In this paper, traditional objective analytical indicators are used to evaluate the image defogging effect, such as the mean, variance, information entropy, and peak signal-to-noise ratio [15]. For an image of size M × N with L gray levels, these four objective indicators can be defined as follows:
(1) Mean value: reflecting the average gray level of the image

\bar{G}(X) = \frac{1}{MN} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} G(X_{ij})    (14)

where G(X_{ij}) represents the gray value of each pixel in the image. For a defogged image, the average gray level should theoretically be reduced, but it should not be too low, which often results in an overall darkening of the resulting image.
(2) Variance: reflecting the image contour details

\mathrm{Var} = \frac{1}{MN} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left(G(X_{ij}) - \bar{G}(X)\right)^2    (15)


Variance mainly reflects the change degree of high-frequency and low-frequency parts of the image, and the variance of an image with prominent details is often larger. Therefore, for a fog image with rich details, the contrast of the image increases and the variance increases after fog removal. (3) Information entropy: reflecting the total information content of the image and the color richness of the image H ½gðx; yÞ ¼ 

L1 X

fpi  log2 ðpi Þg

ð16Þ

i¼0

where pi denotes the gray frequency when the gray value is i. Therefore, the image information entropy becomes larger and the degree of image color restoration is more obvious after fogging in theory. (4) Peak signal-to-noise ratio (PSNR): reflecting the degree of image distortion PSNR ¼ 10 log

ðL  1Þ2 MSE

ð17Þ

PSNR is used to analyze image distortion, When the PSNR value is larger, the image quality is higher and the distortion is less. MSE represents the mean square error between the original image and the resulting image [15], which is defined as: MSE ¼ 1=ðMNÞ 

M 1 X N 1  X

 f ðxi ; yj Þ  gðxi ; yj Þ

ð18Þ

i¼0 j¼0

4.3

Simulation Results

4.3.1 Processing Colorful Images The color information of fog images with rich colors is often concealed by fog, which reduces the brightness of the image. In order to restore the color of similar images, this paper compares the defogging effects of different defogging algorithms on a special color-rich fog image and proves the effectiveness of the new algorithm. 4.3.2 Processing Images with Uneven Illumination The general enhancement algorithm can easily cause the local halo of the target image when dealing with foggy images with uneven illumination. Contrarily, the new algorithm can effectively reduce the surface halo after image defogging. Figure 4 shows the defogging results of special images under uneven illumination.

An Image Dehazing Algorithm Based

1489

4.3.3 Processing Images with Uniform Illumination The uniformity of illumination reflects the uniform distribution of fog on the image surface. Therefore, Fig. 5 shows that different algorithms are used to remove fog from uniformly illuminated images, which proves that the algorithm is superior to other algorithms.

(a) original image

(b) SSR algorithm

(c) MSR algorithm

(d) Homomorphic filtering

(e) CLAHE algorithm

(f) the proposed Algorithm

Fig. 3. Comparison of different algorithms for defogging effects on color-rich images

(a) original image

(b) SSR algorithm

(d) Homomorphic filtering (e) CLAHE algorithm

(c) MSR algorithm

(f) the proposed Algorithm

Fig. 4. Comparison of defogging effects for different algorithms for uneven illumination images

1490

H. Wu and Z. Tan

(a) original image

(b) SSR algorithm

(d) Homomorphic filtering

(e) CLAHE algorithm

(c) MSR algorithm

(f) the proposed Algorithm

Fig. 5. Comparison of defogging effects of different algorithms for uniform images of illumination

From Figs. 3, 4 and 5, it can be seen that the algorithm can recover image color information well for dealing with different types of haze images. However, the image processed by the MSR algorithm may cause local color distortion due to amplification of image noise. In addition, it can be seen that the image processed by the algorithm is also enhanced in edge detail. Therefore, the image dehazing effect processed by the algorithm of this paper is greatly improved compared with the traditional dehazing algorithm. 4.4

Objective Analysis

In this paper, the algorithm is analyzed from the aspects of mean, variance, information entropy, and peak signal-to-noise ratio. Those performance indicators are introduced in this chapter and used to analyze the defogging effect of the new algorithm when dealing with different types of fog images. Therefore, the real objective analysis results are obtained by comparing the data after fog removal with the traditional image enhancement algorithm. Tables 1, 2 and 3 are shown the analysis results of defogging performance indexes of the above three types of haze images. Table 1. Dehazing analysis of color-rich images Original image SSR algorithm MSR algorithm Homomorphic filtering CLAHE algorithm Proposed algorithm

Mean 0.5663 0.5751 0.3337 0.4473 0.5785 0.4052

Variance 0.0506 0.0696 0.0257 0.1020 0.0458 0.0723

Information entropy Peak signal to noise ratio 7.2493 7.7861 67.8580 7.2621 67.8731 7.5127 67.8637 7.2974 48.8414 7.7371 67.8670

An Image Dehazing Algorithm Based

1491

Table 2. Dehazing analysis of uneven illumination images

Original image SSR algorithm MSR algorithm Homomorphic filtering CLAHE algorithm Proposed algorithm

Mean

Variance

Peak signal to noise ratio

0.0184 0.0299 0.0143 0.0498

Information entropy 6.8382 7.2678 6.8382 7.5368

0.7945 0.7167 0.5066 0.6190 0.7481 0.5151

0.0228 0.0449

7.1334 7.7025

48.4145 70.2998

70.2915 70.3007 70.2952

As can be seen from Table 1, when the algorithm of this paper deals with images with rich colors, the average gray of the resulting image is lower than that of the original image. Compared with the MSR algorithm, the average gray value of the image processed by the MSR algorithm is too low, which results in darkening of the target image. In the aspect of information entropy, it can be seen that the new algorithm inherits the color fidelity advantage of SSR algorithm well, and other indicators show that the new algorithm has more advantages than SSR. In addition, compared with the CLAHE algorithm, the target image processed by the new algorithm has richer color and less distortion. According to Table 2, for a smog image with uneven illumination, it can be seen that the algorithm in this paper has good defogging performance and less image distortion. In addition, the new algorithm performs well in image color recovery. At the same time, combining with Fig. 4, it can be seen that compared with other algorithms, the local contour details are prominent, and the defogging effect is more obvious. In addition, the image local halo effect is weaker than the SSR and CLAHE algorithms. Table 3. Dehazing analysis of uniform illumination images

Original image SSR algorithm MSR algorithm Homomorphic filtering CLAHE algorithm Proposed algorithm

Mean

Variance 0.0053 0.0086 0.0073 0.0084

Information entropy 5.8031 6.1731 6.2765 6.2362

0.5451 0.5582 0.5428 0.3299 0.5605 0.3408

Peak signal to noise ratio 66.9684 66.9694 66.9825

0.0092 0.0121

6.4595 6.7435

46.3297 66.9818

Combined with the analysis of Fig. 5, the proposed algorithm still has its outstanding advantages in dealing with the smog images with comparable illumination. Firstly, the variance is outstanding, and the outline of the image is obviously enhanced. At the same time, the image information entropy is more advantageous than other algorithms, and the image color recovery effect is significant. In addition, the peak

1492

H. Wu and Z. Tan

signal-to-noise ratio (PSNR) can be used to find that the target image is less distorted, and the local halo is reduced effectively. In summary, compared with the traditional enhancement algorithm, the proposed algorithm can show more prominent effects when processing an image similar to the above images. Therefore, combined with subjective and objective analysis, the new algorithm is theoretically feasible. In addition, compared with other algorithms, this algorithm has obvious advantages.

5 Conclusion Because the traditional Retinex algorithm can not achieve the advantages of high color fidelity and detail highlighting while reducing the local halo. Therefore, this paper firstly performs a single-scale Retinex algorithm (SSR) on the haze image to obtain its color-preserving advantages. Then, the illumination is corrected by the improved homomorphic filter to attenuate the halo phenomenon of the SSR algorithm. Finally, the CLAHE algorithm is used to highlight the edge details of the image to obtain the final defogged image. The experimental results show that the proposed algorithm has outstanding advantages for different types of haze images: it can restore the image color well and enhance the image details. Moreover, it can effectively suppress the local halo phenomenon, and the defogging effect is remarkable.

References 1. Xu Y, Wen J, Fei L et al (2017) Review of video and image defogging algorithms and related studies on image restoration and enhancement. IEEE Access 4:165–188 2. Gupta S, Kaur Y (2013) Review of different local and global contrast enhancement techniques for a digital image. In: International conference on machine vision 3. Kim K, Kim S, Kim KS (2018) Effective image enhancement techniques for fog-affected indoor and outdoor images. IET Image Proc 12(4):465–471 4. Pizer SM, Johnston RE, Ericksen JP, et al (1990) Contrast-limited adaptive histogram equalization: speed and effectiveness. In: Proceedings of the first conference on visualization in biomedical computing. IEEE, New York 5. Shigang W, Minjuan Y, Li S et al (2017) Improved algorithm of histogram equalization for image enhancement. Chin J Med Instrum 6. Priyanka JK, Sudarshan BG, Kumar SCP (2012) A review on different algorithms adopted for image enhancement with retinex based filtering methods. Int J Innovative Res Dev 1(6) 7. Land EH (1977) The retinex theory of color vision. Sci Am 237(6):108–128 8. Fan T, Li C, Ma X et al (2017) An improved single image defogging method based on Retinex. In: International Conference on Image. IEEE, New York 9. Zhang G, Yan P, Zhao H et al (2008) An improved single-scale retinex algorithm for image contrast enhancement. In: The 38th international conference on computers & industrial engineering, pp 1001–1007 10. Yu TH, Meng X, Zhu M et al (2016) An improved multi-scale retinex fog and haze image enhancement method. In: 2016 international conference on information system and artificial intelligence (ISAI). IEEE, New York

An Image Dehazing Algorithm Based

1493

11. Li Y, He R, Xu G et al (2008) Retinex enhancement of infrared images. In: International conference of the ieee engineering in medicine & biology society. IEEE, New York 12. Rahman ZU, Woodell GA (2002) Multi-scale retinex for color image enhancement. In: International conference on image processing. IEEE, New York 13. Yaofeng LI, Xiaohai HE, Xiaoqiang WU (2014) Improved enhancement algorithm of fog image based on multi-scale Retinex with color restoration. J Comput Appl 14. Adelmann HG (1998) Butterworth equations for homomorphic filtering of images. Comput Biol Med 28(2):169–181 15. Guo F, Tang J, Cai ZX (2014) Objective measurement for image defogging algorithms. J Central South Univ

Survey of Gear Fault Feature Extraction Methods Based on Signal Processing

Hong Wu¹ and Can Wang²

¹ School of Electronic Engineering, Heilongjiang University, Harbin, Heilongjiang Province, China
[email protected]
² College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen, Guangdong Province, China

Abstract. Gear fault diagnosis technology is significant in reducing the casualties and economic losses caused by industrial accidents. Signal processing is an important step in gear fault diagnosis and strongly affects the accuracy of fault recognition. Traditional signal processing methods can be divided into three categories: time domain, frequency domain and time-frequency domain. For stationary signals, feature extraction methods fall into two categories, time domain and frequency domain, while time-frequency analysis methods are better suited to non-stationary signals because they reflect the distribution of such signals in both the time domain and the frequency domain. This paper elaborates the various signal processing methods, analyzes their advantages and disadvantages, summarizes existing research results and open problems, and looks forward to future research directions.

Keywords: Gear fault diagnosis · Gear fault feature extraction · Signal processing method · Short time Fourier transformation · Wavelet transform

1 Introduction

In modern industrial applications, gear transmission mechanisms are widely used for their reliability, accuracy and large range of transmission speed and power. Because a gearbox has a complicated structure and often works in a harsh environment, it is easily damaged and prone to failure. Gear failure mainly manifests as broken teeth, tooth surface fatigue, gluing, and so on. Once a gear fault occurs, it has a great impact on the mechanical equipment. Condition monitoring and fault diagnosis of gears can therefore shift maintenance fundamentally from after-sales repair and scheduled maintenance to condition-based maintenance, creating great economic and social benefits. The basic steps of gear fault diagnosis can be summarized as four parts: signal detection, feature extraction (signal processing), state recognition and diagnostic decision. The operating state of the gears is expressed in a variety of signals, such as vibration signals, noise signals, current signals and torque signals. In order to extract the state characteristic information of complex gears, the signal needs to be properly analyzed and processed. Signal processing has therefore become the key technology for gear fault diagnosis and has accumulated rich research results. At present, characteristic-signal extraction and analysis methods include wavelet analysis [1–3], independent component analysis [4], frequency domain analysis [5], holographic spectrum analysis [6], etc., which provide effective solutions for extracting features from diagnostic objects. In the early days, signal feature extraction was mainly carried out by means of the Fourier transform, that is, using the Fourier transform to convert the signal from the time domain to the frequency domain, and this approach has been in common use for many years. Fourier-transform-based methods include spectrum analysis, correlation analysis, transfer function analysis, envelope analysis, time series analysis, and so on. The premise of these methods is that the signal is assumed to be linear and stationary. For stationary time series, they offer good real-time performance and a clear physical meaning. Under normal circumstances these methods basically meet the requirements of engineering practice and have achieved good results in the fault diagnosis of rotating machinery; classical signal analysis methods are still the most common technique for analyzing vibration signals. Strictly speaking, however, they are only suitable for the analysis of slowly varying signals. With the development of analytical techniques, more and more modern spectral analysis methods have been explored to overcome the limitations of classical spectral analysis in dealing with non-stationary and aperiodic signals. Time-frequency analysis has received increasing emphasis, including the short-time Fourier transform (STFT), wavelet analysis, independent component analysis, stochastic resonance, chaotic arrays, blind source separation, empirical mode decomposition, etc. Modern spectral analysis methods are a good solution to the problem of processing non-stationary random signals. This paper summarizes the related literature of recent years and analyzes both classical and modern signal analysis methods; the advantages and disadvantages of the various methods are compared to provide a reference for subsequent research. The paper is organized as follows. The mechanism of gear fault diagnosis is analyzed in Sect. 2. Various gear fault feature extraction methods based on signal processing are then studied in Sect. 3. The comparison of the methods is summarized in Sect. 4, and the conclusion is given in Sect. 5.

2 Mechanism of Gear Fault Diagnosis

Mechanical fault diagnosis is divided into direct diagnosis and indirect diagnosis. Owing to the limitations of equipment structure and working conditions, direct diagnosis is often difficult to carry out, so indirect diagnosis is usually adopted; that is, the state changes of key components in the equipment are judged indirectly through secondary diagnostic information. Diagnostic testing is a critical part of obtaining such secondary diagnostic information, the most common being vibration testing (displacement, velocity or acceleration measurement). The fault diagnosis flowchart of a mechanical transmission system based on vibration signal processing technology is shown in Fig. 1. The characteristic signal analysis of the vibration signal collected by the system is carried out to extract parameters of the characteristic signal in the time domain, frequency domain and time-frequency domain. State recognition technology is then used to determine the state mode, and the final decision is made by the diagnostic decision layer. The extraction of characteristic signals is the basis for accurate state pattern recognition and fault diagnosis decisions, and is important for the real-time monitoring and fault diagnosis of mechanical transmission systems.

Fig. 1. Condition monitoring and fault diagnosis of the transmission system

In general, the main failure modes of a gear during operation are tooth surface wear, tooth surface gluing, tooth surface contact fatigue, tooth surface bending, plastic deformation, crack deformation and tooth fracture. During operation, frictional wear occurs between the two meshing gears. When lubrication is poor or tiny particles enter the meshing area, abrasive wear occurs on the tooth surface. Corrosive wear can also occur when the gear is exposed to electrochemical corrosion such as acid in the lubricant or external rainwater. Uniform wear generally does not have a great impact on the operation of the gear, and the characteristics of the fault signal are generally not obvious; in the spectrum of the vibration signal, the gear meshing frequency and the amplitudes of its higher harmonic components generally increase. Gears usually operate under high speed or heavy load. When the gear suffers severe friction and wear during operation and the local pressure increases, the tooth surfaces weld and then tear apart, which may cause gluing of the tooth surface. When gluing occurs, rough strip-shaped grooves of different depths and widths are generally found on the tooth surface, and the running noise of the whole machine increases significantly. It should be noted that a new gear may also show tooth-surface gluing during its initial running-in period, and the noise signal increases at that time as well. When periodic stresses such as shear stress and bending stress are too high during operation, the gear deforms, including plastic deformation and crack deformation. When a crack gradually expands and the remaining connecting portion of the gear cannot withstand the running load, tooth breakage occurs. According to its nature, tooth breakage is divided into overload fracture, local plastic deformation fracture, bending fatigue fracture and so on. As the gear progresses from crack initiation to tooth fracture, the signal exhibits obvious, gradually strengthening periodic impact characteristics in the time domain, and the impact frequency is consistent with the gear rotation frequency. Based on the above analysis, the vibration signal of a faulty gear appears as a periodic shock pulse signal in the time domain. In the frequency domain, it mainly manifests as a carrier signal composed of the gear meshing frequency and its harmonics, plus amplitude- and frequency-modulation components with the gear rotation frequency as the fundamental frequency. The power spectrum energy and the envelope energy of the vibration signal are then increased compared with the fault-free case. When the gear is fault-free, the time-domain signal appears continuous rather than burst-like. In the early stage of a fault, irregular burst components gradually appear in the time-domain signal; as the fault deteriorates, regular burst components related to the rotational speed gradually appear.

3 Gear Fault Feature Extraction Methods Based on Signal Processing

In order to extract state characteristic information from the complex gearbox vibration signal, the vibration signal needs to be properly analyzed and processed; vibration signal processing has therefore become the key technology for gearbox fault diagnosis and has accumulated rich research results. The vibration signal generated during the operation of the equipment is sensitive to real-time changes of the operating state, and its characteristics correspond strongly to the fault status of each component inside the equipment. Fault feature extraction can be understood as a mapping from signal space to feature space, and its effectiveness directly affects the accuracy of the final intelligent diagnosis. Fault feature extraction therefore first requires a full understanding of the fault mechanism and vibration characteristics of the diagnosed equipment; on this basis, appropriate signal processing methods are used to extract features that characterize the operating state of the equipment [7]. Typical signal processing methods include time domain analysis, frequency domain analysis, the wavelet transform, the Hilbert transform, and the like. Using these methods to analyze the measured vibration signals, multi-dimensional and multi-class fault characteristics can be obtained.

3.1 Short-Time Fourier Transform (STFT)

The concept of the short-time Fourier transform (STFT) was first proposed by Gabor in 1946 and was later refined into a complete method [8], which brought time-frequency analysis to a practical level. Its basic idea is to localize the analysis in time through a window function. The method is a development of FFT technology and is currently widely used in time-varying signal analysis. The STFT segments the signal in the time domain, applies the Fourier transform to each segment and computes its spectrum; the time-varying characteristics of the signal can then be seen from the spectra of the successive segments. However, the STFT is equivalent to segmenting and windowing the signal along the time axis to analyze how the signal frequency varies with time. According to the uncertainty principle, the time resolution and the frequency resolution of the analysis cannot be improved simultaneously. Moreover, the STFT treats the signal within each window as quasi-stationary, so it is only suitable for classical spectrum analysis. The spectral resolution is therefore limited to some extent, and more efficient signal feature extraction methods are needed.
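As a minimal illustration of this windowed analysis for a vibration record, the sketch below computes an STFT spectrogram with SciPy; the sampling rate, window length and the synthetic test signal are illustrative assumptions, not values taken from the surveyed works.

```python
import numpy as np
from scipy import signal

fs = 12_000                      # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# synthetic "gear" signal: a meshing tone plus a short transient burst
x = np.sin(2 * np.pi * 800 * t)
x[6000:6050] += 3 * np.sin(2 * np.pi * 3000 * t[6000:6050])

# segment the signal with a Hann window and Fourier-transform each segment
f, tt, Zxx = signal.stft(x, fs=fs, window='hann', nperseg=256, noverlap=192)
spectrogram = np.abs(Zxx)        # |STFT|: time-frequency magnitude map

# a simple feature: the frequency bin with maximum energy in each time slice
dominant_freq = f[np.argmax(spectrogram, axis=0)]
print(spectrogram.shape, dominant_freq[:5])
```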

3.2 Autoregressive Moving Average (ARMA)

Fourier-transform-based analysis has many defects, mainly low spectral resolution, spectral distortion, unsuitability for short data records, random fluctuation and unevenness of the estimated spectrum. The ARMA time-series model is a widely used modern spectral analysis method. It describes the signal with a linear predictive time-series model. The spectrum obtained with an ARMA model is smoother than the FFT spectrum, the spectral resolution is higher, and the number of data points required is modest. The AR model, a special case of the ARMA model driven only by white noise, is more widely used in practice. In the AR model the signal can be regarded as the output of a filter driven by white noise, so the linear prediction coefficients can also be regarded as features of the signal. Modern spectral analysis extracts features by establishing a parametric model of the characteristic signal and is widely used in signal processing practice [9–13]. Although the ARMA time-series model can be applied to the feature extraction of some nonlinear and non-stationary vibration signals, its main defects are complex modeling, sensitivity to the signal-to-noise ratio, and the contradiction between order selection and computational cost.
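As a sketch of such parametric (AR) feature extraction, the code below estimates AR coefficients from the signal's autocorrelation by solving the Yule-Walker equations; the model order and the test signal are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_coefficients(x, order=8):
    """Estimate AR(p) coefficients via the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    # biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)]) / len(x)
    # solve the Toeplitz system R a = [r1, ..., rp]
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])
    noise_var = r[0] - np.dot(a, r[1:])      # driving-noise variance
    return a, noise_var

# the AR coefficients (and noise variance) can serve as fault features
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 0.05 * np.arange(2000)) + 0.3 * rng.standard_normal(2000)
coeffs, var = ar_coefficients(sig, order=8)
print(coeffs.round(3), round(var, 4))
```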

3.3 Cohen-Type Distribution

The time-frequency two-dimensional distribution of a signal can be used to describe how the amplitude-frequency characteristics of a non-stationary signal vary with time. The Wigner distribution and the Choi-Williams distribution belong to the Cohen class of distributions. Features such as signal frequency, power spectral density, energy and group delay are easily obtained from the Wigner distribution. The time-frequency distribution is another line of development of time-frequency analysis, in which the Wigner-Ville distribution is the most successful application. In 1932, Wigner first proposed the theoretical model of the distribution, and in 1948 Ville introduced it into signal analysis. This time-frequency analysis method has greatly promoted the analysis of non-stationary signals [14, 15]. The Wigner distribution is widely used in the analysis of non-stationary signals and has attracted much attention in discrete instantaneous frequency estimation, random signal analysis and signal filtering. However, because of the cross terms on the Wigner distribution plane, multi-component signals are interlaced with noise, so it is difficult to filter them properly, which limits its application to some extent. After proper processing, however, auto terms and cross terms can be separated, enabling two-dimensional feature extraction with inherent noise suppression characteristics [16, 17].
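A compact discrete Wigner-Ville distribution can be written directly from its definition (an instantaneous autocorrelation followed by an FFT over the lag variable); the sketch below uses the analytic signal to reduce aliasing and is meant only to illustrate where the cross terms come from, not as an optimized or reference implementation.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete Wigner-Ville distribution (rows: frequency bins, cols: time)."""
    z = hilbert(np.asarray(x, dtype=float))    # analytic signal
    n = len(z)
    wvd = np.zeros((n, n))
    for t in range(n):
        lag = min(t, n - 1 - t)                # largest symmetric lag at this time
        taus = np.arange(-lag, lag + 1)
        # instantaneous autocorrelation r(t, tau) = z(t+tau) * conj(z(t-tau))
        r = z[t + taus] * np.conj(z[t - taus])
        buf = np.zeros(n, dtype=complex)
        buf[taus % n] = r                      # negative lags wrap to the end
        wvd[:, t] = np.fft.fft(buf).real
    return wvd

# two-component signal: the WVD contains both auto terms and a cross term between them
t = np.arange(256)
sig = np.cos(2 * np.pi * 0.1 * t) + np.cos(2 * np.pi * 0.3 * t)
print(wigner_ville(sig).shape)
```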

3.4 Wavelet Transform (WT)

The wavelet transform (WT) is very popular in mechanical vibration signal processing and fault detection. It has multi-resolution analysis capability and is well suited to detecting transient vibration features. It also performs well in reducing Gaussian noise and can achieve near-optimal noise suppression while preserving the characteristic signal. However, how to select the optimal wavelet basis for a particular waveform and choose matching shape parameters and scale factors for a particular operating condition remains a difficult problem. In general, wavelet-based signal analysis methods fall into wavelet decomposition and wavelet filtering. Wavelet transform methods based on singularity, wavelet coefficients, wavelet energy and wavelet functions can be used effectively for feature extraction of various faults. The literature [18] introduced a continuous wavelet transform method using Morlet basis functions. In [19], a WT method using an impulse-response basis function instead of the Morlet function was proposed, showing better applicability for extracting gearbox fault characteristics. In [20], a wavelet coefficient function synthesized from a weighted Shannon function was studied, which can effectively enhance the signal characteristics for the optimal wavelet waveform factor. In [21], a wavelet packet transform (WPT) method using wavelet packet coefficients as features was introduced to refine the wavelet multi-resolution analysis, and it was applied effectively to bearing fault diagnosis. In [22], an adaptive network analysis method based on a fuzzy inference system was studied to select the best wavelet packet as the fault feature; compared with other algorithms, this method significantly enhances the fault characteristics of low-speed bearings. In [23], an autocorrelation enhancement algorithm was combined with an optimal Morlet wavelet filter to eliminate interfering frequencies, reduce residual in-band noise and highlight the periodic pulse characteristics. On the other hand, although WT and WPT have achieved fruitful results in vibration analysis and fault diagnosis of rotating machinery, they still have shortcomings. Because WT and WPT are sensitive to translations of the input signal, the recognition performance for periodic transient characteristics is degraded [24, 25]; there is also a relative spectral energy leakage effect between wavelet sub-bands [26, 27]. In summary, wavelet analysis provides multi-resolution analysis and signal decomposition into independent frequency bands, which is suitable for detecting transient local features of the signal. With its multi-resolution property, basis functions of different resolutions and scales provide a new way to detect machine faults. However, because different faults produce different dynamic responses, and even the same type of fault can produce different characteristics in different machine configurations, wavelet analysis makes it difficult to capture the relevant features of a composite fault with a single fixed wavelet basis function. In addition, wavelet analysis still suffers from information loss and energy leakage. Wavelet analysis is a variable-scale time-frequency analysis method: by using different time-window functions, scale operators are used instead of frequency-shift operators; a narrow time window is adopted for the high-frequency band and a wide time window for the low-frequency band, which enables accurate estimation of both the short-duration high-frequency components and the slowly varying low-frequency components of the signal [28–30].
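As a sketch of wavelet-based feature extraction, the snippet below uses the PyWavelets package (an assumption — the surveyed works rely on various wavelet implementations) to decompose a signal and summarize each sub-band by its relative energy; the wavelet name, decomposition level and test signal are illustrative choices.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_energy_features(x, wavelet='db4', level=4):
    """Relative energy of each sub-band from a discrete wavelet decomposition."""
    coeffs = pywt.wavedec(x, wavelet, level=level)    # [cA_L, cD_L, ..., cD_1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()                  # normalized energy vector

rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * 0.02 * np.arange(4096)) + 0.5 * rng.standard_normal(4096)
print(wavelet_energy_features(sig).round(3))
```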

3.5 Hilbert-Huang Transform (HHT)

The Hilbert-Huang transform (HHT) is an adaptive time-frequency analysis method proposed by Norden E. Huang in 1998 [31]. It performs an adaptive decomposition according to the local time-varying characteristics of the signal itself, which not only yields high time-frequency resolution but also good time-frequency concentration; it is therefore well suited to the analysis of non-stationary and nonlinear signals. The HHT first performs empirical mode decomposition (EMD) on the original signal to obtain several intrinsic mode functions (IMFs); the Hilbert transform is then applied to each IMF to obtain the instantaneous frequency and amplitude of each mode component [32]. The HHT has been successfully applied to the condition monitoring and fault diagnosis of mechanical equipment [32–34]. Owing to its good adaptability, orthogonality and completeness, the EMD-based HHT is increasingly applied in practice. Xu used the HHT to diagnose rolling bearings and proposed a signal analysis method combining an improved wavelet thresholding method with the HHT [35]. Li analyzed the Hilbert marginal spectrum and energy spectrum of a gear crack fault and established the relationship between the fault and the spectra [36]. Compared with the wavelet transform, EMD adaptively decomposes a signal into several IMFs according to the signal's own characteristics, and the decomposed IMF components reflect the variation of the signal on different time scales. By performing Hilbert spectral analysis on the IMF components, the interference of cross terms can be effectively avoided. For the analysis of nonlinear and non-stationary signals, the HHT is better than the short-time Fourier transform and wavelet analysis. Although EMD performs well for nonlinear signals, it still has deficiencies such as modal aliasing (mode mixing). To alleviate this problem, the literature [37] proposed ensemble empirical mode decomposition (EEMD), a noise-assisted data analysis method that adds white noise to the analyzed signal. Based on EEMD and the HHT, the literature [38] proposes a method that effectively highlights weak fault characteristics, and the literature [39] combines EEMD with wavelet analysis to improve the accuracy of fault diagnosis (Fig. 2).
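The Hilbert step of the HHT is easy to sketch once the IMFs are available: the analytic signal of each IMF gives an instantaneous amplitude and frequency. In the minimal sketch below, a synthetic chirp stands in for an IMF produced by EMD/EEMD, and the sampling rate is an assumed value.

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_inst(imf, fs):
    """Instantaneous amplitude and frequency of one IMF via the analytic signal."""
    z = hilbert(imf)
    amplitude = np.abs(z)
    phase = np.unwrap(np.angle(z))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)   # Hz, length N-1
    return amplitude, inst_freq

# a single synthetic mono-component "IMF": a chirp whose frequency rises 20 -> 80 Hz
fs = 1000
t = np.arange(0, 2, 1 / fs)
imf = np.sin(2 * np.pi * (20 * t + 15 * t ** 2))
amp, freq = hilbert_inst(imf, fs)
print(amp.mean().round(3), freq[[0, -1]].round(1))  # frequency sweeps upward
```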

Fig. 2. Bearing fault diagnosis method based on EMD and singular value difference spectrum theory (processing chain: original signal → EMD decomposition → difference spectrum theory denoising → reconstructed signal spectrum → fault identification)

4 Comparison of Various Signal Processing Based Gear Fault Feature Extraction Methods

For a mechanical transmission system, the frequency components contained in the collected vibration signals are very complicated, and the energy of the vibration generated by the normal operation of each component is dominant. The fault signal is generally weak against a strong noise background. The vibration signal is then obviously time-varying and non-stationary, and its spectrum changes significantly with time. Furthermore, different non-stationary characteristics also indicate different fault forms, especially under working conditions with speed fluctuations. Because the traditional Fourier-transform-based signal processing methods [40, 41] can only analyze the statistical average of the signal, it is difficult to identify local early faults directly from them; non-stationary signal processing methods are better suited to analyzing vibration signals with non-stationary characteristics and judging the fault source. The application of non-stationary signal processing technology has greatly promoted the development of fault diagnosis technology, from the STFT [42] to adaptive time-frequency analysis [43], and from wavelet and wavelet packet analysis [44] to the Hilbert-Huang transform [45–47]; non-stationary signal processing methods are improving steadily. The characteristics of the commonly used signal processing methods are listed in Table 1.

Table 1. List of the popular signal processing algorithms

Fourier transform
- Advantages: realizes the conversion of the signal from the time domain to the frequency domain, enabling spectral analysis of the amplitude spectrum, phase spectrum and power spectrum.
- Disadvantages: lacks time and frequency localization capability; limited for non-stationary signals; limited resolution.

Short-time Fourier transform (STFT)
- Advantages: windows the signal segment by segment so that a non-stationary signal is treated as a superposition of quasi-stationary pieces, giving a time-frequency representation of the signal.
- Disadvantages: time-frequency resolution is constrained by the uncertainty principle; windowing causes energy leakage.

Wigner-Ville distribution
- Advantages: better frequency resolution and no energy leakage.
- Disadvantages: cross-interference terms appear, which hinders accurate identification.

Wavelet transform (WT)
- Advantages: adaptive window; better time-domain and frequency-domain resolution.
- Disadvantages: boundary distortion; the mother wavelet is not unique and is difficult to choose.

Hilbert-Huang transform (HHT)
- Advantages: high time-domain resolution; effective extraction of low-frequency signal components; suitable for low-frequency non-stationary signal processing.
- Disadvantages: the termination criteria for intrinsic mode separation vary; end effects exist.

5 Conclusion

Based on a brief introduction to the gear failure mechanism, this paper reviews the characteristics of various signal processing methods in gear fault diagnosis. For the windowed Fourier transform, once the window function is determined the time-frequency window is fixed, which is not conducive to analyzing different frequency components of the signal with different degrees of refinement. Although the Wigner-Ville distribution overcomes the shortcomings of the windowed Fourier transform to a certain extent and has good time-frequency resolution, it uses a bilinear rather than a linear transformation; when analyzing multi-component signals, cross-interference terms inevitably appear, and the various methods for suppressing them come at the expense of time-frequency resolution. As a classical decomposition method suitable for nonlinear and non-stationary signals, EMD can effectively separate different feature information from complex signals. However, the EMD algorithm still has problems that need further research and improvement, including its orthogonality, its tendency to cause overshoot and undershoot, modal aliasing and others. These problems have greatly affected the application of the Hilbert-Huang transform and should be further improved.

References

1. Mallat S, Zhang S (1992) Characterization of signals from multiscale edges. IEEE Trans Pattern Anal Mach Intell 14(7):710–732
2. Mallat S, Huang W (1992) Singularity detection and processing with wavelet. IEEE Trans Inf Theory 38(2):617–643
3. Xu YJ, Waver B, Healy DM et al (1994) Wavelet transform domain filters: a spatially selective noise filtration technique. IEEE Trans IP 3(6):747–758
4. Zhang YW, Zhang Y (2010) Fault detection of non-Gaussian processes based on modified independent component analysis. Chem Eng Sci 65:4630–4639
5. Qu LS, Chen YD, Liu X (1989) Discovering the holospectrum. J Noise Vib Control Worldwide, pp 58–62
6. Qu LS, Chen YD, Liu XT (1989) The holospectrum—a new method for rotor surveillance and diagnosis. J Mech Syst Sig Process 3(3):255–267
7. Lei YG, Lin J, Han D et al (2014) An enhanced stochastic resonance method for weak feature extraction from vibration signals in bearing fault detection. Proc Inst Mech Eng Part C: J Mech Eng Sci 228(5):815–827
8. Cvetkovic Z (2000) On discrete short-time Fourier analysis. IEEE Trans Sig Process 48(9):2628–2640
9. Jiang XQ, Kiagawa G (1993) A time varying coefficient vector AR modeling of nonstationary covariance time series. Sig Process 33(3):315–331
10. Zheng Y, Tay DBH, Lin Z (2001) Modeling general distributed nonstationary process and identifying time-varying autoregressive system by wavelets: theory and application. Sig Process 91(9):1823–1848
11. Velez EF, Absher R (1992) Parametric modeling of the Wigner half-kernel and its application to spectral estimation. Sig Process 26(2):161–175
12. Shen F, Zheng M, Shi DF et al (2003) Using the cross-correlation technique to extract modal parameters on response-only data. J Sound Vib 259(5):1163–1179
13. Martin RJ (1999) Auto-regression and irregular sampling: spectral estimation. Sig Process 77(2):139–157
14. Zou J, Chen J (2004) A comparative study on time-frequency feature of cracked rotor by Wigner-Ville distribution and wavelet transform. J Sound Vib 276(1):1–11
15. Baydar N, Ball A (2001) A comparative study of acoustic and vibration signals in detection of gear failures using Wigner-Ville distribution. Mech Syst Sig Process 15(6):1091–1107
16. Pei S-C, Yang I-I (1990) High resolution Wigner distribution using chirp z-transform analysis. Sig Process 39(7):1699–1702
17. Chandra Sekhar S, Sreenivas TV (2003) Adaptive spectrogram versus adaptive pseudo-Wigner-Ville distribution for instantaneous frequency estimation. Sig Process 83(7):1529–1543
18. Tiwari M, Gupta K, Prakash O (2011) Dynamic response of an unbalanced rotor supported on ball bearings. Exp Syst Appl 38(6):7334–7341
19. Junsheng C, Dejie Y, Yu Y (2007) Application of an impulse response wavelet to fault diagnosis of rolling bearings. Mech Syst Sig Process 21(2):920–929
20. Liu J, Wang W, Golnaraghi F et al (2008) Wavelet spectrum analysis for bearing fault diagnostics. Meas Sci Technol 19(1):15105
21. Djebala A, Ouelaa N, Hamzaoui N (2008) Detection of rolling bearing defects using discrete wavelet analysis. Meccanica 43(3):339–348
22. Mathew AJ (2001) Multiple band-pass autoregressive demodulation for rolling element bearing faults. Mech Syst Sig Process 15(5):963–977
23. Su W, Wang F, Zhu H et al (2010) Rolling element bearing faults diagnosis based on optimal Morlet wavelet filter and autocorrelation enhancement. Mech Syst Sig Process 24(5):1458–1472
24. Antoni J (2007) Fast computation of the kurtogram for the detection of transient faults. Mech Syst Sig Process 21(1):108–124
25. Kingsbury N (2001) Complex wavelets for shift invariant analysis and filtering of signals. Appl Comput Harmonic Anal 10(3):234–253
26. Bao W, Zhou R, Yang J et al (2009) Anti-aliasing lifting scheme for mechanical vibration fault feature extraction. Mech Syst Sig Process 23(5):1458–1473
27. Peng ZK, Jackson MR, Rongong JA et al (2009) On the energy leakage of discrete wavelet transform. Mech Syst Sig Process 23(2):330–343
28. Wang WJ, McFadden PD (1995) Application of orthogonal wavelets to early gear damage detection. Mech Syst Sig Process 9(5):497–507
29. Saravanan N, Ramachand R (2016) Incipient gear box fault diagnosis using discrete wavelet transform for feature extraction and classification using artificial neural network. Exp Syst Appl 37(6):4168–4181
30. Rafiee J, Rafiee MA, Tse PW (2010) Application of mother wavelet functions for automatic gear and bearing fault diagnosis. Exp Syst Appl 37(6):4568–4579
31. Huang NE, Shen Z, Long SR (1998) The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc R Soc Lond A454:903–995
32. Yang JN, Lei Y, Lin S (2004) Hilbert-Huang based approach for damage detection. J Eng Mech (ASCE) 130(1):85–95
33. Yan R, Gao RX (2006) Hilbert-Huang transform-based vibration signal analysis for machine health monitoring. IEEE Trans Instrum Measur 55:2320–2329
34. Pines D, Salvino L (2002) Health monitoring of one dimensional structure using empirical mode decomposition and the Hilbert-Huang transform. Proc SPIE 4071:127–143
35. Xu L (2012) Study on fault detection of rolling element bearing based on translation-invariant denoising and Hilbert-Huang transform. J Comput 7(5):1142–1146
36. Li H, Zheng H (2005) Hilbert-Huang transform and its application in gear faults diagnosis. In: Key Engineering Materials, pp 291–292
37. Wu Z, Huang NE (2009) Ensemble empirical mode decomposition: a noise-assisted data analysis method. Adv Adaptive Data Anal 1(1):1–41
38. An X, Jiang D, Li S et al (2011) Application of the ensemble empirical mode decomposition and Hilbert transform to pedestal looseness study of direct-drive wind turbine. Energy 36(9):5508–5520
39. Lu X, Wang J (2011) Bearing fault diagnosis based on redundant second generation wavelet denoising and EEMD
40. Bracewell RN (2000) The Fourier transform and its applications. McGraw-Hill, Boston
41. Blough JR (2003) Development and analysis of time variant discrete Fourier transform order tracking. Mech Syst Sig Process 17(6):1185–1191
42. Meltzer G, Ivanov YY (2003) Fault detection in gear drives with non-stationary rotational speed—Part I: the time-frequency approach. Mech Syst Sig Process 17(5):1033–1047
43. Neild SA, McFadden PD, Williams MS (2003) A review of time-frequency methods for structural vibration analysis. Eng Struct 25:713–728
44. Donoho DL (1995) De-noising by soft-thresholding. IEEE Trans Inf Theory 41(3):613–627
45. Huang NE, Shen Z, Long SR et al (1998) The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc R Soc Lond 454(1):903–995
46. Huang NE, Shen Z, Long SR (1999) A new view of nonlinear water waves: the Hilbert spectrum. Annu Rev Fluid Mech 31:417–457
47. Yu DJ, Cheng JS, Yang Y (2005) Application of EMD method and Hilbert spectrum to the fault diagnosis of roller bearing. Mech Syst Sig Process 19(2):259–270

Hyperspectral Image Classification Based on Bidirectional Gated Recurrent Units

Yong Liu, Hongchang He, Xiaofei Wang, Yu Wang, and Runxing Chen

School of Electronic Engineering, Heilongjiang University, Harbin 150080, China
[email protected]

Abstract. Hyperspectral image classification methods based on recurrent neural networks (RNNs) regard the spectral values of all bands of each pixel as a spectral sequence. However, a one-way RNN can only use the current input and past memory states, not future ones, and the RNN itself suffers from severe gradient vanishing. In this paper, bidirectional gated recurrent units (BiGRU) are used for the classification of hyperspectral images. The bidirectional structure can not only integrate past and future memory states but also alleviate the gradient vanishing problem of the RNN to a certain extent, and the proposed method obtains better classification performance.

Keywords: Hyperspectral image classification · Bi-directional recurrent neural network (BiRNN) · Gated recurrent units (GRU)

1 Introduction

In recent years, many vector-based machine learning methods for hyperspectral image classification have emerged. Typical vector-based classifiers include SVM, SAE [1], DBN [2] and 1-D CNN [3, 4]; they combine the spectral values of each pixel into a high-dimensional feature column vector. Inspired by these methods, the authors of [5] instead considered the per-pixel spectrum directly as a sequence and, in 2017, proposed a hyperspectral image classification method based on an RNN for the first time. However, when a standard RNN processes sequence data, it only considers the memory between the current output and past states, ignoring the connection between future states and the current input. Therefore, we propose a hyperspectral image classification method based on BiGRU. The GRU, as one of the variants of the RNN, alleviates the long-term dependency and gradient vanishing problems during training, while the bidirectional RNN can simultaneously integrate the future and past memories of the spectral sequence and extract deeper spectral features.


2 Background of Theory

2.1 Bidirectional Recurrent Neural Network

In a BiRNN, each training sequence is processed both forward and backward by two separate RNNs, and both are connected to the same output layer. This structure provides complete past and future context information for every point of the input sequence at the output layer. Figure 1 shows part of a bidirectional recurrent neural network unrolled along time.

Fig. 1. The partial chain structure of BiRNN

For the hidden layer of a BiRNN, the forward calculation is the same as for a single RNN, except that the input sequence is presented in opposite directions to the two hidden layers, and the output layer is not updated until both hidden layers have processed the entire input sequence. Given a sequence $X = (X_1, X_2, \ldots, X_d)$, the formulas are as follows:

Forward: $F_i = f(U^f \cdot X_i + W^f \cdot F_{i-1} + b^f)$   (1)

Backward: $B_i = f(U^b \cdot X_i + W^b \cdot B_{i+1} + b^b)$   (2)

Output: $y_i = \mathrm{softmax}(V \cdot [F_i; B_i] + b)$   (3)

where $b$ is the bias term, $f$ is the activation function, $U$, $W$ and $V$ are weight matrices, and $F_i$ and $B_i$ are the two hidden memory states; the output $y_i$ is determined by both memory states.

2.2 Gated Recurrent Units

Building on the original RNN, a reset gate and an update gate are added to each hidden unit of the GRU, namely the r switch and the z switch in Fig. 2; the structure of such a hidden unit is shown in Fig. 2. The calculation formulas are as follows:

Fig. 2. A hidden unit structure of GRU

$h_i = (1 - z_i)h_{i-1} + z_i\tilde{h}_i$   (4)

$z_i = \sigma(W_z \cdot X_i + U_z \cdot h_{i-1})$   (5)

$\tilde{h}_i = \tanh(W \cdot X_i + U \cdot (r_i \odot h_{i-1}))$   (6)

$r_i = \sigma(W_r \cdot X_i + U_r \cdot h_{i-1})$   (7)

where $\sigma$ is the sigmoid function. The GRU has fewer parameters to learn than other RNN variants and handles long sequence data well. Moreover, PReLU, a newer activation function, is introduced into our GRU, allowing us to use fairly high learning rates without the risk of divergence.
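A direct NumPy transcription of Eqs. (4)–(7) for a single GRU step is sketched below; the state dimension, the random weights and the use of one spectral value per time step are illustrative assumptions.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_i, h_prev, W, U, Wz, Uz, Wr, Ur):
    """One GRU update following Eqs. (4)-(7)."""
    z = sigmoid(Wz @ x_i + Uz @ h_prev)            # update gate, Eq. (5)
    r = sigmoid(Wr @ x_i + Ur @ h_prev)            # reset gate,  Eq. (7)
    h_tilde = np.tanh(W @ x_i + U @ (r * h_prev))  # candidate,   Eq. (6)
    return (1 - z) * h_prev + z * h_tilde          # new state,   Eq. (4)

rng = np.random.default_rng(0)
d_in, d_h = 1, 8                                   # one spectral value per step (assumption)
params = [rng.standard_normal(s) * 0.1
          for s in [(d_h, d_in), (d_h, d_h)] * 3]  # W, U, Wz, Uz, Wr, Ur
h = np.zeros(d_h)
for band_value in rng.standard_normal(220):        # walk over a 220-band spectrum
    h = gru_step(np.array([band_value]), h, *params)
print(h.round(3))
```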

3 Method Based on Bidirectional Gated Recurrent Units

Our hyperspectral image classification framework based on BiGRU is shown in Fig. 3. The input of the network is the spectral sequence of a pixel; the output is a label indicating the category to which the pixel belongs.

Fig. 3. Flowchart of the proposed BiGRU model
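A minimal PyTorch sketch of such a per-pixel BiGRU classifier is given below. The hidden size, the use of the final forward/backward hidden states as the feature vector, and the placement of the PReLU activation are assumptions, since the paper does not list exact hyperparameters.

```python
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    """Per-pixel spectral-sequence classifier (illustrative hyperparameters)."""
    def __init__(self, n_classes=16, hidden_size=128):
        super().__init__()
        # each band value is one step of a length-B sequence with a single feature
        self.bigru = nn.GRU(input_size=1, hidden_size=hidden_size,
                            batch_first=True, bidirectional=True)
        self.act = nn.PReLU()                        # PReLU as mentioned in Sect. 2.2
        self.fc = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, spectra):                      # spectra: (batch, bands)
        x = spectra.unsqueeze(-1)                    # -> (batch, bands, 1)
        _, h_n = self.bigru(x)                       # h_n: (2, batch, hidden)
        feat = torch.cat([h_n[0], h_n[1]], dim=1)    # final forward + backward states
        return self.fc(self.act(feat))               # class logits

# example: a batch of 8 Indian Pines pixels, 220 bands each
logits = BiGRUClassifier()(torch.randn(8, 220))
print(logits.shape)                                  # torch.Size([8, 16])
```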


4 Experimental Result and Discussion

In our study, a classical hyperspectral dataset was adopted for validation: a mixed vegetation site over the Indian Pines test area. Three commonly used evaluation criteria for hyperspectral image classification were adopted, namely overall accuracy (OA), the kappa coefficient and average accuracy (AA).
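For reference, OA, AA and the kappa coefficient can all be computed from the confusion matrix; a small NumPy sketch with a made-up 3-class confusion matrix is shown below.

```python
import numpy as np

def oa_aa_kappa(confusion):
    """Overall accuracy, average accuracy and Cohen's kappa from a confusion matrix."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    oa = np.trace(confusion) / total                        # overall accuracy
    per_class = np.diag(confusion) / confusion.sum(axis=1)  # recall of each class
    aa = per_class.mean()                                   # average accuracy
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2
    kappa = (oa - expected) / (1 - expected)                # chance-corrected agreement
    return oa, aa, kappa

cm = [[50, 2, 3],
      [5, 40, 5],
      [2, 3, 45]]   # rows: true class, columns: predicted class (illustrative numbers)
print([round(v, 4) for v in oa_aa_kappa(cm)])
```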

4.1 Data Description

Indian Pines: the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) imaged the Indian Pines test site in Indiana, USA in 1992, and a 145 × 145 subset was cropped and labeled as a hyperspectral image classification benchmark. The AVIRIS sensor continuously images ground objects in 220 wavebands. There are 21,025 pixels in total in these data, but only 10,249 of them are labeled ground-object pixels. The ground objects and their true labels are shown in Fig. 4 and Table 1.

4.2 Comparison Based on Vector Classification Methods

In order to show that the proposed method has better classification performance on hyperspectral data, we compared four vector-based classification models: SVM, SGD, 1-D CNN and RNN. The SVM uses an RBF kernel with cross-validation. All algorithms use 30% of the ground-object pixels of each dataset as the training set and the remaining 70% as the test set.
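A sketch of this experimental protocol (a stratified 30%/70% train/test split of the labeled pixels and an RBF-kernel SVM baseline tuned by cross-validation) is shown below with scikit-learn; the placeholder data and the hyperparameter grid are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

# X: (n_labeled_pixels, n_bands) spectra, y: class labels 1..16 (placeholder data here)
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 220))
y = rng.integers(1, 17, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.3,
                                          stratify=y, random_state=0)

# RBF-kernel SVM baseline with cross-validated hyperparameters
grid = GridSearchCV(SVC(kernel='rbf'),
                    {'C': [1, 10, 100], 'gamma': ['scale', 0.01, 0.001]},
                    cv=3)
grid.fit(X_tr, y_tr)
print('test OA:', round(grid.score(X_te, y_te), 4))
```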

Fig. 4. Classification maps obtained by different methods for the Indian Pines dataset: (a) ground truth, (b) RGB composite, (c) SGD, (d) SVM, (e) 1-D CNN, (f) RNN, (g) BiGRU


Table 1. Accuracy for Indian Pines dataset

Class No. | Class name | SGD | SVM-RBF | 1-D CNN | RNN | BiGRU
1 | Alfalfa | 21.6 | 27.0 | 54.5 | 75.9 | 67.9
2 | Corn-notill | 73.7 | 58.2 | 63.4 | 69.9 | 80.4
3 | Corn-min | 57.8 | 55.9 | 64.9 | 66.6 | 64.8
4 | Corn | 33.3 | 34.9 | 56.0 | 56.3 | 57.8
5 | Grass-pasture | 75.2 | 90.7 | 89.2 | 85.6 | 90.5
6 | Grass-trees | 91.2 | 92.0 | 91.5 | 90.5 | 94.9
7 | Grass-pasture-mowed | 18.2 | 85.7 | 79.1 | 72.2 | 75.0
8 | Hay-windrowed | 93.7 | 95.4 | 96.5 | 95.7 | 97.0
9 | Oats | 0.0 | 0.0 | 13.3 | 47.6 | 56.0
10 | Soybean-notill | 62.4 | 65.2 | 69.7 | 70.5 | 75.2
11 | Soybean-mintill | 68.9 | 72.2 | 75.7 | 77.3 | 80.0
12 | Soybean-clean | 42.5 | 60.5 | 70.9 | 63.8 | 77.0
13 | Wheat | 94.3 | 94.5 | 91.4 | 98.6 | 97.2
14 | Woods | 91.5 | 93.9 | 93.6 | 87.7 | 94.2
15 | Building-grass-trees | 63.1 | 58.7 | 60.3 | 64.3 | 72.2
16 | Stone-steel-towers | 91.7 | 69.7 | 95.2 | 79.2 | 93.2
OA (%) | | 72.85 | 73.85 | 77.13 | 76.75 | 82.04
AA (%) | | 61.19 | 65.91 | 72.82 | 75.11 | 79.58
Kappa | | 0.695 | 0.697 | 0.738 | 0.736 | 0.795

In Fig. 4, from the perspective of visual analysis, SGD, SVM and 1-D CNN all show obvious block-like classification confusion, with ground objects seriously mixed with each other, as in the upper left of the figure. For BiGRU there is only spotty confusion, the overall classification map is cleaner, and many classes show no confusion at all. According to the classification results in Table 1, BiGRU improves the classification accuracy on Indian Pines: OA and AA rise by nearly 5% compared with RNN, and by about 10% compared with SGD. In particular, traditional machine learning algorithms cannot distinguish Oats from other ground objects at all, and the accuracy of the CNN on this class is also low, whereas BiGRU reaches 56%, a significant improvement over the other methods. The classification accuracy of most ground objects can reach more than 90%.


5 Conclusion

In this paper, we propose a hyperspectral image classification algorithm based on BiGRU. Compared with other vector-based methods on the same input data, our method adopts an RNN architecture that is specialized for processing sequence data and can extract deeper spectral features. Compared with a unidirectional RNN, the final prediction of the BiGRU combines past and future memory states. In terms of both theory and results, our method classifies better than the other vector-based classification methods, and the classification maps are more accurate.

Acknowledgements. This work was supported by the National Key R&D Program of China (2016YFB0502502) and the National Natural Science Foundations of China (61871150).

References

1. Tao C, Pan H, Li Y, Zou Z (2015) Unsupervised spectral-spatial feature learning with stacked sparse autoencoder for hyperspectral imagery classification. IEEE Geosci Remote Sens Lett 12(12):2438–2442
2. Chen Y, Zhao X, Jia X (2015) Spectral–spatial classification of hyperspectral data based on deep belief network. IEEE J Selected Top Appl Earth Observ Remote Sens 8(6):2381–2392
3. Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. In: Proc NIPS, pp 1097–1105
4. Chen Y, Jiang H, Li C, Jia X, Ghamisi P (2016) Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans Geosci Remote Sens 54(10):6232–6251
5. Mou L, Ghamisi P, Zhu XX (2017) Deep recurrent neural networks for hyperspectral image classification. IEEE Trans Geosci Remote Sens 55(7):3639–3655

A Survey of Pedestrian Detection Based on Deep Learning

Runxing Chen, Xiaofei Wang, Yong Liu, Sen Wang, and Shuo Huang

School of Electronic Engineering, Heilongjiang University, Harbin 150080, China
[email protected]

Abstract. The purpose of pedestrian detection is to accurately locate every pedestrian within the detection range of a specific scene. Combined with pedestrian recognition and pedestrian tracking, it has important applications in areas such as autonomous driving, human-computer interaction, intelligent video surveillance and human behavior analysis. This paper reviews the research progress of deep learning in the field of pedestrian detection, analyzes the main problems and challenges, and summarizes the datasets and evaluation criteria used for pedestrian detection, providing a reference for further research in the field.

Keywords: Deep learning · Pedestrian detection · Convolutional neural network

1 Introduction

In machine vision, pedestrian detection plays a very important role. The goal of pedestrian detection is to accurately locate each pedestrian that appears within the detection range of a given picture or video frame and, when a pedestrian is present, to give its spatial extent. Combined with pedestrian tracking, pedestrian detection can also be applied to assisted driving systems and visual scene perception, and further to measuring the speed of and distance to pedestrian targets. In addition, pedestrian detection combined with pedestrian recognition can be widely used in intelligent video surveillance and intelligent security.

2 Related Research

In the past few decades, there have been many research results related to pedestrian detection. This paper focuses on the classic works [1–10], as summarized in Table 1.


Table 1. Pedestrian detection classic papers

- Histograms of oriented gradients for human detection (CVPR 2005): grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets.
- A discriminatively trained, multiscale, deformable part model (CVPR 2008): describes a discriminatively trained, multiscale, deformable part model for object detection.
- Pedestrian detection: an evaluation of the state of the art (PAMI 2012): focuses on a more thorough and detailed evaluation of detectors on individual monocular images.
- Pedestrian detection with unsupervised multi-stage feature learning (CVPR 2013): connections that skip layers integrate global shape information with local distinctive motif information.
- Strengthening the effectiveness of pedestrian detection with spatially pooled features (ECCV 2014): incorporating spatial pooling improves the translational invariance and thus the robustness of the detection process.
- Multispectral pedestrian detection: benchmark dataset and baseline (CVPR 2015): propose an architecture for scalable persistent object managers that provide access to lots of objects distributed.
- How far are we from solving pedestrian detection (CVPR 2016): investigates the gap between current state-of-the-art methods and a perfect single-frame detector.
- Is Faster R-CNN doing well for pedestrian detection (ECCV 2016): investigates issues involving Faster R-CNN for pedestrian detection.
- CityPersons: a diverse dataset for pedestrian detection (CVPR 2017): revisits CNN design, points out key adaptations and introduces the CityPersons dataset.
- What can help pedestrian detection (CVPR 2017): explores whether and how CNN-based pedestrian detectors can benefit from extra features.
- Jointly learning deep features, deformable parts, occlusion and classification for pedestrian detection (PAMI 2018): formulates these four components into a joint deep learning framework and proposes a new deep network architecture.
- Repulsion loss: detecting pedestrians in a crowd (CVPR 2018): explores how a state-of-the-art pedestrian detector is harmed by crowd occlusion via experiments, providing insights into the crowd occlusion problem.


3 Dataset

In deep learning, standard datasets are very important for the study of pedestrian detection. A dataset can not only be used to train the parameters of a model and to evaluate the pros and cons of a model, but can also promote the development of research in this field. This article surveys some commonly used datasets for pedestrian detection, as shown in Table 2. Datasets can generally be divided into two categories: the first is datasets built specifically for pedestrian detection, such as Caltech, in which non-pedestrian objects are not annotated; the second is datasets such as COCO and BDD, in which not only pedestrians but also other objects are labeled. What the two types have in common is that they contain many annotated samples of the single pedestrian class, so pedestrian samples can be extracted for pedestrian detection training.

Table 2. Pedestrian detection dataset Dataset Caltech

Number of pictures Training set: 192,000 Test set: 155,000

Category

Picture size

Basic introduction

1

640  480

Label the time correspondence between rectangles and their occlusion. Each line in each txt file indicates that a pedestrian is detected, in the format (left, top, width, height, score) The TUD pedestrian database provides an image pair to calculate optical flow information for assessing the role of motion information in pedestrian detection The dataset was collected in 27 different ci ties in Germany, with approximately 19,744 pedestrians in the training set and 11,000 pedestrians in the test set COCO is a new dataset for image recognition, segmentation and captioning

TUD

Training set: 1284 Test set: 250

1

720  576

City persons

Training set: 2975 Test set: 500

1

2048  1024

COCO

Training set: 118,000 Verification set: 500 Test set: 41,000

80



(continued)

1514

R. Chen et al. Table 2. (continued)

Dataset BDD

VOC

Number of pictures Training set: 70,000 Test set: 20,000 Verification set: 10,000 VOC2007: 9963 VOC2012: 11,540

Category

Picture size

Basic introduction

10

1280  720

20



The dataset is labeled with the source image’s URL, category label, size (starting coordinates, ending coordinates, width and height), truncation, occlusion, and traffic light color A representative object detection dataset with a total of 20 categories, including pedestrians, many of which are based on the dataset with benchmark results

4 Detection Framework At present, the detection framework based on deep learning can be divided into two categories: 1. Two-stage detection framework: including two stages of generating candidate regions and regional classifications, as shown in Table 3. 2. One-stage detection framework: Classification and detection are in one step, and no candidate regions need to be generated in advance, as shown in Table 4. 4.1

Evaluation

Indicators for evaluating detector performance typically have log average miss-rate (LAMR), frame per second (FPS), average precision (AP), and recall (recall). The efficiency of the frame rate representation model, the precision, the logistic mean miss rate and the recall rate reflect the accuracy of the model. Table 3. Typical two-stage detection framework Network RCNN

SPP

Basic introduction 1. Use the SS algorithm to generate category-independent candidate regions 2. Train the two-class SVM to classify candidate regions 3. Learn CNN features for bounding box regression Adding an SPP layer at the top of the last convolutional layer of the RCNN network accelerates the forward process of RCCN

Pros and cons 1. Slow, difficult to converge

2. The test phase is complicated 3. The training boundary box requires a lot of storage space when returning SPP net can’t speed up the training process, only speed up the testing process (continued)

A Survey of Pedestrian Detection Based on Deep Learning

1515

Table 3. (continued) Network Fast RCNN

Faster RCNN

Mask RCNN

Basic introduction 1. Design a multi-task loss function to unify the training process 2. Use ROI pooling to generate fixedlength features 1. Propose that the RPN network generates class-independent candidate regions 2. The RPN shares a large number of convolution operations with its subsequent Fast RCNN Adding FCN to the original Faster RCNN algorithm to generate the corresponding MASK branch

Pros and cons It takes 2–3 s for a picture to generate a candidate region

Currently very mainstream detection framework, but its speed is still not fast enough, about 5 FPS

Mask RCNN is more complex than Faster RCNN, but at the same speed and with higher accuracy than Faster RCNN

Table 4. Typical one-stage detection framework Network Detector net

Basic introduction Replace the last Softmax classification layer of AlexNet with a regression layer

OverFeat

In addition to the classification and regression layers, consisting of a full convolutional network Taking the entire image as the field of view and treating the problem as a regression problem

YOLO

YOLO V2

SSD

YOLO V3

Use Darknet19 as the basic network, remove the fully connected layer, apply BN and anchor mechanisms, multiscale training, etc Replace the fully connected layer with the convolution layer on a VGG16based network Use Darknet53 as the basic network. Convergence prediction detection frame and class confidence level using 32 times down sampling, 16 times down sampling feature maps respectively

Pros and cons The network can only be trained based on object type and mask type, and cannot be extended to multiple classes. Also need to train multiple networks Won the ILSVRC2013 championship, the first modern detector based on full convolution Compared with RCNN, the speed advantage is obvious, 45–155 FPS, poor positioning, poor focus on dense targets and small targets Speed and accuracy are significantly better than the previous generation, but still not a good solution to the problem of small targets Although not as fast as the YOLO series, the accuracy is high and the use of multiscale features is more sensitive to small targets YOLO V3 is the leader of modern target detectors, with speed and accuracy exceeding the general needs, effectively solving small target problems


5 Conclusion

Pedestrian detection is a typical and challenging problem in object detection and has received wide attention. Although the development of deep learning has greatly advanced pedestrian detection, detection in complex scenes and special environments still needs to be improved, and there is still much room for improvement in the development of domestic databases. How to design a detection framework with high precision, good stability and light weight remains the focus of research.

Acknowledgements. This work was supported by the National Key R&D Program of China (2016YFB0502502) and the National Natural Science Foundations of China (61871150).

References

1. Dalal N, Triggs B (2005) Histograms of oriented gradients for human detection. In: CVPR, San Diego, USA. IEEE, pp 886–893
2. Felzenszwalb P, McAllester D, Ramanan D (2008) A discriminatively trained, multiscale, deformable part model. In: IEEE 2008 conference on computer vision and pattern recognition (CVPR), USA. IEEE, pp 1–8
3. Dollar P, Wojek C, Schiele B et al (2012) Pedestrian detection: an evaluation of the state of the art. IEEE Trans Pattern Anal Mach Intell 34(4):743–761
4. Ren S, He K, Girshick R et al (2017) Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 39(6):1137–1149
5. He K, Gkioxari G, Dollar P et al (2017) Mask R-CNN. IEEE Trans Pattern Anal Mach Intell (99):1
6. Redmon J, Divvala S, Girshick R et al (2016) You only look once: unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, USA. IEEE, pp 779–788
7. Redmon J, Farhadi A (2017) YOLO9000: better, faster, stronger. In: Proceedings of the IEEE conference on computer vision and pattern recognition, USA. IEEE, pp 7263–7271
8. Redmon J, Farhadi A (2018) YOLOv3: an incremental improvement. In: Proceedings of the IEEE conference on computer vision and pattern recognition, USA. IEEE
9. Gao Z, Li SB, Chen JN et al (2018) Pedestrian detection method based on YOLO network. Comput Eng 44:215–219
10. Ouyang W, Zhou H, Li H, Li Q, Yan J, Wang X (2018) Jointly learning deep features, deformable parts, occlusion and classification for pedestrian detection. IEEE Trans Pattern Anal Mach Intell 40(8):1874–1887

Detection of Anomaly Signal with Low Power Spectrum Density Based on Power Information Entropy

Shaolin Ma, Zhuo Sun, Anhao Ye, Suyu Huang, and Xu Zhang

Wireless Signal Processing and Network Laboratory, Beijing University of Posts and Telecommunications, Beijing 100876, China
{sl ma,zhuosun}@bupt.edu.cn

Abstract. The low power spectral density of direct sequence spread spectrum (DSSS) signals makes them difficult to detect in complex and variable electromagnetic environments. In particular, when a DSSS signal is transmitted as an intrusion signal in a channel overlapped by strong-power signals, the probability of the DSSS signal being detected is very low. Traditional DSSS signal detection algorithms consider only Gaussian white noise, so the research scenario is narrow and the algorithmic complexity is high. In this paper, we propose an electromagnetic spectrum intrusion signal detection algorithm based on signal power information entropy. According to the characteristics of the DSSS signal, the power information entropy of the signal is used as the feature, and a one-class support vector machine (OC-SVM) is used as the classifier for anomaly signal detection. The simulation results show that the algorithm is robust, efficient and of low complexity.

1 Introduction

The radio spectrum is an open resource that can be accessed by any suitably equipped device, so the illegal use of spectrum is an inherent problem in any wireless communication system. Direct sequence spread spectrum (DSSS) communication has outstanding characteristics such as low signal power spectral density, good confidentiality, and strong anti-interference ability, and has been widely used in communication, telemetry, and navigation [1]. However, because of its low power spectral density, the DSSS signal is often used by illegal spectrum users as an intrusion signal for communication; therefore, detecting electromagnetic spectrum intrusion by DSSS signals is very meaningful. Traditional DSSS signal detection algorithms include the autocorrelation detection method [2], the cyclic spectrum detection method [3], the high-order statistics detection method [4], and the secondary power spectrum method [5].


However, these algorithms are often aimed at traditional detection scenarios, such as a DSSS signal communicating only in Gaussian white noise without being mixed with other useful signals. With the research on the low probability of detection problem, supported by spectrum correlation theory, the anti-interception ability of the DSSS signal in such a single traditional scene is greatly reduced. Therefore, in real scenarios, illegal spectrum users often use a signal with strong power and significant parameter features to mask the DSSS signal in the channel [6]. Based on this, this paper studies the DSSS signal as an intrusion signal in detection scenarios with a strong-power signal (such as FM or QPSK). Traditional signal anomaly detection algorithms are mostly based on time-domain time-series analysis, on the frequency-domain Fast Fourier Transform (FFT), or on the time-frequency-domain Short-Time Fourier Transform (STFT) [7], Wavelet Transform (WT) [8], etc. However, in the scenario where the DSSS signal is overlapped by a strong-power signal, these features cannot fully reflect the characteristics of the DSSS signal. Figure 1 shows an example of the spectrum of a conventional FM signal with Gaussian white noise: the upper part is the case overlapped by a DSSS signal and the lower part is the case without a DSSS signal (the power ratio of the FM signal to the DSSS signal is 30 dB). It is difficult to directly observe the difference between the two spectrum diagrams, so the time-domain and frequency-domain characteristics are basically not distinguishable. From information theory, events with different probabilities have different information entropy, so when an abnormal event occurs, the probability of the event changes and the information entropy changes as well. Therefore, the information entropy can be used to distinguish between normal and anomalous events.


Fig. 1. Spectrum diagram with DSSS signal and no DSSS signal in FM signal


Fig. 2. Power information entropy diagram for the overlap or no overlap of a DSSS signal in the FM signal

The detection and identification of an anomalous DSSS signal is essentially a binary classification problem, so we can use the information entropy as a feature and classify the signals with a machine learning classifier. A detailed description of the proposed algorithm is provided in Sect. 2, the simulation results are shown in Sect. 3, and Sect. 4 summarizes the contributions of this paper.

2 Detection of Anomaly Signal with Low Power Spectrum Density

2.1 Analysis on Information Content of Overlapped Signals

In information theory, assuming that a digital message X has M possible values {a_1, a_2, ..., a_M} with corresponding probabilities {p_1, p_2, ..., p_M}, the information entropy of the message X is [9]:

H(X) = -\sum_{i=1}^{M} p_i \log_2 p_i.  (1)
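As a quick check of Eq. (1), the following minimal Python snippet (an illustration added here, not part of the original paper) evaluates the entropy of a hypothetical four-level power distribution.

```python
import numpy as np

# Hypothetical probabilities of M = 4 power levels (must sum to 1)
p = np.array([0.5, 0.25, 0.125, 0.125])

# Eq. (1): H(X) = -sum_i p_i * log2(p_i)
H = -np.sum(p * np.log2(p))
print(H)  # 1.75 bits
```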


It can be seen from Fig. 1 that, because the anomalous DSSS signal is superimposed on the FM signal, the energy of the DSSS signal is spread over different frequency components, so the energy distribution at each frequency point changes. The definition of information entropy above suggests using the power of the signal as the observed event: when a DSSS signal is overlapped, the power distribution at each frequency point changes, resulting in a change of the power information entropy. Figure 2 shows the power information entropy of the FM communication signal with and without the DSSS signal. When a DSSS signal overlaps the FM signal, the power information entropy curve changes significantly, indicating that the power information entropy can be used as a feature for DSSS intrusion signal detection. Therefore, we use the change in the power information entropy as a feature that reflects the occurrence of anomalous events. The information entropy is calculated by the following process:
(1) Determine the type of event observed, that is, the signal property. In this paper, we select the power of the signal as the event type.
(2) Use a clean training signal to determine the probability of each power level through a power histogram and obtain the information entropy of the normal communication signal at each frequency point; the histogram at this stage serves as the reference histogram.
(3) Once the reference histogram is obtained, the test signal can be analyzed. The power of the test signal is extracted and used to update the reference histogram. The power distribution then changes and the power information entropy changes accordingly; the information entropy is calculated with Eq. (1) to obtain the result.
In the above process, obtaining the histogram is the key to calculating the information entropy [10]. For the histogram of a continuous-time signal, the bin width is an important parameter that directly determines the sensitivity of the final detector. Figure 3 shows the effect of different widths on the first ten seconds of the signal of Fig. 1. If the width is too small, the sensitivity of the detector is too high and some normal noise fluctuations are taken into account; if the width is too large, the histogram cannot capture the occurrence of the anomaly signal. The selection of the histogram width is discussed in Sect. 3.
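The histogram-based entropy computation above can be sketched in a few lines of Python. This is a minimal illustration under assumed variable names and a fixed bin width; it is not the authors' implementation.

```python
import numpy as np

def power_entropy(power_samples, bin_width=2.0, ref_counts=None, bin_edges=None):
    """Estimate the power information entropy of Eq. (1) from a power histogram.

    power_samples : 1-D array of instantaneous power values (e.g. per frequency point).
    bin_width     : histogram width B, which controls the detector sensitivity.
    ref_counts    : optional reference histogram counts obtained from a clean training
                    signal; the test samples are added to it, mimicking the update step.
    bin_edges     : edges matching ref_counts; computed from the data if omitted.
    """
    if bin_edges is None:
        lo, hi = power_samples.min(), power_samples.max()
        bin_edges = np.arange(lo, hi + bin_width, bin_width)
    counts, _ = np.histogram(power_samples, bins=bin_edges)
    if ref_counts is not None:
        counts = counts + ref_counts          # update the reference histogram
    p = counts / counts.sum()                 # empirical probabilities p_i
    p = p[p > 0]                              # empty bins contribute 0 to the sum
    return -np.sum(p * np.log2(p))            # Eq. (1)
```

In practice the entropy would be evaluated per frequency point (or per analysis window) of the spectrogram, producing curves such as those in Fig. 2.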


Fig. 3. Influence of different widths of histogram on the instantaneous power spectral density of signal

[Flow chart: Start → obtain initial event histogram from training data → get new event x[k], update histogram and evaluate information entropy → OCSVM (training/testing) → End]

Fig. 4. Anomaly signal detection algorithm model flow chart

2.2 Anomaly Detection Model Based on OCSVM

When the information entropy feature is obtained, the anomaly detection of the DSSS signal becomes a binary classification problem. A traditional binary classification algorithm requires both positive and negative labeled samples for supervised training of the classification model. However, due to the randomness and the imbalance of the anomalous signal, supervised classification is impractical here. The one-class support vector machine (OCSVM) is a machine-learning-based anomaly detection algorithm that still achieves good classification accuracy when prior knowledge is insufficient. Different from a traditional binary classifier, OCSVM only needs one type of sample (e.g., normal samples) to train the anomaly detection model, which matches the imbalance of anomalous data in real communication and does not require specially labeled samples. The essence of the one-class support vector machine is to treat the origin as a point of a class different from the training samples, map the data from the low-dimensional space to a high-dimensional space through a nonlinear mapping, and find a hyperplane in this space such that the distance from the origin to the hyperplane is as large as possible [11]. The task of OCSVM classification is to find a function f(x): if the predicted value is positive, the test data x is considered normal, and if the predicted value is negative, the test data x is considered abnormal. The core of OCSVM is the choice of kernel function. According to the experiment in [12], the detection accuracy of the one-class support vector machine with the radial basis kernel function K(x, y) = \exp(-\gamma \|x - y\|^2) is higher. However, the choice of the radial basis kernel parameter gamma is important, as it directly affects the accuracy of the classifier; the influence of this parameter is discussed in Sect. 3. To measure the performance of the detector, we define two indicators: accuracy (R_A) and false alarm rate (R_F). Assume that TP positive samples are correctly judged as positive by the detector, FN positive samples are wrongly judged as negative, FP negative samples are wrongly judged as positive, and TN negative samples are correctly judged as negative:

R_A = \frac{TP + TN}{TP + FN + FP + TN}.  (2)

R_F = \frac{FP}{TP + FP}.  (3)

Taking the power information entropy obtained in the first step as the feature and the OCSVM of the second step as the classifier, the anomaly signal can be detected; the overall flow chart is shown in Fig. 4.
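As an illustration of the detection stage, the sketch below trains a one-class SVM on entropy features of normal signals only and then evaluates the accuracy and false alarm rate of Eqs. (2)-(3). The feature matrices are random placeholders and the gamma/nu values are assumptions; this is not the authors' code.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Placeholder feature matrices: each row is an entropy feature vector extracted
# from one signal segment (normal = FM + AWGN, anomalous = FM + AWGN + DSSS).
rng = np.random.default_rng(0)
X_train_normal = rng.normal(4.0, 0.1, (200, 16))                 # normal samples only
X_test = np.vstack([rng.normal(4.0, 0.1, (50, 16)),              # 50 normal
                    rng.normal(3.0, 0.1, (50, 16))])             # 50 anomalous
y_test = np.hstack([np.ones(50), -np.ones(50)])                  # +1 normal, -1 anomaly

clf = OneClassSVM(kernel='rbf', gamma=0.02, nu=0.05)
clf.fit(X_train_normal)                                          # trained on normal data only
y_pred = clf.predict(X_test)                                     # +1 (normal) or -1 (anomaly)

# Eqs. (2)-(3), treating a detected anomaly (-1) as a positive detection
TP = np.sum((y_pred == -1) & (y_test == -1))
FN = np.sum((y_pred == +1) & (y_test == -1))
FP = np.sum((y_pred == -1) & (y_test == +1))
TN = np.sum((y_pred == +1) & (y_test == +1))
RA = (TP + TN) / (TP + FN + FP + TN)             # accuracy
RF = FP / (TP + FP) if (TP + FP) else 0.0        # false alarm rate
print(RA, RF)
```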


Fig. 5. Power information entropy diagram for the overlap or no overlap of a DSSS signal in the QAM16 signal


Fig. 6. Power information entropy diagram for the overlap or no overlap of a DSSS signal in the QPSK signal

3 Results and Analysis

In this section, we verify the performance of the proposed algorithm. We use the scenario in Fig. 1, with the FM signal as the normal communication signal and the DSSS signal as the anomaly signal. To be as close to the real scene as possible, we add Gaussian white noise so that the spread-spectrum signal is almost completely submerged in the noise. We use the FM signal only as an example; if the DSSS signal overlaps other signals such as QAM or QPSK, the proposed algorithm is still effective. For example, Figs. 5 and 6 show the power information entropy with and without an overlapped DSSS signal in the QAM16 and QPSK signals, respectively. It can be seen in Figs. 5 and 6 that when a DSSS signal is overlapped, the power information entropy changes noticeably. From the analysis in Sect. 2, the main factors affecting the final result of the detector are the selection of the histogram width and the choice of the classifier parameters. Since the DSSS signal has very good concealment, in order to observe the applicable range of the detector, we also analyze the influence of the power ratio of the DSSS signal to the noise on the detection performance.

3.1 Analysis of the Effect of Histogram Resolution

Figure 7 shows the effect of the width of the power histogram on the detector when the power ratio of the DSSS signal to the noise is −3 dB and the gamma of the classifier is chosen to be 0.01. When B = 8, the accuracy of the detector is very low and the false alarm rate is very high: the large histogram width makes the sensitivity of the detector extremely low, so when a DSSS signal is overlapped the detector cannot fully reflect the power distribution of the signal. When B = 0.1, the accuracy is also very low and the false alarm rate very high: the small histogram width makes the sensitivity extremely high, so the fluctuations of normal noise are also captured. Considering this, this paper chooses B = 2 as the histogram width.

3.2 Analysis of the Effect of the Classifier Gamma Parameter

When the radial basis function (RBF) is used as the kernel of the SVM, the parameter gamma affects the complexity of the model. If gamma is too large, the influence radius of the support vectors is small and over-fitting occurs; if gamma is very small, the model cannot capture the complexity or "shape" of the data. A reasonable choice of gamma therefore affects the accuracy of the entire system. Figure 8 shows the effect of different gamma values on the accuracy and false alarm rate of the FM system. Here, the power ratio of the DSSS signal to the noise is chosen to be −3 dB and the width of the histogram is chosen to be 2.


It can be observed that when gamma is 0.1 or 0.001, over-fitting and under-fitting respectively occur because the gamma parameter is too large or too small, resulting in a decrease in the accuracy of the detection system. When gamma is 0.02, the accuracy reaches a maximum of 91.6%. In practice, the gamma parameter can be adjusted according to the required accuracy and false alarm rate.

3.3 Analysis of the Effect of the Power Ratio of the DSSS Signal to the Noise

When a DSSS signal is injected into the communication system, the peak transmit power of the transmitter can be greatly reduced and may be lower than the sensitivity of the receiver, so the received signal is completely submerged in the noise. It is therefore of great significance to detect the DSSS signal at a low power ratio to the noise. Figure 9 shows the effect of the power ratio of the DSSS signal to the noise on the accuracy and false alarm rate of the FM system. Here, gamma is chosen to be 0.01 and the width of the histogram is 2.


Fig. 7. The effect of the width of the power histogram on the detector

As can be seen from Fig. 9, as the power ratio of the DSSS signal to the noise increases, the DSSS signal becomes easier to detect, so the accuracy gradually increases and the false alarm rate decreases. Since the proposed algorithm is based on the power information entropy feature, once the anomalous DSSS signal is overlapped onto the system the overall signal power distribution changes; by selecting an appropriate histogram width and adjusting the gamma parameter, the minimum detectable DSSS-signal-to-noise ratio can be greatly improved.



Fig. 8. Influence of different gamma parameters on the accuracy and false alarm rate of FM system


Fig. 9. Influence of power ratio of different DSSS signals and noise on accuracy and false alarm rate of FM system

4 Conclusion

In this paper, the anomaly detection algorithm based on the signal power information entropy is shown to be effective and simple, and it does not require demodulating the signal. It is worth noting that the proposed electromagnetic spectrum intrusion detection technique based on power information entropy is not limited to detecting DSSS intrusion; this paper only uses the DSSS signal intrusion, which is very difficult to detect, as an example.


Any physical layer intrusion signal that changes the power distribution of the original signal at the receiver can be detected through the power information entropy. The performance of the entire detection system depends on many factors, such as the histogram resolution and the selection of the OCSVM parameters. In future research, we will study optimization algorithms for selecting the OCSVM kernel function and its other parameters, thereby improving the overall performance of the model; how to estimate the parameters of the anomaly signal after the detection result is obtained also remains a challenge.

References
1. Xin Z (2004) Spread spectrum communication digital baseband signal processing algorithm and its implementation. Beijing Science Press
2. Wang J, Jin X, Bi G (2012) Multiple cumulants based spectrum sensing methods for cognitive radios. IEEE Trans Commun 60(60):3620–3631
3. Zheng P, Zhang X, Liu F (2013) An innovatory algorithm of cyclic spectrum estimation. J Circuits Syst 18(01):398–402
4. Gardner WA, Spooner CM (1993) Detection and source location of weak cyclostationary signals: simplifications of maximum-likelihood receiver. IEEE Trans Commun 41(6):905–916
5. Zhang Y, Wu H, Chen Y (2012) Period estimation of PN sequence for weak DSSS signals based on improved power spectrum reprocessing in non-cooperative communication systems. In: International conference on control engineering and communication technology, Shenyang, China, pp 924–927
6. Chen Q (2014) Application of large signal masking technology in information security. Telecom Express Netw Commun 5:3–5
7. Bouder C, Azou S, Burel G (2002) A robust synchronization procedure for blind estimation of the symbol period and the timing offset in spread spectrum communication. In: 2002 IEEE seventh international symposium on spread spectrum technology and application, vol 1, pp 238–241
8. Zhan Y, Cao Z, Lu J (2005) Spread-spectrum sequence estimation for DSSS signal in non-cooperative communication systems. IEEE Proc-Commun 152(4):476–480
9. Proakis JG (2000) Digital communications, 4th edn. McGraw-Hill series in electrical and computer engineering. McGraw-Hill Higher Education
10. Shimazaki H, Shinomoto S (2007) A method for selecting the bin size of a time histogram. Neural Comput 19(6):1503–1527
11. Abdiansah A, Wardoyo R (2015) Time complexity analysis of support vector machine in LibSVM. Int J Comput Appl 128:975–8887
12. Leandros A, Jiang J (2014) OCSVM model combined with K-means recursive clustering for intrusion detection in SCADA systems. In: Proceedings of the 2014 10th international conference on heterogeneous networking for quality, pp 133–134

A Hybrid Multiple Access Scheme in Wireless Powered Communication Systems

Yue Liu1, Zhenyu Na1(B), Anliang Liu1, and Zhian Deng2

1 School of Information Science and Technology, Dalian Maritime University, Dalian 116026, China
[email protected]
2 School of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China

Abstract. In this paper, a Wireless Powered Communication (WPC) based hybrid multiple access model is proposed. Users harvest energy in the downlink and transmit signals to the base station in the uplink. To implement hybrid multiple access, Non-Orthogonal Multiple Access (NOMA) is used for intra-cluster users, while Orthogonal Frequency Division Multiple Access (OFDMA) is used for inter-cluster users. Simulation results demonstrate that the WPC based hybrid multiple access scheme can effectively improve spectrum efficiency and fairness.

Keywords: WPC · Hybrid multiple access · NOMA · OFDMA

1 Introduction

The key idea of Wireless Powered Communication (WPC) is that users use the energy harvested from Radio Frequency (RF) signals for information transmission. Compared with traditional battery-powered communications, WPC can keep information transmission going while extending battery life through energy harvesting [1]. Moreover, Non-Orthogonal Multiple Access (NOMA) can enhance spectrum efficiency, throughput and fairness, but the complexity brought by Successive Interference Cancellation (SIC) increases with the number of users. Therefore, users in actual systems are usually clustered to implement a hybrid multiple access scheme: NOMA is used for intra-cluster users, while Orthogonal Multiple Access (OMA) is used for inter-cluster users to avoid inter-cluster interference [2]. Over the past few years, WPC and its applications to NOMA systems have been intensively investigated to improve spectrum efficiency. Gong and Chen [3] studied a NOMA based wireless-powered system in which the durations of uplink and downlink were optimized to maximize the sum rate. Chingoska et al. [4] concentrated on the system throughput and proposed a scheme in which the transmit power and the duration of energy harvesting are jointly optimized. However, the optimization goal of maximizing the sum rate makes it difficult to guarantee fairness between users. In this paper, a WPC-based hybrid multiple access scheme is proposed to improve spectrum efficiency and fairness.

2 System Model

In this paper, we consider a hybrid multiple access scheme with a base station and KN users. As shown in Fig. 1, all the users are divided into K clusters and each cluster consists of N users. The cluster set is denoted by K = {1, 2, ..., K} and the user set is denoted by N = {1, 2, ..., N}. The n-th user in the k-th cluster is denoted by Uk,n. The channel gain of Uk,n is denoted by hk,n. NOMA is used for intra-cluster users, while OFDMA is used for inter-cluster users.

Fig. 1. System model.

The transmission process is divided into two phases. The base station transmits RF energy to all users within duration T0 in the first phase. Users transmit information to the base station within duration T1 in the second phase. The harvested energy at Uk,n is expressed as

E_{k,n} = G_0 G_{k,n} \xi |h_{k,n}|^2 P_0 T_0  (1)

where G0 and Gk,n denote the directional antenna gains of base station and Uk,n , respectively. ξ (0 ≤ ξ ≤ 1) denotes energy conversion efficiency. P0 (P0 ≤ Pm ) denotes the transmit power of base station and Pm denotes the maximum transmit power of base station.

3 Problem Formulation

A two-step clustering scheme is designed in this paper. The definition of the difference of channel conditions between users in the k-th cluster is

\Delta h_k = \sum_{i=1}^{N-1} \left( |h_{k,i+1}|^2 - |h_{k,i}|^2 \right), \quad k \in K  (2)

The clustering scheme used in this paper loops through all possible clusterings. First, the sum of the differences of channel conditions, denoted by s = \sum_{k=1}^{K} \Delta h_k, is calculated. Then, the clustering that maximizes s is selected as the candidate clustering scheme. If more than one candidate scheme makes s maximum, the variance of \Delta h_k needs to be calculated; considering fairness, the clustering scheme with the smallest variance of \Delta h_k is selected as the best one. In the downlink, the transmit power Pk,n can be expressed as

P_{k,n} = \frac{E_{k,n}}{T_1} = \frac{G_0 G_{k,n} \xi |h_{k,n}|^2 P_0 T_0}{T_1}  (3)

In the uplink, the total bandwidth is divided equally among all the clusters. The achievable rate of Uk,n (1 ≤ n ≤ N) is given by

R_{k,n} = T_1 \log_2 \left( 1 + \frac{P_{k,n} |h_{k,n}|^2}{\sum_{j=n+1}^{N} P_{k,j} |h_{k,j}|^2 + N_0} \right)  (4)

where N_0 is the additive white Gaussian noise (AWGN) power. If a_{k,n} = G_0 G_{k,n} \xi |h_{k,n}|^4 P_0 / N_0, the achievable rate of the k-th cluster is given by

R_k = \sum_{n=1}^{N} R_{k,n} = T_1 \log_2 \left( 1 + \frac{T_0}{T_1} \sum_{n=1}^{N} a_{k,n} \right)  (5)

Thus, the sum rate is given by

R_{sum} = \sum_{k=1}^{K} R_k = \sum_{k=1}^{K} T_1 \log_2 \left( 1 + \frac{T_0}{T_1} \sum_{n=1}^{N} a_{k,n} \right)  (6)
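To make the clustering rule and Eqs. (1)-(6) concrete, the sketch below enumerates all ways of splitting the users into equal-size clusters, picks the clustering that maximizes s (with the smallest variance of \Delta h_k as a tie-breaker), and evaluates the resulting sum rate. All numerical values are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np
from itertools import combinations

def equal_partitions(users, size):
    """Enumerate all partitions of `users` into clusters of equal `size`."""
    users = sorted(users)
    if not users:
        yield []
        return
    first = users[0]
    for rest in combinations(users[1:], size - 1):
        cluster = (first,) + rest
        remaining = [u for u in users if u not in cluster]
        for tail in equal_partitions(remaining, size):
            yield [cluster] + tail

def choose_clustering(h2, K, N):
    """Two-step rule: maximize s = sum_k dh_k, then pick the smallest var(dh_k).
    h2[u] holds |h_u|^2; Eq. (2) telescopes to max - min within a sorted cluster."""
    best, best_key = None, None
    for part in equal_partitions(range(K * N), N):
        dh = [h2[list(c)].max() - h2[list(c)].min() for c in part]
        key = (sum(dh), -np.var(dh))          # larger s first, then smaller variance
        if best_key is None or key > best_key:
            best, best_key = part, key
    return best

# Illustrative 6-user example (K = 3 clusters, N = 2 users each)
rng = np.random.default_rng(1)
K, N = 3, 2
D = rng.uniform(5.0, 20.0, K * N)                    # assumed distances
h2 = 1e-3 * rng.exponential(1.0, K * N) * D ** -3.0  # |h|^2 with alpha = 3 (assumed)
clusters = choose_clustering(h2, K, N)

G0 = Gkn = 1.0
xi, P0, T0, T1 = 0.14, 0.1, 0.5, 0.5                 # P0 = 0.1 W (20 dBm)
N0 = 10 ** (-127 / 10) * 1e-3                        # -127 dBm in watts
a = G0 * Gkn * xi * h2 ** 2 * P0 / N0                # a_{k,n} of Sect. 3
R_sum = sum(T1 * np.log2(1 + (T0 / T1) * sum(a[u] for u in c)) for c in clusters)  # Eq. (6)
print(clusters, R_sum)
```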

4 Simulation Results and Discussions

A 6-user WPC system is designed for the simulations, where the 6 users are divided into 3 clusters and each cluster has 2 users. It is assumed that the total bandwidth is 3 kHz and each cluster occupies 1 kHz. The channel gain is modeled as h_{k,n} = 10^{-3} \rho_{k,n}^2 D_{k,n}^{-\alpha}, where \rho_{k,n}^2 denotes an exponentially distributed random variable with unit mean, D_{k,n} denotes the distance between the base station and Uk,n, and \alpha denotes the path loss factor. Assume that the directional antenna gains G_0 = G_{k,n} = 1, the energy conversion efficiency \xi = 0.14, the noise power N_0 = -127 dBm, and the maximum transmit power of the base station P_m = 20 dBm. The durations of uplink and downlink transmission are set to T_0 = T_1 = 0.5 s. Figure 2 shows the relationship between sum rate and transmit power under different clustering schemes. As demonstrated in Fig. 2, the proposed clustering scheme is superior to the random clustering scheme, and both hybrid multiple access schemes have a higher sum rate than OFDMA, mainly because users in one cluster can share the same spectrum, which improves spectrum efficiency compared with OFDMA. Figure 3 shows the relationship between the Jain's Fairness Index and the transmit power. The Jain's Fairness Index is defined as

J = \frac{\left( \sum_{k=1}^{K} \sum_{n=1}^{N} R_{k,n} \right)^2}{K N \sum_{k=1}^{K} \sum_{n=1}^{N} R_{k,n}^2}  (7)

J is bounded between 0 and 1 and changes with the rate of each user. Comparing Figs. 2 and 3, it can be seen that the proposed clustering scheme not only improves the sum rate but also increases the fairness between users. That is mainly because the cluster set with the smallest variance is formed on the basis of the maximum sum of differences between channel conditions, which enhances fairness between users in each cluster. Moreover, although the random clustering scheme can achieve a higher sum rate than OFDMA, its overall fairness is lower.
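A minimal sketch of the fairness metric in Eq. (7), assuming the per-user rates R_{k,n} are collected in a flat array:

```python
import numpy as np

def jain_index(rates):
    """Jain's Fairness Index of Eq. (7) for the KN per-user rates."""
    rates = np.asarray(rates, dtype=float)
    return rates.sum() ** 2 / (rates.size * np.square(rates).sum())

print(jain_index([1.0, 1.0, 1.0]))   # 1.0 -> perfectly fair
print(jain_index([3.0, 0.0, 0.0]))   # 1/3 -> one user gets everything
```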


Fig. 2. Sum rate versus transmit power.


Fig. 3. Jain’s Fairness Index versus transmit power.

5 Conclusion

In this paper, a WPC-based hybrid multiple access scheme is proposed. Specifically, users are divided into clusters; NOMA is used for intra-cluster users, while OFDMA is used for inter-cluster users. Simulation results reveal that the proposed clustering scheme can not only improve the spectral efficiency but also enhance fairness between users. Therefore, the proposed scheme can achieve the best trade-off between sum rate and fairness between users.
Acknowledgement. This work was supported in part by the National Natural Science Foundation of China under Grant 61301131, in part by the General Project of Natural Science Foundation of Liaoning Province under Grant, and in part by the Fundamental Research Funds for the Central Universities under Grant 3132019214 and Grant 3132019210.

References
1. Huang K, Lau VKN (2012) Enabling wireless power transfer in cellular networks: architecture, modeling and deployment. IEEE Trans Wirel Commun 13(2):902–912
2. Al-Abbasi ZQ, So DKC (2017) Resource allocation in non-orthogonal and hybrid multiple access system with proportional rate constraint. IEEE Trans Wirel Commun (99):1–1
3. Gong J, Chen X (2017) Achievable rate region of non-orthogonal multiple access systems with wireless powered decoder. IEEE J Sel Areas Commun (99):1–1
4. Chingoska H, Hadzi-Velkov Z, Nikoloska I, Zlatanov N (2017) Resource allocation in wireless powered communication networks with non-orthogonal multiple access. IEEE Wirel Commun Lett 5(6):684–687

Gas Sensing Properties of Molecular Sieve Modified 3DIO ZnO to Ethanol

Fangxu Shen1, Xinping He2, Xiu Zhang1, Hefei Gao1, and Ruiqing Xing1(&)

1 Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin 300387, People's Republic of China
[email protected]
2 College of Materials Science and Engineering, Zhejiang University of Technology, Hangzhou 310014, People's Republic of China

Abstract. In order to detect whether a driver is within the safe driving range in a non-invasive and accurate way in practical applications, three-dimensional inverse opal (3DIO) ZnO and molecular sieve modified 3DIO ZnO gas sensors are prepared by a simple, controllable sacrificial template method. The morphology is characterized by scanning electron microscopy (SEM) and the structure is characterized by X-ray diffraction (XRD). The sensing properties of the gas sensor are studied systematically. The results show that the sensor possesses remarkable sensitivity to ethanol gas, even in high relative humidity (RH) conditions (~94% RH). The response of the molecular sieve modified 3DIO gas sensor is ~5.103 to 200 ppm ethanol, and it can effectively detect ethanol concentrations as low as 10 ppm. In conclusion, the molecular sieve modified 3DIO ZnO shows satisfactory gas sensing properties for detecting ethanol under high humidity.

Keywords: ZnO · 3DIO · Molecular sieve · Ethanol · Gas sensor

1 Introduction

With the gradual improvement of people's living standards, the number of motor vehicles is increasing, and the major traffic accidents caused by drunk driving cannot be ignored [1]. As is well known, exhaled gas is a complex, high relative humidity environment that makes detection difficult [2]. Thus, real-time qualitative analysis of exhaled alcohol gas molecules is still a challenge due to the poor selectivity of some detecting instruments and the influence of high humidity. Therefore, accurate alcohol concentration detection at the present stage mainly relies on the inconvenient blood test, with a long waiting time and inevitable pain for the person being tested. Although many scientific and technological means have been used to detect ethanol gas, it remains a challenge to produce ethanol gas sensors with high sensitivity, a wide linear range, and a low detection limit that can work in high relative humidity environments. Semiconductor metal oxide (SMO) materials have the advantages of low cost, low power consumption and high compatibility, and are expected to be applied for the


preparation of high-performance gas sensors [3]. In general, compositing, doping the semiconductor material or changing its crystal structure can significantly improve the sensing properties of SMO gas sensors [1]. Besides, the specific surface area, electron transport capability and interface characteristics of the material also have a great influence on the sensing performance of the gas sensor [4]. As a special nano-structured material, 3DIO has been widely used in various advanced fields and demonstrates outstanding performance, which is closely related to the high porosity, large specific surface area, orderly macroporous structure and controllable pore size of its unique structure [5]. In terms of gas sensing, its special macroporous structure not only helps the analyzed gas molecules enter and exit more easily, but also facilitates fast electron transport, thus effectively improving the gas-sensing characteristics [6]. However, until now studies of ethanol gas sensors based on the 3DIO structure working in high RH environments have rarely been reported, especially for molecular sieve modified 3DIO ZnO. In this work, pure 3DIO ZnO and molecular sieve modified 3DIO ZnO are prepared successfully using a simple controllable sacrificial template method and their gas sensing characteristics are studied systematically. The experimental results show that, compared with pure 3DIO ZnO, the gas sensor modified with molecular sieve possesses better properties toward ethanol gas in a high humidity environment. The gas sensor has potential for exhaled ethanol gas detection because the biomarker of drunk driving is ethanol and the sensor can be used in a high RH atmosphere [7]. All in all, the molecular sieve modified 3DIO ZnO is a very promising gas sensor for drunk driving detection. Overall, a molecular sieve modified 3DIO ZnO gas sensor is successfully synthesised and applied for ethanol gas monitoring. Experimental results show that the improved 3DIO ZnO gas sensor demonstrates satisfactory performance and can monitor and track the level of ethanol gas in a high humidity environment at normal temperature.

2 Experimental Section

2.1 Fabrication of ZnO Films

Firstly, the preparation of the polymethylmethacrylate (PMMA) spheres' opal films: briefly, PMMA latex spheres with controllable size are synthesized from methyl methacrylate. After that, the glass substrates, which have been soaked in concentrated sulfuric acid for 2 h and washed with deionized water, are inserted into the aqueous colloid suspension perpendicularly. Then, the whole device is placed in a 303 K thermostat for two days. With the constant volatilization of the aqueous solution and the effect of surface tension, the regular PMMA opal structure template forms spontaneously. At last, the PMMA opal is processed in a 393 K thermostat for 40 min to strengthen its mechanical stability [8]. Secondly, the filling of the precursor solution: stoichiometric zinc nitrate is dissolved in an anhydrous ethanol solution containing citric acid as the complexing


agent, and stirred for 6 h with a magnetic stirrer until the solution is clear and transparent. The prepared solution is then filled into the gaps of the prepared PMMA opal template. Thirdly, the obtaining of the 3DIO ZnO films: the obtained films are placed in a tube furnace to remove the polymer spheres entirely, heated at a rate of 1 K/min to 773 K and held for 3 h, and the 3DIO ZnO is finally obtained.

2.2 Characterization

The surface morphology of the prepared sample is imaged by a JEOL JSM-7500F field emission scanning electron microscope (Tokyo, Japan) with an acceleration voltage of 15 kV. A monochromatized Cu target radiation source (Cu Kα, λ ≈ 1.54 Å) is used to record the XRD pattern on a Rigaku D/Max 2550 (Tokyo, Japan), and the crystal lattice constants of the samples are calculated with MDI Jade 5.0 software based on the measured XRD data. The gas sensing properties are tested on a WS-30A system (Weisheng Instruments Co, Zhengzhou, China).

Fig. 1. SEM of 3DIO ZnO

2.3 Fabrication and Measurement of Gas Sensing Properties

The 3DIO ZnO film on the substrate is scraped off with a metal blade, and then the powder is ground with ethanol at a weight ratio of 5:1 (w/w) and mixed into a uniform paste. After that, the paste is applied to an alumina tube with a pair of gold electrodes attached beforehand (4 mm long, 1.2 mm outside diameter, 0.8 mm inside diameter). The solvent is evaporated and the ceramic tube with the thin layer is subsequently sintered in an oven at 623 K for 2 h to improve its mechanical stability. The pure 3DIO ZnO gas sensor is thus prepared. The gas sensor modified with molecular sieve is prepared as follows: the alumina tube pasted with pure 3DIO ZnO is immersed in the suspension of molecular sieve for 2 s, then removed and placed in a thermostat at 373 K for 1 h. Next, a spring-like nickel-chrome (Ni-Cr) alloy (28 Ω) heating wire is carefully inserted into the prepared alumina tube to provide substrate heating and the operating temperature


for subsequent testing. Then, the prepared alumina tube and the electrodes of the heating wire are welded onto a tailor-made support. Before the first measurement, the gas sensor is thermally aged for 3 days at a heating voltage of 5 V to improve its stability for subsequent testing.

3 Results and Discussion

3.1 Morphological and Structural Characteristics

The morphology and microstructure of the prepared 3DIO ZnO films are characterized by SEM, as shown in Fig. 1. It can be seen from the figure that the prepared 3DIO ZnO film has a clear-cut ordered nanostructure over a large area. It can also be clearly seen that the structure is similar to the honeycomb hexagonal structure, one of the most stable structures found in nature [6]. This structure has the advantage of not collapsing easily and is also conducive to the diffusion of the target gas. In addition, the center distance between the macropores is determined to be 400 nm, smaller than the original size of the PMMA template. This means that the actual volume ratio of the cavity to the 3DIO ZnO framework increases with the increase of the macropore diameter. Figure 2 shows the XRD pattern of 3DIO ZnO. As shown, the sample is well crystallized and can be assigned to the hexagonal phase (JCPDS card 36-1451). Besides, the high intensity of the diffraction peaks and the absence of other peaks in the XRD pattern indicate that the 3DIO ZnO has high crystallinity.

Fig. 2. XRD patterns of 3DIO ZnO films

3.2 Working Temperature and Selectivity of Gas Sensor

Fig. 3. Relationship of responses versus the operating temperature of the 3DIO ZnO gas sensors to 50 ppm ethanol in ~94% RH conditions. The lines are spline curves

Considering the significant influence of operating temperature on the sensitivity of the gas sensor, the response of the 3DIO gas sensor to ethanol gas is first tested at different temperatures. Different operating temperatures are obtained by controlling the heating voltage of the Ni-Cr alloy wire. The response of the 3DIO ZnO gas sensor to 50 ppm ethanol gas in ~94% RH conditions as a function of operating temperature is plotted in Fig. 3, whose spline curve clearly reveals the correlation between the response and the working temperature. The results show that the 3DIO gas sensor has the typical characteristics of a semiconductor oxide gas sensor: when the temperature is relatively low, the response increases gradually with the operating temperature until the maximum response is reached, and then the response decreases as the temperature continues to increase. In this experiment, the optimal operating temperature is ~308 K, and the corresponding response is ~2.64 to 50 ppm ethanol. The reaction between the gas sensor and the analyzed gas reaches its maximum at the optimum temperature.

Fig. 4. Selectivity test of the 3DIO ZnO sensor for 50 ppm of various gases at ~308 K


Next, since selectivity is one of the most important indicators for a gas sensor, the sensitivities of the gas sensor to various gases are tested. Various test gases, such as the diabetes marker acetone, the drink-driving marker ethanol, and the breath biomarker gases methanol and toluene, were all tested at a concentration of 50 ppm under the optimal operating temperature [9]. As shown in Fig. 4, the 3DIO ZnO gas sensor shows the best sensitivity to ethanol gas, while the sensitivities to interfering gases such as acetone, methanol and toluene are not high. Even though acetone produces relatively strong interference, the response of the 3DIO gas sensor to ethanol is more than twice that of all the other interfering gases, indicating that the sensor has good selectivity to ethanol.

Fig. 5. a Response curves of S1 and S2 gas sensors to 0–400 ppm ethanol gas at ~308 K. b Response curves of S1 and S2 gas sensors to 0–100 ppm ethanol gas at ~308 K. S1: the 3DIO ZnO gas sensor. S2: the 3DIO ZnO gas sensor modified with molecular sieve

Figure 5a exhibits the relationship between the ethanol gas concentration (0–400 ppm) and the response of the pure 3DIO ZnO gas sensor (S1) and the molecular sieve modified 3DIO ZnO gas sensor (S2) in ~94% RH conditions at the operating temperature. The responses of both gas sensors increase with the ethanol concentration at first; then the rate of increase slows down and the responses tend to saturate as the reactions between the gas sensors and ethanol become complete. In addition, as can be seen from Fig. 5a, the response of the S2 sensor is significantly higher than that of S1, possibly because under very high humidity a part of the active sites on the film surface of the gas sensor is occupied by water vapour molecules, reducing the sensor response [10]. Figure 5b shows the responses of S1 and S2 when the ethanol concentration is in a low range (0–100 ppm). As seen, Fig. 5b shows an excellent linear relationship over a wide range between the ethanol gas concentration and the response of each gas sensor. The threshold ethanol concentration for judging whether a person can safely drive a motor vehicle is 78 ppm [11]. In this work, the response of the S1 gas sensor is as high as ~1.889 at an ethanol concentration as low as 75 ppm, while for S2 the response is nearly twice as high as that of the S1 gas


sensor, reaching ~3.642, indicating that the sensor modified with molecular sieve performs better than the pure 3DIO gas sensor. Additionally, the experimental detection limit is lower than the practical concentration of exhaled ethanol gas, demonstrating the satisfactory performance and potential commercial applications of the molecular sieve modified 3DIO ZnO gas sensor.
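Since Fig. 5b shows an approximately linear relationship between concentration and response in the low range, a simple least-squares calibration can be used to translate a measured response into an estimated concentration around the 78 ppm threshold. The sketch below uses made-up (concentration, response) pairs purely to illustrate the procedure; it does not reproduce the measured data.

```python
import numpy as np

# Hypothetical calibration points (ppm, response) in the linear range of Fig. 5b
conc = np.array([10.0, 25.0, 50.0, 75.0, 100.0])
resp = np.array([1.2, 1.9, 2.8, 3.6, 4.5])      # illustrative values only

slope, intercept = np.polyfit(conc, resp, 1)    # least-squares linear fit
response_at_threshold = slope * 78.0 + intercept
concentration_from_response = (3.6 - intercept) / slope   # invert the calibration
print(response_at_threshold, concentration_from_response)
```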

4 Conclusion

In summary, a 3DIO ZnO gas sensor and a molecular sieve modified 3DIO ZnO gas sensor with hexagonal phase are prepared through a simple sacrificial template method and their gas-sensing properties are systematically and comparatively investigated. The 3DIO ZnO gas sensor modified with molecular sieve shows better gas sensing properties than the pure 3DIO ZnO gas sensor in a high RH atmosphere, which could be attributed to the filtering out of the interference of moisture molecules and the more effective reactions between the gas sensor and the analyzed gas. In this work, the response of the 3DIO ZnO gas sensor modified with molecular sieve is about twice as high as that of the pure 3DIO ZnO gas sensor in ~94% RH conditions at an ethanol concentration of ~75 ppm, indicating that the sensor might be a promising ethanol gas sensor in high relative humidity environments and for the detection of exhaled ethanol gas in daily safe driving in the future.
Acknowledgements. This research was supported in part by the National Natural Science Foundation of China (Project No. 61704122, Project No. 61603275, Project No. 61601329), and the Doctor Fund of Tianjin Normal University (No. 52XB1601).

References
1. Ahmad U, Ajmal KM, Rajesh K, Algarni H (2018) Ag-doped ZnO nanoparticles for enhanced ethanol gas sensing application. J Nanosci Nanotechnol 18(5):3557–3562
2. Xiang Q, Meng G, Zhang Y, Xu J, Xu P, Pan Q et al (2010) Ag nanoparticle embedded-ZnO nanorods synthesized via a photochemical method and its gas-sensing properties. Sens Actuators B Chem 143(2):635–640
3. Kim J, Yong K (2011) Mechanism study of ZnO nanorod-bundle sensors for H2S gas sensing. J Phys Chem C 115(15):7218–7224
4. Natalia S, Maria B, Leonid G, Khiena B (2018) A nanostructured sensor based on gold nanoparticles and nafion for determination of uric acid. Biosensors 8(1):21
5. Qin J, Cui Z, Yang X, Zhu S, Li Z, Liang Y (2015) Synthesis of three-dimensionally ordered macroporous LaFeO3 with enhanced methanol gas sensing properties. Sens Actuators B Chem 209:706–713
6. Liu X, Zhang J, Wang L, Yang T, Guo X, Wu S et al (2010) 3D hierarchically porous ZnO structures and their functionalization by Au nanoparticles for gas sensors. J Mater Chem 21(2):349
7. Hassan MM, Khan W, Naqvi AH, Mishra P, Islam SS (2014) Fe dopants enhancing ethanol sensitivity of ZnO thin film deposited by RF magnetron sputtering. J Mater Sci 49(18):6248–6256
8. Wang JQ, Wu XH, Wu YY, Yuan SS, Xu YM, Chen XB et al (2013) Fabrication and characterization of tin oxide inverse opal by template method. Key Eng Mater 562–565:18–21
9. Guo J, Zhang J, Zhu M, Ju D, Xu H, Cao B (2014) High-performance gas sensor based on ZnO nanowires functionalized by Au nanoparticles. Sens Actuators B Chem 199:339–345
10. Hugon O, Sauvan M, Benech P, Pijolat C, Lefebvre F (2000) Gas separation with a zeolite filter, application to the selectivity enhancement of chemical sensors. Sens Actuators B Chem 67(3):235–243
11. Jones AW, Andersson L (2003) Comparison of ethanol concentrations in venous blood and end-expired breath during a controlled drinking study. Forensic Sci Int 132(1):18–25

FiberEUse: A Funded Project Towards the Reuse of the End-of-Life Fiber Reinforced Composites with Nondestructive Inspection

Yijun Yan1, Andrew Young1, Jinchang Ren1(&), James Windmill1, Winifred L. Ijomah2, and Tariq Durrani1

1 Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow, UK
[email protected]
2 Department of Design, Manufacture and Engineering Management, University of Strathclyde, Glasgow, UK

Abstract. FiberEUse is a €9.8 million research project funded by the European Union since June 2017, involving 20 partners from 7 EU countries. It aims at developing innovative solutions that enhance the profitability of recycling glass and carbon fiber reinforced polymer composites (GFRP and CFRP) and of reusing them in added-value products and high-tech applications. There are three main tasks: (i) mechanical recycling of short GFRP, (ii) thermal recycling of long fibers (both GFRP and CFRP), and (iii) inspection, repair and remanufacturing of end-of-life (EoL) GFRP/CFRP products. As one of the partners, our main objective is to design a nondestructive testing (NDT) method for recycled/repaired/remanufactured CFRP products based on hyperspectral imagery (HSI). In this paper, we introduce the use of hyperspectral imaging for erosion detection in different materials. Our previous work on metal corrosion estimation is discussed first, and then the idea behind the current work is presented. The experimental setup of both works is illustrated, and more details of our strategy are provided together with future development directions.

 Hyperspectral imagery  FiberEUSe 

1 Introduction Glass and carbon fiber reinforced polymer composites (GFRP and CFRP) have been widely used as structural materials in many areas such as aerospace, marine, transport, sports and civil engineering industries (Fig. 1) due to their better lightweight, higher stiffness, strength and damping properties compared to metals. Composites are relatively young and intrinsically durable materials, but composite-based components or products have a limited lifetime, which is usually between 20 and 30 years. For example, the lifecycle of a wind turbine does not exceed 20–25 years, the average life time for recreational boats and car bodies is about 10 years. With the growing demand for composites in industrial manufacturing, correct waste management becomes a more and more urgent issue for industry. Current waste management is mostly landfilling © Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 1541–1547, 2020 https://doi.org/10.1007/978-981-13-9409-6_185

1542

Y. Yan et al.

which is still a relatively cheap option, but it is not the most optimal way and doesn’t comply with the Waste Framework Directive (2008/98/EC). In the future, the landfilling will become unviable because of the higher legislation-driven cost. Several countries, e.g. Germany and Austria, have already forbidden landfilling of composite waste, and other EU countries will follow suit soon.

CFRP Sports/Leisure 14%

Civil Pressure Other engineering vessels 7% 5% 5% Wind turbine 14%

Moulding and compound 12% AutomoƟve 11% Marine 2%

Aerospace 30%

GFRP Other 1%

Sports/Leisure 15%

Transport 35%

Electronics 15%

ConstrucƟon 34%

Fig. 1. CFRP and GFRP production in Europe for different industries [2]

For EoL composites, recycling and reuse are the best solution, which not only complies with environmental regulations but also benefits both end users and stakeholders. However, there are many barriers, such as the negative perception of recycled products, the lack of suitable business models, immature recycling techniques, and the limited synergistic use of available inspection, repair and reprocessing technologies. To address these challenges, the FiberEUse project aims to develop effective recycling, remanufacturing and inspection solutions and profitable reuse options for EoL composites (Fig. 2) through the integration of innovative remanufacturing technologies.

Fig. 2. The concept of FiberEUse. Source http://fibereuse.eu/


As one of the most important modules in this project, non-destructive testing is used to provide an efficient inspection that assures fast and reliable remanufacturing. Existing non-intrusive inspection techniques include visible cameras, ultrasound, thermal imaging and laser; they deliver information about the physical integrity of structures but not about material composition. To this end, hyperspectral imaging provides a unique way to tackle this challenge. By capturing spectral data over a wide frequency range along with the spatial information, hyperspectral imaging can detect minor differences in temperature, moisture and chemical composition. As a result, it has been successfully applied in a number of emerging applications such as food and drinks [3, 4], remote sensing [5, 6], clinical medicine [7], art verification [8], etc. However, the application of hyperspectral imaging in manufacturing is rare, though some limited work on plastic characterization has been reported [9, 10]. In this paper, the strategy of applying hyperspectral imaging in manufacturing is introduced: our previous work on metal erosion detection is discussed first, and then how to transfer the same idea from metals to composites is described. The outline of this paper is as follows: Sect. 2 introduces hyperspectral imaging and the data acquisition of our previous and current work. Section 3 describes our previous work on metal corrosion detection. Section 4 presents the way to build the erosion index map for wind turbine blade inspection. Finally, some concluding remarks and future work are summarized in Sect. 5.

2 Hyperspectral Imaging and Data Acquisition

A standard RGB image includes three bands, red, green and blue (approximately 650, 550 and 450 nm, respectively). A hyperspectral image has hundreds of bands, and each pixel consists of hundreds of spectral values (Fig. 3). As a result, it provides a 2-D spatial scene with a 1-D spectral domain, which makes it popular in many fields. In the following two works, the hyperspectral data is acquired by an Innospec Red Eye 1.7 (Fig. 4a), a near-infrared system operating in the spectral range of 950–1700 nm and outputting 256 bands over this range. As the camera is a line-scan device, it is necessary to move either the camera or the sample at a constant speed in order to scan the whole surface of the sample. For metal corrosion inspection, as the metal samples are small, we fixed the position of the camera and put the sample on the mechanical movable system (Fig. 4b).

Fig. 3. Illustration of hyperspectral imaging [1]: a data cube of height (H) × width (W) pixels with B spectral bands


For the wind turbine blade inspection, as our mechanical movable system cannot hold the blade, we fixed the position of the blade and moved the camera over the full length of the blade to acquire the image (Fig. 4c).

Fig. 4. Hyperspectral camera (a), experimental setup for metal corrosion detection with horizontal view angle (b) and wind turbine blade corrosion detection with vertical view angle (c)
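A hyperspectral acquisition of this kind is naturally stored as a three-dimensional cube of shape (height, width, bands). The short Python sketch below, with made-up dimensions and random data, is only meant to illustrate how a single-pixel spectrum and a single-band image are sliced from such a cube; it is not part of the acquisition software.

```python
import numpy as np

H, W, B = 480, 640, 256                  # spatial size and number of bands (256 for the Red Eye 1.7)
cube = np.random.rand(H, W, B)           # stand-in for a calibrated reflectance cube

pixel_spectrum = cube[100, 200, :]       # 1-D spectrum (B values) of one pixel
band_image = cube[:, :, 128]             # 2-D image at one spectral band
mean_spectrum = cube.reshape(-1, B).mean(axis=0)   # average spectrum over the scene
```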

3 Inspection of Metal Corrosion

The objective of our previous work is to measure the corrosion resistance of metal and then produce a corrosion index map that describes the severity of the damage. First, we prepare saline solutions and immerse the metal samples in them for 2, 4, and 6 h. From the visible images in Fig. 5, it can be seen that the corrosion on the surface of the metal samples worsens with increasing immersion time. However, it is unknown whether other corroded regions exist that are not visible in RGB images. Hyperspectral imaging can detect minor differences in chemical properties, which motivated us to apply hyperspectral imagery in this project. To extract the optimal band information from the 256 bands, PCA is employed, and the first and second components are used to generate the final corrosion index map. In the second row of Fig. 5, severely corroded areas appear very bright, and the colour of the invisibly corroded areas changes from dark blue to light blue.

Fig. 5. Visible image (first row) and corrosion index map (second row) for the original sample and after 2, 4, and 6 h of immersion


Therefore, it can be concluded that, besides the visibly corroded regions, the regions whose corrosion is invisible in the visible spectrum also corrode further as the immersion time grows.
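A minimal sketch of this index-map computation: the cube is unfolded into a pixel-by-band matrix, PCA extracts the first two components, and their combination is reshaped back into a spatial map. The function name and the way the two components are combined are assumptions made for illustration, not the exact processing chain used in the project.

```python
import numpy as np
from sklearn.decomposition import PCA

def corrosion_index_map(cube):
    """cube: (H, W, B) hyperspectral image -> (H, W) index map from the first two PCs."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    pca = PCA(n_components=2)
    scores = pca.fit_transform(X)                             # (H*W, 2) component scores
    index = np.linalg.norm(scores, axis=1)                    # combine PC1 and PC2 (one possible choice)
    index = (index - index.min()) / (np.ptp(index) + 1e-12)   # normalise to [0, 1]
    return index.reshape(H, W)
```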

4 Inspection of Erosion on Wind Turbine Blade

The success of our previous work on metal corrosion inspection gives us confidence for the inspection of CFRP. In this section, the feasibility of inspecting a selected CFRP product (a wind turbine blade) is investigated. A wind turbine usually operates outdoors for many years. Due to extreme conditions such as icing, hailstones and salt spray [11], erosion of the outer laminate of the blade becomes one of the biggest problems [12]. From Fig. 7a, it can be seen that light erosion occurs on the first (outermost) layer of laminate and is hardly recognizable with the naked eye. Without effective action to stop it, the erosion worsens and corrodes down to the second and third layers of laminate, the pink (Fig. 7b) and grey (Fig. 7c) laminate, respectively. Finally, the erosion damage penetrates through all layers of laminate down to the composite [13]. Erosion detection at an early stage therefore becomes an urgent task. As with metal, regions with light erosion on the surface of the blade are also invisible. Therefore, inspired by our previous work, hyperspectral imagery is applied to generate the erosion index map. The workflow of this module includes the following steps: (i) data acquisition, (ii) band selection, (iii) image flattening and (iv) index map calculation. Following the setup shown in Fig. 4c, we acquire the hyperspectral data. Then, through spectrum analysis (Fig. 6), the bands between 1154 and 1233 nm are selected because they maximize the contrast between the different erosion levels. The erosion index map shown in Fig. 7d is generated after image flattening and surface subtraction. On the erosion index map, light blue indicates that the laminate is healthy, dark blue indicates light erosion of the outermost or even the pink laminate, and yellow and red denote heavy erosion that has already corroded down to the grey laminate or even the composite.
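The four-step workflow can be sketched as follows: average the bands in the 1154–1233 nm window, flatten the result by subtracting a fitted low-order surface, and use the residual as the erosion index. The wavelength handling and the first-order plane fit used for flattening are assumptions made for illustration, not the project's exact implementation.

```python
import numpy as np

def erosion_index_map(cube, wavelengths, band_range=(1154.0, 1233.0)):
    """cube: (H, W, B) reflectance cube; wavelengths: (B,) array in nm -> (H, W) index map."""
    sel = (wavelengths >= band_range[0]) & (wavelengths <= band_range[1])
    img = cube[:, :, sel].mean(axis=2)                 # average over the selected bands

    # Flatten: fit and subtract a first-order surface a*x + b*y + c to remove blade curvature
    H, W = img.shape
    yy, xx = np.mgrid[0:H, 0:W]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(H * W)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    surface = (A @ coeffs).reshape(H, W)
    return img - surface                               # residual used as the erosion index
```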

Fig. 6. Average contrast between three levels of erosion


Fig. 7. Corrosion type and corrosion index map: (a) light erosion, (b) pink laminate, (c) gray laminate, (d) corrosion index map

5 Conclusion

Hyperspectral imaging has drawn more and more attention due to its capability to detect minor and invisible differences beyond what can be done with RGB cameras alone. It has been applied in many fields, including remote sensing, precision agriculture, condition monitoring and pharmaceutical quality control, but it is rarely used in industrial manufacturing. This paper not only shows that hyperspectral imagery is useful and effective for erosion detection on metals and composites, but also provides an innovative composite-inspection strategy for the FiberEUse project. Four key tasks to be accomplished in the future are summarized as follows: (1) improvement of state-of-the-art band selection methods [14, 15] for accurate calculation of the erosion index map; (2) AI-driven methods for damage-type classification of fiber composite materials; (3) evaluation of the coating performance of recycled fiber composite materials; (4) analysis of the physical and chemical characteristics of fiber composite materials through hyperspectral imagery.

Acknowledgements. The authors wish to thank the support from the EU-H2020 Project FiberEUse (GA No. H2020-730323-1): Large scale demonstration of new circular economy value-chains based on the reuse of end-of-life fiber reinforced composites (Web: http://fibereuse.eu/).


References
1. Zabalza J (2015) Feature extraction and data reduction for hyperspectral remote sensing earth observation. University of Strathclyde
2. Ramakrishnan M, Rajan G, Semenova Y, Farrell G (2016) Overview of fiber optic sensor technologies for strain/temperature sensing applications in composite materials. Sensors 16:99
3. Tschannerl J, Ren J, Jack F, Krause J, Zhao H, Huang W et al (2019) Potential of UV and SWIR hyperspectral imaging for determination of levels of phenolic flavour compounds in peated barley malt. Food Chem 270:105-112
4. Qiao T, Ren J, Craigie C, Zabalza J, Maltin C, Marshall S (2015) Singular spectrum analysis for improving hyperspectral imaging based beef eating quality evaluation. Comput Electron Agric 115:21-25
5. Sun H, Ren J, Zhao H, Yan Y, Zabalza J, Marshall S (2019) Superpixel based feature specific sparse representation for spectral-spatial classification of hyperspectral images. Remote Sens 11:536
6. Cao F, Yang Z, Ren J, Ling W-K, Zhao H, Sun M et al (2018) Sparse representation-based augmented multinomial logistic extreme learning machine with weighted composite features for spectral-spatial classification of hyperspectral images. IEEE Trans Geosci Remote Sens, 1-17
7. Md Noor S, Ren J, Marshall S, Michael K (2017) Hyperspectral image enhancement and mixture deep-learning classification of corneal epithelium injuries. Sensors 17:2644
8. Sun M, Zhang D, Wang Z, Ren J, Chai B, Sun J (2015) What's wrong with the murals at the Mogao Grottoes: a near-infrared hyperspectral imaging method. Sci Rep 5:14371
9. Balsi M, Esposito S, Moroni M (2018) Hyperspectral characterization of marine plastic litters. In: 2018 IEEE international workshop on metrology for the sea; learning to measure sea health parameters (MetroSea), pp 28-32
10. Galdón-Navarro B, Prats-Montalbán JM, Cubero S, Blasco J, Ferrer A (2018) Comparison of latent variable-based and artificial intelligence methods for impurity detection in PET recycling from NIR hyperspectral images. J Chemom 32:e2980
11. Rivkin D, Silk L (2012) Wind turbine operations, maintenance, diagnosis, and repair. Jones & Bartlett Publishers
12. Brøndsted P, Lilholt H, Lystrup A (2005) Composite materials for wind power turbine blades. Annu Rev Mater Res 35:505-538
13. Young A, Kay A, Marshall S, Torr R, Gray A (2016) Hyperspectral imaging for erosion detection in wind turbine blades
14. Tschannerl J, Ren J, Yuen P, Sun G, Zhao H, Yang Z et al (2019) MIMR-DGSA: unsupervised hyperspectral band selection based on information theory and a modified discrete gravitational search algorithm. Inf Fusion 51:189-200
15. Tschannerl J, Ren J, Zabalza J, Marshall S (2018) Segmented autoencoders for unsupervised embedded hyperspectral band selection. In: 2018 7th European workshop on visual information processing (EUVIP), pp 1-6

Autonomous Mission Planning and Scheduling Strategy for Data Transmission of Deep-Space Missions

Jionghui Li(B), Liying Zhu, Shi Liu, Xiongwen He, and Xiaofeng Zhang

Beijing Institute of Spacecraft System Engineering, Beijing 100094, China
[email protected]

Abstract. For deep-space missions far away from the Earth, conventional command-dependent mission operation and control is challenged by the long delay of the uplink. Under these circumstances, autonomous planning and scheduling attracts more and more attention. As a critical subsystem of a spacecraft, the data transmission system is a key node of autonomous planning and scheduling. This paper discusses the strategy of autonomous mission planning and scheduling for deep-space data transmission tasks. First, from a system-level analysis, four planning requests are proposed, covering the problems of link establishment, antenna pointing, data rate and storage. Then, planning strategy models as well as mathematical models are built for each request. The strategy model clarifies the planning elements, namely the “plan period”, “constraints” and “activities”. The discussed models are not separate; the relations and connections among the four models are illustrated to form an integrated planning logic.

1 Introduction

Following the development of satellite applications, crewed spacecraft and space stations, exploring deep space is the natural next step of space technology. Exploration probes are the vehicles of deep-space missions, whose targets include the Moon, the planets and their satellites, asteroids, comets and other astronomical objects. Conventional mission operation and control relies on commands sent from the ground through the TT&C uplink. However, due to the extreme length of the transmission links, commands from the ground experience significant delay, which is intolerable in some cases. Under these circumstances, autonomous planning and scheduling attracts attention. Several NASA missions have implemented mission planning systems, such as SPIFE [1], Mr SPOCK [2] and MAPGEN [4], a system used for Mars missions.

© Springer Nature Singapore Pte Ltd. 2020
Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 1548–1557, 2020
https://doi.org/10.1007/978-981-13-9409-6_186


Mission planning and scheduling are integral parts of mission operations and are closely related to the other aspects of the overall monitoring and control of space missions [3]. The data transmission system is an indispensable subsystem of a deep-space mission, since it is the only link connecting the exploration probe, whether crewed or unmanned, to the ground. The exploration results are transmitted to the deep-space stations on Earth through the downlink data transmission links. Autonomous planning and scheduling relies on an encoding of possible actions in the domain [5], which specifies the three elements of the planning strategy model: “constraints”, “plan period” and “activities”. Correspondingly, the planning consists of three main mechanisms: goaling, task decomposition and conflict analysis. In this paper, we discuss the autonomous planning and scheduling strategy of the data transmission task in the normal operating mode, providing strategy models for the detailed requests on the basis of task and constraint analysis.

Fig. 1. Mission planning functions

2 Planning Request of Data Transmission Task

Many subsystems are involved in the autonomous mission planning and scheduling of a spacecraft, and any planning event may relate to more than one subsystem. For example, the data transmission subsystem, the data storage subsystem and the power subsystem are coupled in planning events. As shown in Fig. 1, the Planning function takes planning requests as input and is responsible for performing mission planning and scheduling. The output of the Planning function is plans, which are distributed to the Plan Execution functions. The Planning function may receive orbital information, attitude or slew time information as input. It may also obtain support from and perform the necessary negotiation with ground stations through TT&C [3].


Power is the main constraint for a deep-space mission. Therefore, in the normal operating mode of the spacecraft, the data transmission task should be well planned with respect to the power and storage distribution, aiming at the optimal utilization of these precious and limited resources. Due to possible occlusions between the spacecraft and the Earth, the data transmission link may not be available all the time. Moreover, since data transmission is a high-power task, if another power-consuming task has higher priority, data transmission may be interrupted or switched to a lower-power mode in order to ensure enough power for the more important functions. Hence, the first planning request of the data transmission task is to autonomously decide whether the data transmission link can be set up, and with which working mode, during the mission period. If the link can be set up, antenna control is the next planning request. It is desirable that the on-board high-gain antenna (HGA) can autonomously and rapidly adjust so that its beam points accurately at the Earth. To satisfy this request, coordination with the GNC subsystem as well as the design of the antenna pointing mechanism should be considered. In order to better exploit the link and power resources, variable-rate transmission is desirable for deep-space missions, and this is the third planning request. During the mission period, the spacecraft should autonomously determine the optimal transmission rate and autonomously switch to it. Besides, the data transmission task and the data storage task are tightly coupled. The data recorders capture high-rate data and play it back over rate-limited links; while data transmission is interrupted, the data has to be recorded in the on-board storage. Therefore, the scheduling of storage is another planning request of the data transmission task. As stated above, this paper discusses the autonomous planning and scheduling strategy of the data transmission task by modelling the following four requests:
1. The autonomous establishment of the downlink data transmission link. Through autonomous planning and scheduling, it is required to decide whether the downlink should be set up and to select the proper transmission power mode; otherwise, the on-board data transmitter remains in low-power standby or shut-down mode.
2. The autonomous planning of antenna pointing control. If the HGA cannot provide ground pointing, the strategy autonomously switches to a lower-gain antenna with a wider cone of coverage.
3. The autonomous planning and scheduling of variable-rate transmission. On the premise of guaranteeing the communication quality, the strategy is to improve the transmission data rate.
4. The autonomous planning of the data storage schedule. The strategy is to keep the balance between the data input and output of the storage.

3 Problem Modeling

For deep-space probes, it is impossible to expand the on-board resources after launch. All activities have to be carried out with the fixed on-board resources, including computing capacity, storage, power, etc. For each of the four requests listed above, the three planning elements, the mathematical model and the constraints are analyzed and specified in this section.

3.1 Link Establishment (Model 1)

The problem of link establishment can be modeled as a logical judgement: within the planning period, the downlink data transmission link can be built when and only when all the constraints are satisfied. The three elements are specified as follows.
Planning Period: the mission period in which the data transmission subsystem is enabled, defined as the set Tp.
Constraints:
1. Visibility constraint: the orbital position of the spacecraft has to be visible to the ground station in order to establish a data transmission link. Since the locations of the ground stations are fixed, the visibility range can be modeled, based on the mission design, as a known set Pa. From the orbital information provided by the Navigation function, the orbital position of the spacecraft at time t0 is denoted pt0. Visibility to the ground station is then modeled as pt0 ∈ Pa.
2. Power constraint: normally, the data transmission system offers working modes with different power consumption; higher power consumption provides higher EIRP with the same antenna. However, as an integrated system, the power distribution is a spacecraft-level strategy. The premise of link establishment is that the power distributed to the data transmission system is no less than that of the lowest working configuration, PL. As the available power increases, a higher working mode can be enabled. Note that commands received through the TT&C uplink, or working modes pre-settled in the flight program, carry a higher weight in the determination.
Activities: the logic of the planning activities of link establishment is illustrated in Fig. 2. It includes: (1) judging the visibility of the spacecraft to the ground stations; (2) determining the working mode based on the power distribution or on commands from the ground; (3) if the spacecraft is visible and sufficient power is distributed, turning on the data transmission subsystem with the proper working mode; otherwise, the data transmission subsystem remains in power-off or standby mode.

Fig. 2. The logical scheduling of Model 1 planning activities

Based on the definitions above, denote the available power distributed to the data transmission system at an on-orbit time t0 as PDT(t0). The mathematical model of the link establishment problem is written as

Set Enable_DTL = 1
s.t. pt0 ∈ Pa, PDT(t0) > PL.

Enable_DTL = 1 represents that the link can be enabled. If there are no uplinked commands or pre-settled working modes in the flight program, the mode selection can be defined as an optimization problem:

max Pmode
s.t. Enable_DTL = 1, Pmode ≤ PDT(t0),

where Pmode denotes the transmit amplifier output power of the selected working mode.
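A minimal sketch of this decision logic is given below, assuming simple Python data structures for the visibility set and the mode table; it is illustrative only and not flight software.

```python
# Illustrative sketch of the Model 1 decision logic (link establishment and
# working-mode selection); the data structures are assumptions, not flight code.
def plan_link_establishment(position, visible_region, p_available, mode_powers):
    """Return (enable_link, selected_mode) at on-orbit time t0.

    position       -- orbital position p_t0 from the Navigation function
    visible_region -- set-like object modelling P_a (visibility to ground stations)
    p_available    -- power currently distributed to data transmission, P_DT(t0)
    mode_powers    -- dict {mode_name: transmit amplifier output power P_mode}
    """
    p_lowest = min(mode_powers.values())          # P_L, lowest working configuration
    if position not in visible_region or p_available <= p_lowest:
        return False, None                        # transmitter stays off / standby

    # max P_mode subject to P_mode <= P_DT(t0)
    feasible = {m: p for m, p in mode_powers.items() if p <= p_available}
    selected = max(feasible, key=feasible.get)
    return True, selected
```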

3.2 Antenna Pointing Control (Model 2)

In most cases, multiple antennas share the data transmission task. The high-gain antennas (HGA) have narrow beamwidths. Therefore, in order to maximize the intensity of radiation, it is desirable to point the antenna in the preferred direction, either through an antenna pointing mechanism or, for a body-fixed HGA, through spacecraft attitude adjustment.


When the HGA is not able to keep the Earth in its cone of coverage, the system should switch to a medium-gain antenna (MGA) or a low-gain antenna (LGA) to support lower-rate transmission, depending on the spacecraft configuration.
Planning Period: the mission period in which the data transmission subsystem is enabled, Tp.
Constraints:
1. The data transmission link is enabled.
2. For a deep-space mission, data transmission with the HGA normally starts when the spacecraft reaches a certain distance. In order to transmit data through the HGA, the Earth must be covered by its beam cone, as shown in Fig. 3. Consider the HGA beamwidth θ, the distance R1 of the spacecraft from the Earth, and the precision of the spacecraft position estimation, and assume that the position estimation error at distance R1 corresponds to an angle σ2. The maximum allowed antenna pointing misalignment angle that still ensures data transmission with the HGA is then σ1 = θ − σ2.

Fig. 3. The illustration of HGA coverage request

Fig. 4. The logical scheduling of Model 2 planning activities


Activities: the schedule of the planning activities of antenna pointing control is illustrated in Fig. 4. It includes:
1. When the data transmission link is enabled, the antenna mechanism subsystem controls the antenna pointing rotation (φx, φy, φz) based on navigation and attitude information. The pointing alignment should be sufficiently precise, with a maximum misalignment no larger than σ1.
2. If the required HGA rotation cannot be achieved, switch to the MGA or an omnidirectional LGA to undertake the data transmission task. Define the angle between the spacecraft-centroid-to-Earth line and the installation axis of the LGA as the γ angle; its change reflects the LGA coverage of the Earth. If the γ angle is between 90° and 180°, the Earth station is covered by the omnidirectional LGA installed on the -axis side; if the γ angle is between 0° and 90°, the Earth station is covered by the omnidirectional LGA installed on the +axis side.
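The antenna-selection rule can be sketched as follows; the function and its arguments are illustrative assumptions that mirror the σ1 threshold and the γ-angle rule described above (the MGA option is folded into the fallback branch for brevity).

```python
# Minimal sketch of the Model 2 antenna-selection logic; purely illustrative.
def select_antenna(misalignment, theta, sigma2, gamma_deg):
    sigma1 = theta - sigma2            # maximum allowed HGA pointing misalignment
    if misalignment <= sigma1:
        return "HGA"                   # pointing control keeps the Earth in the beam
    # HGA rotation cannot be achieved: fall back to MGA or an omnidirectional LGA,
    # chosen here from the gamma angle between the centroid-Earth line and the LGA axis
    return "LGA(-axis side)" if 90.0 <= gamma_deg <= 180.0 else "LGA(+axis side)"
```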

3.3 Variable Rate Transmission (Model 3)

The link condition of a deep-space mission varies over a large range. Therefore, variable-rate transmission is required to fully exploit the limited link resources. The transmission rate results from a synthetic consideration of the system configuration. The inputs of the planning function include the selected working mode, the bit error rate (BER) requirement, the link length obtained from navigation information, and the designed lowest/highest data rates.
Planning Period: the mission period in which the data transmission subsystem is enabled, Tp.
Constraints:
1. The data transmission link is enabled.
2. The transmit amplifier output power Pt is decided by the selected working mode, Pmode.
3. The data rate is limited by the link budget; different antennas in use provide different gains.
4. To ensure communication quality, the BER requirement has to be satisfied, for example BER ≤ 10^−6.
Under a fixed coding and modulation scheme, the data rate Rb is proportional to the transmission symbol rate Rs, Rb = M·C·Rs, where M and C are the modulation order and coding rate respectively. Therefore, the optimization of Rb is equivalent to the optimization of Rs. In the downlink budget calculation,

[Rs] = [EIRP] − [Losses] + [G/T] − [k] − [Es/N0] − [LM],

where [·] denotes decibel notation. Losses is the total link loss, which contains the feeder loss, atmospheric loss, polarization mismatch loss and free-space spreading loss. Among these, the free-space spreading loss [FSL] is related to the link length and the carrier frequency:

[FSL] = 32.4 + 20 lg D + 20 lg fc,


where D denotes the link length, which can be obtained from the navigation information, and the carrier frequency fc is a fixed parameter. The equivalent isotropic radiated power (EIRP) in decibel notation is

[EIRP] = [Pt] + [Gt],

where [EIRP] and [Pt] are in dBW and the spacecraft transmit antenna gain [Gt] is in dB. The worst-case gain Gselected of the selected antenna is used for planning. Assume the designed lowest and highest data rates are RL and RH respectively. Then, at an on-orbit time t0 ∈ Tp, the mathematical model of the variable rate transmission problem can be written as

max Rb(t0)
s.t. Enable_DTL = 1, RL ≤ Rb(t0) ≤ RH, BER ≤ 10^−6, Pt = Pmode, D = D(t0), Gt = Gselected.

Activities:
1. Through the link budget calculation with the constraints above, the optimized data rate at time t0 is determined.
2. If RL ≤ Rb(t0) ≤ RH is satisfied, the data transmission rate is set to Rb(t0). If Rb(t0) ≥ RH, RH is set as the data transmission rate. If Rb(t0) ≤ RL, the link condition cannot satisfy even the lowest data transmission request; data transmission halts or higher power is requested. A hedged numerical sketch of this calculation follows.
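The sketch below evaluates the link budget numerically under the usual convention that the 32.4 dB constant corresponds to D in km and fc in MHz; all loss, G/T, Es/N0 and margin values passed in are placeholder assumptions, not mission figures.

```python
# Hedged numerical sketch of the Model 3 link-budget calculation.
import math

def plan_data_rate(d_km, f_mhz, p_t_dbw, g_t_db, gt_over_t_db,
                   esn0_req_db, margin_db, other_losses_db,
                   mod_order_bits, code_rate, r_low, r_high):
    fsl_db = 32.4 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)   # [FSL]
    eirp_dbw = p_t_dbw + g_t_db                                      # [EIRP] = [Pt] + [Gt]
    k_dbws = -228.6                                                  # Boltzmann constant in dBW/(K*Hz)
    # [Rs] = [EIRP] - [Losses] + [G/T] - [k] - [Es/N0] - [LM]
    rs_dbhz = (eirp_dbw - (fsl_db + other_losses_db) + gt_over_t_db
               - k_dbws - esn0_req_db - margin_db)
    rb = mod_order_bits * code_rate * 10 ** (rs_dbhz / 10.0)         # Rb = M * C * Rs
    if rb < r_low:
        return None        # link cannot support the lowest rate: halt or request more power
    return min(rb, r_high)
```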

3.4 Storage Scheduling (Model 4)

The storage scheduling is tightly coupled with the data transmission task. In particular, when the data transmission task is interrupted, data needs to be recorded in the storage for later transmission.
Planning Period: the working period of the exploration payload; sometimes telemetry data also needs to be recorded.
Constraints:
1. Whether the data transmission link is enabled or not.
2. The data transmission rate.
3. The available storage space.

Fig. 5. The logical scheduling of Model 4 planning activities

Activities: the autonomous planning of storage contains two main activities:
1. Autonomous recording: before recording, the storage manager judges whether there is enough storage space to execute the request. If the space is insufficient, the recording task cannot be executed; otherwise, the data is recorded for later transmission.
2. Autonomous play-back: once the data transmission link is built up, the recorded data is autonomously played back until the storage is empty or data transmission halts. After playing back the recorded data, the storage is autonomously wiped in order to free the space.
The storage scheduling is illustrated in Fig. 5.
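A compact sketch of the recording and play-back decisions in Fig. 5 follows; the parameters and the returned action labels are assumptions used only to mirror the flowchart.

```python
# Illustrative sketch of the Model 4 recording / play-back decisions.
def schedule_storage(link_enabled, free_space, request_size, played_back):
    actions = []
    if link_enabled:
        actions.append("play back recorded data")      # until storage empty or link halts
        actions.append("wipe played-back data")        # free the space autonomously
    if free_space >= request_size:
        actions.append("record new payload data")      # plenty of storage: accept the request
    elif played_back:
        actions.append("wipe played-back data, then record")
    else:
        actions.append("reject recording request")     # insufficient space, nothing to wipe
    return actions
```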

4 Logical Relations and Connections of Planning Models

The four planning requests are logically related: the output of one planner can be the input of another. Therefore, as a whole subsystem, the problems should be solved as a unified integrity. Corresponding to Fig. 1, the spacecraft on-orbit position, the antenna pointing angle and the link length can be acquired from navigation information. Figure 6 illustrates the logical relations and connections of the four planning models proposed above. The on-orbit position and the distributed power are inputs of the planners for Model 1 and Model 3. The outputs of Model 1, i.e. link establishment and working mode, act as key inputs of Model 2, Model 3 and Model 4. The output of Model 2, i.e. the antenna gain, is also an input of Model 3. The rate output of Model 3 acts as one of the constraints of Model 4.
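Purely as an illustration of these connections, the sketch below chains the four illustrative planner functions from the previous sections; the configuration objects and attribute names are assumptions, not an actual on-board interface.

```python
# Sketch of chaining the four planners, mirroring Fig. 6; uses the illustrative
# functions sketched earlier in this section (assumptions, not a real API).
def plan_data_transmission(nav, power_dist, antenna_cfg, link_cfg, storage):
    # Model 1: link establishment and working mode
    enable, mode = plan_link_establishment(nav.position, nav.visible_region,
                                           power_dist, link_cfg.mode_powers)
    if not enable:
        # Model 4 only: record while the link is down
        return None, schedule_storage(False, storage.free, storage.request,
                                      storage.played_back)
    # Model 2: antenna selection feeds the gain used by Model 3
    antenna = select_antenna(nav.misalignment, antenna_cfg.theta,
                             antenna_cfg.sigma2, nav.gamma_deg)
    # Model 3: data-rate planning under the selected mode and antenna gain
    rate = plan_data_rate(nav.distance_km, link_cfg.f_mhz, link_cfg.mode_powers[mode],
                          antenna_cfg.gain_db[antenna], link_cfg.gt_over_t_db,
                          link_cfg.esn0_req_db, link_cfg.margin_db, link_cfg.losses_db,
                          link_cfg.mod_bits, link_cfg.code_rate,
                          link_cfg.r_low, link_cfg.r_high)
    # Model 4: storage scheduling constrained by the planned rate / link state
    return rate, schedule_storage(rate is not None, storage.free,
                                  storage.request, storage.played_back)
```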

5 Conclusion

Due to the extreme distance of deep-space spacecraft, autonomous planning and scheduling is an effective way to increase the AI level of the spacecraft, in order


Fig. 6. The logical relations and connections of planning models

to flexibly deal with the unpredictable deep-space environment and events. In this paper, we analyzed the planning and scheduling strategy for the data transmission task to increase autonomy in the normal operating mode. The purpose of this strategy is to clarify the elements of the autonomous planning models for the specified planning requests, namely the “plan period”, “constraints” and “activities”. Regarding the data transmission task, the planning requests considered in this paper include the autonomous establishment of the downlink data transmission link, the autonomous antenna pointing control, the autonomous scheduling of variable-rate transmission and the autonomous scheduling of data storage. Mathematical models are abstracted from the detailed requests, clarifying the objective functions. Finally, the logical relations and connections of the four considered planning models are illustrated. The autonomous functions have to be achieved as a unified whole.

References
1. Aghevli A, Bencomo A, McCurdy M (2011) Scheduling and planning interface for exploration. In: Proceedings of the international conference on automated planning and scheduling (ICAPS 2011), Freiburg
2. Cesta A, Cortellessa G, Fratini S et al (2011) Mr SPOCK: steps in developing an end-to-end space application. Comput Intell 27(1):83-102
3. CCSDS (2018) Mission planning and scheduling. CCSDS 529.0-G-1
4. Bresina JL, Jonsson AK, Morris PH et al (2005) Activity planning for the Mars exploration rovers. ICAPS, Monterey, pp 40-49
5. Chien S, Fisher F, Estlin T (2000) Automated software module reconfiguration through the use of artificial intelligence planning techniques. IEE Proc Softw 147(5):186-192

Preparation of TiO2 Nanotube Array Photoanode and Its Application in Three-Dimensional DSSC

Zhiwei Cui, J. R. An, and Y. W. Dou(&)

Heilongjiang University, Xuefu Road 74, Harbin 150080, China
[email protected], [email protected], [email protected]

Abstract. As a promising solar cell type, a three-dimensional (3D) DSSC was fabricated based on the traditional DSSC structure, using a TiO2 nanotube array (Ti-NTA) photoanode prepared on a titanium wire spiral coil as an alternative material. The effects of the preparation process parameters (electrolyte composition, oxidation voltage, oxidation duration, etc.) on the structure of the Ti-NTA were investigated, and the morphological and crystal structure analysis of the materials was carried out by scanning electron microscopy (SEM) and X-ray diffraction (XRD). Finally, a 3D DSSC was fabricated using the Ti-NTA prepared with the optimal process parameters as the photoanode, and the performance of the DSSC was tested.

Keywords: 3D DSSC · Anodizing · Ti-NTA

As a kind of renewable, non-polluting green energy, the solar cell is one of the most promising new energy sources and has received great attention from researchers [1]. There are different methods for solar cell production. Among them, the DSSC is a new way to use solar energy, with potential application in supplying power to miniature devices when the size of the DSSC is scaled down [2]. A typical DSSC is a layered "sandwich" structure consisting of a photoanode, an electrolyte and a counter electrode [3]. Conventionally, titanium foil is mostly employed to prepare the TiO2 photoanode, which limits the light-receiving area and the specific surface area (the two key factors for the adsorption of dye molecules) and causes a lower efficiency of absorbing solar energy [4]. Different from the traditional photoanode used in DSSCs, a 3D Ti-NTA prepared on a titanium (Ti) wire solenoid is adopted here as the photoanode to fabricate a new type of DSSC, which may improve the light absorption by increasing the light-receiving area.

1 Experimental Part

There are many different methods for the preparation of TiO2 nanofilms, such as sol-gel [5], hydrothermal and anodic oxidation [6], each with advantages and disadvantages under different circumstances. In this experiment, an anodic oxidation method was used to prepare the Ti-NTA. The processing flow of the overall experiment is shown in Fig. 1. The materials used in the experiment were titanium wire (99.99%), ammonium fluoride (analytical grade), ethylene glycol (analytical grade), deionized water and platinum wire (99.99%), all purchased from Tianjin Guangfu Fine Chemical Research Institute.

© Springer Nature Singapore Pte Ltd. 2020
Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 1558–1566, 2020
https://doi.org/10.1007/978-981-13-9409-6_187

Fig. 1. Processing flow of overall experiment

2 Results and Discussion

In this paper, the Ti-NTA photoanode was prepared on the Ti wire solenoid by anodizing. As is well known, the morphology and crystal structure of the Ti-NTA are key factors affecting the performance of the DSSC [7]. The morphology of Ti-NTA prepared by anodization is mainly affected by the oxidation voltage, oxidation duration, oxidation temperature and electrolyte composition [8], while the crystal structure is mainly influenced by the annealing temperature [9]. In this section, the morphology of the Ti-NTAs was observed by SEM and the influence of the preparation parameters on the morphology was analysed. Crystal structure analysis was also performed by XRD. Finally, the characteristics of the 3D DSSC made with the Ti-NTA photoanode were tested.

2.1 Effect of Preparation Parameters on Morphology of TiO2 Nanotubes

In this section, the effects of the ammonium fluoride content, water content, oxidation voltage and oxidation duration on the morphology of the TiO2 nanotubes during anodizing were studied.

2.1.1 Ammonium Fluoride Content in Electrolyte
To determine the effect of the ammonium fluoride concentration on the surface morphology of the Ti-NTA, the electrolyte composition except for ammonium fluoride was fixed (water 40 ml, ethylene glycol 500 ml), and the anodic oxidation conditions were: voltage 60 V, temperature 30 °C, duration 4 h. The surface morphologies of the titanium wire solenoid anodized under five different mass fractions of ammonium fluoride are shown in Fig. 2. It can be seen from Fig. 2 that TiO2 nanotubes were not formed in 0.1 wt% ammonium fluoride, and the surface of the titanium wire solenoid was covered by uniformly distributed corrosion pits. With the increase of the ammonium fluoride concentration, the prepared Ti-NTAs became uniformly distributed. When the concentration reached 0.5 wt%, part of the sidewalls began to corrode and fall off; as can be seen from Fig. 2a–e, the TiO2 nanotubes prepared with 0.3 wt% ammonium fluoride had the best morphology. As shown in Fig. 2f, with increasing ammonium fluoride concentration the diameter of the Ti-NTAs increased, while the thickness of the nanotube wall first increased and then decreased. The ammonium fluoride content had little effect on the length of the TiO2 nanotubes, which was about 5 µm for all contents.

Fig. 2. Surface and cross-sectional SEM images of Ti-NTAs prepared by different ammonium fluoride contents: a 0.1 wt%, b 0.2 wt%, c 0.3 wt%, d 0.4 wt%, e 0.5 wt%; f diameter and wall thickness variations


2.1.2 Water Content in the Electrolyte
To determine the effect of the water content on the surface morphology of the Ti-NTA, the electrolyte composition except for water was fixed (ammonium fluoride 0.3 wt%, ethylene glycol 500 ml), and the anodic oxidation conditions were: voltage 60 V, temperature 30 °C, duration 4 h. The morphology of the Ti-NTA was observed for different water contents (see Fig. 3). It can be clearly seen from Fig. 3a that when the water content in the electrolyte equals 2 vol.%, the surface of the nanotubes is covered with thick nanoscale "grass", and the length of the nanotubes below it is about 11.8 µm. When the water content increases to 4 vol.%, the surface of the nanotubes is still covered with thick nanoscale "grass", but with a more slender structure, and the length of the nanotubes is about 13.6 µm (see Fig. 3b). As the water content rises to 8 vol.%, the surface-covering "grass" is significantly reduced, and more than half of the surface exposes neat nanotube arrays with a length of 8.7 µm (see Fig. 3c). When the water content is 16 vol.%, the "grass" on the surface has disappeared, the nanotubes are completely exposed, and their length is only 3.4 µm (see Fig. 3d).

Fig. 3. SEM images of the surface and cross-sectional views of titanium nanotubes with different water volume fractions under 60 V, 30 °C and 4 h oxidation conditions: a 2 vol.%, b 4 vol.%, c 8 vol.%, d 16 vol.%, e length variations

The main reason for the phenomenon above is that when the water content in the electrolyte is low, the hydrolysis of NH4F is insufficient, which provides fewer F− ions and leads to low electrolyte conductivity. As is known, the growth rate of the nanotubes increases rapidly with the increasing conductivity of the electrolyte. When the water content in the electrolyte increases, the hydrolysis of NH4F also increases, which raises the conductivity of the electrolyte and, in turn, enhances the growth rate of the nanotubes (process 1). However, as the water content


increases further, the rate of surface corrosion of the nanotubes is also enhanced (process 2). The existence of "grass" on the surface of the Ti-NTA is decided by these two competing processes. At a lower water content, the growth rate of the nanotubes is lower, so the surface has more "grass" and the nanotubes are completely covered (corresponding to process 1). In contrast, when the water content is much higher, the rate of surface corrosion of the nanotubes also increases and process 2 becomes more significant than process 1. In this case the growth rate of the nanotubes becomes slower, the surface has less "grass" and the nanotubes are completely exposed, as can be seen from Fig. 3d.

Fig. 4. SEM images of the titanium oxide surface at different anodization voltages in an ammonium fluoride/ethylene glycol/water electrolyte (ammonium fluoride 0.3 wt%, ethylene glycol 500 ml, water 8 vol.%), 30 °C, oxidation for 4 h: a 40 V, b 50 V, c 60 V, d 70 V, e 80 V; f diameter and wall thickness variations

2.1.3 Oxidation Voltage
SEM images of Ti-NTAs prepared at different anodization voltages are shown in Fig. 4. It can be seen that the anodic oxidation voltage has a direct effect on the diameter and morphology of the Ti-NTAs. As the anodization voltage increases, the diameter of the nanotubes increases continuously, while the sidewall thickness does not change obviously. With increasing oxidation voltage, the "grass" on the surface of the nanotubes is gradually reduced and the nanotube orifices become completely exposed. The main reason is that as the oxidation voltage increases, the concentration of F− at the orifice also increases because of the stronger electric field force there, and the corrosion of the orifice becomes faster. With longer reaction duration the diameter of the nanotubes becomes larger and larger, as can be seen from Fig. 4a–e. At the same time, the electric field force acting on F− becomes smaller and smaller, the corrosion of the orifice levels off, and a smooth nanotube structure is obtained. The relationship between the sidewall thickness and the oxidation voltage is shown in Fig. 4f.


2.1.4 Oxidation Duration
The influence of the oxidation duration on the nanotube length was investigated by selecting the optimal anodization voltage of 60 V, an oxidation temperature of 30 °C and a water content of 8 vol.% based on the results above; the oxidation durations were set to 15 min, 30 min, 1 h, 2 h, 4 h, 12 h and 24 h, respectively. The lengths of the nanotubes grown under the various oxidation durations were analyzed and summarized. The SEM images of the nanotubes obtained under different oxidation durations are shown in Fig. 5. It can be seen that the length of the TiO2 nanotubes increases with the oxidation duration.

Fig. 5. SEM images of nanotube lengths at different oxidation durations at 60 V and 30 °C in an ammonium fluoride/ethylene glycol/water electrolyte (ammonium fluoride 0.3 wt%, ethylene glycol 500 ml, water 8 vol.%): a 15 min, b 30 min, c 1 h, d 2 h, e 4 h, f 12 h, g 24 h, h length variations

2.2 Annealing

Heat treatment was carried out on the TiO2 nanotubes prepared under the above anodizing conditions (0.3 wt% ammonium fluoride, 500 ml ethylene glycol, 8 vol.% water in the electrolyte, 30 °C, 60 V). The effect of the annealing temperature on the surface morphology of the TiO2 nanotubes was observed, and the crystal structures of the annealed nanotubes were analyzed by XRD; the results are shown in Fig. 6b.

Fig. 6. Surface SEM image of nanotube annealed at 500 °C (a) and XRD patterns of nanotubes annealed at 350, 400, 450, 500, 550 °C (b)

It can be seen from Fig. 6 that the Ti-NTA retains the same structure before and after annealing, and the annealed TiO2 nanotubes exhibit the characteristic diffraction peaks of anatase at 2θ values of 25.3°, 38.06°, 48.1°, 54.2° and 70.8°, indicating that the TiO2 nanotubes have been converted into the anatase phase. As the temperature increases, the peak intensities increase gradually, indicating that the degree of crystallization improves with temperature, while at the highest annealing temperature a small amount of anatase appears to have been converted to rutile. Because anatase has the better photoelectric conversion efficiency [10], 500 °C was chosen as the ideal annealing temperature.

2.3 Solar Cell Assembly and Performance Test

A 3D DSSC was made using the annealed Ti wire solenoid as the photoanode and a platinum wire as the cathode, with the dye sensitizer and the liquid electrolyte solution placed between the two electrodes. Finally, epoxy resin AB glue was used for encapsulation, and the characteristic parameters of the assembled DSSC were tested (Fig. 7).

Fig. 7. 3D dye-sensitized solar cell

The assembled 3D DSSC was placed in a solar simulator with a standard AM 1.5 solar spectral radiation distribution and an incident power density of 100 mW/cm2. The photo-induced current versus voltage of the DSSC was measured by linear voltammetry scanning with an electrochemical workstation. The growth parameters of the photoanode were: voltage 60 V, electrolyte of 0.3 wt% ammonium fluoride, 500 ml ethylene glycol and 8 vol.% water, and a duration of 24 h. The resulting J-V plot is shown in Fig. 8.

Fig. 8. Output J-V curve of stereo DSSC prepared by photoanode nanotube growth for 24 h

When the photoanode growth duration was 24 h, it was found that Voc = 0.51 V, Jsc = 5.8 mA/cm2, FF = 0.94 and η = 2.8%.
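For reference, these figures are mutually consistent under the standard definition of the conversion efficiency (the formula itself is not stated in the paper and is quoted here as an assumption):

\[ \eta = \frac{FF \cdot V_{oc} \cdot J_{sc}}{P_{in}} = \frac{0.94 \times 0.51\,\mathrm{V} \times 5.8\,\mathrm{mA/cm^2}}{100\,\mathrm{mW/cm^2}} \approx 2.8\% \]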

3 Conclusion

In this paper, an anodizing method was used for the preparation of Ti-NTA. The experimental results show that the processing parameters used in anodizing the Ti wire solenoid (anodizing voltage, anodizing duration and the contents of the different electrolyte components) had a


significant effect on the morphology of the Ti-NTA. Samples prepared under the various processing parameters had different Ti-NTA diameters and lengths; the most uniform and clean Ti-NTA was obtained at an operating temperature of 30 °C, an oxidation voltage of 60 V and an ammonium fluoride/ethylene glycol/water electrolyte (ammonium fluoride 0.3 wt%, ethylene glycol 500 ml, water 8 vol.%). In order to obtain the anatase phase and a better degree of crystallization, the Ti-NTA was annealed at 500 °C and analyzed by XRD, which indicated the existence of a highly crystallized anatase phase. A 3D DSSC was prepared using the Ti-NTA as the photoanode, and the following characteristics were obtained: Voc = 0.51 V, Jsc = 5.8 mA/cm2, FF = 0.94, η = 2.8%.

References
1. Jeyaraman AR, Balasingam SK (2019) Enhanced solar to electrical energy conversion of titania nanoparticles and nanotubes-based combined photoanodes for dye-sensitized solar cells. Mater Lett 243:180-182
2. Chamanzadeh Z, Noormohammadi M (2017) Enhanced photovoltaic performance of dye sensitized solar cell using TiO2 and ZnO nanoparticles on top of free standing Ti-NTAs. Mater Sci Semicond Process 61:107-113
3. Li J, Ma S, Liu X (2012) ZnO meso-mechano-thermo physical chemistry. Chem Rev 112:2833-2852
4. Tiwana P, Docampo P, Johnston MB, Snaith HJ (2014) Electron mobility and injection dynamics in mesoporous ZnO, SnO2, and TiO2 films used in dye-sensitized solar cells. ACS Nano 5(6):5158-5166
5. Hagfeldt A, Gratzel M (2012) Molecular photovoltaics. Acc Chem Res 33(5):269-277
6. Chiba Y, Islam A, Watanbe Y (2016) Dye sensitized solar cells with conversion efficiency of 11.1%. Jpn J Appl Phys 45(25):638-640
7. Hossain MA, Sehyun O (2017) Fabrication of dye-sensitized solar cells using a both-ends-opened TiO2 nanotube/nanoparticle hetero-nanostructure. J Ind Eng Chem 51:122-128
8. Sarvari N, Mohammadi MR (2018) Influence of photoanode architecture on light scattering mechanism and device performance of dye-sensitized solar cells using TiO2 hollow cubes and nanoparticles. J Taiwan Inst Chem Eng 86:81-91
9. Rajaei E, Valipouri A (2018) Electrochemical and photovoltaic properties of dye-sensitized solar cells based on Ag-doped TiO2 nanorods. Optik 158:514-521
10. Ghoderao KP, Jamble SN (2018) Influence of reaction temperature on hydrothermally grown TiO2 nanorods and their performance in dye-sensitized solar cells. Superlattices Microstruct 124:121-130

Block-Based Data Security Storage Scheme

Yina Wang1, Hongbin Ma1, Qitao Ma2, Hong Chen1, Dongdong Zhang1, and Yingli Wang1(&)

1 Electronic Engineering College, Heilongjiang University, No. 74 Xuefu Road, Harbin, People's Republic of China
[email protected]
2 The Hong Kong Polytechnic University, No. 11 Yucai Road, Hong Kong, People's Republic of China

Abstract. Blockchain technology is now a frontier field of high value, with unique technological advantages, innovative value concepts and wide application scenarios. The blockchain guarantees the integrity and tamper-resistance of the stored transaction data through hash algorithms. Against hash algorithms, concentrated attack methods, namely brute-force cracking, dictionary cracking and rainbow table attacks, are realized by exploiting collisions. Based on the traditional method of salted hashing, this paper proposes to use the commonly used Miller-Rabin primality testing algorithm to generate random large prime salts to resist rainbow table attacks. A comparison of test times shows that this method plays an important role in the secure storage of blockchain data.

Keywords: Blockchain · Large prime · Salt · Rainbow attack

1 Introduction

Blockchain is a decentralized distributed ledger technology based on cryptographic algorithms; it is essentially a database shared over the Internet. In the blockchain, the hash algorithm and asymmetric encryption from cryptography are used to protect the data in each link of the blockchain network [1]. A hash algorithm converts a value of any length into a fixed-length binary value, and the result is called the hash value of the original data. Its one-wayness makes it impossible for information thieves to recover the raw data from the hash value. However, against this characteristic of the hash algorithm, the following concentrated attack methods have emerged: brute-force attacks, dictionary attacks, and the rainbow table method. To counter this type of attack, we can mix a "random" string into the password before hashing it [2]; this string is called the salt value. Traditional salt values can be roughly divided into the following two categories:

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 1567–1575, 2020 https://doi.org/10.1007/978-981-13-9409-6_188


(1) Fixed salt. In order to verify that a password is correct, we need to store the salt value. It is usually stored in the account database along with the password hash, or directly as part of the hash string [3]. A sufficiently long fixed salt increases the difficulty of deciphering and can resist brute-force and dictionary attacks to a certain extent, but it is prone to the problems of short salts, repeated salts and combined salts. The form of the fixed salt is as follows:
hash("hello") = 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
hash("hello" + "QxLUF1bgIAdeQX") = 9e209040c863f84a31e719795b2577523954739fe5ed3b58a75cff2127075ed1

(2) Random salt. A commonly used random salt is produced by a cryptographically secure pseudo-random number generator (CSPRNG) and used in a salted hash. This causes the same password to be encrypted into a completely different string each time [4]. For the same purely numeric 3-digit short salt, the amount of computation required to crack a random salt is 1000 times that of a fixed salt. In this paper, we use the MD5 algorithm and a singly linked list data structure to generate a lightweight blockchain. Different from the traditional pseudo-random number generator used to produce salt values, we use the Miller-Rabin primality testing algorithm to generate a set of large prime numbers [5] as salt values; the randomization of each salt value is ensured by studying the prime number distribution, and finally the effect of the salted hash encryption of the present scheme is examined by a simple rainbow table attack test.

2 Formation of Lightweight Blocks

2.1 Block Data Structure

We first define the structure of the block. The most basic information of each block in the blockchain includes its own hash value, the hash value of the previous block, a timestamp, and a list of stored information data. The data storage information list mainly includes a storage hash value, the storage initiator, the storage acceptor, a timestamp, the stored data information, a signature, and the storage input and output information. The class diagram of the block implementation is shown in Fig. 1.


Fig. 1. Block implementation class diagram (fields: int index, String timestamp, List data, String hash, String prevHash; methods: get(), set())
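As a minimal illustration of the fields in Fig. 1 (the paper's own block is implemented in C++/Java; the Python below is a sketch with illustrative names only):

```python
# Minimal sketch of the block fields in Fig. 1; MD5 is used as in the paper.
import hashlib, json, time

class Block:
    def __init__(self, index, data, prev_hash):
        self.index = index
        self.timestamp = time.strftime("%Y-%m-%d %H:%M:%S")
        self.data = data              # list of storage records
        self.prev_hash = prev_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        payload = json.dumps([self.index, self.timestamp, self.data, self.prev_hash])
        return hashlib.md5(payload.encode()).hexdigest()
```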

2.2 Data Structure of Singly Linked List

In the previous section we designed the basic data fields of the block. In this section we step outside the single block and design how to connect blocks into a line to form a blockchain. Figure 2 shows the structure of a simple singly linked data structure [6].

Fig. 2. Data structure diagram of one-way linked list


Fig. 3. Blockchain composed of block objects linked through prevHash

As shown in Fig. 3, block object 1 points to block object 2 through the prevHash attribute, and so on in index order, completing a blockchain of length n.
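Continuing the sketch above, blocks can be chained and verified through prevHash as Fig. 3 suggests; build_chain and verify_chain are illustrative helpers, not the paper's implementation, and they assume the Block class sketched earlier.

```python
# Sketch of chaining blocks via prevHash and checking chain integrity.
def build_chain(data_batches):
    genesis = Block(0, ["genesis"], prev_hash="0")
    chain = [genesis]
    for i, batch in enumerate(data_batches, start=1):
        chain.append(Block(i, batch, prev_hash=chain[-1].hash))
    return chain

def verify_chain(chain):
    # Each block must reference its predecessor's hash and match its own hash.
    return all(b.prev_hash == p.hash and b.hash == b.compute_hash()
               for p, b in zip(chain, chain[1:]))
```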

2.3 Light Blockchain Formation

According to the principle described above, we use C++ language to implement a block as shown in Fig. 4, which takes 384.320 s, and the pseudo code is shown in Table 1.

Fig. 4. Block formation

Table 1. Block pseudo code
bloc.data = circle;
bloc.pre_hash = pre_blo.this_hash;
bloc.this_hash = (blo.pre_hash + person->public_Key) * (i + j);
Create_Block* one = new Create_Block();
one->Make_First_Block();
Get_Block* two = new Get_Block();
two->calc();


3 Hash Salt Encryption Based on Large Prime Numbers

Prime numbers have been widely applied in cryptography, for example in the RSA algorithm: a prime modulus defines a finite set and provides computability, while large-number factorization provides computational one-wayness. At present, the methods for generating prime numbers mainly include deterministic generation and probabilistic generation. Deterministic generation can ensure that the generated number is certainly prime, but the generated primes have a certain regularity, and an attacker can derive the generation law at a small cost. Probabilistic generation produces pseudo-primes that need to be tested for primality, but it is faster and follows no discernible rule, which makes it both safer and faster than the former. We therefore use a probabilistic prime generation method to generate a set of large prime numbers.

3.1 Large Prime Generation

The steps for generating an n-bit large prime number are as follows, with a hedged sketch given below: (1) use the Lehmer random number algorithm to generate an n-bit binary number q; (2) perform a Miller-Rabin test on the q generated in the previous step; (3) if the test succeeds, output q, otherwise return to the first step. We used C++ to generate a set of large prime matrices, setting the prime interval to [1006721, 15485863]. Table 2 shows part of the generated primes.
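The sketch below follows these three steps using Python's built-in modular exponentiation; it stands in for the paper's C++ code, and the round count and candidate-generation details are assumptions.

```python
# Hedged sketch of probabilistic prime generation: random candidate + Miller-Rabin.
import random

def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):                    # Miller-Rabin witnesses
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    while True:                                # step (1): random n-bit odd candidate
        q = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(q):               # steps (2)-(3): test, output on success
            return q
```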

Table 2. Prime generation table 1006721 5924161 8108827 10026943 12046927 13324769 15485801

3.2

1006739 5924173 8108831 10026949 12046933 13324789 15485807

1006751 5924189 8108861 10026971 12046949 13324819 15485837

1006769 5924203 8108869 10026979 12046967 13324849 15485843

1006781 5924221 8108899 10027009 12046973 13324853 15485849

1006783 5924231 8108911 10027021 12046981 13324859 15485857

Distribution of Prime Numbers

Suppose x > 0 and let π(x) denote the number of primes not greater than x; π(x) is also called the prime distribution function. Although the distribution of primes among the integers is extremely irregular, it has good global distribution properties. In 1792, Gauss first proposed the prime number theorem, as shown in Eq. (3.1).


lim(x→∞) π(x) / (x / log x) = 1    (3.1)

The prime number theorem is the central theorem of prime distribution; it describes the rule of the prime distribution in terms of the logarithmic integral. However, as the value grows, the absolute error of the formula becomes so large that its practical effect is small. In 1859, Riemann proposed the following conjecture: the non-trivial zeros of ζ(s) all fall on the straight line Re(s) = 1/2 of the complex plane, that is, the real part of every non-trivial zero is 1/2. According to the Riemann conjecture, the distribution of prime numbers is regularly searchable, that is, the selection of a prime from a set of primes reduces to the selection of x.

4 Realization of the Blockchain Security Storage Scheme

4.1 Process Implementation

The design idea of prime-salted hash encryption based on the lightweight blockchain is as follows: each time a block is formed, the user-provided password is combined with a specified additional code and the combined result is hashed; a prime salt value is then randomly assigned from the prime matrix according to the prime distribution theorem, and the random large prime and the previous hash result are hashed again to form the final ciphertext. The role of the additional code is to further randomize the plaintext password by combining it with the password first, and to increase the length of the plaintext, which raises the difficulty of dictionary, brute-force and rainbow table attacks. The flow chart of the whole salted hash encryption algorithm is shown in Fig. 5, and a sketch follows below.
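The following is a minimal sketch of this flow; the MD5 double hashing and the random choice of a prime salt mirror Fig. 5, while the helper names and the way the salt is stored are assumptions.

```python
# Illustrative sketch of the salted-hash flow in Fig. 5 (password + additional
# code hashed, then hashed again with a randomly chosen large prime salt).
import hashlib, random

def salted_digest(password, extra_code, prime_salts):
    inner = hashlib.md5((password + extra_code).encode()).hexdigest()  # hash(password | extracode)
    salt = str(random.choice(prime_salts))                             # random large prime salt
    outer = hashlib.md5((inner + salt).encode()).hexdigest()
    return salt, outer          # the salt is stored alongside the final ciphertext

def verify(password, extra_code, salt, stored):
    inner = hashlib.md5((password + extra_code).encode()).hexdigest()
    return hashlib.md5((inner + salt).encode()).hexdigest() == stored
```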

Fig. 5. Application flow chart of salting hash algorithm

4.2 Algorithm Implementation

1. Implementation of the MD5 salted-hash encryption algorithm:
SecureRandom random = new SecureRandom();
byte[] salt = new byte[SALT_LENGTH];
random.nextBytes(salt);
pwd = new byte[digest.length + SALT_LENGTH];

2. Implementation of the account verification algorithm:
// register account
encryptedPwd = MD5Encoder.getEncryptedPwd(password);
users.put(userName, encryptedPwd);
// verify login
String pwdInDb = (String) users.get(userName);
return MD5Encoder.validPassword(password, pwdInDb);
System.out.println("don't exit!!!");

5 Program Testing and Analysis

5.1 Rainbow Table Attack Principle

The rainbow table is an optimized memory-time trade-off proposed by Oechslin in 2003 to address the shortcomings of the Hellman method. The method includes a pre-computation phase and an online phase: the pre-computation constructs the rainbow table, and the online phase is the look-up process. Compared with the Hellman table, a collision in a rainbow table only occurs when two nodes collide in the same column. In a rainbow table of chain length t with m chains, based on different reduction functions R, the probability of successfully finding a key is

Prainbow = 1 − ∏(i = 1..t) (1 − mi / N),    (3.1)

in which m1 = m and mi+1 = N(1 − e^(−mi/N)). The method for finding a key in the rainbow table is as follows. Apply the F-function transformation to the password hash to be cracked to obtain a key value, and then compare this key value with the end point EP of each chain in the table.


If it is the same as some EPi, then Ki,t−1 in the chain where EPi is located is likely to be the key K being sought. Pre-computation is restarted from the starting point SPi of that rainbow chain up to Ki,t−1 and S is applied to it; if the obtained value is consistent with the value being cracked, the crack is complete, otherwise it is a false alarm and the search must be re-aligned. If no corresponding EP value is found, the candidate K is moved one column towards the starting point by applying another F operation and the two steps above are repeated, up to t − 2 times in total. If the corresponding plaintext is still not found, the crack fails.
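Equation (3.1) can be evaluated numerically as in the sketch below; the example parameters are taken from Table 3 (m chains, chain length l used as t, key space u used as N), and the loop structure is an assumption about how the recurrence is applied.

```python
# Numerical sketch of Eq. (3.1): success probability of a rainbow table with m
# chains of length t over a key space of size N.
import math

def rainbow_success_probability(m, t, n):
    prob_fail = 1.0
    m_i = float(m)
    for _ in range(t):
        prob_fail *= (1.0 - m_i / n)
        m_i = n * (1.0 - math.exp(-m_i / n))   # m_1 = m, m_{i+1} = N(1 - e^(-m_i/N))
    return 1.0 - prob_fail

# Example with Table 3 parameters: m = 8e6 chains, chain length 2100, key space ~8.35e9.
print(rainbow_success_probability(8_000_000, 2100, 8_353_082_582))
```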

5.2 Test and Analysis

Since the rainbow table involves sensitive information, it also needs to be encrypted during transmission; we chose to save the rainbow table locally on the client. To save time, multiple compute nodes were used for distributed computing. Let r be the size of each rainbow table, p the key length, u the key space, q the number of rainbow tables, m the number of rainbow chains in each table, l the length of the rainbow chains and t the cracking time. We performed an attack test for four salting states on rainbow tables with key lengths of 1-7 characters. The results are shown in Table 3.

Table 3. Rainbow table micro attack based on four states
Name                      | r (M) | p   | u          | q | m       | l    | t
No salt                   | 610   | 1-7 | 8353082582 | 5 | 8000000 | 2100 | 3 h 16 m
Fixed salt                | 610   | 1-7 | 8353082582 | 5 | 8000000 | 2100 | 5 h 45 m
Pseudo random number salt | 610   | 1-7 | 8353082582 | 5 | 8000000 | 2100 | 9 h 38 m
Large prime salt          | 610   | 1-7 | 8353082582 | 5 | 8000000 | 2100 | 11 h 56 m

Analysis of the table data shows that, with password lengths controlled within 1-7 characters, the cracking time (account verification time) t across the four configurations indicates that using a large prime number as the salt value maximizes the difficulty of the attack.

6 Conclusion

After the user password is encrypted with a secure salted hash algorithm, even if multiple users provide the same password string, the resulting ciphertexts differ because the system randomly generates a different salt value for each user. This forces attackers who use dictionary or rainbow table attacks to create a dictionary record for each salt value, which makes the attack complicated and time-consuming. In addition, by using large prime numbers, which play an important role in cryptography, as the salt value, together with the role of


the additional code, the plaintext password can be further randomized and its length increased, which also raises the difficulty of the attack to some extent. It can be seen that using a secure hash encryption algorithm with a large prime salt value to encrypt the user's password can improve the security of the user's password, and thus improve the secure storage of blockchain-based data.

Acknowledgements. This work is supported by Heilongjiang Provincial Education Department Project (SJGY20180390).


Chaos Synchronization and Voice Encryption of Discretized Hyperchaotic Chen Based on Euler Algorithm Xinyue Tang, Jiaqi Zhen(&), Qun Ding, Bing Zhao, and Jie Yang College of Electronic Engineering, Heilongjiang University, Harbin 150080, China [email protected]

Abstract. In this paper, in order to facilitate hardware implementation, the hyperchaotic Chen system is discretized based on the Euler algorithm, and the Lyapunov exponents are used to verify that it has the same hyperchaotic characteristics as the continuous system. A nonlinear controller is constructed for the synchronization of two discrete hyperchaotic Chen systems with the same structure, and the synchronization is proved by Lyapunov stability theory. Finally, a secret communication scheme based on nonlinear feedback synchronization is used to transmit a voice signal, which again verifies the feasibility and efficiency of the approach.

Keywords: Euler algorithm · Discrete hyperchaotic Chen system · Synchronization · Voice encryption



1 Introduction

Since Pecora and Carroll [1] introduced a method of synchronizing two identical chaotic systems with different initial conditions, chaotic synchronization has attracted great attention. There are many methods and techniques for chaotic control and synchronization, such as nonlinear feedback and adaptive synchronization [1, 2]. In 2004, Li et al. obtained a hyperchaotic system from the Chen chaotic system by designing a nonlinear state feedback controller, namely the four-dimensional hyperchaotic Chen system [3]. The dynamic characteristics of the continuous hyperchaotic Chen system have been analyzed in many works [1, 3]. However, in practical applications the system must be discretized and digitized; therefore, the Euler algorithm is adopted here for discretization [4].

2 Discretized Hyperchaotic Chen System and Synchronization

2.1 Discretized Hyperchaotic Chen System Based on Euler Method

The equation of the discrete hyperchaotic Chen system based on the Euler algorithm is shown in Eq. (1).


$$\begin{cases} \Delta x = x(n+1) - x(n) = [a\,y(n) - a\,x(n) + w(n)]\cdot T \\ \Delta y = y(n+1) - y(n) = [d\,x(n) - x(n)z(n) + c\,y(n)]\cdot T \\ \Delta z = z(n+1) - z(n) = [x(n)y(n) - b\,z(n)]\cdot T \\ \Delta w = w(n+1) - w(n) = [y(n)z(n) + r\,w(n)]\cdot T \end{cases} \qquad (1)$$

where a, b, c, d, r are system parameters, x, y, z, w are system state variables, and T is the sampling time of the Euler algorithm. When a = 35, b = 3, c = 10, d = 7, r = 0.2, T = 0.005 and the initial value of the system is taken as (1, 1, 2, 2), the three-dimensional chaotic phase diagram of x, y, w of the discrete hyperchaotic Chen system is shown in Fig. 1a, and its Lyapunov exponents are shown in Fig. 1b. The discrete Chen system has two positive Lyapunov exponents, so it is hyperchaotic.
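A minimal numerical sketch of iterating Eq. (1) with the parameter values quoted above; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def chen_euler(steps, state=(1.0, 1.0, 2.0, 2.0),
               a=35.0, b=3.0, c=10.0, d=7.0, r=0.2, T=0.005):
    """Iterate the Euler-discretized hyperchaotic Chen system of Eq. (1)."""
    x, y, z, w = state
    traj = np.empty((steps, 4))
    for n in range(steps):
        dx = (a * y - a * x + w) * T
        dy = (d * x - x * z + c * y) * T
        dz = (x * y - b * z) * T
        dw = (y * z + r * w) * T
        x, y, z, w = x + dx, y + dy, z + dz, w + dw
        traj[n] = (x, y, z, w)
    return traj

# e.g. traj = chen_euler(5000); the (x, y, w) columns correspond to the phase portrait of Fig. 1a
```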

Fig. 1. Discrete hyperchaotic Chen system: (a) the phase diagram of the x, y, w dimensions; (b) the Lyapunov exponents (LE1 = 1.4783, LE2 = 1.4515, LE3 = −0.3684, LE4 = −0.0038)

2.2 Nonlinear Feedback Synchronization

In this paper, the chaotic synchronization method based on nonlinear state variable feedback is applied. The principle is shown in Fig. 2.

Fig. 2. Nonlinear feedback synchronization


The driving system of this paper is Eq. (1), and the response system is shown in Eq. (2). The output $u_i(t)$ (i = 1, 2, 3, 4) of the constructed nonlinear controller is shown in Eq. (3); m(t) is the transmitted voice signal and m′(t) is the received voice signal after transmission through the common channel.

$$\begin{cases} \Delta x_1 = x_1(n+1) - x_1(n) = [a\,y_1(n) - a\,x_1(n) + w_1(n)]\cdot T + u_1 \\ \Delta y_1 = y_1(n+1) - y_1(n) = [d\,x_1(n) - x_1(n)z_1(n) + c\,y_1(n)]\cdot T + u_2 \\ \Delta z_1 = z_1(n+1) - z_1(n) = [x_1(n)y_1(n) - b\,z_1(n)]\cdot T + u_3 \\ \Delta w_1 = w_1(n+1) - w_1(n) = [y_1(n)z_1(n) + r\,w_1(n)]\cdot T + u_4 \end{cases} \qquad (2)$$

$$\begin{cases} u_1 = x(n) - x_1(n) \\ u_2 = y(n) - y_1(n) + T\cdot[x_1(n)z_1(n) - x(n)z(n)] \\ u_3 = z(n) - z_1(n) + T\cdot[x(n)y(n) - x_1(n)y_1(n)] \\ u_4 = w(n) - w_1(n) + T\cdot[y(n)z(n) - y_1(n)z_1(n)] \end{cases} \qquad (3)$$

The Lyapunov function method is defined in [1]. Defining the system synchronization errors as $e_x = x(n) - x_1(n)$, $e_y = y(n) - y_1(n)$, $e_z = z(n) - z_1(n)$, $e_w = w(n) - w_1(n)$, the error system between Eqs. (1) and (2) is:

$$\begin{cases} \Delta e_x = T\cdot[a(e_y - e_x) + e_w] - e_x \\ \Delta e_y = T\cdot(d\,e_x + c\,e_y) - e_y \\ \Delta e_z = -T\,b\,e_z - e_z \\ \Delta e_w = T\,r\,e_w - e_w \end{cases} \qquad (4)$$

Take the Lyapunov function:

$$V(e) = \frac{1}{2}\left(e_x^2 + e_y^2 + e_z^2 + e_w^2\right) \ge 0 \qquad (5)$$

then:

$$\begin{aligned} \frac{\Delta V(e)}{T} &= e_x\frac{\Delta e_x}{T} + e_y\frac{\Delta e_y}{T} + e_z\frac{\Delta e_z}{T} + e_w\frac{\Delta e_w}{T} \\ &= (a+d)e_x e_y - \left(a+\tfrac{1}{T}\right)e_x^2 + e_x e_w + \left(c-\tfrac{1}{T}\right)e_y^2 - \left(b+\tfrac{1}{T}\right)e_z^2 + \left(r-\tfrac{1}{T}\right)e_w^2 \\ &\le \tfrac{1}{2}(a+d)\left(e_x^2+e_y^2\right) - \left(a+\tfrac{1}{T}\right)e_x^2 + \tfrac{1}{2}\left(e_x^2+e_w^2\right) + \left(c-\tfrac{1}{T}\right)e_y^2 - \left(b+\tfrac{1}{T}\right)e_z^2 + \left(r-\tfrac{1}{T}\right)e_w^2 \\ &= \left(\tfrac{1}{2}+\tfrac{d}{2}-\tfrac{a}{2}-\tfrac{1}{T}\right)e_x^2 + \left(\tfrac{a}{2}+\tfrac{d}{2}+c-\tfrac{1}{T}\right)e_y^2 - \left(b+\tfrac{1}{T}\right)e_z^2 + \left(\tfrac{1}{2}+r-\tfrac{1}{T}\right)e_w^2 \end{aligned} \qquad (6)$$

Given the values of the system parameters, it is obvious that $\Delta V(e)/T \le 0$, so the system finally reaches full synchronization; the constructed nonlinear controller satisfies the Lyapunov-function criterion for chaotic synchronization.
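To illustrate the controller of Eq. (3), the following is a short simulation sketch of the drive system (1) and the response system (2); the function names and the arbitrary response initial state are our own choices. Under the parameters above, the synchronization errors are expected to decay toward zero, consistent with ΔV(e)/T ≤ 0.

```python
import numpy as np

def step_drive(s, a=35.0, b=3.0, c=10.0, d=7.0, r=0.2, T=0.005):
    """One Euler step of the drive system, Eq. (1)."""
    x, y, z, w = s
    return np.array([x + (a*y - a*x + w)*T,
                     y + (d*x - x*z + c*y)*T,
                     z + (x*y - b*z)*T,
                     w + (y*z + r*w)*T])

def step_response(s1, s, a=35.0, b=3.0, c=10.0, d=7.0, r=0.2, T=0.005):
    """One Euler step of the response system, Eq. (2), with the controller of Eq. (3)."""
    x1, y1, z1, w1 = s1
    x, y, z, w = s
    u1 = x - x1
    u2 = y - y1 + T*(x1*z1 - x*z)
    u3 = z - z1 + T*(x*y - x1*y1)
    u4 = w - w1 + T*(y*z - y1*z1)
    return np.array([x1 + (a*y1 - a*x1 + w1)*T + u1,
                     y1 + (d*x1 - x1*z1 + c*y1)*T + u2,
                     z1 + (x1*y1 - b*z1)*T + u3,
                     w1 + (y1*z1 + r*w1)*T + u4])

s  = np.array([1.0, 1.0, 2.0, 2.0])    # drive initial state
s1 = np.array([5.0, -3.0, 8.0, 1.0])   # response initial state (arbitrary)
for n in range(2000):
    s, s1 = step_drive(s), step_response(s1, s)
print(np.abs(s - s1))                   # synchronization errors, expected to be close to zero
```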


Figure 3a shows the synchronization error of the different dimensions between the drive system and the response system, using the constructed nonlinear controller. The w-dimensional synchronization can be achieved after 0.01 s, as shown in Fig. 3b.

Fig. 3. The synchronization error between the drive and response systems: (a) the synchronization errors of the different dimensions; (b) the w-dimensional synchronization error

3 Voice Encryption


In this paper, the secure communication scheme of chaotic masking is used [5]. As shown in Fig. 4, the error between the recovered and the original voice signal is small, and full synchronization is quickly achieved.

Fig. 4. Voice encryption and synchronization error: (a) the original voice signal; (b) the encrypted voice signal using chaotic masking; (c) the recovered voice signal; (d) the error between the original and recovered voice signals
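A simplified sketch of the chaotic-masking idea is given below, reusing the `step_drive` and `step_response` helpers from the previous sketch. Here the x state of the drive system masks the voice samples and the synchronized response state removes the mask; the `gain` scaling and the exact transmitter/receiver wiring are assumptions for illustration and may differ from the scheme of Fig. 2.

```python
import numpy as np

def chaotic_mask_demo(voice, gain=0.05):
    """Mask/unmask a 1-D signal by adding/subtracting a chaotic carrier."""
    s  = np.array([1.0, 1.0, 2.0, 2.0])       # drive state
    s1 = np.array([4.0, -2.0, 6.0, 0.5])      # response state
    sent, recovered = [], []
    for m in voice:
        ct = gain * m + s[0]                   # masked (transmitted) signal: voice + chaotic x
        sent.append(ct)
        recovered.append((ct - s1[0]) / gain)  # receiver subtracts its synchronized x1
        s, s1 = step_drive(s), step_response(s1, s)
    return np.array(sent), np.array(recovered)

# e.g. voice = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 4000))
# ct, m_rec = chaotic_mask_demo(voice)
# np.abs(m_rec - voice) shrinks as synchronization is reached
```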

4 Conclusion

In this paper, the hyperchaotic Chen system is discretized based on the Euler algorithm and retains the same hyperchaotic characteristics as the continuous system. Based on Lyapunov stability theory, a nonlinear controller is constructed which makes the drive and response systems reach complete synchronization within 0.01 s. In addition, the nonlinear feedback synchronization is applied to a secret communication scheme based on chaotic masking, and its feasibility and high efficiency are verified again.

Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grant 61501176, Natural Science Foundation of Heilongjiang Province F2018025, University Nursing Program for Young Scholars with Creative Talents in Heilongjiang Province UNPYSCT-2016017, and the postdoctoral scientific research developmental fund of Heilongjiang Province in 2017 LBH-Q17149.

References

1. Park JH (2005) Adaptive synchronization of hyperchaotic Chen system with uncertain parameters. Chaos Solitons Fract 26:959–964
2. Mossa Al-sawalha M (2013) Hybrid adaptive synchronization of hyperchaotic systems with fully unknown parameters. Appl Math 4(12):1621–1628
3. Gao T, Qiaolun G, Chen Z (2009) Analysis of the hyper-chaos generated from Chen's system. Chaos Solitons Fract 39:1849–1855
4. Yu S. Chaotic systems and chaotic circuits: principle, design and its application in communications. Xidian University, Xi'an, p 609
5. Ekhande R, Deshmukh S (2014) Chaotic signal for signal masking in digital communications. IOSR J Eng (IOSRJEN) 04(02):29–33

Multiple UAV Assisted Cellular Network: Localization and Access Strategy Yiwen Tao(B) , Qingyue Zhang, Bin Li, and Chenglin Zhao School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, 10# Xitucheng Road, Beijing 100876, China [email protected]

Abstract. With the gradual deployment of the fifth generation (5G) communication network, various challenging requirements are urgently expected to be met. One of the most important challenges is to ensure service quality under explosive traffic volume, which is putting pressure on the cellular network. In this paper, we aim to introduce Unmanned Aerial Vehicles (UAV) to assist the existing base stations (BS) under an overloaded cellular network. Specifically, we suggest an Extended Kalman Filter (EKF) based technique to help the BS get the real-time locations of multiple UAVs. Then, a multiple access strategy is designed to arrange the users to communicate with the nearby UAVs, instead of the BS. Numerical simulation results show that the average network throughput can be significantly improved by introducing UAVs, showing the great potential of our UAV-assisted communication scheme in the future 5G network.

Keywords: Unmanned aerial vehicles (UAV) · Cellular network · Localization · Extended Kalman filter (EKF)

1 Introduction

The 5G communication network is expected to support a large number of users and machines, with extremely high requirements on latency and reliability, especially in particular environments like open-air festivals and stadiums. In these scenarios, the existing communication infrastructure may suffer from severe access overloading [1]. To cope with this problem, various solutions have been suggested, such as densely deploying small cells, developing new access protocols, and resorting to Unmanned Aerial Vehicles (UAV) [2,3]. Among these methods, applying UAVs to assist the existing network is apparently the most cost-effective one. Benefiting from their flexibility and miniature size, UAVs would be an ideal option for providing wireless communication channels with better conditions [4]. In the UAV-assisted communication network, the UAVs'


locations are of great importance for deploying further UAV-based multiple access strategies. It is noted that various techniques are available nowadays for real-time target localization. For instance, the global positioning system (GPS) has been widely utilized in many applications. Nevertheless, future complex communication environments with unstable signal propagation characteristics will cause difficulty in the deployment of GPS, resulting in unsatisfactory localization performance. Meanwhile, the UAV-assisted access strategy has to be further designed. This paper introduces multiple UAVs into the dense cellular network to assist the existing base station (BS), thereby improving the average throughput of the whole network. Concretely speaking, the discrete-time extended Kalman filter (EKF) is suggested to estimate the locations of multiple UAVs with high accuracy, with the help of the Received Signal Strength (RSS) at the ground users. Incorporating the estimated locations of each UAV, we further design a multiple access strategy which allows the ground users to establish communication links with nearby UAVs instead of the BS. Numerical simulation results show that the average network throughput of the UAV-assisted cellular network can be significantly improved. The rest of this paper is organized as follows. Section 2 introduces the system model of the UAV-assisted cellular network. The proposed multiple UAV localization mechanism is elaborated in Sect. 3. A new access strategy is presented in Sect. 4. In Sect. 5, simulation results are illustrated. Finally, Sect. 6 concludes the paper.

2 System Model

Consider a wireless cellular network in a rectangular area, which consists of a BS, I ground users, and K UAVs which fly at a fixed height $H_0$. The position of each user is denoted by $p_i = (x_i, y_i)$, for $i = 1, 2, \ldots, I$. The BS is located at the center of the area with a coordinate of $p_0 = (x_0, y_0)$. At discrete time s, we assign a state vector $\mathbf{X}_s^k = \left[x_s^k, v_{x,s}^k, y_s^k, v_{y,s}^k\right]$ to characterise the flying conditions of the kth UAV, where $(x_s^k, y_s^k)$ denotes the coordinate of the kth UAV at the sth time slot, and $v_{x,s}^k$ and $v_{y,s}^k$ denote the velocities with respect to the x-axis and y-axis, respectively. The initial state vector of the kth UAV can then be denoted by $\mathbf{X}_0^k = \left[x_0^k, v_{x,0}^k, y_0^k, v_{y,0}^k\right]$.

We assume that at each time slot, each UAV broadcasts a signal to the ground users. Then at the sth time slot, the RSS at the ith user received from the kth UAV is denoted by $z_s^{(i,k)}$, which can be written based on the classic signal propagation model as [5]

$$z_s^{(i,k)} = P_T - 10\alpha \lg\left(\frac{d_s^{(i,k)}}{d_0}\right) + n, \qquad (1)$$

where the geographic distance between the ith user and the kth UAV, i.e., $d_s^{(i,k)}$, is written as

$$d_s^{(i,k)} = \sqrt{\left(y_i - y_s^k\right)^2 + \left(x_i - x_s^k\right)^2 + H_0^2}, \qquad (2)$$


where $P_T$ represents the transmitted power, $d_0$ is the reference distance, $\alpha$ is the path loss exponent, which usually takes a value of $\alpha \in [2, 4]$ in an open outdoor area, and $n$ captures the shadow-fading effect in complex multipath environments. Besides, the RSS, i.e., $z_s^i$, from the BS to the ith user can be written as

$$z_s^i = P_T - 10\alpha \lg\left(\frac{d_i}{d_0}\right) + n, \qquad (3)$$

where the geographic distance between the ith user and the BS, i.e., $d_i$, is written as

$$d_i = \sqrt{\left(y_i - y_0\right)^2 + \left(x_i - x_0\right)^2}. \qquad (4)$$

The throughput between the ith user and the kth UAV can then be written as

$$C_s^{(i,k)} = \log_2\left(1 + SNR_s^{(i,k)}\right), \qquad (5)$$

where the signal-to-noise ratio from the kth UAV to the ith user, $SNR_s^{(i,k)}$, is

$$SNR_s^{(i,k)} = \frac{z_s^{(i,k)}}{\sigma^2}, \qquad (6)$$

where $\sigma^2$ is the noise power received by the user. Accordingly, the throughput, $C_s^i$, between the ith user and the BS is expressed as

$$C_s^i = \log_2\left(1 + SNR_s^i\right), \qquad (7)$$

where $SNR_s^i$ is the signal-to-noise ratio from the BS to the ith user, i.e.,

$$SNR_s^i = \frac{z_s^i}{\sigma^2}. \qquad (8)$$
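As a quick illustration of Eqs. (1)–(8), a small sketch for one user–UAV pair follows. The values $P_T = 100$, $d_0 = 1$ and $\alpha = 2$ match the simulation settings given later; the standard deviation of the shadow-fading term and the function names are our own assumptions.

```python
import numpy as np

def rss(p_user, p_uav, h0, pt=100.0, d0=1.0, alpha=2.0, shadow_std=1.0, rng=None):
    """RSS at a ground user from a UAV, following Eqs. (1)-(2)."""
    rng = rng or np.random.default_rng()
    d = np.sqrt((p_user[0] - p_uav[0])**2 + (p_user[1] - p_uav[1])**2 + h0**2)
    n = rng.normal(0.0, shadow_std)          # shadow-fading term (std is an assumption)
    return pt - 10.0 * alpha * np.log10(d / d0) + n

def throughput(z, sigma2=1.0):
    """Throughput as in Eqs. (5)-(8): C = log2(1 + z / sigma^2)."""
    return np.log2(1.0 + z / sigma2)

# e.g. z = rss((10.0, 20.0), (30.0, 40.0), h0=50.0); c = throughput(z, sigma2=1.0)
```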

3 Multi-UAV Localization Technique

In this section, we suggest an EKF-based multi-UAV localization scheme, which is based on the RSS at the users. As such, more accurate locations of the multiple UAVs can be estimated to assist the subsequent access strategy design.

3.1 Dynamic State-Space Model

In order to accomplish the sequential UAV localization procedure, a Dynamical State-space Model (DSM) is formulated, which is written as

$$\mathbf{X}_s = f\left(\mathbf{X}_{s-1}\right) + \mathbf{w}_s \qquad (9)$$

$$\mathbf{Z}_s = g\left(\mathbf{X}_s\right) + \mathbf{v}_s. \qquad (10)$$

Equation (9) is referred to as the transitional equation. $\mathbf{X}_s = \left[\mathbf{X}_s^1, \mathbf{X}_s^2, \ldots, \mathbf{X}_s^K\right]^T$ denotes the state vector of the multiple UAVs, where $\mathbf{X}_s^k$, for $k = 1, \ldots, K$, is the state vector of the kth UAV. Equation (10) is referred to as the observation equation, where $\mathbf{Z}_s$ can be written as $\mathbf{Z}_s = \left[z_s^{(1,1)}, \ldots, z_s^{(I,K)}\right]^T$, with $z_s^{(i,k)}$, $i = 1, \ldots, I$, $k = 1, \ldots, K$, denoting the RSS received by the ith user from the kth UAV in the sth time slot. $f$ and $g$ represent the linear transitional function and the nonlinear observation function, respectively. In this system, the transitional function is

$$f = \begin{bmatrix} f_1 \\ f_2 \\ f_3 \\ f_4 \\ \vdots \\ f_{4K} \end{bmatrix} = \begin{bmatrix} x_s^1 + v_{x,s}^1 \\ v_{x,s}^1 \\ y_s^1 + v_{y,s}^1 \\ v_{y,s}^1 \\ \vdots \\ v_{y,s}^K \end{bmatrix}. \qquad (11)$$

The observation function can be expressed as $g = \left[g_1, g_2, \ldots, g_{IK}\right]^T$, which is further derived by applying Eq. (1) as

$$g = \begin{bmatrix} g_1 \\ \vdots \\ g_{IK} \end{bmatrix} = \begin{bmatrix} P_T - 10\alpha \lg\left(\frac{d_s^{(1,1)}}{d_0}\right) \\ \vdots \\ P_T - 10\alpha \lg\left(\frac{d_s^{(I,K)}}{d_0}\right) \end{bmatrix}. \qquad (12)$$

$\mathbf{w}_s$ in Eq. (9) denotes the transitional noise vector, which is written as

$$\mathbf{w}_s = \mathbf{Q}^{\frac{1}{2}}\mathbf{W}, \qquad \mathbf{Q} \in \mathbb{R}^{4K \times 4K}, \qquad (13)$$

where $\mathbf{Q}$ is the covariance matrix of the transitional noise $\mathbf{w}_s$, and $\mathbf{W} \in \mathbb{R}^{4K \times 1}$, whereby each element is a zero-mean Gaussian random variable with unit variance. $\mathbf{v}_s$ is the observation noise vector, which is written as

$$\mathbf{v}_s = \mathbf{R}^{\frac{1}{2}}\mathbf{V}, \qquad \mathbf{R} \in \mathbb{R}^{IK \times IK}, \qquad (14)$$

where $\mathbf{R}$ is the covariance matrix of the observation noise, and $\mathbf{V} \in \mathbb{R}^{IK \times 1}$, whereby each element is similarly a zero-mean Gaussian random variable with unit variance.

3.2 Predictor

As a basic principle of the Kalman filter procedure, a linear observation model is required. Therefore, the linearization technique is applied to transform the nonlinear observation equation in Eq. (1). Specifically, the Jacobian matrix of $f$ is computed as

$$\mathbf{F} = \left.\frac{\partial f}{\partial \mathbf{X}}\right|_{\mathbf{X}=\mathbf{X}_s} = \begin{bmatrix} a_{11} & \cdots & a_{1\,2K} \\ \vdots & \ddots & \vdots \\ a_{2K\,1} & \cdots & a_{2K\,2K} \end{bmatrix}, \qquad (15)$$

where

$$a_{ij} = \begin{cases} \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, & i = j, \\[4pt] \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, & i \ne j, \end{cases} \qquad (16)$$

and the Jacobian of $g$ is

$$\mathbf{G}(\mathbf{X}_s) = \left.\frac{\partial g}{\partial \mathbf{X}}\right|_{\mathbf{X}=\mathbf{X}_s} = \begin{bmatrix} b_{11} & \cdots & b_{1K} \\ \vdots & \ddots & \vdots \\ b_{K1} & \cdots & b_{KK} \end{bmatrix}, \qquad (17)$$

where $b_{ij} \in \mathbb{R}^{I \times 4}$ is further written as

$$b_{ij} = \begin{cases} \mathbf{0}_{I \times 4}, & i \ne j, \\[4pt] \begin{bmatrix} \dfrac{-10\alpha\left(x_1 - x_s^j\right)}{\ln(10)\,d_s^{(1,j)}} & 0 & \dfrac{-10\alpha\left(y_1 - y_s^j\right)}{\ln(10)\,d_s^{(1,j)}} & 0 \\ \vdots & \vdots & \vdots & \vdots \\ \dfrac{-10\alpha\left(x_I - x_s^j\right)}{\ln(10)\,d_s^{(I,j)}} & 0 & \dfrac{-10\alpha\left(y_I - y_s^j\right)}{\ln(10)\,d_s^{(I,j)}} & 0 \end{bmatrix}, & i = j. \end{cases} \qquad (18)$$

On the basis of the above, the error covariance matrix at discrete time s can be predicted as

$$\hat{\mathbf{P}}_s = \mathbf{F}\mathbf{P}_{s-1}\mathbf{F}^T + \mathbf{Q}, \qquad (19)$$

where $\mathbf{P}_{s-1}$ is the error covariance matrix at the last time slot, i.e., $s-1$. Also, the predicted state vector of the multiple UAVs can be expressed as

$$\hat{\mathbf{X}}_s = \mathbf{F}\mathbf{X}_{s-1}. \qquad (20)$$

3.3 Corrector

Subsequently, given the predicted state vector $\hat{\mathbf{X}}_s$ of the multiple UAVs with covariance $\hat{\mathbf{P}}_s$, and the observation vector $\mathbf{Z}_s$ with covariance $\mathbf{R}$, the Kalman gain at the sth time slot can be obtained by

$$\mathbf{K}_s = \hat{\mathbf{P}}_s \mathbf{G}(\hat{\mathbf{X}}_s)^T \left[\mathbf{G}(\hat{\mathbf{X}}_s)\hat{\mathbf{P}}_s\mathbf{G}(\hat{\mathbf{X}}_s)^T + \mathbf{R}\right]^{-1}. \qquad (21)$$

Subsequently, the UAVs' state vector can be updated via

$$\mathbf{X}_s = \hat{\mathbf{X}}_s + \mathbf{K}_s\left[\mathbf{Z}_s - g\left(\hat{\mathbf{X}}_s\right)\right]. \qquad (22)$$

In (22), $g(\hat{\mathbf{X}}_s)$ can be derived using the observation function $g$, and the RSS received from the UAV at the user is replaced by the theoretical value derived from the predicted UAV location. Further, the covariance matrix is updated via


$$\mathbf{P}_s = \left[\mathbf{I} - \mathbf{K}_s\mathbf{G}(\hat{\mathbf{X}}_s)\right]\hat{\mathbf{P}}_s, \qquad (23)$$

where $\mathbf{I}$ is the identity matrix and $\mathbf{P}_s$ represents the estimated covariance matrix of the state $\mathbf{X}_s$ at time s.
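A compact sketch of one EKF iteration following Eqs. (19)–(23) is given below. The callables `g_observation` and `g_jacobian` stand for the observation function g of Eq. (12) and its Jacobian G of Eqs. (17)–(18), and must be supplied by the caller; the function and argument names are ours, not the paper's.

```python
import numpy as np

def ekf_step(x_prev, p_prev, z_obs, F, Q, R, g_observation, g_jacobian):
    """One predictor/corrector iteration of the multi-UAV EKF, Eqs. (19)-(23)."""
    # Predictor
    x_pred = F @ x_prev                         # Eq. (20)
    p_pred = F @ p_prev @ F.T + Q               # Eq. (19)
    # Corrector
    G = g_jacobian(x_pred)                      # linearized observation matrix, Eqs. (17)-(18)
    S = G @ p_pred @ G.T + R
    K = p_pred @ G.T @ np.linalg.inv(S)         # Kalman gain, Eq. (21)
    x_new = x_pred + K @ (z_obs - g_observation(x_pred))    # state update, Eq. (22)
    p_new = (np.eye(len(x_new)) - K @ G) @ p_pred            # covariance update, Eq. (23)
    return x_new, p_new
```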

4 Access Strategy

In the considered UAV-assisted cellular network, we apply a direct and simple access strategy, i.e., each UAV will communicate with the closest user. As such, high computational complexity can be avoided. To guarantee communication quality, we set a maximum communication distance D between a ground user and a UAV. To be specific, the distance between each user and each UAV, denoted by $l_s^{(i,k)}$, is obtained from the estimated locations of the UAVs. Then, a distance matrix $\mathbf{L}_s \in \mathbb{R}^{I \times K}$ is formulated as

$$\mathbf{L}_s = \begin{bmatrix} l_s^{(1,1)} & \cdots & l_s^{(1,K)} \\ \vdots & \ddots & \vdots \\ l_s^{(I,1)} & \cdots & l_s^{(I,K)} \end{bmatrix}. \qquad (24)$$

Our UAV-assisted multiple access strategy is based on $\mathbf{L}_s$ and is shown in Algorithm 1. In each loop of the algorithm, the minimum value of $\mathbf{L}_s$ is selected, e.g., $l_s^{(m,n)}$, which means that the mth user and the nth UAV have the shortest distance and thereby the best communication link. Then $l_s^{(m,n)}$, for n = 1, ..., K, and for m = 1, ..., I, are all set to U, where U is a large value. The nth UAV is assigned to communicate with the mth user if the distance between them is smaller than D. A new loop is then conducted to select a new minimum value of $\mathbf{L}_s$. The algorithm ends when all of $\mathbf{L}_s$'s elements are U.

Algorithm 1. UAV-assisted multiple access strategy
Require: The distance matrix $\mathbf{L}_s$
Ensure: The access matrix $\mathbf{M} \in \mathbb{R}^{4 \times K}$, where in each column the 4 values denote the distance, user index, UAV index, and a Boolean value representing the access status, respectively.
1: repeat
2:   find the minimum value $l_s^{(m,n)}$ in $\mathbf{L}_s$.
3:   $l_s^{(m,n)}$, for n = 1, ..., K, and for m = 1, ..., I, are assigned to be U.
4:   if $l_s^{(m,n)} < D$, set the corresponding column of $\mathbf{M}$ to be $\left[l_s^{(m,n)}, m, n, 1\right]^T$; otherwise, $\left[l_s^{(m,n)}, m, n, 0\right]^T$.
5: until all of $\mathbf{L}_s$'s elements are U
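A direct Python sketch of Algorithm 1 follows; `L` is the I×K distance matrix of Eq. (24), `D` the maximum communication distance, and the `big` sentinel plays the role of the large value U. The return format (a list of tuples rather than the 4×K matrix M) is a simplification for illustration.

```python
import numpy as np

def uav_access(L, D, big=1e9):
    """Greedy nearest-pair assignment in the spirit of Algorithm 1.

    Returns a list of (distance, user, uav, assigned) tuples, one per selected pair.
    """
    L = np.array(L, dtype=float)
    M = []
    while not np.all(L >= big):
        m, n = np.unravel_index(np.argmin(L), L.shape)   # closest remaining user-UAV pair
        d = L[m, n]
        L[m, :] = big                                    # user m and UAV n leave the pool
        L[:, n] = big
        M.append((d, m, n, 1 if d < D else 0))           # Boolean marks whether the link is used
    return M

# e.g. uav_access(np.array([[10., 80.], [35., 15.]]), D=50.0)
```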

5 Numerical Simulation Result


In this section, numerical simulation results are shown to evaluate the performance of the proposed UAV-assisted wireless cellular network. We assume that the users are randomly distributed in a square area with a size of 100 m × 100 m. The simulation parameters are configured as follows: $P_T = 100$, $d_0 = 1$, $\alpha = 2$, and the coordinate of the BS is $P_0 = (50, 50)$, which is the center of the square area. To evaluate the UAV-assisted cellular network, Eqs. (5) and (7) are adopted. Firstly, we study the effect of changing the number of UAVs utilized in the considered cellular network. From Fig. 1a we can observe that the network throughput deteriorates as the number of users increases, while introducing more UAVs results in better network throughput performance. Next we investigate the effect of the noise power on the system performance. Figure 1b illustrates the throughput performance of the system under different received noise power. It is seen that the UAV-assisted cellular network achieves better throughput performance compared with the conventional network, and introducing more UAVs can further improve the network throughput.

Fig. 1. Network throughput performance (average throughput in bps/Hz versus the number of users): (a) effect of the UAV number on the system performance; (b) effect of the noise power on the system performance

Finally, the effect of received noise power on the localization accuracy is studied. As in Fig. 2, as expected, the average localization error tends to be greater with the increase of the noise power.

Fig. 2. Performance of localization accuracy under different received noise power (average error in m versus the noise power σ²)

6 Conclusion

In this paper, multiple UAVs are introduced into a dense cellular network to assist the BS. The UAV-based localization scheme and access strategy are studied in detail. A localization technique implemented by the EKF is designed based on the RSS at the users; as such, the real-time locations of multiple UAVs can be estimated. Then a multiple access strategy is proposed in order to arrange the users to communicate directly with the UAVs. Simulation results show that the network throughput can be significantly improved with the assistance of UAVs. Further work may concentrate on a 3D setup and different deployment patterns of UAVs.

References

1. Osseiran A, Boccardi F, Braun V, Kusume K, Marsch P, Maternia M, Queseth O, Schellmann M, Schotten H, Taoka H, Tullberg H, Uusitalo MA, Timus B, Fallgren M (2014) Scenarios for 5G mobile and wireless communications: the vision of the METIS project. IEEE Commun Mag
2. Yuniarti D (2018) Regulatory challenges of broadband communication services from high altitude platforms. In: International conference on information and communications technology (ICOIACT)
3. Kim Y, Hwang G, Um J, Yoo S, Jung H, Park S (2016) Throughput performance optimization of super dense wireless networks with the renewal access protocol. IEEE Trans Wirel Commun 15(5)
4. Zeng Y, Zhang R, Lim TJ (2016) Wireless communications with unmanned aerial vehicles: opportunities and challenges. IEEE Commun Mag
5. Li X (2006) RSS-based location estimation with unknown pathloss model. IEEE Trans Wirel Commun 5(12)

WiFi Location Fingerprint Indoor Positioning Method Based on WKNN Xinxin Wang1, Danyang Qin1(&), and Lin Ma2 1

Key Lab of Electronic and Communication Engineering, Heilongjiang University, No. 74 Xuefu Road, Harbin, People’s Republic of China [email protected] 2 Harbin Institute of Technology, Harbin, People’s Republic of China

Abstract. Wireless Fidelity (WiFi) based fingerprint indoor positioning can directly utilize existing commercial WiFi devices; its deployment cost is low, it is easy to expand, and it is non-invasive, so it has gradually become a research hotspot in indoor positioning technology. The positioning method of this paper combines the Received Signal Strength (RSS) ranging method and the location fingerprint method. On this basis, the Weighted K-Nearest Neighbor (WKNN) matching algorithm is used to match the fingerprint data in the location fingerprint database. In view of the strong oscillation of indoor wireless signals, this paper uses the Kalman filtering method to process the signal strength values. The simulation is carried out on the MATLAB platform. The results show that the proposed method is superior to the existing K-Nearest Neighbors (KNN) and Nearest Neighbors (NN) algorithms in the same simulation environment, and significantly improves the indoor positioning accuracy.

Keywords: WiFi fingerprint location · RSS · WKNN · Kalman filter

1 Introduction

In recent years, with the rapid development of smart devices and mobile Internet technologies, the demand for location-based services has increased dramatically. At present, in the outdoor environment, relying on the global positioning system [1], fairly accurate positioning can be achieved. However, radio signals in the indoor environment are blocked by buildings, which weakens the received signal; the positioning accuracy therefore does not meet indoor requirements, and Global Positioning System (GPS) technology is difficult to adapt to the indoor environment. The indoor location technology based on Wireless Fidelity (WiFi) location fingerprints uses signal strength values as scene feature information: it compares the information measured at the point to be located with the location fingerprint database, selects the data with the largest correlation, and takes the physical location corresponding to that feature information as the estimate of the location to be determined [2]. The Kalman filter algorithm is used to filter the collected RSS signal values to obtain a set of localized fingerprint signals closer to the true value, and then the Weighted K-Nearest Neighbor (WKNN)


matching algorithm is used for positioning. Compared with the traditional WiFi fingerprint location algorithm, the positioning accuracy and stability are greatly improved [3].

2 Location Fingerprint Location Method

The experiment needs to collect all the WiFi hotspot signal strengths at specific locations [4] and store them in the fingerprint database, compare them with the WiFi signals received by the mobile terminal, and use the Received Signal Strength (RSS) values to optimize the parameters of the propagation loss model and obtain an accurate propagation loss model [5]. In the actual positioning phase, based on the scene fingerprint database, the matching position of the current mobile terminal is calculated by the matching algorithm. The specific process is shown in Fig. 1.

Fig. 1. Location fingerprint location process

2.1 WiFi Location Fingerprint Positioning Implementation Principle

WiFi location fingerprint positioning requires two steps, offline sampling and online positioning, as shown in Fig. 2.


Offline sampling phase: the area to be tested is divided into several grids, with the vertices of each grid taken as reference points (RP); the feature information of each RP (including its coordinates and the received signal strength RSS values) is stored as a set of position fingerprints in the location fingerprint database. The location fingerprint database can be expressed as:

$$RP_x = \begin{cases} (x_1, y_1): wifi_1^1, wifi_2^1, wifi_3^1, \ldots, wifi_n^1 \\ (x_2, y_2): wifi_1^2, wifi_2^2, wifi_3^2, \ldots, wifi_n^2 \\ \qquad \vdots \\ (x_m, y_m): wifi_1^m, wifi_2^m, wifi_3^m, \ldots, wifi_n^m \end{cases} \qquad (1)$$

where $(x_m, y_m)$ is the coordinate of the mth RP, and $wifi_n^m$ is the RSS value received by the mth RP from the nth access point (AP).

Online positioning stage: the RSS values of the WiFi signal sources are measured in real time at the point to be tested; then a matching algorithm [6] is used to match the fingerprint data in the location fingerprint database to estimate the optimal position information of the point to be measured.

Fig. 2. Basic schematic diagram of the location fingerprint location method

2.2 Signal Strength Measurement

Knowing the transmitted signal of a node, the receiving node measures the received signal strength RSS, calculates the loss during propagation, and uses the signal propagation attenuation model to convert the propagation loss into a distance; the relative position coordinates of the measured nodes are then calculated [7]. The commonly used path loss model is as follows:

$$P(d) = P(d_0) + 10\,n\log\left(\frac{d}{d_0}\right) + x_0 \qquad (2)$$

where $x_0$ is the added value of the path loss (including the wall attenuation coefficient (WAF) and the floor attenuation coefficient (FAF)); $d_0$ is the reference distance, which is usually set to 1 m; $n$ is the path loss factor; $P(d_0)$ is the power at the reference distance $d_0$; and $d$ is the distance from the transmitting end to the receiving end. Typically, when $d_0 = 1$ m, $P(1) = 30$ dB [8].

When collecting the RSS values of the fingerprint points, the collected RSS values may exhibit jitter due to the influence of noise in the complex indoor environment. To solve this problem, the Kalman filter algorithm [9] is used to filter the collected RSS values and obtain a set of fingerprint signals closer to the true values. The Kalman filter algorithm proceeds as follows:

$$X(k|k-1) = A\,X(k-1|k-1) + B\,U(k) \qquad (3)$$

$$P'(k) = A\,P(k-1)\,A^T + Q \qquad (4)$$

$X(k|k-1)$ denotes the prediction of the state from time $k-1$ to time $k$, $X(k-1|k-1)$ denotes the corrected state estimate at time $k-1$, $A$ and $B$ are system parameters, and $U(k)$ is the system control input at time $k$. $P(k)$ and $P'(k)$ represent the covariance between the estimated value and the true value, and between the predicted value and the true value, respectively. The Kalman gain is calculated from these two values:

$$Kg(k) = P'(k)\,H^T\left[H\,P'(k)\,H^T + R\right]^{-1} \qquad (5)$$

$Kg(k)$ is then used to calculate the required estimate:

$$X(k|k) = X(k|k-1) + Kg(k)\left(Z(k) - H\,X(k|k-1)\right) \qquad (6)$$

$Z(k)$ is the measured value of the system at time $k$, which is converted from the system state $X(k)$ at time $k$, and $H$ is the conversion parameter. Finally, the error covariance between the estimated value and the true value is calculated to prepare for the next recursion:

$$P(k) = \left(I - Kg(k)\,H\right)P'(k) \qquad (7)$$
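A scalar sketch of the filtering recursion of Eqs. (3)–(7) applied to an RSS sequence is given below, taking A = H = 1 and B = 0 and assuming the noise variances Q and R; these simplifications and the default values are ours, not stated in the paper.

```python
def kalman_smooth_rss(rss_samples, q=0.01, r=4.0, x0=None, p0=1.0):
    """Filter a 1-D RSS sequence with a constant-signal Kalman model (A=H=1, B=0)."""
    x = rss_samples[0] if x0 is None else x0
    p = p0
    filtered = []
    for z in rss_samples:
        # Prediction, Eqs. (3)-(4): with A = 1 and B = 0 the prediction is the last estimate
        x_pred, p_pred = x, p + q
        # Correction, Eqs. (5)-(7)
        kg = p_pred / (p_pred + r)          # Kalman gain
        x = x_pred + kg * (z - x_pred)      # corrected estimate
        p = (1.0 - kg) * p_pred
        filtered.append(x)
    return filtered

# e.g. kalman_smooth_rss([-47.2, -49.8, -46.5, -51.0, -48.3])
```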

2.3 WKNN Matching Algorithm

WKNN is an improved algorithm of the K-Nearest Neighbors (KNN) algorithm. WKNN assigns a weight to each fingerprint according to the contribution of each sample point to the unknown node [10], and its matching process is as follows.


Step 1 Calculate the Euclidean distance: assuming that the fingerprint data are $(rssi_{i1}, rssi_{i2}, \ldots, rssi_{in})$ and the unknown node collects the signal strengths of the beacon nodes as $RSSI = (rssi_1, rssi_2, \ldots, rssi_n)$, the Euclidean distance is calculated as [7]

$$D_i = \sum_{j=1}^{n}\left(rssi_j - rssi_{ij}\right)^2, \qquad i = 1, 2, \ldots, m \qquad (8)$$

where $D_i$ represents the Euclidean distance of the unknown node from the ith fingerprint. The Euclidean distances between the RSSI values of the unknown node and the RSSI values of the m fingerprints are calculated according to Eq. (8), the m distances are sorted from small to large, and the coordinates of the first K fingerprints with the smallest Euclidean distances are taken.

Step 2 Determine the weights: the smaller the Euclidean distance, the greater the contribution [11] and the greater the weight, and vice versa:

$$w_i = \frac{\dfrac{1}{D_i}}{\sum_{i=1}^{k}\dfrac{1}{D_i}} \qquad (9)$$

where $w_i$ represents the weight of the ith sample point; a larger $w_i$ indicates that the corresponding fingerprint is closer to the unknown node, and vice versa.

Step 3 Estimate the unknown node coordinates: the position coordinates $(x, y)$ of the unknown node are estimated using the WKNN matching formula

$$x = \sum_{i=1}^{k} w_i x_i, \qquad y = \sum_{i=1}^{k} w_i y_i, \qquad i = 1, 2, \ldots, k \qquad (10)$$

where $x_i$ is the abscissa of the ith sample point, $y_i$ is the ordinate of the ith sample point, $k$ is the number of sample points selected, $(x, y)$ are the horizontal and vertical coordinates of the unknown node, and $w_i$ is the weight of the ith sample point.
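A short sketch of the three WKNN steps, Eqs. (8)–(10), follows. The representation of `fingerprints` as a list of (x, y, rssi_vector) tuples and the small epsilon guarding against division by zero are our own assumptions.

```python
def wknn_locate(rssi, fingerprints, k=4):
    """Estimate (x, y) of an unknown node by weighted k-nearest-neighbour matching."""
    # Step 1: distance metric of Eq. (8) against every stored fingerprint
    dists = []
    for x, y, ref in fingerprints:
        d = sum((a - b) ** 2 for a, b in zip(rssi, ref))
        dists.append((d, x, y))
    nearest = sorted(dists)[:k]
    # Step 2: inverse-distance weights of Eq. (9)
    inv = [1.0 / (d + 1e-9) for d, _, _ in nearest]   # epsilon avoids division by zero
    total = sum(inv)
    weights = [w / total for w in inv]
    # Step 3: weighted coordinate estimate of Eq. (10)
    x_hat = sum(w * x for w, (_, x, _) in zip(weights, nearest))
    y_hat = sum(w * y for w, (_, _, y) in zip(weights, nearest))
    return x_hat, y_hat

# e.g. wknn_locate([-50, -62, -71],
#                  [(0, 0, [-48, -60, -75]), (5, 0, [-55, -58, -70]),
#                   (0, 5, [-52, -65, -68]), (5, 5, [-60, -59, -66])], k=3)
```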

3 Simulation Result Analysis

MATLAB is used to perform Kalman filtering on the collected RSS values and to simulate the WKNN matching algorithm. The experiment was arranged in a 20 m × 15 m × 4 m room. When the RSS intensity values are acquired, the transmission paths include line-of-sight (LOS) and non-line-of-sight (NLOS) links due to reflection from indoor walls and floors. In Fig. 3a, the difference between the constant coefficient 44.006 of NLOS and the constant coefficient 41.090 of LOS represents the wall attenuation; therefore, the difference between the two constant coefficients, −3 dBm, is adopted as the WAF in this research. In Fig. 3b, the calibration RSS in


the case of LOS and NLOS is shown. As can be seen from the figure, there is a difference of −26.21 dBm between their constant coefficients, which represents an upper limit for the FAF. Therefore, in this study, −26 dBm was used as the FAF.

Fig. 3. Attenuation coefficient estimation: (a) WAF estimation; (b) FAF estimation

Figure 4 shows the motion trajectory identification, where (a) is the trajectory identified by the WKNN matching algorithm without Kalman filtering, and (b) is the trajectory identified by the WKNN matching algorithm after Kalman filtering. It can be clearly seen that the positioning accuracy is significantly improved.

Fig. 4. Motion track recognition: (a) WKNN recognition motion track; (b) Kalman filtered WKNN recognition track

In addition, in order to verify the performance of the WKNN algorithm, positioning experiments were carried out with the three algorithms WKNN, KNN and NN using the same fingerprint database. The simulation results can be seen in Fig. 5: the average error distance of WKNN is about 1 m, that of the KNN algorithm is about 1.5 m, and that of the NN algorithm is about 2 m. It can also be seen


that the stability of the NN algorithm is poor, and the accuracy of its positioning results fluctuates greatly. The stability of the WKNN algorithm is better than that of the KNN and NN algorithms, so the performance advantage of the WKNN algorithm is evident from the comparison of both the error and the stability of the positioning accuracy.

Fig. 5. Comparison of the positioning accuracy of the three algorithms WKNN/KNN/NN (CDF of the error distance)

4 Conclusion In this paper, the new fingerprint matching algorithm WKNN is applied to the WiFi location fingerprint location system. In order to verify the effectiveness of the algorithm, we use MATLAB for simulation verification. The results show that this method effectively reduces the positioning error compared with the traditional matching algorithm, but the workload in the offline data acquisition phase is large. Therefore, the next step will be to consider new ways to reduce the workload of offline data collection and update. Acknowledgements. This work is supported by the Undergraduate University Project of Young Scientist Creative Talent of Heilongjiang Province (UNPYSCT-2017125).

References

1. Bulusu N, Heidemann J, Estrin D (2000) GPS-less low-cost outdoor localization for very small devices. IEEE Pers Commun Mag 7(5):28–34
2. Tian WX, Wang X (2015) Fundamental limits of RSS fingerprinting based indoor localization. IEEE Conf Comput Commun 6(3):2479–2487
3. Kushki A, Plataniotis KN, Venetsanopoulos AN (2007) Kernel-based positioning in wireless local area networks. IEEE Trans Mob Comput 6(6):689–705
4. Wu C, Yang Z, Zhou Z et al (2015) Non-invasive detection of moving and stationary human with WiFi. IEEE J Sel Areas Commun 33(11):2329–2342
5. He S, Chan SHG (2016) Wi-Fi fingerprint-based indoor positioning: recent advances and comparisons. IEEE Commun Surv Tutor 18(4):466–490
6. Liu HH, Yang YN (2012) WiFi-based indoor positioning for multi-floor environment. In: TENCON IEEE region 10 conference, pp 456–468
7. Retscher G, Hofer H (2017) Wi-Fi location fingerprinting using an intelligent checkpoint sequence. J Appl Geod 11:197–205
8. Shu Y, Huang Y, Zhang J et al (2016) Gradient-based fingerprinting for indoor localization and tracking. IEEE Trans Ind Electron 63(7):1523–1620
9. Vargas AN, Ishihara JY (2016) Unscented Kalman filters for estimating the position of an automotive electronic throttle valve. IEEE Trans Veh Technol 65(5):4627–4632
10. Eirola E, Lendasse A, Vandewalle V et al (2014) Mixture of Gaussians for distance estimation with missing data. Neurocomputing 131:32–42
11. Kriz P, Maly F, Kozel T (2016) Improving indoor localization using bluetooth low energy beacons. Mob Inf Syst 4(9):1–11

The Digital Design and Verification of Overall Power System for Spacecraft Ning Xia(&), Qing Du, Zhigang Liu, Xiaofeng Zhang, and Yan Chen Beijing Institute of Spacecraft System Engineering, 10094 Beijing, China [email protected]

Abstract. This paper proposes a digital design, simulation, analysis and verification mode for the spacecraft overall power system. A fully autonomously controllable simulation model is formed on the basis of a unified model, and the simulation results are deeply integrated with the power distribution map, the grounding map and the mission time series, so as to realize dynamic analysis and display of the whole-satellite power supply and distribution system as well as digital accompaniment of the ground test and in-orbit flight processes. Verification work is carried out with a lunar probe model as an application example. This has important guiding significance for the overall design and verification of the spacecraft.

Keywords: Spacecraft · Overall power · Digital

1 Introduction The digital design of spacecraft power is an important part of the whole satellite development process, including the design of the Whole satellite power supply and distribution link, the design of the power supply and distribution subsystem, the grounding loop design and the power supply interface design of all load devices [1]. The current digital design and verification of spacecraft power is based on documentation, physical prototypes and electrical performance tests. This model has shortcomings such as long cycle, high cost and low efficiency. With the continuous development of China's aerospace industry, the difficulty of flight missions is increasing, the development cycle is shortening, the power supply requirements are becoming more and more complex, the load equipment is increasing, the in-orbit illumination conditions are harsh, new requirements for the performance and reliability of spacecraft power supply and distribution systems are constantly being proposed, and the requirements for the digital design capability and verification level of the whole satellite power are gradually improved. The traditional power digital design verification method can no longer meet the current spacecraft development needs. Model-based digital design and verification method is an emerging method in the field of system engineering. The core idea is to make full use of the model to make the model play a central role in system analysis, design and implementation [2]. The goal is to create shared data sources and views; to establish a structured development process to achieve synergy between disciplines. The main potential advantages of this approach include increased communication among stakeholders in the system, improved


knowledge acquisition and knowledge reuse, provide better traceability of information, enhanced reusability of artifacts, and reduced development risk [3]. The introduction of model-based systems engineering (MBSE) advanced methods into the digital design and verification of spacecraft power is of great significance in reducing spacecraft development costs, improving efficiency and design level, and enhancing the reusability of results [4, 5]. Foreign research institutions are also actively carrying out the practice and exploration of MBSE in the design of aviation and aerospace electrical systems. Although foreign advanced aerospace companies have made partial breakthroughs in the digital design methods and tools of electrical systems, no overall solution has yet been formed. Domestic spacecraft introduces some software tools in the design and analysis of equipment-level electrical performance. For example, EDA software such as Cadence, Mentor, Altium and so on have realized popularization in circuit principle design, PCB design and hardware simulation [6]. At present, the digital technology of spacecraft power supply and distribution is still in the local application stage, and the technical system of digital R&D and the management mode adapted to it have not yet been formed. In the process of promoting the digital design of spacecraft power, it is necessary to establish a digital prototype of power supply and distribution products as the core [7], with the engineering development process as the main line, throughout the various stages of project development, as system demonstration, program design, detailed design, virtual test, processing and manufacturing, etc. This paper proposes a suitable idea for the digital design of spacecraft overall power supply in China, and analyzes the key technology and implementation scheme of the digital design of spacecraft overall power supply based on MBSE. Taking the overall design and verification process of a lunar probe as an example, it introduces the current research progress and application results, and provides a new digital means for the overall design of spacecraft power. It has opened up key links such as system design, system simulation and integrated analysis, and has achieved the purpose of verifying and optimizing the overall design of the electric power and improving the performance of the power supply and distribution system, and has important guiding significance for the overall design and verification of the spacecraft.

2 Electric Overall Digital Design

2.1 Design Ideas

(a) Model-based system engineering concept

Figure 1 shows a typical model-based system development process. The application model supports system requirement definition, design, analysis and verification throughout the development process. In the system design phase, the system architecture is designed according to the functions defined by the system requirements. According to the definition of the architecture, a multi-domain system model is established in the early stage of the design; the overall scheme is analyzed and optimized based on the system model, and the performance indicators of the subsystems are set.

Fig. 1. Model-based system development process

(b) Design concept based on a unified data source

Based on the Interface Data Sheet (IDS) unified data source, the single-machine contact data and the mechanical, electrical and thermal attribute data sources can be guaranteed to be accurate and reliable, so that no artificial secondary errors are introduced in the energy flow design and simulation process, and changes in the data source can be transmitted to the energy flow simulation platform in a timely manner.

(c) Hierarchical and structured design concept

Hierarchical and structured design is the mainstream idea of modern software design. The hierarchical design provides basic support for expressing the energy flow at different granularities at different levels and for describing the system from multiple views and angles, meeting the diverse information acquisition needs of different roles. At the same time, the structured design provides support for quickly optimizing, upgrading and expanding the architecture design of the software.

2.2 Model-Based Electrical Overall Digitalization Scheme

(a) System framework composition

The model-based spacecraft electric digital design platform is divided into four parts: the digital design scheme platform for the power supply and distribution system, the digital simulation and verification platform for the power supply and distribution links, the technical state visualization platform for the power supply and distribution system, and the integrated analysis platform for the power supply and distribution system.


(1) Digitalization scheme design platform for power distribution system In the “Digital Design Scheme for Power Supply and Distribution System”, the space environment interface module is set up, and the corresponding design and input of the power supply and distribution system is obtained through the analysis of the track conditions and space tasks, and the appropriate scheme model is retrieved from the space power system scheme design database. Carry out module design of solar cell array parallel connection and auxiliary cloth piece, solar wing occlusion analysis, battery string type design and charge state estimation, power supply regulation/power management and distribution, and finally reliability design of power supply and distribution links, to obtain a complete digital distribution system for power distribution systems. (2) Digital simulation and verification platform for power supply and distribution links The “Digital Simulation and Verification Platform for Power Supply and Distribution Links” is based on the output of the “Digital Distribution Design Platform for Power Supply and Distribution Systems”, “Digital Solution for Power Supply and Distribution System”, through load tasks and working mode interface modules, lighting and space environmental conditions. Interface modules and other imitation transmission conditions are introduced into the simulation model, and appropriate battery charge and discharge control strategies, power supply regulation strategies, failure prediction strategies, fault diagnosis strategies, fault recovery strategies, health evaluation strategies, and power supply and autonomous management functions are introduced. The data analysis interface module performs data acquisition and processing, and then performs energy balance simulation analysis to verify the electrical performance of the “power supply system digital solution”. (3) Power supply system technical status visualization platform The “Technology Status Visualization Platform for Power Supply and Distribution System” takes the process of “Digital Simulation and Verification Platform for Power Supply and Distribution Links” and the output result “Digitalization Scheme for Power Supply and Distribution System” as the input object, and through the stratification “Digitalization Scheme for Power Supply and Distribution System” The sub-system layout, equipment layout and contact connection relationship generate a stratified power supply and distribution map, and introduce dynamic information components (solar wing components, battery components, curve components, etc.) on the basis of power supply and distribution large maps, and also with STK Animation docking, realtime receiving STK software output data (including illumination, beta angle, track, etc.), docking with the on-orbit/thermal test flight control program, real-time receiving illumination\shadow switching, device switching machine working status, etc., feedback to the scheme simulation During the process, visualization of the technical status of the power distribution system is carried out in the form of graphics, animation, curves, forms, etc., to ensure that the designer understands the technical status of the entire power supply and distribution.


(4) Integrated analysis platform for power supply and distribution systems The “Integration and Analysis Platform for Power Supply and Distribution Systems” conducts cross-integration analysis of the disciplines of power, power, heat, electromagnetics, fluids, etc. for the “digitalization scheme for power supply and distribution systems” verified by simulation, including the evaluation of electrical performance of power supply and distribution systems. Supply and distribution system stability margin analysis, power supply and distribution system EMC analysis, power supply and distribution system in-orbit attenuation, solar array/battery pack/power supply controller (including power distribution) in-orbit performance degradation, power supply and distribution system Thermal analysis and thermal design, finite element and fluid design analysis, power supply system fault injection and self-management analysis, etc., to ensure that designers can more clearly and quantitatively master and understand the entire power supply and distribution system during the project demonstration/plan design phase. (b) Top-Down Design Process (1) Electrical system hierarchical model definition By combing the design elements of different stages of the electrical system to define the hierarchical electrical system model, it is necessary to meet the needs of different design stages and processes, corresponding to the hierarchical model library. The model may correspond to a single device and component, or it may be a part of the system or even a system, and does not necessarily have a corresponding physical entity. (2) Electrical system architecture modeling Define the logical architecture model and physical architecture model of the system through graphical modeling. The existing electrical single-machine products are all modeled, and the system model is directly dragged and dropped. The new equipment/components re-establish the model by defining interfaces, parameters, and principles. (3) Electrical system design and simulation Based on the system design and simulation integration model, the overall function, performance and reliability simulation analysis and verification can be carried out. For example, by simulating various orbital conditions, the design accuracy of the system energy balance, solar array and battery can be verified. (c) Bottom-Up design process (1) Generate device model from interface data sheet The model library is built by abstracting the commonality principle in various device interface data sheets. According to the device type, the electrical interface information contained in the interface data sheet is extracted, the device principle model is


automatically generated, and the interface connection relationship between the model interface and the external electrical connector is established, and the corresponding single machine model is quickly generated, thereby greatly improving the efficiency of building the database. (2) System model framework generated by interface data sheet and power distribution large image generation Extract the interface and connection relationship, parameters and other information described in the device interface data sheet, the design result of the Whole satellite power supply and distribution large map, automatically generate corresponding to the system design Verify the simulation framework. (3) System model hierarchical simulation Different levels of simulation verification can be carried out for system level/subsystem level and equipment stand-alone level. (4) System level simulation analysis—energy balance simulation For the primary power interface circuit simulation, it can reflect the current surge condition of the device switch, the power supply protection of the interface circuit, the simulation of the interface circuit fault, and the impact analysis of the interface circuit fault on the primary power bus. (5) Equipment level simulation analysis The simulation model can be configured with environmental parameters, solar array parameters, and load power. The simulation can reflect the output power of the solar array in real time. The simulation model can be configured with charge and discharge switches, battery parameters, load power and solar array related parameters, and reflect the charge and discharge of the battery pack in real time through simulation. The simulation model can be configured with parameters such as environmental parameters, solar array parameters, battery parameters, load power, etc., real-time reflection of power system power adjustment circuit S3R/S4R hierarchical modulation, BDR working conditions, reflecting the distribution switch on-off condition and load power situation. (d) Extended application of system model A digital docking interface with the spacecraft on-orbit flight control/telemetry program is realized through TCP/IP communication. Receive flight control commands in real time, receive and display flight events, and analyze and process, inject parameters into the simulation process in real time, and perform real-time simulation calculation. The simulation value and telemetry value of the key parameters of the real-time monitoring system are displayed through the comparison table. Real-time interpretation of the received telemetry data, and comparing the real-time curves of the simulated value and the telemetry value, so as to achieve the auxiliary satellite test and the digitized companion during the on-orbit purpose.
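The energy-balance behaviour described above (solar array generation versus load power, with the battery charging in sunlight and discharging in eclipse) can be pictured with a toy orbital power budget. The parameters, efficiency handling and update rule below are illustrative assumptions only, not the platform's actual Modelica models.

```python
def energy_balance(sunlight_flags, solar_power_w=2000.0, load_power_w=1500.0,
                   battery_wh=3000.0, soc0=0.8, eff=0.9, dt_h=1.0/60.0):
    """Toy whole-satellite energy balance over one orbit.

    sunlight_flags: sequence of booleans, True while the solar array is illuminated.
    Returns the battery state of charge (0..1) at every time step.
    """
    energy = soc0 * battery_wh
    soc = []
    for lit in sunlight_flags:
        generated = solar_power_w if lit else 0.0
        net_w = generated - load_power_w            # surplus charges, deficit discharges
        if net_w >= 0:
            energy = min(battery_wh, energy + net_w * eff * dt_h)
        else:
            energy = max(0.0, energy + net_w / eff * dt_h)
        soc.append(energy / battery_wh)
    return soc

# e.g. a 90-minute orbit with a 35-minute eclipse, sampled each minute:
# soc = energy_balance([True] * 55 + [False] * 35)
```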


3 Verification Example
At present, the overall digital design platform for spacecraft power has been applied to deep space exploration, small satellites and the space station. Taking a lunar probe as an example, Fig. 2 shows the semi-physical simulation interactive interface of the whole spacecraft. The system can collect the telemetry status of the spacecraft and the uplinked remote commands in real time, and the online comparison between the simulation data and the real data provides a basis for further revision of the model.

Fig. 2. Interactive page of semi-hardware simulation (panels: Simulation Data and Telemetry Data)

4 Innovation Points
By adopting the digital spacecraft design method, compared with the traditional design and verification method, innovations and improvements are mainly realized in the following aspects:
(1) The static model of the whole-satellite power supply and distribution system is generated automatically from the IDS unified data source, which ensures that the single-machine interface data and the mechanical, electrical and thermal attribute data come from an accurate and reliable source, so that the digital design process of the power system does not introduce secondary human errors. Changes in the data source can also be propagated to the static model of the power supply and distribution system in time, greatly improving the efficiency and accuracy of the modeling.
(2) Based on the hierarchical static model, a hierarchical simulation model of the whole-satellite power supply and distribution system is built at system level, subsystem level, single-machine level and interface-circuit level, with whole-satellite energy balance analysis, power flow analysis and on-orbit flight simulation functions. By configuring environmental parameters, solar array parameters, battery parameters, MEA and BEA control
parameters, load power, etc., it can reflect in real time the solar cell array output power, the battery pack charge and discharge power, the S3R/S4R hierarchical modulation of the power system regulation circuit, the BDR operation, and so on, which greatly improves the system-level digital simulation analysis and verification capability for spacecraft power supply and distribution systems.
(3) The mapping relationship between the IDS and the simulation model is established, based on which the automatic conversion from the static model to the dynamic model of the whole power supply and distribution system can be realized, so as to generate the Modelica-based simulation model of the electrical design automatically. Compared with the 100% coverage of the static model, more than 90% of the dynamic model parameters can be correctly inherited and less than 10% of the parameters need to be modified manually, which solves the problem of repeated model reconstruction and inconsistency. A toy illustration of this parameter-inheritance idea is sketched below.
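As a toy illustration of the IDS-to-model parameter mapping and inheritance idea mentioned in point (3), the following sketch uses made-up field names and a made-up mapping table; it is not the platform's actual schema.

# Hypothetical illustration of mapping interface-data-sheet (IDS) fields to
# simulation-model parameters and measuring how many are inherited directly.
IDS_RECORD = {                       # made-up IDS fields for one device
    "bus_voltage_V": 42.0,
    "rated_power_W": 150.0,
    "connector": "X1-A",
    "protection": "fuse_2A",
}

FIELD_MAP = {                        # IDS field -> simulation-model parameter
    "bus_voltage_V": "Vbus",
    "rated_power_W": "Pload",
    "protection": "protection_type",
}

def build_model_params(ids_record, field_map):
    params, unmapped = {}, []
    for key, value in ids_record.items():
        if key in field_map:
            params[field_map[key]] = value   # inherited automatically
        else:
            unmapped.append(key)             # needs manual modelling
    rate = len(params) / len(ids_record)
    return params, unmapped, rate

params, unmapped, rate = build_model_params(IDS_RECORD, FIELD_MAP)
print(params, unmapped, "inherited %.0f%%" % (100 * rate))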

5 Conclusion
The digital design scheme for the spacecraft overall power system proposed in this paper has been applied in the project development process. At present, the platform has completed the construction of the professional model library of the power supply and distribution system, and the static model of the whole-satellite power supply and distribution system can be generated automatically with the equipment interface data sheet as the unified data source. The mapping relationship between the device interface information and the simulation model can be established, the simulation model is generated automatically based on the system model, and the whole-satellite energy balance analysis is carried out. The simulation of partial failure modes of key devices in the power supply and distribution link is realized, and semi-physical simulation verification is carried out relying on the whole-satellite integrated test environment, which realizes the fusion of the digital model with real telemetry and remote control data.

References
1. Wang K, Yuan J, Chen H, Pu H (2012) Research and practice of foreign model-based system engineering methods. Aerosp China, 11
2. Hin F, Lin Y, Fan H (2014) Research and practice of model-based systems engineering in spacecraft development. Spacecr Eng 23(3):119–125 (in Chinese)
3. Lin Y, Jungang Y (2009) System engineering concept, process and framework. Spacecr Eng 18(1):8–12 (in Chinese)
4. Yu H, Hao W, Jungang Y et al (2009) Development schemes of spacecraft system engineering techniques. Spacecr Eng 18(1):1–7 (in Chinese)
5. INCOSE. INCOSE MBSE roadmap [EB/OL]. http://www.incose.org/enchantment/docs/07docs/07jul_4mbseroadmap.pdf, 24 June 2007
6. Estefan JA (2008) Survey of model-based system engineering (MBSE) methodologies. INCOSE MBSE Initiative, pp 1–70
7. Li B, Zhu W, Liu J et al (2001) Research and practice on complex products virtual prototype technology. Meas Control Technol 20(11):1–6

The Analysis and Practice of Backup Spacecraft Tele Command Based on Chang'E-4
Xiaoguang Li(&), Xiaohu Shen, Mei Yang, and Shi Liu
Beijing Institute of Spacecraft System Engineering, 104 Youyi Road, Haidian, Beijing, China
[email protected]

Abstract. The safe flight of both the backup spacecraft and the primary spacecraft, without changing the facilities onboard, is discussed based on the design and operation of Chang'E-4. Since the TT&C subsystem of Chang'E-4 is the backup of that of Chang'E-3, the two satellites were designed and produced at the same time with the same hardware and software, and they fly at the same time in similar regions of space. By using different tele command data rates, different frequencies, the shelter of the Moon, and limits on the uplink power, interference between the two tele command links is avoided and the safety of the two satellites is maintained.

Keywords: Backup spacecraft · Tele command · Design · Practice

1 Introduction
As a primary satellite is designed and produced, a backup spacecraft is often produced at the same time in case the primary one fails. After the primary spacecraft is successfully launched, the backup one can be slightly modified to adapt to a new mission. Since the new mission is usually similar to the original mission, the primary and backup spacecraft may work at the same time and in the same region of space. Take the China Lunar Exploration Project as an example: the second phase of the project includes Chang'E-2, Chang'E-3 and Chang'E-4. Chang'E-3 was launched in December 2013 and achieved the "landing" mission. The TT&C subsystem of Chang'E-4 is the backup of that of Chang'E-3 [1]. Actually, Chang'E-2 was also the backup spacecraft of Chang'E-1, but because Chang'E-1 had already been impacted onto the Moon by the time Chang'E-2 was launched [2], tele command interference did not exist. Chang'E-4, the backup of Chang'E-3, has equipment that was designed and produced at the same time as that of Chang'E-3, with the same hardware and software. The devices of the TT&C and OBDH subsystems are all backup devices of Chang'E-3, so the TT&C system, the uplink carrier frequency, the tele command sub-carrier frequency and the data format are all the same as those of Chang'E-3. While Chang'E-4 is in orbit, Chang'E-3 is still working. It is impossible to abandon the necessary tele command just
because of the safety of Chang'E-4. In that case, we have to make sure that the tele command to Chang'E-4 will not be received and executed by Chang'E-3 and, at the same time, that the tele command to Chang'E-3 will not be received and executed by Chang'E-4.

2 Mission Simulation
During the feasibility study of the project, the positions of both spacecraft and of the ground stations were simulated based on the known parameters of Chang'E-3 and the predicted parameters of Chang'E-4. Chang'E-3 landed in the Hongwan (Sinus Iridum) area on 14 December 2013 at position (19.5088° W, 44.1197° N) and is still working. Chang'E-4 was planned to launch on 8 December 2018 and to land on 3 January 2019. After landing, Chang'E-4 cannot be seen from the ground stations, and the link is established through the relay satellite. Both Chang'E-3 and Chang'E-4 are long-lived explorers on the surface of the Moon. The lunar day-night period is 27.3 days, with about 14 days of daylight and 14 days of night. During the 14-day night the solar array receives no sunlight and the temperature falls rapidly, so the spacecraft switches into sleeping mode to survive it [3]. As designed, both Chang'E-3 and Chang'E-4 switch into sleeping mode when the Sun elevation angle at the landing position is between 10° and 15°. At that time tele commands are sent from the ground station to turn off the devices on board, among which the data management computer is turned off last. When daylight returns, the sunlight wakes the spacecraft and the exploration work continues. Chang'E-3 and Chang'E-4 are landed on the near side and the far side of the Moon, respectively, so the tele command resources do not conflict after landing. During the flight period of Chang'E-4, the sleep and wakeup times of Chang'E-3 are shown in Table 1. It can be seen that Chang'E-3 and Chang'E-4 both need tele command from 17 to 30 December 2018. During that interval, the beam angle from the ground station to Chang'E-3 and Chang'E-4 is shown in Fig. 1 and lies between 0.013° and 0.5°.

Table 1. The sleep and wakeup time arrangement of Chang'E-3

No.  Event                           Time
1    The 62nd sleep on the Moon      2 December 2018
2    The 62nd wakeup on the Moon     17 December 2018
3    The 63rd sleep on the Moon      30 December 2018
4    The 63rd wakeup on the Moon     14 January 2019

Based on the analysis above, Chang'E-3 and Chang'E-4 will be within the same beam from the ground station from 17 to 30 December 2018, so it must be ensured that the tele commands to the two spacecraft are clearly separated and do not interfere with each other.

Fig. 1. The beam angle from the ground station to Chang'E-3 and Chang'E-4 (degrees) versus time (Beijing time)

3 Design Analysis
To solve this problem while minimizing changes, different data rates are used. The TT&C system, the uplink carrier frequency, the tele command sub-carrier frequency and the data format are all kept the same. The tele command intended for one spacecraft is not accepted by the other simply because of the different data rate. The theoretical analysis is given below. The data rate of Chang'E-4 is N, while the data rate of Chang'E-3 is 8N. The tele command BPSK sub-carrier demodulation process consists of carrier synchronization and code synchronization, which are two independent demodulation loops [3], as shown in Fig. 2. The carrier tracking loop is independent of the modulation code rate, and after the loop is locked the locally generated f_TC sub-carrier has the same frequency and phase as the input sub-carrier [4].

Fig. 2. Digital demodulation theory demonstration (block diagram: DDC; 8-bit AD output with 64K sampling frequency; sin/cos look-up table; carrier NCO, I/Q branch filters, discriminator and loop filter forming the carrier tracking loop; code NCO, discriminator and loop filter forming the code tracking loop; fractional-N frequency division; accumulate-and-reset at the precise time and a half bit earlier; demodulated data and synchronous clock output)


The code tracking loop, in contrast, depends on the code rate [5]. The center frequency of the code NCO in Fig. 2 is f_TC. According to the timing relationship between the modulation code rate and f_TC, the local bit synchronous clock of N Hz is obtained from f_TC by fractional-N frequency division, and the local code synchronization is derived from the local bit synchronous clock by further fractional-N division. The purpose of the code tracking loop is to adjust the local N Hz bit synchronous clock so that it has the same frequency and phase as the input modulation code (whose code rate is N); the code tracking is then achieved. The locally produced N Hz signal (consisting of two reset pulses, one at the precise time and one 180° earlier in phase) is used to reset the integrator at point I in Fig. 2. The reset outputs at the precise time and 180° earlier are sent to the discriminator to compute the error. The final demodulator output is the sign bit of the integrate-and-reset value at the precise time: 0 represents positive and 1 represents negative. If the data rate of the input signal is 8N, tracking and demodulation cannot be performed normally. As seen from the frequency spectrum, the input signal contains no component matching the local code rate N, so the code tracking loop cannot lock and merely outputs a wandering clock around N Hz, and the integrate-and-reset pulses also wander around N Hz. According to demodulation theory, only when the integration time equals the width of one code bit and the starting times are aligned can the highest signal energy be accumulated and the noise be most strongly suppressed, which gives the lowest demodulation error rate. If the integration time spans more than one code bit, the integrated signal energy is attenuated and the demodulation cannot be performed correctly. Conversely, if a code rate of 8N is used to demodulate a signal of code rate N, the locking problem also exists and the integration time is too short for correct demodulation. Figure 3 shows the simulation result when the code rate N is used to demodulate an input of code rate N, with the input signal alternating between 0 and 1. The blue line shows the value of the precise-time reset, while the red line shows the value of the reset a half bit earlier. It can be seen that the alternating 0/1 data are demodulated normally: the integral energy of the half-bit-earlier reset is around 0, and most of the energy is output at the precise integrate-and-reset time.

Fig. 3. Simulation result when the code rate N is used to demodulate an input of code rate N

Figure 4 shows the simulation result when the code rate N is used to demodulate an input of code rate 8N. It can be seen that the precise-time and half-bit-earlier reset values are random (depending on the noise and the integration interval), and the integral energy stays within 20, far lower than the value of about 500 in Fig. 3, so the demodulation fails.


Fig. 4. Simulation result when the code rate N is used to demodulate an input of code rate 8N
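A simple numerical sketch of the integrate-and-dump argument above is given below; the sample counts and bit numbers are arbitrary illustration values chosen for this sketch, not the actual on-board parameters.

import numpy as np

# Integrate-and-dump at the local code rate N over inputs of rate N and 8N.
rng = np.random.default_rng(0)
SAMPLES_PER_BIT = 64          # samples per bit at the local rate N
N_BITS = 200

def nrz(bits, sps):
    """NRZ waveform (+1/-1) with sps samples per bit."""
    return np.repeat(2.0 * bits - 1.0, sps)

def dump_energy(signal, sps):
    """Mean |integral| over successive local bit intervals."""
    n = len(signal) // sps
    sums = signal[: n * sps].reshape(n, sps).sum(axis=1)
    return np.abs(sums).mean()

bits_n  = rng.integers(0, 2, N_BITS)
bits_8n = rng.integers(0, 2, 8 * N_BITS)

matched    = dump_energy(nrz(bits_n, SAMPLES_PER_BIT), SAMPLES_PER_BIT)
mismatched = dump_energy(nrz(bits_8n, SAMPLES_PER_BIT // 8), SAMPLES_PER_BIT)

print("matched-rate energy    :", matched)     # ~SAMPLES_PER_BIT
print("mismatched-rate energy :", mismatched)  # much smaller, random residual

Running it shows the matched-rate case accumulating the full bit energy in every local bit interval, while the 8N-rate input leaves only a small, random residual, consistent with the behavior in Figs. 3 and 4.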

4 Experimentation in Lab
In the laboratory, an uplink code rate of N bps was used to send 100 tele command instructions to a demodulation loop running at 8N bps. None of the instructions was executed. Meanwhile, all 100 instructions were executed after the uplink code rate was changed to 8N bps. The experiment confirms the analysis above.

5 Practice Onboard
In practice, it is strongly advised to avoid sending tele command when the two spacecraft are close to each other in the first place. If avoidance is impossible, as in the case of Chang'E-4 and Chang'E-3, Chang'E-3 must receive the sleep tele command while Chang'E-4 is flying around the Moon; otherwise Chang'E-3 would not be safe. So, even though the analysis above shows that there is no risk of a wrongly received tele command, the steps below are carried out as complementary measures to ensure the safety of the two spacecraft.

5.1 Use the Shelter

The sleep tele command to Chang'E-3 is sent when Chang'E-4 is behind the far side of the Moon. The Moon is a natural shelter between the ground station and Chang'E-4, so there is no direct uplink.

5.2 Reduce Power

As the margin of the uplink is high, the power of the ground station is lowered to the limit, which further reduces the possibility of Chang'E-4 receiving the wrong tele command.

5.3 Mode Switch Onboard

Both Chang'E-3 and Chang'E-4 are designed with two frequencies, one of which is the main frequency and the other the backup frequency. Since the sleep tele command is sent on the main frequency, Chang'E-4 is switched to the backup frequency before the tele command is sent. The mode is switched back after the sleep tele command to Chang'E-3 has been received. In the meantime, the data management subsystem is switched to "data door close" mode to prohibit the spacecraft from receiving data.

5.4 Summary

As the hardware and software of Chang'E-4 are all the same as those of Chang'E-3, and the two spacecraft operate during the same time and in similar orbits, a different tele command code rate, a different uplink frequency, the shelter of the Moon and a lowered uplink power are used to avoid interference between the two spacecraft, and the safety objective is achieved.

6 Conclusion
From the perspectives of both theoretical analysis and engineering practice, the safe flight of both the backup spacecraft and the primary spacecraft without changing the facilities onboard has been discussed. The flight practice of Chang'E-4 confirms the analysis results and the operational process.

References
1. Jia Y, Zou Y, Xue C (2018) Scientific objectives and payloads of Chang'E-4 mission. Chin J Space Sci 38(1):118–130 (in Chinese)
2. Ouyang Z (2010) Science results of Chang'e-1 lunar orbiter and mission goals of Chang'e-2. Spacecr Eng 19(5):1–6 (in Chinese)
3. Lei Y, Zhang M, Jin B (2014) Research on autonomous sleep-reboot of lunar probe. Spacecr Eng 23(6):13–16 (in Chinese)
4. Shen X, Guo J, Cheng H (2009) Design of an AGC in digital telecommand sub-carrier demodulation. Spacecr Eng 18(2):106–111 (in Chinese)
5. Yang Z (2011) The design and improvement of minitype TT&C transponder. Zhejiang University, Zhejiang (in Chinese)

A Modified Hough Transform TBD Method for Radar Weak Targets Using Plot's Quality
Bao Zhonghua(&), Tian Shusen, and Lu Jianbin
Naval University of Engineering, Jiefang Street 717, Wuhan, China
[email protected]

Abstract. In this paper, we propose a modified Hough-transform-based TBD detection algorithm for radar weak targets using radar plot quality information. Firstly, a general description of the proposed method is presented. Then the quality of radar raw plots is redefined and its calculation algorithms are discussed in detail. Finally, a modification of the traditional Hough transform using plot quality information is applied, and more credible peak detections are conducted and output. The effectiveness of the proposed method, especially in heavy clutter conditions, is verified by simulation experiments.

Keywords: TBD detection · Hough transform · Plot quality · Radar weak target

1 Introduction
The detection of low observable targets by radar in the marine environment has been a concern for several decades. To address this problem, it is acknowledged by most radar researchers that track-before-detect (TBD) technology is quite useful [1]. Different from traditional detect-before-track (DBT) procedures, TBD outputs detections and tracks simultaneously after a fairly long non-coherent integration over consecutive radar scans. Several kinds of algorithms have been presented to implement TBD, including 3D matched filtering, the Hough transform (HT), particle filtering, dynamic programming, etc. [2–5]. Among these algorithms, the HT-based TBD method has received special attention for its insensitivity to locally missing detections and its strong anti-jamming ability. The Hough transform was presented by Hough in 1962 for the detection of straight lines in images. In 1994, Carlson [3] first introduced the HT for radar weak target detection. Since then, many works dealing with the improvement and modification of HT-TBD have been proposed [6–14]. However, in most of the existing works, all the suspect radar plots are treated equally, or at most only the amplitude information is used. In fact, different plots have different qualities, especially now that more and more coherent radar systems are applied to obtain better performance in clutter backgrounds. This radar plot quality (PQ) information should also be used adequately to extract real targets from massive false-alarmed detections. In this paper, we present a novel HT-TBD approach using the PQ information of radar raw plots. Firstly, the PQ of a radar suspect plot is redefined so as to be suitable for
both coherent and non-coherent radars, and the calculation algorithms for the radar PQ and its four component indexes are given. Then a modification of the standard HT is presented to implement PQ integration and peak detection. Finally, simulations are conducted to evaluate the performance of the proposed approach in a K-distributed sea clutter environment.

2 General Description of the Proposed Method
Figure 1 shows the basic flow chart of the presented novel HT-TBD method.

Fig. 1. Flow chart of the proposed algorithm

The inputs of the improved method are the constant false alarm rate (CFAR) detection reports, named EPs (Echo Presences). The CFAR detection threshold, usually called the first threshold, should be set low enough to ensure that weak target echoes can pass through; the final false alarm rate is then controlled by the TBD stage. After CFAR detection, all EPs are condensed into several suspect plots by a traditional plot centroiding technique; at this stage the PQ of each suspect plot is calculated using the algorithm given in detail in Sect. 3. We then obtain the radar's suspect plot observation set of N scans, denoted as

Z = { Z_i^n | i = 1, 2, ..., M_n; n = 1, 2, ..., N }        (1)

where Z_i^n = [x_i, y_i, n, Q_i], (x_i, y_i) are the Cartesian coordinates of suspect plot i, M_n is the total number of suspect plots in scan n, and Q_i is the plot quality, which in the standard Hough transform (SHT) algorithm is simply the amplitude. The Hough transform maps points in Cartesian space to curves in parametric space through the parametric equation

ρ = x cos θ + y sin θ        (2)

where ρ is the distance between the origin and a line containing the point (x, y), and θ ∈ [0, π] is the angle between the line and the x-axis. Usually an accumulator matrix H defined on the parametric space is used to record the integrated values. In SHT applications, the accumulated value is the plot amplitude. However, for the detection of weak radar
targets submerged in strong clutter, the amplitude or energy is not so credible. As a modification of the SHT, we propose to use the plot quality instead. After applying the modified HT to Z, peak detection is performed in the matrix H, and the final detection and track reports are output.
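The following Python sketch illustrates the PQ-weighted accumulation and peak detection described above; the grid sizes, threshold and example plots are hypothetical values for illustration only.

import numpy as np

# Sketch of the PQ-weighted Hough accumulation: each suspect plot (x, y, n, Q)
# votes its quality Q (amplitude in the standard HT) into the (rho, theta) grid.
N_THETA, N_RHO, RHO_MAX = 180, 200, 100.0
thetas = np.linspace(0.0, np.pi, N_THETA, endpoint=False)

def pq_hough(plots):
    """plots: iterable of (x, y, scan_index, Q)."""
    H = np.zeros((N_RHO, N_THETA))
    for x, y, _n, q in plots:
        rho = x * np.cos(thetas) + y * np.sin(thetas)        # eq. (2)
        idx = np.round((rho + RHO_MAX) / (2 * RHO_MAX) * (N_RHO - 1)).astype(int)
        ok = (idx >= 0) & (idx < N_RHO)
        H[idx[ok], np.arange(N_THETA)[ok]] += q              # PQ instead of amplitude
    return H

def detect(H, rel_threshold=0.8):
    """Peak detection on the normalized accumulator."""
    Hn = H / H.max()
    return np.argwhere(Hn >= rel_threshold)

# Example: three collinear high-quality plots plus one low-quality clutter plot.
plots = [(0, 0, 1, 0.9), (3, 3, 2, 0.8), (6, 6, 3, 0.85), (20, -5, 2, 0.2)]
peaks = detect(pq_hough(plots))
print(peaks[:5])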

3 The Definition and Calculation of Radar Plot Quality

3.1 The Definition of Radar Plot Quality

In this paper, the quality of a suspect plot is determined by four different factors: the number of EPs, the local signal-to-noise ratio (SNR), the range/azimuth match degree, and the Doppler match degree. The composite influence of the four factors is described by the average quality of the suspect plot, denoted as

q = Average(q_EP, q_SNR, q_RA, q_D)        (3)

where q_EP, q_SNR, q_RA and q_D are, respectively, the quality indexes determined by the EP number, the local maximum SNR, the range/azimuth match degree and the Doppler match degree. All four component quality indexes range in [0, 1]. For non-coherent radar applications, only q_EP, q_SNR and q_RA are used. To enhance the disparity between high-quality and low-quality plots, a nonlinear function g(·) is designed to map q to the total PQ of the suspect plot, Q = g(q):

Q = √q,   q > q_th
Q = q²,   q ≤ q_th        (4)

where q_th is the threshold for partitioning high-PQ and low-PQ plots; commonly, in coherent radar cases we take q_th = 0.5, and in non-coherent radar cases a larger q_th is used. Both the domain and the range of g(·) lie within [0, 1].
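A minimal sketch of eqs. (3) and (4), assuming the four component indexes have already been computed (the example values are invented):

import math

# Average the component indexes, then apply the nonlinear mapping g() that
# separates high- and low-quality plots.
def total_pq(q_ep, q_snr, q_ra, q_d=None, q_th=0.5):
    """q_d is omitted for non-coherent radars (a larger q_th would then be used)."""
    comps = [q_ep, q_snr, q_ra] + ([] if q_d is None else [q_d])
    q = sum(comps) / len(comps)                 # eq. (3)
    return math.sqrt(q) if q > q_th else q * q  # eq. (4)

print(total_pq(0.9, 0.8, 0.85, 0.95))   # high-quality plot -> boosted toward 1
print(total_pq(0.3, 0.2, 0.4, 0.1))     # low-quality plot  -> suppressed toward 0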

3.2 The Calculation Algorithm of Plot Quality

Now we consider in detail the calculation of the component indexes for all suspect plots, assuming that the target is point-like and non-fluctuating within the coherent processing interval (CPI).

3.2.1 The Calculation of q_EP
The total number of EPs used in the centroiding of a suspect plot reflects, to some extent, the plot's reliability: the more EPs a suspect plot contains, the more likely it comes from a real target. The EP-number-based plot quality index q_EP is given as

q_EP = N_ep / N_d        (5)

where N_ep is the number of EPs and N_d is the total number of detection times of the plot.


3.2.2 The Calculation of q_SNR
Denote the maximum amplitude of the suspect plot's EPs as A_EP and the corresponding CFAR threshold as V_T. The SNR quality index q_SNR is given as

q_SNR = max_{1 ≤ i ≤ N_p} [ (A_EP − V_T) / A_EP ]_i²        (6)

The maximum is searched over all the EPs of the suspect plot. In radar CFAR detection, V_T is determined by the mean power of the local noise or clutter. For all suspect plots, q_SNR ranges from 0 to 1, since A_EP is always larger than V_T. The more A_EP exceeds V_T, the larger q_SNR becomes, so it serves as an indication of the plot quality related to the local SNR.

3.2.3 The Calculation of q_RA
The echo of a real target has its own particular range and azimuth extension features, which are quite different from those of clutter or jamming. These features are determined by the radar's matched filter design and antenna gain function. Here we define q_RA as the match degree of the range-azimuth extension (RAE) feature between the suspect plot and an ideal point-like target. Denoting the RAE function of an ideal point-like target as E_IT and the RAE function of a suspect plot as E_EP, q_RA is defined as

q_RA = ‖E_IT · E_EP‖₂ / ( ‖E_IT‖₂ · ‖E_EP‖₂ )        (7)

where

E_IT(r, φ) = H_R(r) · H_A(φ)        (8)

in which H_R(r) and H_A(φ) are, respectively, the radar's matched filter response function and the antenna's round-trip gain function towards an ideal point-like target. Both H_R(r) and H_A(φ) depend only on the radar itself and can be acquired by actual measurements. E_EP(r, φ) is the measured RAE and can be calculated from the amplitudes and positions of the suspect plot's EPs. If no EP exists in a particular (r, φ) cell, the value of that cell is simply set to zero.

3.2.4 The Calculation of q_D
For coherent radar applications, additional Doppler information is available for plot quality calculation. In coherent radar signal processing, moving target detection (MTD) is traditionally used to suppress clutter, and CFAR is then implemented over range for each Doppler channel. When a moving real target with an identifiable radial velocity exists, peaks appear in the corresponding adjacent channels and present a sinc-function-like amplitude Doppler spectrum.


We therefore define a suspect plot's Doppler quality index q_D as the match degree between the plot's measured Doppler spectrum and the theoretical spectrum of a real target. Denoting the Doppler spectrum of the suspect plot measured in a CPI as S_d(f) and the expected theoretical target spectrum as S_ref(f), q_D is given by

q_D = ∫ S_d(f) S_ref(f) df / ( √(∫ S_ref²(f) df) · √(∫ S_d²(f) df) )        (9)

If the MTD is implemented with the fast Fourier transform (FFT) algorithm, the discrete analytic expression of S_ref can be given as

S_ref(m) = sin[πN(f_d/f_r + m/M)] / sin[π(f_d/f_r + m/M)]        (10)

which is directly the magnitude response of the m-th Doppler filter channel of the FFT-based MTD, where M is the number of Doppler channels.
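The following sketch computes q_D as the normalized match of eq. (9) against the discrete reference spectrum of eq. (10); the Doppler, PRF and FFT-size values are taken loosely from the simulation settings later in the paper, and the noise model is an assumption.

import numpy as np

def q_doppler(s_meas, s_ref_vals):
    """Normalized inner product between measured and reference Doppler spectra, eq. (9)."""
    s_meas, s_ref_vals = np.abs(s_meas), np.abs(s_ref_vals)
    num = np.sum(s_meas * s_ref_vals)
    den = np.sqrt(np.sum(s_ref_vals ** 2)) * np.sqrt(np.sum(s_meas ** 2))
    return num / den if den > 0 else 0.0

def s_ref(fd, fr, M, N):
    """|sin(pi*N*(fd/fr + m/M)) / sin(pi*(fd/fr + m/M))| over the M channels, eq. (10)."""
    m = np.arange(M)
    x = fd / fr + m / M
    with np.errstate(divide="ignore", invalid="ignore"):
        r = np.sin(np.pi * N * x) / np.sin(np.pi * x)
    return np.abs(np.nan_to_num(r, nan=float(N)))   # Dirichlet-kernel limit where sin(pi*x)=0

ref = s_ref(fd=266.7, fr=1000.0, M=16, N=16)         # ~4 m/s at 3 cm wavelength -> ~266.7 Hz
noisy = ref + 0.2 * np.random.default_rng(1).random(16) * ref.max()
print("qD (target-like spectrum):", q_doppler(noisy, ref))
print("qD (flat spectrum)       :", q_doppler(np.ones(16), ref))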

4 Simulation and Results Analysis
To validate the proposed method and evaluate its performance, Monte Carlo simulations were carried out in a K-distributed sea clutter environment.

4.1 The Generation of K-Distributed Sea Clutter

Firstly, K-distributed, temporally and spatially correlated sea clutter was generated using the well-known SIRP method, described in detail in reference [15]. The main parameters of the simulated sea clutter are shown in Table 1. Figure 2 shows an example of the simulated coherent sea clutter amplitude and its fitted probability density compared with a theoretical K distribution with exactly the same parameters, which demonstrates that the simulation method works well.

Table 1. Main simulation parameters of the sea clutter

Simulation parameter                Value
Shape factor                        1.71
Scale factor (b)                    3.52
Radar direction towards wind        0°
S.t.d. of clutter spectrum spread   66.67 Hz
Wind speed                          10 m/s
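For readers who want to reproduce a K-distributed amplitude, the compound-Gaussian sketch below generates speckle modulated by a gamma texture using the shape and scale values of Table 1. It covers only the marginal distribution; the temporal/spatial correlation shaping of the SIRP method in [15] is omitted, and the exact scale-parameter convention is an assumption.

import numpy as np

# Minimal compound-Gaussian sketch of K-distributed clutter:
# complex Gaussian speckle modulated by a gamma-distributed texture (local power).
def k_clutter(n, shape=1.71, scale=3.52, rng=None):
    rng = rng or np.random.default_rng()
    texture = rng.gamma(shape, scale, size=n)                  # slowly varying local power
    speckle = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2.0)
    return np.sqrt(texture) * speckle                          # complex clutter samples

samples = k_clutter(100_000)
amp = np.abs(samples)
print("mean amplitude: %.3f, 99th percentile: %.3f" % (amp.mean(), np.quantile(amp, 0.99)))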


Fig. 2. The simulated K-distributed sea clutter

4.2 The Validation of the PQ Calculation Algorithms

Next we consider a marine target with a radial velocity of 4 m/s observed by a coherent radar in the K-distributed sea clutter environment. The sea clutter parameters are the same as in Sect. 4.1, and the radar parameters are set as follows: wavelength λ = 3 cm, PRF 1000 Hz, 3 dB beam width 1.8°, range resolution 15 m, and scan period 2.5 s. A sixteen-point FFT is used for MTD, and a CA-CFAR algorithm with a normalization factor of 2 is used for CFAR detection. The amplitude of the simulated coherent target echo is defined as A, and A/b determines the SNR of the input echoes. The simulation was run 100 times. The calculation results of the component PQ indexes q_EP, q_SNR, q_RA and q_D using the simulated target and clutter data under different input SNR conditions are given in Fig. 3, and the calculation results of the total PQ for non-coherent and coherent radar applications are given in Fig. 4. From the results shown in Figs. 3 and 4, we can conclude that: (a) as long as the input SNR is not too low (in this simulation, A/b ≥ 10/3.52), all four quality indexes can successfully distinguish real target plots from clutter-induced false alarm plots; (b) among the four quality indexes, especially under low input SNR conditions, q_D has the best performance while q_EP and q_SNR are more or less confusing, which means that the introduction of q_D makes a great difference; (c) the designed calculation algorithm for Q is quite effective and robust, as shown in Fig. 4: even under quite low input SNR conditions, target and clutter remain largely distinguishable.

Fig. 3. The calculation results of the component indexes: (a) q_EP, (b) q_SNR, (c) q_RA, (d) q_D

Fig. 4. The calculation results of Q for (a) the coherent radar case and (b) the non-coherent radar case


4.3 The Performance of the PQ-HT TBD Method

Assume a marine target starting from (0 m, 0 m) and sailing with a velocity of (3 m/s, 3 m/s), with radar observations from seven scans integrated. The number of false alarm plots per scan is a random variable uniformly distributed in the range [40, 45]. The other simulation parameters are the same as in Sect. 4.2. Figure 5a gives the result of the proposed PQ-HT algorithm for A = 3; as a comparison, the result of the SHT with magnitude accumulation is given in Fig. 5b. The red square indicates the position of the real target, and all values are normalized by the maximum peak.

Fig. 5. Results comparison between (a) the proposed PQ-HT and (b) the SHT

With the peak detection threshold set to 0.8 (H has been normalized), Table 2 gives the mean false alarm numbers n_fa of the two algorithms under different input SNR conditions. For each input SNR condition, the Monte Carlo simulation was run 50 times.

Table 2. Mean false alarm numbers of PQ-HT and SHT

A (b = 3.25)    1      1.5    2      3      4     5
n_fa (PQ-HT)    8.04   5.04   0.9    0.02   0     0
n_fa (SHT)      20.2   17.4   14.58  13.15  6.02  2.02

The results given by Fig. 5 and Table 2 show a significant advantage of the proposed PQ-HT compared with SHT in low SCR applications.


5 Conclusion
In this paper we introduced a new definition and calculation algorithm for radar plot quality, and then presented a modified HT-based TBD method for radar weak targets in strong sea clutter using the defined plot quality information. The results of Monte Carlo simulations show the advantage of the proposed method.

References
1. Sun L, Wang J (2007) An improved track before detection algorithm for radar weak target detection. Radar Sci Technol 5(4):292–296
2. Reed IS, Gagliardi RM, Stotts LB (1988) Optical moving target detection with 3D matched filtering. IEEE Trans AES 24(4):327–336
3. Carlson BD, Evans ED, Wilson SL (1994) Search radar detection and track with the Hough transform. IEEE Trans AES 30(1):102–115
4. Barniv Y (1985) Dynamic programming solution for detecting dim moving targets. IEEE Trans AES 1:144–156
5. Salmond D, Birch H (2001) A particle filter for track-before-detect. In: Proceedings of the American control conference, pp 375–370
6. Chen J, Leung H (1996) A modified probabilistic data association filter in a real clutter environment. IEEE Trans AES 32(1):300–313
7. Binias G (2002) Target track extraction procedure for OLPI antenna data on the basis of Hough transform. IEE Proc RSN 149(1):20–32
8. Xu L, Oja E, Kultanen P (1990) A new curve detection method: randomized Hough transform (RHT). Pattern Recognit Lett 11(5):331–338
9. Moyer LR, Spak J, Lamanna P (2011) A multi-dimensional Hough transform-based track-before-detect technique for detecting weak targets in strong clutter backgrounds. IEEE Trans Aerosp Electron Syst 47(4):3062–3068
10. Fraiba H, Vahid R, Abbas S (2012) Comparison of two algorithms for detection of fluctuating targets in HRR radars in non-Gaussian clutter based on Hough transform. Radar Sci Technol 10(2):124–132
11. Priyanka M, Bidyut BC (2015) A survey of Hough transform. Pattern Recognit 48:993–1010
12. Wang GH, Li L, Yu HB (2017) A modified Hough transform TBD algorithm based on point set merging. Acta Aeronautica et Astronautica Sinica 38(1):203–123
13. Yu HB, Wang GH, Wu W et al (2016) A novel RHT-TBD approach for weak targets in HRRF radar. Sci China Inf Ser 59(12):1–14
14. Bi X, Du JS, Zhang QS et al (2015) Improved multi-target radar TBD algorithm. J Syst Eng Electron 26(6):1229–1235
15. Deng SQ, Jin L, Liang H (2017) Simulation of temporal-spatial coherent correlation K-distributed sea clutter. Electron Meas Technol 40(11):61–65

Analysis of the Effects of Climate Teleconnections on Precipitation in the Tianshan Mountains Using Time-Frequency Methods
Baoju Zhang1(&), Lixing An1, Yonghong Hao2, and Tian-Chyi Jim Yeh2,3
1 Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, 300387 Tianjin, China, [email protected]
2 Tianjin Key Laboratory of Water Resource and Environment, Tianjin Normal University, 300387 Tianjin, China
3 Department of Hydrology and Atmospheric Sciences, University of Arizona, Tucson, AZ, USA

Abstract. Precipitation is the main source of subsurface and surface water in the Tianshan Mountains, which plays a vital role in the social and economic development of Xinjiang, China. Research has shown that precipitation in the Tianshan Mountains is largely affected by climate phenomena. In this paper, Ensemble Empirical Mode Decomposition (EEMD) and wavelet coherence analysis are used to explore the effects of climate teleconnections at annual to multidecadal scales on precipitation on the northern and southern slopes of the Tianshan Mountains. The results show that the annual scale of the ISM strongly affects the precipitation on both the northern and southern slopes. The interannual scales of ENSO and NAO mainly affect the precipitation of the northern slope, and ENSO has a greater influence on precipitation than NAO. The influence of the multidecadal scale of the PDO on the precipitation of the northern and southern slopes is weak.

Keywords: EEMD method · Wavelet coherence analysis · Tianshan Mountains · Precipitation

1 Introduction
Ensemble Empirical Mode Decomposition (EEMD) is an adaptive time-frequency analysis method that decomposes multi-scale oscillations into a series of intrinsic mode functions (IMFs) and a residual component for non-stationary and non-linear time series data [1]. This method improves the Empirical Mode Decomposition (EMD) method and overcomes the mode mixing problem of EMD [2]. The other time-frequency technique used in this paper, wavelet coherence analysis, analyzes the local correlation between two associated signals and is particularly suitable for nonstationary systems [3]. Climatic phenomena and hydrological processes are
regarded as nonstationary and nonlinear systems, and these mathematical methods provide us with opportunities to explore the relationship between precipitation and climate change. Monsoons and climate teleconnections fluctuating from annual to multidecadal scales largely affect precipitation [4]. However, the effects of monsoons and climate teleconnections on precipitation in different regions vary widely. The Tianshan Mountains act as a 'water tower' in central Asia. Precipitation in the mountainous areas is an important source of surface and subsurface water in Xinjiang. The spatial distribution of precipitation affects regional soil water content, groundwater recharge and river flow. Therefore, it is important to study the effects of climate variation on the spatial and temporal distribution of precipitation in the Tianshan Mountains of Xinjiang. In this study, we selected monsoon and climate teleconnection indices including the Indian Summer Monsoon (ISM), the El Niño-Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO), and the Pacific Decadal Oscillation (PDO). The ISM has a 1-year cycle, the ENSO a 2–7-year cycle, the NAO a 3–6-year cycle, and the PDO a 10–30-year cycle [5, 6]. These indices are associated with precipitation variations in amplitude, timing and spatial distribution in China. The goal of this paper is to explore the impacts of annual to multidecadal climate variation on the precipitation of the northern and southern slopes of the Tianshan Mountains in Xinjiang. First, we extract precipitation time series at different significant frequencies with the EEMD method. Then, we group and sum the significant frequencies according to the periodic fluctuations of the climate indices. Finally, wavelet coherence is used to analyze the relations between the climate indices and the reconstructed precipitation of the northern and southern slopes of the Tianshan Mountains, respectively.

2 Method

2.1 EEMD Decomposition

EEMD can decompose non-linear and non-stationary signals into a series of intrinsic mode functions (IMFs), from high frequency to low frequency, through the sifting process [7], and it overcomes the mode mixing of EMD [8]. The specific steps of EEMD decomposition are as follows: (1) add Gaussian white noise to the original signal; (2) decompose the noisy series into IMFs using EMD; (3) repeat the above steps N times with different white noise series; (4) average the corresponding IMFs over the N trials. An ensemble set of IMFs is then obtained:

c_j = (1/N) Σ_{i=1}^{N} c_{j,i}        (1)

where c_j denotes the jth IMF component. The well-established statistical rule for the added white noise is as follows:


ε_n = ε / √N        (2)

where ε is the amplitude of the added white noise, N is the ensemble size, and ε_n represents the standard deviation of the error, i.e., the difference between the original time series and the corresponding IMF(s). In this paper, the standard deviation of the added noise is 0.4 and the ensemble size is 1000.
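A compact sketch of steps (1)-(4) is given below; the `emd` routine is assumed to be supplied by an existing EMD implementation and to return an array of IMFs, which is an assumption rather than a specific library API.

import numpy as np

# Sketch of the EEMD steps: add white noise, decompose with EMD, repeat,
# and average the IMFs over the ensemble (eq. (1)).
def eemd(signal, emd, trials=1000, noise_std=0.4, n_imfs=8, seed=0):
    """`emd` is assumed to return an array of shape (n_imfs, len(signal))."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((n_imfs, len(signal)))
    for _ in range(trials):
        noisy = signal + noise_std * signal.std() * rng.standard_normal(len(signal))
        imfs = emd(noisy)[:n_imfs]              # keep a fixed number of IMFs
        acc[: imfs.shape[0]] += imfs
    return acc / trials                         # ensemble mean of each IMF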

2.2 Wavelet Coherence Analysis

The wavelet coherence spectrum is used to measure the correlation between two time series. The coherence spectrum of two time series X and Y is defined as [9]:

R²(s) = |S(s⁻¹ W^XY(s))|² / [ S(s⁻¹ |W^X(s)|²) · S(s⁻¹ |W^Y(s)|²) ]        (3)

where R²(s) is the wavelet coherence coefficient, which takes values from 0 to 1, W^XY(s) is the cross wavelet transform, which reveals regions of high common energy and the phase relationship of the two time series, and s is the scale parameter of the Morlet wavelet. S is a smoothing operator given by

S(W) = S_scale(S_time(W(s)))        (4)

where S_scale denotes smoothing along the wavelet scale axis and S_time smoothing in time; they are defined by Torrence and Webster as follows [10]:

S_time(W)|_s = ( W(s) * c₁ e^(−t²/(2s²)) )|_s        (5)

S_scale(W)|_s = ( W(s) * c₂ Π(0.6 s) )|_s        (6)

where c₁ and c₂ are normalization constants and Π is a rectangular function.
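The sketch below evaluates eq. (3) from precomputed Morlet CWTs using Gaussian time smoothing and a boxcar scale smoothing; the CWT routine itself is not shown, and the fixed 3-bin scale window is a simplification of the 0.6 s boxcar of eq. (6).

import numpy as np
from scipy.ndimage import gaussian_filter1d, uniform_filter1d

def _smooth(field, scales, dt, win=3):
    """Gaussian smoothing in time (eq. (5)) followed by a boxcar over scales (eq. (6))."""
    field = np.asarray(field, dtype=complex)
    out = np.empty_like(field)
    for k, s in enumerate(scales):
        sig = s / dt
        out[k] = gaussian_filter1d(field[k].real, sig) + 1j * gaussian_filter1d(field[k].imag, sig)
    return (uniform_filter1d(out.real, win, axis=0)
            + 1j * uniform_filter1d(out.imag, win, axis=0))

def wavelet_coherence(Wx, Wy, scales, dt):
    """Wx, Wy: CWT arrays of shape (n_scales, n_times); returns R^2(s) of eq. (3)."""
    inv_s = (1.0 / scales)[:, None]
    num = np.abs(_smooth(inv_s * Wx * np.conj(Wy), scales, dt)) ** 2
    den = (_smooth(inv_s * np.abs(Wx) ** 2, scales, dt).real
           * _smooth(inv_s * np.abs(Wy) ** 2, scales, dt).real)
    return num / den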

3 Study Area and Data
The Tianshan Mountains extend more than 2,500 km from east to west and about 250–350 km from north to south [11]. Due to access restrictions and data availability, the study region was confined to the part of the Tianshan Mountains in China between latitudes 40°31′ and 45°23′ N and longitudes 74°50′ and 96°10′ E. This latitude band straddles the central part of the Xinjiang Uygur Autonomous Region. The mountain range starts east of Hami County and ends northwest of Wuqia County (Fig. 1). Within China the Tianshan Mountains stretch 1,760 km and cover an area of over 570,000 square kilometers, accounting for about one third of the entire area of Xinjiang [12]. Monthly precipitation data for 18 meteorological stations in the Tianshan Mountains were obtained from the China Meteorological Data Sharing Service System.


Fig. 1. Distribution of the 18 rainfall stations of the eastern Tianshan Mountains

According to the geographical location and climatic characteristics of the study area, the mountain area is divided into two regions: the northern slope (7 stations) and the southern slope (11 stations) of the Tianshan Mountains. In order to make full use of the data from the meteorological stations, the precipitation calculation covers 1953–2017 for the northern slope and 1951–2017 for the southern slope.

3.1 EEMD Decomposition

Monthly precipitation time series collected from the northern and southern slopes of the Tianshan Mountains were decomposed into eight IMF components and one residual component using the EEMD method (Fig. 2a, b respectively).

Fig. 2. EEMD decomposition results of monthly precipitation on the northern slope (a) and southern slope (b) of Tianshan Mountains

Based on the mean period and contribution rate of IMFs (Table 1), precipitation variation on both the northern and southern slopes had the strongest annual and weak interannual quasi-periodic fluctuations but ambiguous multidecadal scale quasi-periodic fluctuations. Then we group and sum IMFs into four time-scale components according to the following period ranges: 0.5–1 year (ISM-like), 2–7 years (ENSO-like),
3–6 years (NAO-like), and 10–35 years (PDO-like). The composite IMFs are used in the wavelet coherence analysis.

Table 1. IMF period and contribution rate of precipitation on the northern and southern slopes of Tianshan Mountains

IMF component   Northern slope                      Southern slope
                Period (month)  Contribution (%)    Period (month)  Contribution (%)
IMF1            8.56            31.85               8.68            30.50
IMF2            10.94           37.81               9.66            35.08
IMF3            12.55           22.54               12.38           29.13
IMF4            33.16           4.45                28.58           2.50
IMF5            53.76           2.12                53.43           1.54
IMF6            125.99          0.77                107.01          0.74
IMF7            250.72          0.32                241.87          0.20
IMF8            425.24          0.15                618.89          0.30

3.2 Wavelet Coherence Between Precipitation and Climate Indices on the Northern Slope

Wavelet coherence analysis was used to identify the effect of annual to multidecadal climate variability on the reconstructed precipitation of the northern and southern slopes of the Tianshan Mountains. Figure 3 shows the wavelet coherence results between the composite IMFs of the northern slope precipitation and the ISM, ENSO, NAO, and PDO. Figure 3a shows the wavelet coherence between the reconstructed precipitation and the ISM: high wavelet coherence (>0.8) is observed at the 1-year scale, and relatively high wavelet coherence (0.2–0.4) is observed at the 0.5-year scale; the intra-annual periodicity (i.e., 0.5–1.0 year) is intermittent across different years. Figure 3b displays the wavelet coherence between the reconstructed precipitation and the ENSO. The effect of the ENSO differs from that of the ISM: the significant high-coherence regions are concentrated at the 2–8-year scales; the 2–4-year periodicity is intermittent and is observed in 1957–1975 and 1993–2013, and the 4–8-year periodicity is observed in 1959–1995. The wavelet coherence between the reconstructed precipitation and the NAO is shown in Fig. 3c; precipitation shows a clear fluctuation in response to the NAO at the 3-year and 5-year scales, with the 3-year fluctuation observed in 1977–1989 and the 5-year fluctuation in 1997–2008. Figure 3d illustrates the wavelet coherence between the reconstructed precipitation and the PDO. The results suggest that the effect of the PDO on precipitation is weak on the multidecadal scale, and no significant coherence regions are detected; however, a high wavelet coherence (>0.4) region is observed at the 4-year scale in 1985–1999.


Fig. 3. Wavelet coherence between composite IMFs of the northern slope precipitation and (a) the ISM, (b) the ENSO, (c) the NAO, and (d) the PDO

3.3 Wavelet Coherence Between Precipitation and Climate Indices on the Southern Slope

Figure 4 shows the wavelet coherence results between composite IMFs of the southern slope precipitation and the ISM, ENSO, NAO, and PDO.

Fig. 4. Wavelet coherence between composite IMFs of the southern slope precipitation and (a) the ISM, (b) the ENSO, (c) the NAO, and (d) the PDO

In particular, Fig. 4a shows the wavelet coherence between the reconstructed precipitation and the ISM. It reveals effects of the ISM on precipitation similar to those on the northern slope: high wavelet coherence (>0.8) is observed at the 1-year scale and relatively high wavelet coherence at the 0.5-year scale. The effect of the ISM on precipitation is stronger on the southern slope than on the northern slope, with the coherence coefficient reaching 0.44. On the other hand, the wavelet coherence between the reconstructed precipitation and the ENSO is illustrated in Fig. 4b; precipitation has a weaker response to the ENSO on the southern slope than on the northern slope, with a 3-year fluctuation found in 1969–1981. Figure 4c displays the wavelet coherence between the reconstructed precipitation and the NAO. The impact of the NAO on southern slope precipitation is also weak, and the 2-year periodicity is intermittent across different years. Figure 4d shows the wavelet coherence between the reconstructed precipitation and the PDO; the results reveal that the PDO has no significant coherence with precipitation on multidecadal scales.

4 Conclusions
Precipitation on the northern and southern slopes of the Tianshan Mountains differs significantly in space, and the climate indices have a significant influence on this difference. On the annual scale, monthly precipitation has a strong positive correlation with the ISM, and the ISM has a greater impact on precipitation on the southern slope of the Tianshan Mountains. On the interannual scale, the ENSO is the main factor affecting precipitation on the northern slope of the Tianshan Mountains; the NAO also affects precipitation on the northern slope, but its effect is weaker than that of the ENSO. Both the ENSO and the NAO have a greater impact on precipitation on the northern slope than on the southern slope. On the multidecadal scale, the PDO is only weakly correlated with precipitation in the Tianshan Mountains. These mathematical methods provide an understanding of the spatial impacts of climate variation on precipitation and offer insight for water resource management.


Acknowledgement. This paper is supported by the Natural Youth Science Foundation of China (61501326, 61401310), the National Natural Science Foundation of China (61731006) and the Natural Science Foundation of China (61271411). It is also supported by the Tianjin Research Program of Application Foundation and Advanced Technology (15JCZDJC31500), the Tianjin Natural Science Foundation (18JCZDJC39500), and the Tianjin Science Foundation (16JCYBJC16500). This work was also supported by the Tianjin Higher Education Creative Team Funds Program.

References
1. Yinghao Y, Zhang H, Singh VP (2018) Forward prediction of runoff data in data-scarce basins with an improved ensemble empirical mode decomposition (EEMD) model. Water 10:388
2. Wu Z, Huang NE (2009) Ensemble empirical mode decomposition: a noise-assisted data analysis method. Adv Adapt Data Anal 1(1):1–41
3. Grinsted A, Moore JC, Jevrejeva S (2004) Application of the cross wavelet transform and wavelet coherence to geophysical time series. Nonlinear Proc Geophys 11:561–566
4. Velasco EM, Gurdak JJ, Dickinson JE, Ferré TPA, Corona CR (2016) Interannual to multidecadal climate forcings on groundwater resources of the U.S. West Coast. J Hydrol Reg Stud
5. An X (2015) Effect of terrain on the spatial distribution of precipitation in the Tianshan Mountains
6. Schulte JA, Najjar RG, Li M (2016) The influence of climate modes on streamflow in the mid-Atlantic region of the United States. J Hydrol Reg Stud 5:80–99
7. Peel MC, Srikanthan R, McMahon TA, Karoly DJ (2011) Ensemble empirical mode decomposition of monthly climatic indices relevant to Australian hydroclimatology. In: 19th international congress on modelling and simulation, Perth, Australia, 12–16 Dec 2011
8. Liu J, Zhang W (2018) Climate changes and associated multiscale impacts on watershed discharge over the upper reach of Yarlung Zangbo river basin, China. Adv Meteorol 2018(1):1–11
9. Torrence CC, Compo GP (1998) A practical guide to wavelet analysis. Bull Am Meteorol Soc 79(1):61–78
10. Torrence CC, Webster PJ (1999) Interdecadal changes in the ENSO monsoon system. J Clim 12(8):2679–2710
11. Guo L, Li L (2015) Variation of the proportion of precipitation occurring as snow in the Tian Shan mountains, China. Int J Climatol 35:1379–1393
12. Li S, Wang Q, Li L (2016) Interdecadal variations of pan-evaporation at the southern and northern slopes of the Tianshan Mountains, China. J Arid Land 8(6):832–845

An Improved Cyclic Spectral Algorithm Based on Compressed Sensing
Jurong Hu(&), Ying Tian, Yu Zhang, and Xujie Li
College of Computer and Information, Hohai University, Nanjing, China
[email protected], [email protected]

Abstract. With the increase in the types and functions of electronic equipment, the electromagnetic environment of radar is becoming more and more complex, so it is very difficult to estimate the spectrum of the electromagnetic environment. Since it does not need prior information about the signal, the cyclic spectral algorithm is very suitable for analyzing the electromagnetic environment. The algorithm has strong anti-noise performance but high computational complexity when estimating the spectrum of the electromagnetic environment. This paper combines compressed sensing with the spectral correlation function to address this problem. The simulation results show that the proposed algorithm can reduce the computational complexity of spectrum estimation while maintaining the estimation accuracy for the radar electromagnetic environment.

Keywords: Radar · Spectral correlation function · Compressed sensing

1 Introduction
As a new radar system, Multiple Input Multiple Output (MIMO) radar has attracted more and more attention from academia. MIMO radar is composed of multiple transmitting stations and multiple receiving stations. It uses space diversity, frequency diversity and signal diversity technology to realize target location, recognition and parameter estimation [1]. Compared with traditional radar, MIMO radar offers multi-resolution parameters and strong anti-jamming ability, so it has attracted wide attention. As the application scenarios of MIMO radar become increasingly complex, the various interference factors present in the electromagnetic environment in which the radar works will reduce its performance. The electromagnetic interference (EMI) of MIMO radar mainly comes from anti-radiation missiles, stealth technology, enemy reconnaissance systems, and communication signal interference [2]. In order to ensure that MIMO radar can detect, track and locate targets in a complex electromagnetic environment, it is necessary to quickly and accurately sense the spectrum of the electromagnetic environment signals and detect the locations of interference signals. At present, spectrum sensing mainly includes matched filter algorithms, energy detection algorithms and cyclic spectrum detection algorithms [3]. The matched filter detection algorithm is the best spectrum sensing method in theory, but it needs prior information about the signal, which does not meet the requirements of actual use [4].
The energy detection algorithm is easy to implement, but it has poor anti-noise ability and a long detection time, so it cannot be used to sense various radar electromagnetic environments [5]. The cyclic spectrum detection algorithm [6] has strong anti-noise performance and does not need prior information about the signal, and different types of signals present different cyclic spectra, so the cyclic spectrum detection algorithm can distinguish different signal types [7]. The cyclostationary characteristics of radar received signals are analyzed in [8], and [9] proposes an improved radar signal analysis algorithm based on the cyclic spectrum algorithm. However, the above-mentioned literature does not discuss the computational efficiency of cyclic spectrum analysis of the radar received signal. The received signal contains a large amount of information; if this mass of information cannot be processed in real time, the efficiency of the radar will be greatly reduced. In order to improve the efficiency of the cyclic spectrum algorithm in sensing EMI signals, an improved cyclic spectrum algorithm based on Compressed Sensing (CS) theory is proposed in this paper. Firstly, the received EMI signals are processed by CS so that the main information of the signals is extracted, and then the cyclic spectrum analysis is carried out, improving the operating efficiency of the system.

2 Cyclic Spectrum Algorithm
Define the received signal r(t) of the MIMO radar, which consists of the signal s(t) and noise n(t), where n(t) represents a zero-mean random signal. If s(t) and n(t) are independent of each other, the signal r(t) can be expressed as

r(t) = Σ_{i=1}^{q} s_i(t) + n(t),  i = 1, 2, ..., q        (1)

The cyclic autocorrelation function (CAF) of the signal r(t) is

R_r^α(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} r(t) r*(t + τ) e^{−j2παt} dt        (2)

where α represents the cycle frequency, α = p/T₀, and T₀ represents the cycle period. Taking the Fourier transform of the CAF of r(t) gives the cyclic spectrum function:

S_r^α(f) = ∫_{−∞}^{∞} R_r^α(τ) e^{−j2πfτ} dτ
         = lim_{T→∞} lim_{Δt→∞} (1/Δt) ∫_{−Δt/2}^{Δt/2} (1/T) R(t, f + α/2) R*(t, f − α/2) dt        (3)

where f represents frequency, R(t, f) is the (short-time) Fourier transform of r(t), Δt is the time interval, and (f + α/2) and (f − α/2) are the two spectral components of the signal r(t) whose correlation is measured.
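A rough numerical sketch of eq. (3) as a segment-averaged cyclic periodogram is shown below; the segment length, window and test signal are illustrative choices, not the paper's simulation settings.

import numpy as np

# Time-smoothed cyclic periodogram estimate of S_r^alpha(f): correlate the
# segment FFTs at (f + alpha/2) and (f - alpha/2) and average over segments.
def cyclic_spectrum(r, fs, seg_len=256, alpha=0.0):
    n_seg = len(r) // seg_len
    segs = r[: n_seg * seg_len].reshape(n_seg, seg_len)
    R = np.fft.fft(segs * np.hanning(seg_len), axis=1)        # R(t, f) per segment
    shift = int(round(alpha / fs * seg_len / 2))              # bin offset for +-alpha/2
    R_plus = np.roll(R, -shift, axis=1)
    R_minus = np.roll(R, +shift, axis=1)
    return (R_plus * np.conj(R_minus)).mean(axis=0) / seg_len

fs = 8000.0
t = np.arange(16384) / fs
x = np.cos(2 * np.pi * 1000.0 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)
S = cyclic_spectrum(x, fs, alpha=2000.0)   # a real sinusoid at f0 has a cyclic feature at alpha = 2*f0
print(np.abs(S).max())

For a real sinusoid at f₀, the estimate exhibits the expected cyclic feature at α = 2f₀.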


3 An Improved Cyclic Spectrum Algorithm Based on CS Theory

3.1 The Theory of Compressive Sensing

CS theory provides a signal processing framework covering signal representation, sampling and reconstruction, which can effectively solve the problems encountered in acquiring and processing broadband signals, and it has a wide range of applications [10]. The processing flow of CS mainly includes sparse representation, compression measurement and signal reconstruction, as shown in Fig. 1. Choosing a suitable sparse basis is the prerequisite for a successful CS implementation, and which sparse basis to choose usually depends on the type of signal; in radar systems, a wavelet basis or a discrete cosine transform basis is generally selected. At present, the observation matrices used in CS can be divided into three categories: random observation matrices, deterministic observation matrices and partially random observation matrices. In order to reconstruct the signal accurately, the observation matrix must satisfy the restricted isometry property [11]. The core of signal reconstruction is to restore the original signal accurately from a small number of observations, that is, to recover the original signal of length N accurately from an observation vector of length M.

Fig. 1. CS processing flow (sparse representation → sparsity coefficients → compressive measurement → observation results → signal reconstruction)

In order to describe the principle of CS conveniently, r(t) is expressed as a vector r of length N. In the electromagnetic environment where the radar works, many signals are not sparse, but they can be sparsely represented by some transformation as follows:

$r = \Psi \alpha$   (4)

where Ψ is an orthogonal basis matrix and α is a vector of projection coefficients. Then the observed vector y can be expressed as:

$y = \Phi r = \Phi \Psi \alpha = H \alpha$   (5)

where Φ is an M × N observation matrix and H = ΦΨ is the recovery matrix. It is difficult to solve the projection coefficient vector α directly, so the problem is converted into finding an approximation α′ of α:

$\hat{\alpha} = \arg\min \lVert \alpha' \rVert_1 \quad \text{s.t.} \quad H\alpha' = y$   (6)


So the reconstructed signal r′ is:

$r' = \Psi \hat{\alpha}$   (7)

3.2 An Improved Cyclic Spectrum Algorithm Based on CS Theory

In order to improve the efficiency of spectrum sensing in electromagnetic environment by using cyclic spectrum algorithm, an improved cyclic spectrum algorithm based on CS theory is proposed. The flow chart is shown in Fig. 2.

Fig. 2. Flow chart of the improved cyclic spectrum algorithm based on CS theory (receive the electromagnetic environment signal r(t); initialize the CS parameters; reconstruct r(t); if the reconstruction error meets the requirement, proceed to cyclic spectrum analysis and utilization, otherwise return to the initialization step)

The concrete steps of the improved cyclic spectrum algorithm based on CS theory are as follows:

(1) Receiving the signal r(t) from the electromagnetic environment.
(2) Initializing the parameters of the CS algorithm.


(3) Reconstructing r(t) to obtain the reconstructed signal r′(t).
(4) Judging whether the reconstruction error meets the requirement; if it does, the next step is carried out, otherwise return to step (2) and recalculate.
(5) Cyclic spectrum analysis and utilization of the reconstructed electromagnetic environment signal.

The improved cyclic spectrum algorithm based on CS theory retains the main components of the signal and simplifies the subsequent cyclic spectrum analysis, which can greatly reduce the operating cost of the system and improve the efficiency of the algorithm.

4 Simulations

This section simulates the electromagnetic environment of MIMO radar using an Amplitude Modulation (AM) signal and a Binary Phase Shift Keying (BPSK) signal. Assume that the carrier frequency f_c1 of the AM signal r_1(t) is 1000 MHz and the frequency f of its baseband signal is 10 MHz. The code rate R_b of the BPSK signal r_2(t) is 15 Mbps, its initial phase φ_0 is 0, its carrier frequency f_c2 is 1500 MHz, and the number of symbols is 1500. The traditional cyclic spectrum of the electromagnetic environment signals is shown in Fig. 3, and Fig. 4 shows the cyclic spectrum obtained by the improved algorithm. In the simulation, a Gaussian random matrix is chosen as the observation matrix and the Orthogonal Matching Pursuit (OMP) algorithm [12] is used to reconstruct the electromagnetic environment signals, with sparsity K = 20 and measurement number M = 250.
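A minimal sketch of the CS measurement and OMP reconstruction stage under stated assumptions (a DCT sparse basis, as one of the options the paper names, and a synthetic sparse test vector instead of the actual AM/BPSK waveforms; all names are illustrative):

```python
import numpy as np
from scipy.fft import idct

def omp(H, y, K):
    """Orthogonal Matching Pursuit: recover a K-sparse coefficient vector from y = H @ alpha."""
    residual, support = y.copy(), []
    alpha_hat = np.zeros(H.shape[1])
    for _ in range(K):
        idx = int(np.argmax(np.abs(H.T @ residual)))   # atom most correlated with residual
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(H[:, support], y, rcond=None)  # LS fit on support
        residual = y - H[:, support] @ coeffs
    alpha_hat[support] = coeffs
    return alpha_hat

rng = np.random.default_rng(0)
N, M, K = 1024, 250, 20                              # signal length, measurements, sparsity
Psi = idct(np.eye(N), axis=0, norm='ortho')          # orthonormal DCT sparse basis (assumption)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)       # Gaussian random observation matrix
alpha = np.zeros(N)
alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
r = Psi @ alpha                                      # K-sparse test signal in the Psi domain
y = Phi @ r                                          # compressed measurements
r_rec = Psi @ omp(Phi @ Psi, y, K)                   # reconstruction fed to cyclic spectrum analysis
print("relative error:", np.linalg.norm(r - r_rec) / np.linalg.norm(r))
```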

Fig. 3. Traditional cyclic spectrum of the electromagnetic environment signals (axes: f in MHz, alfa in MHz)

Fig. 4. Improved cyclic spectrum of the electromagnetic environment signals (axes: f in MHz, alfa in MHz)

Analysis of Figs. 3 and 4 shows that the improved cyclic spectrum algorithm based on CS theory can be used to sense the spectrum information of EMI signals. The AM signal produces cyclic spectrum peaks at the (alfa, f) coordinates (0, 1000 MHz), (0, −1000 MHz), (2000 MHz, 0) and (−2000 MHz, 0), while the BPSK signal has spectrum peaks at (0, 1500 MHz), (0, −1500 MHz), (3000 MHz, 0) and (−3000 MHz, 0). Comparing Figs. 3 and 4, it can be seen that the improved algorithm discards points with little influence and retains the main components of the signal, which improves the readability of the cyclic spectrum analysis. Figure 5 shows the spectrum of the original electromagnetic environment signal and of the signal reconstructed by CS. Compared with the original spectrum, the spectrum of the reconstructed signal effectively retains the main frequency components of the electromagnetic environment signal, eliminates most of the insignificant small-amplitude components, and greatly reduces the operating cost of the system.


Fig. 5. Spectrum comparison of the electromagnetic environment signals (amplitude versus f in MHz; top: original signal, bottom: signal reconstructed by CS)

Figure 6 depicts the variation of reconstruction time t with compression ratio when the sparsity of the electromagnetic environment signals is 10, 20, 30 and 40, respectively.

Fig. 6. Signal reconstruction time (s) versus compression ratio for K = 10, 20, 30 and 40

Fig. 7. Signal reconstruction error versus compression ratio for K = 10, 20, 30 and 40

It is shown that the larger the sparsity, the longer the reconstruction time. When the compression ratio is greater than 0.18, the performance of the algorithm tends to become stable as the compression ratio increases, and the reconstruction time of the OMP algorithm for the electromagnetic environment signal becomes increasingly stable as well. Figure 7 presents the variation of the reconstruction error with compression ratio when the sparsity of the electromagnetic environment signal is 10, 20, 30 and 40, respectively. Analysis of Fig. 7 shows that, for the same compression ratio, a larger sparsity value gives a smaller reconstruction error; as the compression ratio increases, the performance of the algorithm tends to become stable, and the error variation of the OMP reconstruction becomes increasingly stable. The influence of the electromagnetic environment signal length on the system resources of the cyclic spectrum algorithm is shown in Table 1, and its influence on the operation efficiency of the cyclic spectrum algorithm is shown in Table 2, both obtained by simulation. In Tables 1 and 2, the first method refers to the cyclic spectrum estimation algorithm, and the second method refers to the improved cyclic spectrum estimation algorithm based on CS.

Table 1. Effects of electromagnetic environment signal length on the system resources of the cyclic spectrum algorithm

Signal length           1024   2048                5120                10240
Compression ratio       0.4    0.45                0.65                0.7
Resource savings rate   2/3    Approximately 2/3   Approximately 1/3   0.3


Table 2 shows that, with the increase of signal length, the running time of both algorithms becomes longer, which indicates that the efficiency of both algorithms decreases. Of the two, the first algorithm has the higher operation efficiency because its computational load is the smallest; the operation efficiency of the second algorithm is lower because it adds the CS processing step on top of the first algorithm.

Table 2. Effects of electromagnetic environment signal length on the operation efficiency of the cyclic spectrum algorithm

Signal length                        1024       2048       5120       10240
Time use of first algorithm (s)      0.537306   1.021654   3.284974   8.105692
Time use of second algorithm (s)     0.640809   1.242784   3.964025   8.955821

Table 1 shows that, as the length of the complex electromagnetic environment signal increases, the compression ratio required for high-precision reconstruction becomes larger, because a longer electromagnetic environment signal contains more information. Comparing the two algorithms, the improved cyclic spectrum algorithm based on CS can effectively save system resources and improve their utilization rate.

5 Conclusion

On the basis of sensing electromagnetic environment signals with the cyclic spectrum algorithm, this paper proposes an improved cyclic spectrum algorithm based on CS theory. The cyclic spectra of common interference signals such as the AM signal and the BPSK signal are studied, and the influence of different CS parameters on reconstructing electromagnetic environment signals is analyzed. The simulation results show that the proposed algorithm can effectively improve the efficiency of analyzing the spectrum information of the radar electromagnetic environment. This algorithm is of great significance for MIMO radars working in complex electromagnetic environments.

References
1. Li J, Stoica P (2009) MIMO radar signal processing
2. Zhao J (2014) Radar countermeasure technology based on complex electromagnetic environment. Private Technol (1)
3. Jiang Y, Mao M, Cao N (2014) Performance analysis of spectrum sensing based on WLC algorithm. Electron Meas Technol Abroad 33(7):25–28
4. Urriza P, Rebeiz E, Cabric D (2013) Multiple antenna cyclostationary spectrum sensing based on the cyclic correlation significance test. IEEE J Sel Areas Commun 31(11):2185–2195
5. Sun Y (2016) Discussion on spectrum sensing technology of cognitive radio. Inf Technol 8:206–208
6. Huang G, Tugnait JK (2013) On cyclostationarity based spectrum sensing under uncertain Gaussian noise. IEEE Trans Signal Process 61(8):2042–2054
7. Jia Y Spectrum sensing technology based on cyclostationary feature. Harbin Institute of Technology
8. Wei Y, Yue Q, Liu P (2017) Analysis and research on cyclic stability of radar received signals. Comput Simul (1)
9. Wu H, Mao Y, Cao J (2016) An improved algorithm for in-pulse analysis of radar signals based on cyclic spectral correlation. Fire Control Command Control 9:123–127
10. Sun J (2013) Research on observation matrix in compressed sensing. Huazhong University of Science and Technology
11. Zhao X, Li D (2017) Improvement of Gauss random observation matrix. Foreign Electron Meas Technol (5)
12. Wang K, Ye W, Lao G (2014) A broadband SAR signal reconnaissance method based on compressed sensing. Foreign Electron Meas Technol 33(4)

Video Deblocking for KMV-Cast Transmission Based on CNN Filtering

Yingchun Yuan and Qifei Lu
Tongji University, Shanghai, China
{15221383831,starluqifei}@163.com

Abstract. KMV-Cast is a pseudo analog video transmission scheme that is robust and scalable in multicast video transmission. However, the bias of the received metadata in KMV-Cast may cause blocking artifacts in the image. Instead of using the H.264 deblocking filter, which is computationally complex and of limited benefit, a CNN-based filtering method is proposed in this paper to correct the luminance of each block and remove the noise from each pixel. The CNN filter can recognize the edges of the image, automatically distinguish true and false edges, and then smooth the image. Compared with traditional filtering methods, it retains details intelligently. Experiments indicate that our method performs well and can increase the PSNR of the reconstructed video by an average of 5 dB.

Keywords: Pseudo analog video transmission · CNN · Image restoration · Blocking artifact

1 Introduction

With the advance of communication technology in recent years, people demand video communication with higher QoS, and the total amount of video communication is predicted to keep increasing. Therefore, it is essential to explore more reliable video communication schemes. Reference [1] demonstrates that the optimal communication system is a matter of matching up six variables, namely the source (p_s, d), the channel (p_{Y|X}, ρ), and the encoder-decoder pair (F, G), where p_s denotes the distribution of the source, d the distortion function, p_{Y|X} the state transition function of the channel, ρ the cost function of the channel, F the encoder and G the decoder. The traditional source-channel separation scheme is one way to achieve this probability matching, but it is highly complex and ignores delay. The separation principle is limited to ergodic point-to-point communication and is suboptimal for non-ergodic and multi-user communication. Therefore, Ref. [2] proposed a novel communication system, SoftCast, based on joint source-channel coding, which guarantees the communication quality of multiple users, eliminates glitches caused by mobility and improves robustness. However, due to the introduction of 3D-DCT, SoftCast has high computational complexity and still cannot solve the delay problem. Based on SoftCast, Ref. [3] proposed KMV-Cast, which adopts


2D-DCT instead. KMV-Cast greatly reduces the computational complexity and extracts correlated information from the cloud to assist video reconstruction. However, due to the inaccurate recovery of metadata, the KMV-Cast transmission scheme faces the problem of blocking artifacts. In H.264 [4], the deblocking filtering method includes three steps: estimating the boundary strength, distinguishing true and false boundaries, and a filtering operation based on the difference between the p block and the q block. The macroblock-based deblocking filtering is carried out from left to right and from top to bottom. This method is computationally complex and has limited effect, so more convenient and effective solutions need to be sought. In recent years, the convolutional neural network has become a research hotspot in image processing due to its versatility. In this paper, we propose a novel deblocking method based on CNN.

2 Related Work

In this section, we address the latest applications of CNN in image processing, which inspired our method. We then detail the KMV-Cast transmission scheme and the blocking artifact in it.

2.1 CNN in Image Reconstruction

With the development of digital image processing technology, various denoising algorithms have been proposed to recover an image, such as median filtering, wavelet threshold denoising, total variation denoising, BM3D and so on. The drawback of these approaches is that the filter parameters are fixed and lack adaptive ability, so they are only effective for specific noise models. Another downside is that they can only handle pixel-level noise and have no effect on restoring images with blocking artifacts. In the past few years, convolutional neural networks have attracted widespread attention in the field of image and video denoising for their powerful fitting ability and the way they encode the data in their hidden layers. In Refs. [5] and [6], deep CNNs have been applied; they can be used for Gaussian denoising, image super-resolution reconstruction and deblocking of JPEG images.

2.2 KMV-Cast

Traditional digital video transmission schemes achieve accurate end-to-end transmission through source-channel separation coding. In practice, broadcasting is more widely used than unicast due to its efficiency and resource saving. The digital video transmission scheme has poor performance in broadcasting because of its shortcomings in scalability, robustness and effectiveness. Hence, research on analog video transmission has received more and more attention in recent years. KMV-Cast [3] is a pseudo analog video transmission scheme whose basic idea is to evenly divide a video frame into 8 × 8 blocks, perform a DCT transformation on each block, and normalize the obtained DCT coefficients before transmission. Another feature of KMV-Cast is to extract a block with the largest


correlation coefficient from the cloud for each block. At the receiver, the correlated block acts as part of the prior to assist in reconstructing a video frame. The received signal of mobile video transmission using pseudo-analog modulation can be represented as:

$y = a \Phi h + v$   (1)

where h denotes the normalized DCT coefficients, a the power scale factor, Φ the unitary matrix used to reduce the peak-to-average power ratio, and v the noise. Besides the normalization parameter, the correlated block coefficient also has to be transmitted through the channel as metadata (Fig. 1).

Fig. 1. KMV-Cast transmission scheme, which removes the nonlinear transform component [3] (transformation and modulation at the sender, demodulation and inverse transformation at the receiver, with correlated information searched from the cloud servers/local cloud)

Compared with the digital video transmission scheme, KMV-Cast has three advantages. (1) Scalability: it adapts while broadcasting, and the received quality each user achieves depends on its own channel quality. (2) Robustness: digital transmission is very fragile to fluctuations of channel quality, which may cause a mosaic effect, while KMV-Cast is robust and can always decode viewable video thanks to the linear transformation of joint source-channel coding. (3) Efficiency: in the KMV-Cast scheme, the DCT transformation is performed on blocks within a frame and there is no inter-frame compression, so the delay at the receiver is much shorter than that of a GOP-based digital scheme. Of course, the pseudo-analog scheme also has its drawback: in terms of transmission accuracy, the digital scheme is more accurate. The metadata is of great significance for video reconstruction, and the brightness is very sensitive to the noise in the channel. The bias of the received metadata in KMV-Cast causes blocking artifacts in a frame, as shown in Fig. 4.


3 Proposed Method

The problem we need to solve can be described as follows: how to design a CNN that can remove the mixed noise in the video and output clean video after filtering. Since a video is composed of a sequence of images, restoring an image is the basis for restoring the video, and this is the goal of this paper. Obviously, it is suitable to establish a regression model to solve this problem. On the other hand, PSNR is generally used as an objective evaluation of video quality:

$\mathrm{PSNR} = 10 \log_{10} \frac{(2^n - 1)^2}{\mathrm{MSE}}$   (2)

$\mathrm{MSE} = \frac{1}{w\,h} \sum_{i=0}^{w-1} \sum_{j=0}^{h-1} \lVert y(i,j) - \hat{y}(i,j) \rVert^2$   (3)

where n denotes the bit depth of the pixel, y(i, j) the source pixel and ŷ(i, j) the received pixel in an image. Therefore, we can choose MSE as the loss function of the CNN model. Inspired by VGG-19, we design a network with a similar structure, illustrated in Fig. 2, which consists of 20 convolution layers, 20 instance normalization layers and 20 activation layers.
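As an illustration only (the paper does not specify a framework), a minimal PyTorch sketch of a 20-layer convolution + instance-normalization + ReLU stack of the kind described above might look as follows; the channel width and kernel size are assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class DeblockFilter(nn.Module):
    """VGG-style stack of 20 (Conv2d -> InstanceNorm2d -> ReLU) blocks, no pooling."""
    def __init__(self, channels=64, depth=20):
        super().__init__()
        layers, in_ch = [], 3            # RGB input
        for i in range(depth):
            out_ch = 3 if i == depth - 1 else channels   # last layer maps back to RGB
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.InstanceNorm2d(out_ch, affine=True),
                nn.ReLU(inplace=True),
            ]
            in_ch = out_ch
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

model = DeblockFilter()
loss_fn = nn.MSELoss()                       # MSE loss, matching the PSNR/MSE objective above
noisy = torch.rand(4, 3, 144, 176)           # batch of 4 frames, 176 x 144 as in the experiments
clean = torch.rand(4, 3, 144, 176)
loss = loss_fn(model(noisy), clean)
loss.backward()
```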

Fig. 2. Our deblock filter model which is based on VGG-19.

In classification tasks, the introduction of a pooling layer such as max-pooling can reduce the dimensionality of the image and thus reduce computation. However, it may cause the loss of details in the image. Hence, the pooling layer is discarded in our method.

3.1 Instance Normalization

As the network deepens, the gradient commonly vanishes during back propagation, making gradient updates slow and the network parameters difficult to converge. In Ref. [7], Batch Normalization is designed to adjust the distribution of the output of each convolution layer so that it enters the active region of the activation function, i.e. the area near the origin, where the gradient dispersion rate is low and the discrimination ability is high. At the same time, BN adjusts the data distribution during training to make it "more reasonable" when entering the activation function.


In image generation tasks, Batch Normalization links multiple images within a batch, resulting in a loss of detail for a single image. Instance Normalization was proposed on this basis; it standardizes a single picture rather than a whole batch and is remarkably effective in accelerating image generation tasks. It operates as follows:

$u_{nm} = \frac{1}{HW} \sum_{i=1}^{W} \sum_{l=1}^{H} x_{nilm}$   (4)

$\sigma_{nm}^2 = \frac{1}{HW} \sum_{i=1}^{W} \sum_{l=1}^{H} (x_{nilm} - u_{nm})^2$   (5)

$y_{nilm} = \frac{x_{nilm} - u_{nm}}{\sqrt{\sigma_{nm}^2 + \epsilon}}$   (6)

$\hat{x}_{nilm} = \gamma\, y_{nilm} + \beta$   (7)

where x ∈ R^{N×W×H×C} denotes the input tensor, N the batch size, C the number of channels (RGB), W and H the size of an image, and x_{nilm} is the (n, i, l, m)-th element of a batch.
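A small NumPy sketch of these four steps, assuming the same N × W × H × C layout (the ε, γ and β values are illustrative defaults, not values from the paper):

```python
import numpy as np

def instance_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Instance normalization over each (sample, channel) plane of an N x W x H x C tensor,
    following Eqs. (4)-(7): per-instance mean/variance, standardize, then scale and shift."""
    u = x.mean(axis=(1, 2), keepdims=True)                  # Eq. (4): mean over W, H
    var = ((x - u) ** 2).mean(axis=(1, 2), keepdims=True)   # Eq. (5): variance over W, H
    y = (x - u) / np.sqrt(var + eps)                        # Eq. (6): standardize
    return gamma * y + beta                                 # Eq. (7): learnable scale/shift

x = np.random.rand(4, 176, 144, 3)        # batch of 4 RGB frames
out = instance_norm(x)
print(out.mean(axis=(1, 2)).round(6))     # per-instance, per-channel means are ~0
```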

3.2 L2 Regularization

A key problem in machine learning is to design a model that not only performs well on the training data but also generalizes well to new input data. Over-fitting is a common problem in training deep convolutional neural networks. In this paper, we use L2 regularization to prevent over-fitting. The strategy is to use the L2 norm of the network parameters as a penalty term on the objective function J(w; X, X̂):

$\tilde{J}(w; X, \hat{X}) = \frac{\lambda}{2} w^{T} w + J(w; X, \hat{X})$   (8)

where w represents the weights, X the input batch and X̂ the restored images. The corresponding gradient is:

$\nabla_w \tilde{J}(w; X, \hat{X}) = \lambda w + \nabla_w J(w; X, \hat{X})$   (9)

The weights updated with gradient descent can be deduced as:

$w \leftarrow w - \epsilon\left(\lambda w + \nabla_w J(w; X, \hat{X})\right)$   (10)

that is:

$w \leftarrow (1 - \epsilon\lambda)\, w - \epsilon\, \nabla_w J(w; X, \hat{X})$   (11)

We can see that adding weight decay modifies the learning rule: the weight vector is shrunk before the usual gradient update at each step.
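For example, the update in Eq. (11) can be written directly as follows (the learning rate and decay value are illustrative):

```python
import numpy as np

def sgd_step_with_weight_decay(w, grad, lr=0.01, weight_decay=5e-4):
    """One gradient step with L2 regularization, i.e. Eq. (11): shrink w, then apply the gradient."""
    return (1.0 - lr * weight_decay) * w - lr * grad

w = np.random.randn(512)        # e.g. one row of the widest (512-unit) layer
grad = np.random.randn(512)     # gradient of the data loss J with respect to w
w = sgd_step_with_weight_decay(w, grad)
```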


The effect of L2 regularization is to scale w along the axis defined by the eigenvector of the Hessian matrix. Only directions along which the parameters contribute significantly to reducing the objective function are preserved relatively intact. In directions that do not contribute to reducing the objective function, a small eigenvalue of the Hessian tells us that movement in this direction will not significantly increase the gradient. Components of the weight vector corresponding to such unimportant directions are decayed away through the use of the regularization throughout training. In this paper, the width of the widest layer in our proposed network reaches 512. Hence, L2 regularization can be introduced to prevent network over-fitting.

4 Experiments

In this section, we design experiments to verify the performance of the proposed method. Both the input and output of the network are 176 × 144 images. We chose the Pascal VOC image data set to generate the training and test sets: 2500 noisy pictures transmitted by KMV-Cast and the corresponding 2500 clean pictures serve as the training inputs and labels of the network respectively, and another 500 noisy-clean pairs are used as test data. Moreover, video sequences such as Foreman and Coastguard are adopted as test sequences to verify the performance of our method. In our network, the maximum batch size that a GTX 1080 can withstand is 4.

Fig. 3. Training loss and test loss of each epoch with λ = 0.0005, channel SNR = 10 dB, α₀ = 0.01, decay steps = 15, decay rate = 0.9.

Since the receiver of the communication system is capable of channel estimation, we can train multiple CNN video optimization systems for different channels, then select two of the weight sets according to the result of channel estimation and take the interpolation of the two results as the final output. The experimental results are influenced by the L2 regularization parameter λ, the initial learning rate α₀, the decay rate, etc. The curve of the loss function during training and testing is illustrated in Fig. 3.

Fig. 4. (a) and (e) are the clean frames #9 and #110; (b) (PSNR = 23.0864 dB) and (f) (PSNR = 22.8152 dB) are the corresponding frames reconstructed using KMV-Cast transmission (SNR = 10 dB); (c) (PSNR = 29.3019 dB) and (g) (PSNR = 28.9461 dB) are the frames after CNN filtering.

Figure 4 shows that our method can not only deblock but also smooth the noise of the pixels. Using this method, PSNR can be increased by an average of 5 dB.

5 Conclusion

In this paper, a CNN-based filtering method is proposed for KMV-Cast transmission to alleviate blocking artifacts. Since a convolutional neural network has many parameters, the difficulty of designing the CNN lies in carrying out a large number of experiments and continuously adjusting the parameters to achieve the desired goal. Fortunately, experiments indicate that our method performs well and can significantly improve the quality of the reconstructed video. Besides deblocking, our method also removes pixel noise, which is an evident advantage over the H.264 deblocking approach.

References
1. Gastpar M, Rimoldi B, Vetterli M (2003) To code, or not to code: lossy source-channel communication revisited. IEEE Trans Inf Theory 49(5):1147–1158
2. Jakubczak S, Katabi D (2011) A cross-layer design for scalable mobile video. In: Proceedings of the 17th annual international conference on mobile computing and networking (MobiCom 2011), Las Vegas, Nevada, USA
3. Huang X-L, Wu J, Hu F (2017) Knowledge-enhanced mobile video broadcasting framework with cloud support. IEEE Trans Circuits Syst Video Technol 7(1):6–18
4. ITU-T Rec. H.264 (04/2013) Advanced video coding for generic audiovisual services
5. Gupta H, Jin KH, Nguyen HQ (2018) CNN-based projected gradient descent for consistent CT image reconstruction. IEEE Trans Med Imaging 37(6):1440–1453
6. Zhang K, Zuo W (2017) Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans Image Process 26(7):3142–3155
7. Ioffe S, Szegedy C (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd international conference on machine learning (ICML)

Improved YOLO Algorithm for Object Detection in Traffic Video

Qifei Lu and Yingchun Yuan
Tongji University, Shanghai, China
{starluqifei,15221383831}@163.com

Abstract. The detection of moving targets in complex traffic scenes is the most basic and important technique in video surveillance. In order to balance the speed and accuracy of object detection, this paper chooses the You Only Look Once (YOLO) algorithm to extract foreground targets in video frames, and several steps are taken to improve this algorithm. First, the data is augmented during the pre-processing phase to alleviate the imbalance of sample categories. Then, our own data set is re-clustered before training to obtain the corresponding anchor box sizes and enhance the accuracy of the final trained model. In the training process, the focal loss is used instead of the binary cross-entropy loss to further address the slow convergence and poor training effect caused by the sample category imbalance. The improved YOLO algorithm is compared with the original YOLO algorithm, and the training results are comprehensively analyzed with the model evaluation indices. It is verified that the improved YOLO algorithm maintains a fast training speed while also improving the training accuracy.

Keywords: YOLO algorithm · Object detection · Data augmentation · K-means · Focal loss

1 Introduction

In a complex road traffic environment, the abnormal behavior of vehicles and pedestrians may cause serious traffic safety accidents, resulting in loss of life and property. Therefore, it is necessary to monitor the abnormal behavior of vehicles and pedestrians in time, so that the possibility of major accidents can be effectively reduced. Video surveillance is an important means of urban traffic supervision. It is not difficult to imagine that, for large-scale urban transportation systems, the traditional manual detection mode is time-consuming and laborious and may cause false or missed detections under visual fatigue, which reduces the accuracy and efficiency of video surveillance. In order to improve detection accuracy and facilitate traffic governance, this paper considers using intelligent algorithms instead of manual detection [1]. In practical applications, the detection, tracking and behavior recognition of moving targets in complex traffic scenes have always been difficult and hot topics. The moving object detection studied in this paper is the most basic and important technique in video surveillance [2].


2 Data Acquisition and Preprocessing

In the actual traffic scene there are many object categories; this paper focuses on five of them: car, truck, bus, bicycle and person. The labeling tool "LabelImg" is used to annotate traffic video data of an intersection captured by a stable camera, yielding 1000 manually labeled frames. In the process of screening the data set, we try to ensure that as many of the five object types as possible appear in a single frame. Due to the imbalance of the number of samples in the actual scenario, we consider expanding the data set by data augmentation. Data augmentation is one of the commonly used techniques in deep learning; it is mainly used to enlarge the training data set and make it as diverse as possible, so that the trained model has stronger generalization ability. The category counts of the original data set are shown in Table 1.

Table 1. The number of samples per category in the original data set

Category   Car    Truck   Bus   Bicycle   Person
Quantity   7865   1160    934   463       4439

It can be seen that the numbers of bus and bicycle samples are significantly smaller than those of the other three categories. In order to increase the number of samples and vary the state of the base samples, we increase the number of samples as a whole so as to reduce over-fitting. In this paper, angle rotation is selected: the original video frame is rotated clockwise by 1°, 2°, …, up to 5°, and then rotated counterclockwise by 1°, 2°, …, up to 5° (see the sketch at the end of this section), finally yielding 11,000 data samples. The category counts of the augmented data set are shown in Table 2.

Table 2. The number of samples per category in the augmented data set

Category   Car      Truck    Bus      Bicycle   Person
Quantity   86,264   12,757   10,248   5054      45,294

The training set is randomly selected as 80% of the data and the remaining 20% is used as the test set, thereby obtaining the required data set.
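A minimal sketch of the rotation augmentation described above, assuming OpenCV is available; the bounding-box handling is omitted here, although in practice the labeled boxes must be transformed along with the frame.

```python
import cv2
import numpy as np

def rotate_frame(frame, angle_deg):
    """Rotate a frame about its center by angle_deg (positive = counterclockwise in OpenCV)."""
    h, w = frame.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(frame, M, (w, h))

def augment(frame):
    """Return the original frame plus rotations of +-1..5 degrees (10 extra samples per frame)."""
    out = [frame]
    for deg in range(1, 6):
        out.append(rotate_frame(frame, deg))    # counterclockwise
        out.append(rotate_frame(frame, -deg))   # clockwise
    return out

frame = np.zeros((416, 416, 3), dtype=np.uint8)   # placeholder frame
samples = augment(frame)
print(len(samples))   # 11 samples per frame -> 1000 labeled frames give 11,000 samples
```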

3 Algorithm Implementation and Improvement

This paper chooses the YOLOv3 algorithm as the basic model and makes certain choices and improvements based on the original framework, including anchor reselection and the use of focal loss.

3.1 YOLOv3 Algorithm

YOLOv3 uses k-means dimension clustering to obtain the anchor boxes on the training set, and then predicts an objectness confidence as well as four coordinates t_x, t_y, t_w, t_h for each bounding box [3]. Besides, YOLOv3 uses multi-label classification to predict categories. Previous versions that used a softmax classifier assume mutual exclusion between classes, which fails for inclusion relationships such as "male" and "person"; the softmax classifier cannot classify them accurately. Therefore, YOLOv3 uses independent logistic classifiers to predict the classes that a bounding box may contain; for example, a person can have the two tags "male" and "person". In the training process, the binary cross-entropy loss is used as the loss function. Moreover, the YOLOv3 algorithm borrows from the Feature Pyramid Network: for an input of 416 × 416, it predicts bounding boxes at three different scales, with three bounding boxes per scale, as shown in Fig. 1.

Fig. 1. The main network structure of YOLOv3 algorithm

3.2 Improvement of YOLOv3 Algorithm

3.2.1 Anchor Reselection
The sizes of the nine anchor boxes in the YOLOv3 configuration file are based on a generic training set, obtained using the k-means algorithm. For our own data set, the generic anchor box sizes would reduce the accuracy of the final trained model, so we need to derive the corresponding anchor box sizes from our own samples and replace the default values. The standard k-means algorithm uses the Euclidean distance [4], but a larger bounding box may produce more error than a smaller one; for example, (50 − 45)² = 25 while (5 − 2.5)² = 6.25. The bounding box priors we want should yield a higher IOU (Intersection over Union) between the anchor boxes and the adjacent ground truth, while keeping the distance to the cluster center small. So Eq. (1) is used as the distance in the k-means algorithm [3]:

$d(\mathrm{box}, \mathrm{centroid}) = 1 - \mathrm{IOU}(\mathrm{box}, \mathrm{centroid})$   (1)
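A minimal sketch of this IoU-based k-means re-clustering over the labeled (width, height) pairs; the function names and the placeholder box data are illustrative, not from the paper.

```python
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between (w, h) boxes and centroids, assuming boxes share the same top-left corner."""
    inter = np.minimum(boxes[:, None, 0], centroids[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster labeled (w, h) boxes with the distance d = 1 - IoU of Eq. (1)."""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(1.0 - iou_wh(boxes, centroids), axis=1)
        new_centroids = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids[np.argsort(centroids[:, 0] * centroids[:, 1])]   # sort anchors by area

boxes = np.abs(np.random.randn(500, 2)) * 50 + 10   # placeholder (w, h) pairs from the labels
anchors = kmeans_anchors(boxes, k=9)
```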

After all labeled boxes are allocated, the average of the widths and heights of the labeled boxes in each cluster is calculated. The above steps are repeated until the cluster centers change very little. Note that k-means is sensitive to the initial points, so the anchors obtained each time are different, but the corresponding average IOU is stable.

3.2.2 Focus Loss
As mentioned in [5], the main reason for the low accuracy of single-stage structures (such as the YOLO algorithm) is class imbalance. When calculating the loss, the predicted bounding boxes can be divided into two categories: positive and negative examples. When the IOU is greater than a threshold (generally 0.5), the bounding box is considered a positive example; otherwise, it is a negative example [6, 7] (Fig. 2).

Fig. 2. Division of different examples in an image

Usually, for a single image, the proportion of the target in the input image is much smaller than the proportion of the background. Therefore, the sheer number of negative examples leads to a large total loss value, which is not conducive to convergence. For a single example, most of the negative examples do not affect the recognition of the foreground target; that is, a large amount of background information submerges the main foreground information and dominates the direction of gradient descent, so the loss of the target is small and its gradient is also small in the backward computation. Accordingly, a negative example that is not in the foreground-background transition area contributes little to convergence and is called an "easy negative example". For better convergence, more negative examples closely related to the foreground are needed, called "hard negative examples". Based on the fact that there are many negative examples, [5] proposed to replace the binary cross-entropy loss with the focal loss. The focal loss improves the training effect of the model by reducing the weight of simple negative examples during training. The focal loss expression [5] is:

$FL(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$   (2)

Define p_t as the classification probability of the different categories. To balance the positive and negative examples, a weighting factor α_t (a decimal in [0, 1]) is used; meanwhile, an exponential factor γ (γ ≥ 0) is introduced. With reference to [5], we chose γ = 2 and α_t = 0.25. Since the YOLOv3 algorithm uses a logistic classifier instead of a softmax classifier, the focal loss formula for the "darknet" framework is derived as follows. First, the logistic function is shown below, where x_0 is the center of the function curve and k is its slope:

$f(x) = \frac{L}{1 + e^{-k(x - x_0)}}$   (3)

To simplify the calculation, the classifier formula is simplified to:

$p_i = \frac{1}{1 + e^{-x_i}}$   (4)

Then take the derivative of the focal loss:

$\frac{dFL}{dx_i} = \frac{dFL}{dp_i} \cdot \frac{dp_i}{dx_i}$   (5)

For the first part,


$\frac{dFL}{dp_i} = \frac{d\left(-\alpha_t (1 - p_t)^{\gamma} \log(p_t)\right)}{dp_i} = -\alpha_t\left(-\gamma (1 - p_t)^{\gamma-1} \log(p_t) + (1 - p_t)^{\gamma}\,\frac{1}{p_t}\right) = -\alpha_i\left(-\gamma (1 - p_i)^{\gamma-1} \log(p_i) + (1 - p_i)^{\gamma}\,\frac{1}{p_i}\right) \quad (\text{while } y = 1)$   (6)

It is known that the logistic function satisfies the following property [8]:

$f'(x) = f(x)\left(1 - f(x)\right)$   (7)

The above property can be verified by calculation:

$\frac{dp_i}{dx_i} = \frac{d}{dx_i}\left(\frac{1}{1 + e^{-x_i}}\right) = \frac{e^{-x_i}}{(1 + e^{-x_i})^2} = \frac{1}{1 + e^{-x_i}}\left(1 - \frac{1}{1 + e^{-x_i}}\right) = p_i (1 - p_i)$   (8)

Finally,

$\frac{dFL}{dx_i} = -\alpha_i\left(-\gamma (1 - p_i)^{\gamma-1} \log(p_i) + (1 - p_i)^{\gamma}\,\frac{1}{p_i}\right) p_i (1 - p_i) = \alpha_i (1 - p_i)^{\gamma}\left(\gamma\, p_i \log(p_i) - 1 + p_i\right)$   (9)

It is worth noting that a negative sign is required for reverse conduction on the “darknet” framework of the YOLOv3 algorithm.
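A small NumPy sketch of the focal loss for the logistic classifier and its gradient from Eq. (9), for the positive-label case y = 1 (the values of γ and α follow the paper; the test logits are arbitrary):

```python
import numpy as np

def focal_loss_and_grad(x, gamma=2.0, alpha=0.25):
    """Focal loss FL(p) = -alpha * (1 - p)^gamma * log(p) for y = 1, with p = sigmoid(x),
    and its gradient with respect to the logit x as in Eq. (9)."""
    p = 1.0 / (1.0 + np.exp(-x))
    loss = -alpha * (1.0 - p) ** gamma * np.log(p)
    grad = alpha * (1.0 - p) ** gamma * (gamma * p * np.log(p) - 1.0 + p)
    return loss, grad

# Easy positives (large x, p near 1) get a tiny loss and gradient; hard ones dominate training.
for x in (-2.0, 0.0, 2.0, 5.0):
    loss, grad = focal_loss_and_grad(np.array(x))
    print(f"x={x:+.1f}  loss={float(loss):.4f}  dFL/dx={float(grad):+.4f}")
```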

4 Analysis of Experimental Results

Because the framework of the full YOLOv3 algorithm is complicated and its training takes a long time, after weighing the accuracy and speed of the training results this paper finally applies the improvements to the YOLOv3-tiny algorithm. Table 3 shows the fps (frames per second) during training with the original YOLO algorithm and with the improved YOLO algorithm. The comparison shows that the speed of the algorithm does not show a significant downward trend.

Table 3. Comparison of fps between the original YOLO algorithm and the improved YOLO algorithm

Batches                          5000   10000   15000   20000   25000   30000
fps of original YOLO algorithm   242    245     244     243     249     247
fps of improved YOLO algorithm   241    245     244     245     246     246


As can be seen from Fig. 3a, the mAP (Mean Average Precision) on the original data set is less than 0.1, while the mAP on the augmented data set is substantially greater than 0.8. Therefore, data augmentation is necessary and effectively improves the training precision. In Fig. 3b we can see that, except for the initial 5000 batches, the mAP obtained with focal loss is higher than that obtained with the binary cross-entropy loss. Besides, as shown in Fig. 3c, except for the initial 5000 and 10,000 batches, using the anchors obtained by k-means re-clustering also improves the mAP.

Fig. 3. mAP comparison curve between original algorithm and improved algorithm

The training results mainly comprise two curves: the average IOU curve and the loss curve. The two visualization results show that when the batch number reaches 30,000, the model has stabilized: the IOU stays at 0.83 and the loss converges to 1.9 (Figs. 4 and 5).

Fig. 4. The region Avg IOU curve


Fig. 5. The loss curve

After many training runs, the trained weights whose mAP is 0.92 are finally selected as a reliable model. In order to visualize the various parameters of the training process, the "detector valid" command of the official "darknet" framework is run to generate the test results for the test set. The specific command is ".\darknet detector valid <voc.data> <cfg> <weights>", where "voc.data" and "cfg" are training configuration files and "weights" is the weight file obtained by training. Each generated test result has the form "<image name> <confidence> <x> <y> <width> <height>". For example, in the test results for the car category:

P359_000341 0.967789 152.362808 161.948624 179.076126 200.279556

This refers to the image file named "P359_000341", in which an object belonging to the car category is identified with a confidence of 0.967789; (152.362808, 161.948624) are the center coordinates of the predicted bounding box, and 179.076126 and 200.279556 are its width and height respectively. The improved YOLO algorithm is then used to detect objects in the traffic video; the detection result for one frame is shown in Fig. 6. Overall, the detection result is satisfactory.


Fig. 6. Detecting traffic video with improved YOLO algorithm

5 Conclusion

This paper deals with the abnormal traffic behavior that may occur in a complex traffic environment, and applies machine learning and deep learning theory to the key processing part of an intelligent transportation system. The deep-learning-based object detection algorithm is used to effectively detect and extract the foreground targets, and the category and location information of the target contours is saved for subsequent tracking processing.

References
1. Redmon J, Divvala S, Girshick R et al (2016) You only look once: unified, real-time object detection. In: IEEE conference on computer vision and pattern recognition (CVPR), pp 779–788
2. Buch N, Velastin SA, Orwell J (2011) A review of computer vision techniques for the analysis of urban traffic. IEEE Trans Intell Transp Syst 12(3):920–939
3. Redmon J, Farhadi A (2018) YOLOv3: an incremental improvement
4. Kanungo T, Mount DM, Netanyahu NS et al (2002) An efficient k-means clustering algorithm: analysis and implementation. IEEE Trans Pattern Anal Mach Intell 24(7):881–892
5. Lin TY, Goyal P, Girshick R, He K, Dollár P (2017) Focal loss for dense object detection. IEEE Trans Pattern Anal Mach Intell (99):2999–3007
6. He H, Garcia EA (2009) Learning from imbalanced data. IEEE Trans Knowl Data Eng 21(9):1263–1284
7. Davis J (2006) The relationship between precision-recall and ROC curves. In: Proceedings of the 23rd international conference on machine learning
8. Zhou Z (2016) Machine learning. Tsinghua University Press, Beijing, pp 102–104

Task Allocation for Multi-target ISAR Imaging in Bi-Static Radar Network

Dan Wang¹, Jia Liang¹, Qun Zhang¹, and Feng Zhu²,³
¹ Institute of Information and Navigation, Air Force Engineering University, Xi'an 710077, China
  [email protected]
² Academy of Military Sciences of PLA, Beijing 100091, People's Republic of China
³ National Defense University of PLA, Beijing 100091, People's Republic of China

Abstract. For radar imaging of multiple targets, it is necessary for the radar network to allocate the imaging tasks to different radars in order to achieve high image quality under limited radar resources. In this paper, based on the Bi-ISAR imaging algorithm, a task allocation optimization method is proposed for multi-target inverse synthetic aperture radar (ISAR) imaging in a radar network consisting of different bi-static radars. Simulation results indicate that the proposed method can effectively accomplish the multi-target imaging task allocation using the minimum time resource.

Keywords: Radar network · Multi-target imaging task · Bi-static radar · Bi-ISAR · Task allocation optimization

1 Introduction

In recent years, high-resolution ISAR imaging techniques have been widely used in military and civil fields such as target classification and human behavior understanding due to their all-day and all-weather surveillance capability [1]. An ISAR image is a 2-D projection of the target onto the range-Doppler plane, with the direction of range and the direction of the gradient of the Doppler frequencies. In a monostatic radar, when the target is moving along the radar line of sight, the viewing angle does not change with time and hence an ISAR image cannot be produced [2]. To overcome this restriction, bi-static ISAR based on bi-static radar has been employed. The configuration in which the transmitter and the receiver are installed on separate platforms has many advantages, including information acquisition, avoidance of blind velocities, anti-interference and anti-destruction [3]. Meanwhile, compared with a mono-static radar, a radar network consisting of distributed radars has many advantages. For instance, when a target is illuminated by the radar network, observation data from different angles can be obtained simultaneously [4]. Combined with this multi-angle information, the radar network can improve the performance of target detection, tracking, imaging and identification [5–8].


A radar network can serve multiple tasks simultaneously, such as target searching, tracking, recognition and imaging. Thus, reasonable and effective radar resource allocation algorithms are important and essential [9–11]. However, in most existing radar resource allocation studies, the imaging task is not taken into account. Only a small number of studies focus on optimizing the imaging task allocation for multiple targets, and a radar network consisting of bi-static radars has not been considered. In this paper, we propose a task allocation strategy for the multi-target ISAR imaging task in a radar network consisting of bi-static radars. For an ISAR image, imaging quality is the first factor to be considered. Different from a conventional ISAR image, the bi-static ISAR image is also influenced by the time-varying bi-static angle. Once the relationship between imaging quality and task time is analyzed, the task allocation problem can be converted into a time resource optimization problem. Finally, the task allocation optimization model of the bi-static radar network is constructed and the corresponding algorithm for solving the model is proposed. The paper is organized as follows. The Bi-ISAR imaging algorithm is introduced in Sect. 2. The multi-target task allocation optimization model of the bi-static radar network is constructed and the corresponding algorithm is proposed in Sect. 3. Simulation results and analysis are presented in Sect. 4. Finally, the conclusion is drawn in Sect. 5.

2 Bi-Static ISAR Signal Model

The geometry of Bi-ISAR imaging is illustrated in Fig. 1, where Tx is the transmitter, Rx is the receiver and O is the target center. Suppose a scatterer P on the target, and let R_T(t_m), R_R(t_m) denote the distances from scatterer P to the transmitter and the receiver at slow time t_m respectively; β is the bi-static angle, which usually ranges from 30° to 135° according to [6].

Fig. 1. Geometry of Bi-ISAR imaging (transmitter Tx, receiver Rx, target center O, scatterer P, with distances R_T(t_m), R_R(t_m) and reference distances R_Tref, R_Rref)

Suppose that the radar transmits linear frequency modulated (LFM) signals:

$s_{tx}(\hat{t}, t_m) = \mathrm{rect}\!\left(\frac{\hat{t}}{T_P}\right) \exp\!\left(j2\pi f_c t + j\pi\mu\hat{t}^{2}\right)$   (2.1)

where t̂ = t − t_m denotes the fast time and t_m the slow time; T_P denotes the pulse duration, f_c the carrier frequency, μ the chirp rate, and rect(·) a unit rectangle function. Then the received signal is:

$s_{rx}(\hat{t}, t_m) = \sum_{n} \sigma_n\, \mathrm{rect}\!\left(\frac{\hat{t} - R_n(t_m)/c}{T_P}\right) \exp\!\left(j2\pi f_c\!\left(t - \frac{R_n(t_m)}{c}\right) + j\pi\mu\!\left(\hat{t} - \frac{R_n(t_m)}{c}\right)^{2}\right)$   (2.2)

where R_n(t_m) denotes the distance sum from the n-th scattering center to the transmitter and the receiver, R_n(t_m) = R_T(t_m) + R_R(t_m); c is the wave velocity and σ_n is the scattering coefficient of the n-th scatterer. Assume that the translational motion has been compensated and the bi-static angle is kept constant during the short coherent processing interval (CPI); then the target model can be converted into a turntable model [7]. After range compression and cross-term compression by applying the Fourier transform with respect to t̂ and t_m:

$s_r(f_n, f_a) = \sum_{n} \sigma_n T_a T_P\, \mathrm{sinc}\!\left(T_P\!\left(f_n + \frac{\mu\,\Delta R_n(t_m)}{c}\right)\right) \mathrm{sinc}\!\left(T_a\!\left(f_a + \frac{2 f_c x_n \cos(\beta/2)\,\omega}{c}\right)\right) \exp\!\left(j\frac{4\pi f_c y_n \cos(\beta/2)}{c}\right)$   (2.3)

where f_a is the Doppler extent and T_a is the observation time. Thus, the range coordinate and the cross-range coordinate are proportional to the signal frequency and the Doppler frequency respectively, so the locations of the scatterers in range and cross-range can be scaled and the 2-D ISAR image can be obtained with the method above. Based on this Bi-ISAR imaging method, the task allocation optimization model of the bi-static radar network and the corresponding algorithm are proposed in the upcoming section of this paper.

3 Task Allocation Optimization Model

The multi-target imaging scene of the bi-static radar network is shown in Fig. 2. The radar network consists of M transmitters and N receivers. Each transmitter-receiver pair constitutes a bi-static radar that can perform the ISAR imaging task; in other words, the radar network consists of K (K = MN) bi-static radars. In a multi-target environment, the radar observation time for each target is limited and high performance is required in real time. So when multiple targets appear in the imaging area and a target can be imaged independently by different bi-static radars, the selection of a bi-static radar to


image a suitable target not only affects the single task time, but also affects the total task time and the resource utilization of the radar network. Therefore, in order to reduce the total task time and improve the radar resource utilization, it is necessary to construct an optimization model for the imaging task allocation of the bi-static radar network.

Fig. 2. Multi-target imaging scene of bi-static radar network

Fig. 3. Geometric diagram of imaging by bi-static radar (target moving with velocity v from point A to point B; transmitter T, receiver R, bi-static angle β, and angle δ between the velocity vector and the bisector of the bi-static angle)

For multi-target imaging task allocation, imaging quality is an important factor to be considered, so it is essential to determine an objective function according to the imaging quality. For an ISAR image, image quality often depends on the range and azimuth resolution [4]. Therefore, the relation between the time resource and the range and azimuth resolution should be analysed first. Different from a mono-static ISAR radar, the range resolution ρ_r is determined not only by the signal bandwidth B but also by the bi-static angle β, which can be described as:

$\rho_r = \frac{c}{2B\cos(\beta/2)}$   (3.1)

It is worthwhile to note that the bi-static angle β is determined by the relative position of the target and the bi-static radar; hence the range resolution is space-varying. In Fig. 3, point A denotes the initial position where the l-th target is tracked and point B denotes the position of the imaging terminal point. Points T and R denote the locations of the m-th transmitter and the n-th receiver respectively. Let the three-dimensional spatial coordinates of the l-th target, the m-th transmitter and the n-th receiver at the initial position be (x_l, y_l, z_l), (x_m, y_m, z_m) and (x_n, y_n, z_n) respectively, and let the velocity vector of the l-th target be V_n = (v_x, v_y, v_z). Then the spatial coordinates (x'_l, y'_l, z'_l) of the l-th target at time t can be determined by:

$x'_l = x_l + v_x t, \quad y'_l = y_l + v_y t, \quad z'_l = z_l + v_z t$   (3.2)

Therefore, according to the geometrical relation, the bi-static angle β can be obtained by:

$\beta = \arccos\!\left(\frac{\mathbf{TB}\cdot\mathbf{RB}}{|\mathbf{TB}|\,|\mathbf{RB}|}\right)$   (3.3)

where TB denotes the vector from point T to point B, TB = (x'_l − x_m, y'_l − y_m, z'_l − z_m), and RB denotes the vector from point R to point B, RB = (x'_l − x_n, y'_l − y_n, z'_l − z_n). Then the bi-static angle β can be obtained at any time and the range resolution can be calculated when needed. Generally, it takes a long time for a target to fly from its initial position into the area where the required range resolution can be met. Therefore the flight time t_r, defined as the time needed to fly from the initial position into the required imaging area, can be used as an indicator for constructing the objective function of the optimization model instead of the range resolution. Once the range resolution is given, the flight time and the imaging start time are determined. The smaller the flight time, the higher the radar resource utilization in the radar network. Compared with the range resolution, the azimuth resolution is much more complicated. The azimuth resolution mainly depends on the Doppler effect, and it can be described as:

$\rho_a = \frac{\lambda}{|V_n|\,T_a} \cdot \frac{R_{T0} R_{R0}}{R_{R0}\sin^{2}(\delta + \beta/2) + R_{T0}\sin^{2}(\delta - \beta/2)}$   (3.4)

where λ is the wavelength, T_a is the required synthetic aperture time, and R_{T0}, R_{R0} are the distances from the target to the transmitter and the receiver at the imaging start time respectively. The angle δ is determined by the velocity vector and the bisector of the bi-static angle, and can be described as:

$\delta = \arccos\!\left(\frac{\mathbf{TB}\cdot V_n}{|\mathbf{TB}|\,|V_n|}\right) + \frac{\beta}{2}$   (3.5)

Without loss of generality, the angle β is approximately constant during imaging. When the imaging start time is determined, R_{T0} and R_{R0} are determined, and the azimuth resolution is mainly determined by the required synthetic aperture time, which can be obtained as:

$t_a = \frac{\lambda}{|V_n|\,\rho_a} \cdot \frac{R_{T0} R_{R0}}{R_{R0}\sin^{2}(\delta + \beta/2) + R_{T0}\sin^{2}(\delta - \beta/2)}$   (3.6)
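As a rough illustration of how Eqs. (3.2)-(3.6) combine into a single-task time, the following sketch computes the bi-static angle, δ and the imaging time for one target-radar pair; the geometry values, the flight-time search step and the function names are all assumptions, not values from the paper.

```python
import numpy as np

def bistatic_angle(target, tx, rx):
    """Bi-static angle beta at the target position, Eq. (3.3)."""
    tb, rb = target - tx, target - rx
    cosb = np.dot(tb, rb) / (np.linalg.norm(tb) * np.linalg.norm(rb))
    return np.arccos(np.clip(cosb, -1.0, 1.0))

def imaging_time(target, v, tx, rx, wavelength, rho_a):
    """Synthetic aperture time t_a needed for azimuth resolution rho_a, Eqs. (3.4)-(3.6)."""
    beta = bistatic_angle(target, tx, rx)
    tb = target - tx
    delta = np.arccos(np.dot(tb, v) / (np.linalg.norm(tb) * np.linalg.norm(v))) + beta / 2
    rt0, rr0 = np.linalg.norm(target - tx), np.linalg.norm(target - rx)
    denom = rr0 * np.sin(delta + beta / 2) ** 2 + rt0 * np.sin(delta - beta / 2) ** 2
    return wavelength / (np.linalg.norm(v) * rho_a) * (rt0 * rr0) / denom

def flight_time(p0, v, tx, rx, bandwidth, rho_r_req, c=3e8, dt=1.0, t_max=600.0):
    """Earliest time at which the moving target (Eq. (3.2)) meets the range-resolution requirement (3.1)."""
    for t in np.arange(0.0, t_max, dt):
        beta = bistatic_angle(p0 + v * t, tx, rx)
        if c / (2 * bandwidth * np.cos(beta / 2)) <= rho_r_req:
            return t
    return np.inf

# Illustrative geometry (coordinates in metres).
tx, rx = np.array([0.0, 0.0, 0.0]), np.array([2e4, 0.0, 0.0])
p0, v = np.array([1e4, 5e4, 1e4]), np.array([0.0, -200.0, 0.0])
t_r = flight_time(p0, v, tx, rx, bandwidth=300e6, rho_r_req=1.0)
t_a = imaging_time(p0 + v * t_r, v, tx, rx, wavelength=0.03, rho_a=1.0)
print("single-task time A(k, l) ~", t_r + t_a)
```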

The synthetic aperture time is the imaging time. Therefore, the imaging time can be used as an indicator for constructing the objective function of the optimization model instead of the azimuth resolution: the smaller the imaging time, the higher the radar resource utilization in the radar network. According to the above analysis, the single imaging task time for a target can be constructed from the flight time t_r and the imaging time t_a. The purpose of the optimization model is to allocate the multi-target imaging tasks within the bi-static radar network using the minimum amount of time while meeting the required image quality. The total time of the multi-target imaging tasks for the radar network is determined by the radar with the longest imaging task time. Each target imaging task can only be allocated to one bi-static radar, and each bi-static radar can execute only one imaging task at a time. For the imaging scene, how the targets are allocated to the distributed radars and when each target is observed both affect the radar resource. Let J_k denote the set of targets allocated to the k-th bi-static radar. For the targets in the set J_k, the imaging order is determined by the priority of the targets. According to previous research [3–5], the priority can be determined by the distance R_n, speed V_n, heading h_n and radar cross section (RCS), which can be measured by conventional tracking algorithms. So in this paper we take the priority as prior information and assume the targets in J_k have been sorted by priority. According to the above analysis, the imaging task allocation optimization model for the bi-static radar network can be formulated as:

$\begin{aligned} \min\ & \max\big(\mathrm{sum}(A \circ X)\big) \\ \text{s.t.}\ & \mathrm{sum}(X(:, l)) = 1, \quad \mathrm{sum}(X(k, l)) = L, \quad X(k, l) \in \{0, 1\}, \\ & I(k, J_k) + A(k, J_k) < I(k, J_{k+1}), \quad J_k \in \{1, \ldots, L\}, \\ & k = 1 : MN, \quad l = 1 : L \end{aligned}$   (3.7)

where the K × L matrix A denotes the set of imaging task times; each element of A denotes the single imaging task time for one target and the corresponding bi-static radar, which can be described as:

$A(k, l) = t_{r,kl} + t_{a,kl}$   (3.8)

The K × L matrix X denotes the task allocation scheme and contains only 0s and 1s. The element X(k, l) = 1 means the imaging task of the l-th target is allocated to the k-th radar, and X(k, l) = 0 otherwise. The objective function is the maximum of the sums of the Hadamard product of A and X, which denotes the time resource used to accomplish all imaging tasks in the radar network. The K × L matrix I denotes the start times of the imaging tasks; for targets allocated to the same radar, the imaging start time of the later target cannot be earlier than the terminal time of the previous one. Therefore, the task allocation optimization model is constructed by minimizing the maximum of the sums of the Hadamard product of A and X. The optimization problem in (3.7) is a multi-parameter, multi-constrained integer programming problem, which needs to be solved by a heuristic search algorithm. So a solution based on the genetic algorithm is proposed; the algorithm converges to the optimal chromosome through multiple iterations, and the optimal chromosome can be converted into the optimal task allocation scheme.
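A minimal sketch of such a genetic-algorithm search over allocation chromosomes (one gene per target, holding the index of the assigned bi-static radar); the fitness, operators and parameter values below are illustrative simplifications of the model in (3.7) and do not enforce the priority-ordering constraint on the start times:

```python
import numpy as np

def total_task_time(chromosome, A):
    """Makespan of an allocation: a radar executes its assigned tasks sequentially,
    and the network finishes with the slowest radar."""
    K = A.shape[0]
    return max(A[k, chromosome == k].sum() for k in range(K))

def genetic_allocation(A, pop=50, gens=200, pc=0.7, pm=0.1, seed=0):
    K, L = A.shape
    rng = np.random.default_rng(seed)
    population = rng.integers(0, K, size=(pop, L))
    best, best_t = None, np.inf
    for _ in range(gens):
        times = np.array([total_task_time(c, A) for c in population])
        if times.min() < best_t:
            best_t, best = times.min(), population[times.argmin()].copy()
        # Tournament selection, one-point crossover, random-reset mutation.
        parents = population[[min(rng.integers(0, pop, 2), key=lambda i: times[i]) for _ in range(pop)]]
        children = parents.copy()
        for i in range(0, pop - 1, 2):
            if rng.random() < pc:
                cut = rng.integers(1, L)
                children[i, cut:], children[i + 1, cut:] = parents[i + 1, cut:], parents[i, cut:].copy()
        mutate = rng.random(children.shape) < pm
        children[mutate] = rng.integers(0, K, mutate.sum())
        population = children
    return best, best_t

A = np.random.uniform(10, 60, size=(4, 8))   # 4 bi-static radars, 8 targets (illustrative task times)
scheme, makespan = genetic_allocation(A)
print("allocation:", scheme, "total task time:", round(makespan, 2))
```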

4 Experiments

To validate the effectiveness of the proposed method for the bi-static radar network, simulation experiments are conducted in this section. We set up a 4 × 8 and a 6 × 10 radar-network scenario; the geometry of the bi-static radar network and the multiple targets is illustrated in Figs. 4 and 5 respectively. Suppose the speed, coordinates, RCS and priority of each target have been obtained as prior information, so the targets have been sorted according to their priority. The parameters of the genetic algorithm are set as follows: the number of individuals, crossover probability, mutation probability, generation gap value and maximum number of generations are 50, 0.7, 0.1, 0.9 and 200 respectively. Then the optimal task allocation scheme with minimal total task time can be obtained, as illustrated in Table 1.


Fig. 4. Radar network layouts and multiple targets location in case 1


Fig. 5. Radar network layouts and multiple targets location in case 2

Thus, each imaging task can be implemented by the assigned radar using the Bi-ISAR imaging method. On the whole, based on the Bi-ISAR imaging algorithm and the genetic algorithm, the multi-target imaging tasks are conducted within the minimal time while meeting the required image quality.

Table 1. Optimal task allocation scheme

Case    Radar               Imaging task
Case 1  Radar1 (Tx1, Rx1)   Target 6, 7
        Radar2 (Tx1, Rx2)   Target 2, 5, 8
        Radar3 (Tx2, Rx1)   Target 3
        Radar4 (Tx2, Rx2)   Target 1, 4
Case 2  Radar1 (Tx1, Rx1)   Target 6
        Radar2 (Tx1, Rx2)   Target 5, 8
        Radar3 (Tx1, Rx3)   Target 4, 7
        Radar4 (Tx2, Rx1)   Target 1, 2, 3
        Radar5 (Tx2, Rx2)   Target 10
        Radar6 (Tx2, Rx3)   Target 9

5 Conclusion

In this paper, combined with the Bi-ISAR imaging algorithm, a multi-target imaging task allocation model has been constructed for the bi-static radar network, and an algorithm based on the genetic algorithm has been proposed to solve the task allocation optimization model. The simulation results show that the optimal task allocation scheme with the minimal total task time can be obtained by the proposed method. In future work, energy and computational resources should also be considered when designing the imaging task allocation strategy.

Acknowledgements. This research is funded by the National Natural Science Foundation of China under Grants 61631019 and 61703412, and the China Postdoctoral Science Foundation under Grant 2016M602996.

References

1. Chen Y-J, Zhang Q, Luo Y, Chen Y-A (2016) Measurement matrix optimization for ISAR sparse imaging based on genetic algorithm. IEEE Geosci Remote Sens Lett 13(12):1875–1879
2. Chen VC, Rosiers AD, Lipps R (2009) Bi-static ISAR range-doppler imaging and resolution analysis. In: Proceedings of the IEEE radar conference, pp 1–5
3. Huang J-F, Chang G-Y, Huang J-X (2017) Anti-jamming rendezvous scheme for cognitive radio networks. IEEE Trans Mobile Comput 16(3):648–661
4. Liu XW, Zhang Q, Chen YC, Su LH, Chen YJ (2018) Task allocation optimization for multi-target ISAR imaging in radar network. IEEE Sens J 18(1):2018
5. Chen Y, Zhang Q, Yuan N, Luo Y, Lou H (2015) An adaptive ISAR-imaging-considered task scheduling algorithm for multi-function phased array radars. IEEE Trans Signal Process 63(19):5096–5110
6. Martorella M, Palmer J, Littleton B, Longstaff ID (2007) On bistatic inverse synthetic aperture. IEEE Trans Aerosp Electron Syst 43(3):1125–1134


7. Martorella M (2011) Analysis of the robustness of bistatic inverse synthetic aperture radar in the presence of phase synchronization errors. IEEE Trans Aerosp Electron Syst 47(4):2673–2689
8. Bai XR, Zhou F, Xing MD, Bao Z (2010) Scaling the 3-D image of spinning space debris via bistatic inverse aperture radar. IEEE Geosci Remote Sens Lett 7(3):430–434
9. Yan J, Liu H, Pu W, Liu H, Liu Z, Bao Z (2017) Joint threshold adjustment and power allocation for cognitive target tracking in asynchronous radar network. IEEE Trans Signal Process 65(12):3094–3106
10. Panoui A, Lambotharan S, Chambers JA (2016) Game theoretic distributed waveform design for multistatic radar networks. IEEE Trans Aerosp Electron Syst 52(4):1855–1865
11. Sun MC, Zhang Q, Chen GL (2018) Dynamic time window adaptive scheduling algorithm for the phased array radar. J Radars 7(3):303–312

A New Tracking Algorithm for Maneuvering Targets

Jurong Hu(&), Yixiang Zhu, Hanyu Zhou, Ying Tian, and Xujie Li

Hohai University, Nanjing, China
[email protected], [email protected]

Abstract. In radar target tracking with the Kalman filter, the process noise covariance matrix is usually selected empirically and assumed to remain unchanged throughout the tracking process. Although this approach is effective for targets moving steadily, in some practical situations, especially for large maneuvering targets, the filter may fail to track the target. In this paper, an improved target tracking algorithm based on the law of large numbers is proposed for maneuvering targets. During Kalman filtering, a sliding window is used to select the acquired target trajectory data and estimate the process noise covariance matrix according to the law of large numbers. The process noise covariance matrix thus changes adaptively with the movement of the target, so the filter can track the trajectory of the target more accurately. The simulation results show that the proposed tracking algorithm produces smaller tracking errors than the classical Kalman filter for targets with different motion models.

Keywords: Kalman filter · Maneuvering targets · Process noise covariance matrix · Law of large numbers · Adaptive

1 Introduction

The Kalman filter algorithm is widely used in various fields because of its stable and excellent filtering capability. It is also the most widely used algorithm in radar positioning and tracking. The process noise covariance matrix of the Kalman filter determines the convergence rate and noise sensitivity [1]. The maneuverability of the target motion is usually uncertain, so the process noise covariance matrix should be adjusted to the mobility of the target. This changes the convergence rate of the algorithm under different conditions and improves the positioning and tracking accuracy [2, 3]. This paper first reviews the Kalman filter algorithm, then proposes an improved Kalman filtering algorithm for maneuvering targets based on the law of large numbers. Finally, the improved algorithm is compared with the classical Kalman filtering algorithm.



2 Kalman Filter Algorithm

First, we need to initialize all parameters of the Kalman filter [4, 5], including the state transition matrix $F_k$, external control matrix $B_k$, external control vector $U_k$, observation matrix $H_k$, observation noise covariance matrix $R_k$, process noise covariance matrix $Q_k$ and initial state vector $X_0$. Then we can obtain the preliminary prediction of the state vector $\hat{X}_k$ and covariance matrix $\hat{P}_k$ from the initial values:

$$\hat{X}_k = F_k X_{k-1} + B_k U_k \qquad (1)$$

$$\hat{P}_k = F_k P_{k-1} F_k^T + Q_k \qquad (2)$$

Then we calculate the innovation $S$ and gain $K$:

$$S = H_k \hat{P}_k H_k^T + R_k \qquad (3)$$

$$K = \hat{P}_k H_k^T S^{-1} \qquad (4)$$

With the gain $K$ worked out, we can revise the preliminary prediction of the state vector and covariance with the observation value $Z_k$. The revised prediction of the state vector $X_k$ and the covariance matrix $P_k$ are:

$$X_k = \hat{X}_k + K\big(Z_k - H_k \hat{X}_k\big) \qquad (5)$$

$$P_k = \hat{P}_k - K H_k \hat{P}_k \qquad (6)$$

When we receive the next observation value $Z_{k+1}$, we use the state vector prediction $X_k$ and covariance matrix prediction $P_k$ for the next filtering step. The filtering algorithm stops when there is no new observation value. In the Kalman filter algorithm, the state transition matrix $F_k$ and state vector $X_k$ are determined by the target motion model [6]. The improved algorithm proposed in this paper uses the constant velocity motion model, so we define the state transition matrix $F_k$, the state vector $X_k$ and the external control matrix $B_k$ as:

$$F_k = \begin{bmatrix} 1 & T & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & T & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & T \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \qquad X_k = \begin{bmatrix} x_k \\ v_{xk} \\ y_k \\ v_{yk} \\ z_k \\ v_{zk} \end{bmatrix} \qquad (7)$$


$$B_k = \begin{bmatrix} 0.5T^2 & 0 & 0 & 0 & 0 & 0 \\ 0 & T & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.5T^2 & 0 & 0 & 0 \\ 0 & 0 & 0 & T & 0 & 0 \\ 0 & 0 & 0 & 0 & 0.5T^2 & 0 \\ 0 & 0 & 0 & 0 & 0 & T \end{bmatrix} \qquad (8)$$

where $T$ is set to 1, and $x_k, v_{xk}, y_k, v_{yk}, z_k, v_{zk}$ are the distance and velocity components in the $X$, $Y$, $Z$ directions, respectively. For convenience, we define the observation matrix $H_k$ and the observation noise covariance matrix $R_k$ as:

$$H_k = R_k = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \qquad (9)$$
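As a quick reference, the following Python sketch implements one predict/update cycle of Eqs. (1)–(6) with the constant-velocity matrices of Eqs. (7)–(9). It only restates the equations above in code; the initial state, the diagonal value of Q and the measurement used in the example are arbitrary placeholders.

```python
import numpy as np

T = 1.0  # sampling interval, as in the paper

# Constant-velocity model matrices of Eqs. (7)-(9)
F = np.kron(np.eye(3), np.array([[1.0, T], [0.0, 1.0]]))           # state transition
B = np.kron(np.eye(3), np.array([[0.5 * T**2, 0.0], [0.0, T]]))    # external control matrix
H = np.eye(6)                                                      # observation matrix
R = np.eye(6)                                                      # observation noise covariance

def kalman_step(x, P, z, Q, u=np.zeros(6)):
    """One cycle of Eqs. (1)-(6): predict with (1)-(2), update with (3)-(6)."""
    x_pred = F @ x + B @ u                 # (1)
    P_pred = F @ P @ F.T + Q               # (2)
    S = H @ P_pred @ H.T + R               # (3)
    K = P_pred @ H.T @ np.linalg.inv(S)    # (4)
    x_new = x_pred + K @ (z - H @ x_pred)  # (5)
    P_new = P_pred - K @ H @ P_pred        # (6)
    return x_new, P_new

# Placeholder example: start at the origin and process one noisy measurement
x0, P0 = np.zeros(6), np.eye(6)
Q = 10.0 * np.eye(6)                       # fixed Q of the classical filter (see Sect. 4.2)
z = np.array([10.0, 10.0, 0.0, 0.0, 0.0, 0.0])
x1, P1 = kalman_step(x0, P0, z, Q)
print(x1)
```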

3 Improved Algorithm Based on Law of Large Numbers

In radar target tracking with the Kalman filter, the process noise covariance matrix is usually selected empirically and assumed to remain unchanged throughout the tracking process. The improved algorithm based on the law of large numbers adaptively changes the process noise covariance matrix with the movement of the target [7]. The flow chart of the algorithm is shown in Fig. 1. First, set the system output matrix $E$:

$$E = \begin{bmatrix} H & 0 \\ HF & H \end{bmatrix} \qquad (10)$$

where the system observation matrix $H$ and the state transition matrix $F$ are given by formulas (9) and (7), respectively. The coupling vector $d_i$ is the solution of the following equations:

$$d_i E = \big[\, g_i^T \;\; \underbrace{0 \cdots 0}_{6} \,\big], \quad i = 1, \ldots, 6 \qquad (11)$$

where

$$g_i = \big[\, \underbrace{0 \cdots 0}_{i-1} \;\; 1 \;\; \underbrace{0 \cdots 0}_{6-i} \,\big]^T \qquad (12)$$
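The coupling vectors can be computed directly from (10)–(12) by solving a small linear system. The sketch below is one way to do this in Python; the use of a least-squares solve (rather than an explicit inverse) is an implementation choice, not something prescribed by the paper.

```python
import numpy as np

def coupling_vectors(F, H):
    """Solve d_i E = [g_i^T, 0...0] of Eqs. (11)-(12) for i = 1..6."""
    n = H.shape[0]
    E = np.block([[H, np.zeros_like(H)],
                  [H @ F, H]])                     # Eq. (10)
    D = np.zeros((n, 2 * n))
    for i in range(n):
        rhs = np.zeros(2 * n)
        rhs[i] = 1.0                               # g_i padded with six zeros
        # d_i E = rhs  <=>  E^T d_i^T = rhs^T, solved in the least-squares sense
        D[i] = np.linalg.lstsq(E.T, rhs, rcond=None)[0]
    return E, D

# Constant-velocity model matrices from Eqs. (7) and (9)
T = 1.0
F = np.kron(np.eye(3), np.array([[1.0, T], [0.0, 1.0]]))
H = np.eye(6)
E, D = coupling_vectors(F, H)
print(D.shape)   # (6, 12): one coupling vector per state component
```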


Fig. 1. Estimation algorithm flow chart

Then we can obtain the linear couplings of the system:

$$\tilde{Z}^i_{p+1-W} = d_i \begin{bmatrix} Z_{2(p-W)+1} \\ Z_{2(p-W)+2} \end{bmatrix}, \;\; \ldots, \;\; \tilde{Z}^i_{p} = d_i \begin{bmatrix} Z_{2p-1} \\ Z_{2p} \end{bmatrix} \qquad (13)$$

Finally, estimate the covariance matrix of the process noise from the linear couplings based on the law of large numbers [8, 9]:

$$Q_i = \frac{1}{W/2}\Big[ \big(\tilde{Z}^i_{p+1-W}\big)^2 + \cdots + \big(\tilde{Z}^i_{p}\big)^2 \Big] - d_i \begin{bmatrix} R & 0 \\ 0 & R \end{bmatrix} d_i^T, \quad i = 1, \ldots, 6 \qquad (14)$$


Then we can obtain the estimate of the process noise covariance matrix:

$$\tilde{Q} = \begin{bmatrix} Q_1 & 0 & \cdots & 0 \\ 0 & Q_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & Q_6 \end{bmatrix} \qquad (15)$$

The remaining steps in the flow chart follow the classical Kalman filter algorithm: the estimate $\tilde{Q}$ from (14)–(15) is substituted into (2) to adjust the process noise covariance matrix in real time [10, 11].
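The following Python sketch puts Eqs. (13)–(15) together: it forms the coupled sequences over a sliding window of the most recent measurements and averages their squares to estimate each diagonal entry of Q, subtracting the measurement-noise contribution. The window length, the exact pairing of consecutive measurements, and the clipping of negative estimates are assumptions made for the sake of a runnable illustration rather than details fixed by the paper.

```python
import numpy as np

def estimate_Q(Z_window, D, R):
    """Estimate diag(Q) from a window of measurements via Eqs. (13)-(15).

    Z_window : array of shape (W, 6), the last W measurements (W even).
    D        : (6, 12) coupling vectors from Eqs. (11)-(12).
    R        : (6, 6) measurement noise covariance.
    """
    W = Z_window.shape[0]
    R2 = np.block([[R, np.zeros_like(R)], [np.zeros_like(R), R]])
    q = np.zeros(6)
    for i in range(6):
        # couple consecutive measurement pairs (Eq. (13))
        coupled = [D[i] @ np.concatenate([Z_window[2 * j], Z_window[2 * j + 1]])
                   for j in range(W // 2)]
        # law-of-large-numbers estimate minus the measurement-noise term (Eq. (14))
        q[i] = np.mean(np.square(coupled)) - D[i] @ R2 @ D[i]
    return np.diag(np.maximum(q, 0.0))          # Eq. (15); clipped to stay valid

# Placeholder usage with random measurements and D from the previous sketch
rng = np.random.default_rng(0)
R = np.eye(6)
Z_window = rng.normal(size=(200, 6))
Q_hat = estimate_Q(Z_window, D, R)
print(np.diag(Q_hat))
```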

4 Simulation and Algorithm Analysis

4.1 Simulation and Analysis of the Covariance Matrix Estimation Algorithm

In this simulation, the target moves at a radial velocity of 10 m/s and its accelerations are Gaussian random variables with zero mean and variances of 4, 16, 36 and 64. We estimate the variance of the acceleration according to the flow chart shown in Fig. 1. The simulation results are shown in Fig. 2. The abscissa of the graph is the time of target motion, and the ordinate is the estimated variance of the acceleration. From the figure we can see that the simulation results converge to the corresponding variance values after about 200 steps.

Fig. 2. Estimated variance of acceleration noise

4.2 Simulation and Analysis of Maneuvering Target Tracking

In order to verify the effectiveness of the improved algorithm for maneuvering target tracking, we use an S-shaped motion to generate the target trajectory and compare the result with that of the classical Kalman filter, whose diagonal elements of Q are set to 10. The initial target position is (0, 0, 0). The target moves at a constant radial speed of 10 m/s in a straight line from 0 to 100 s, from 200 to 300 s, from 400 to 500 s, from 600 to 700 s and from 800 to 900 s. The target moves along an arc-shaped trace from 100


to 200 s, from 300 to 400 s, from 500 to 600 s, from 700 to 800 s and from 900 to 1000 s. The maximum radial accelerations are 44.5 m/s², 36 m/s², 44.5 m/s², 18 m/s² and 44.5 m/s², respectively. Figure 3 shows the root mean square error (RMSE) of the improved Kalman filter algorithm and the classical Kalman filter algorithm. We can see that the classical Kalman filtering algorithm produces larger errors when the target moves along an arc-shaped trace with large acceleration. When the target moves at a constant radial speed in a straight line, the two algorithms have similar performance. From Fig. 4, we can see that the mean value of the elements of the process noise covariance matrix increases when the target moves along an arc-shaped trace with large acceleration and decreases when the target moves at a constant speed.

Fig. 3. RMSE

Fig. 4. Element of the process noise covariance matrix

For convenience, this paper only shows part of the target trajectory in the XY plane. From Fig. 5 we can see that the improved Kalman filter and the classical Kalman filter both perform well when the target moves at a constant speed. When the target starts to move along an arc-shaped trace with large acceleration, the classical Kalman filter produces larger errors than the improved Kalman filter.


Fig. 5. Tracking trajectory

4.3 Analysis of the Algorithms Under Different Noise Levels

The algorithm is used to track targets detected by radar, so we need to consider the influence of measurement noise on the performance of the algorithm. We set the measurement noise to Gaussian white noise with zero mean and change its variance to obtain the root mean square error (RMSE) of the classical Kalman filter and the improved Kalman algorithm. The simulation parameters and motion are the same as in the simulation above. The results are shown in Table 1.

Table 1. Root mean square error

Measurement noise variance   RMSE of Kalman filter   RMSE of the algorithm in this paper   Relative improvement of error (%)
σ² = 4                       18.8512                 10.8004                               42.71
σ² = 9                       36.8206                 12.8945                               64.98
σ² = 25                      73.7196                 15.2326                               79.33
σ² = 100                     104.0230                18.6536                               82.07

5 Conclusion

This paper analyzes the classical Kalman filtering algorithm and presents an improved Kalman filter algorithm which estimates the process noise covariance matrix using the law of large numbers. When tracking large maneuvering targets, the improved algorithm can reduce the tracking error effectively: it reduces the root mean square error (RMSE) of tracking by more than 40% under the different measurement noise levels above. In practical applications, the motion of a target detected by radar is changeable. In this paper, the target motion model of both algorithms is the constant velocity model, so whether the proposed algorithm performs better under other motion models needs further study.


References

1. Nez A, Fradet L, Marin F et al (2018) Identification of noise covariance matrices to improve orientation estimation by Kalman filter. Sensors, Basel
2. Yu T, Xu AG, Fu XR et al (2017) An adaptive Kalman filter integrated navigation and location method. J Navig Positioning 5(03):101–104
3. Cui YL, Xi YH, Zhang XD (2019) Harmonic detection based on residual analysis of adaptive Kalman filter. Power Syst Prot Control
4. Li K, Wang R, Song JQ (2019) Research on radar single target tracking algorithms based on Kalman filter. Space Electron Technol 16(01):16–20
5. Zhang DZ, Yan D, Zhang ZX et al (2014) Cognitive radar tracking algorithm based on improved volume Kalman filter. Comput Simul 31(12):14–17
6. Liang LK, Zhang LY, He WC et al (2018) Probabilistic neural network multi-model Kalman filter location and navigation algorithms. Appl Electron Technol 44(06):60–62
7. Wang JW, Shui HT, Li X (2011) Design of robust Kalman filter with unknown statistical characteristics of noise. Control Theory Appl 28(5):693–697
8. He FX (2015) Analysis and discussion on the theorem of large number and the theorem of central limit. In: 3rd international conference on social science and education, USA
9. Mao YF, Shi PF (2004) Sequential Monte Carlo vehicle tracking based on sampling theory. J Syst Simul 11:2520–2521 + 2528
10. Liu N, Liu WS (2016) A dual-adaptive fuzzy filtering algorithm based on current statistical model. Group Technol Modernization Prod 33(02):25–30
11. Wu CL (2018) An adaptive volumetric Kalman filtering for noise characteristic estimation

Research on an Improved SVM Training Algorithm

Pan Feng1, Danyang Qin1(&), Ping Ji1, Min Zhao1, Ruolin Guo1, Guangchao Xu1, and Lin Ma2

1 Key Lab of Electronic and Communication Engineering, Heilongjiang University, No. 74 Xuefu Road, Harbin, People's Republic of China
[email protected]
2 Harbin Institute of Technology, Harbin, People's Republic of China

Abstract. A new SVM training algorithm is proposed in this paper to improve the validity and efficiency of image annotation. The annotation tasks are related to one another due to the correlation among the labels, and the model implicitly learns a linear output kernel during training. Simulation results show that, compared with independently trained SVMs, the joint SVM improves classification accuracy and efficiency substantially.

Keywords: Image annotation · SVM · Output kernel · Joint training

1 Introduction

Many papers propose different algorithms to annotate labels for images based on the image content, which makes it possible to search for images through text. Existing image annotation algorithms can be divided into two categories, generative methods and discriminative methods, according to how the correlation between image features and text labels is modeled. The continuous relevance model (CRM) [1], the correspondence latent Dirichlet allocation model (CorLDA) [2] and the multiple Bernoulli relevance model (MBRM) [3] are generative methods. These methods have a disadvantage: many statistical assumptions may be imposed on the model, limiting its modeling capability. In addition, most generative methods are not easy to evaluate during the label prediction phase, so approximation techniques are needed. Discriminative methods model the label prediction function directly: TagProp [4] and JEC [5–8] are based on metric learning; rank-SVM [9], LM-K [10] and [11] are based on ranking; M3L [12] and [13] are based on the maximum margin. However, many existing discriminative methods do not make good use of the correlation between output labels. The method proposed in this paper belongs to the discriminative methods based on the maximum margin.

The remaining sections are organized as follows: Sects. 2 and 3 introduce the joint SVM algorithm proposed in the paper; Sect. 4 presents the simulation, and Sect. 5 concludes the paper.



2 Joint SVM

2.1 Joint Learning

Given an input image, the auto-annotation model predicts the presence of labels, and all labels for the image come from a pre-designed dictionary. Assuming that d-dimensional feature vectors are extracted from the image and there are T labels in the dictionary, the task is to establish a function $x \in \mathbb{R}^d \rightarrow \{-1, +1\}^T$. The presence of each label can be seen as a binary classification problem, which needs T SVMs. Like other classification learning frameworks [14], the learning tasks can be completed by simply summing their goals and constraints, as shown in (1):

$$s^* = \min_{w_1, \ldots, w_T \in \mathcal{H}_\phi} \; \frac{1}{2}\sum_{t=1}^{T} \|w_t\|^2 + C \sum_{t=1}^{T}\sum_{i=1}^{m} \xi_t^{(i)} \quad \text{s.t.} \;\; \sum_{t=1}^{T} y_t^{(i)} w_t^T \phi\big(x^{(i)}\big) \ge T - \sum_{t=1}^{T} \xi_t^{(i)} \qquad (1)$$

where $t$ is the index of the labels and $T$ is the number of labels. With $y^{(i)} = \big[y_1^{(i)}, \ldots, y_T^{(i)}\big]$ and $W = \big[\tfrac{w_1^T}{T}, \ldots, \tfrac{w_T^T}{T}\big]$, (1) can be written as (2):

$$s^* = \arg\min_{W \in \mathcal{H}_\phi} \; \frac{1}{2}\|W\|_F^2 + C \sum_{i=1}^{m} \xi^{(i)} \quad \text{s.t.} \;\; \big\langle y^{(i)}, W\phi\big(x^{(i)}\big)\big\rangle \ge 1 - \xi^{(i)}, \; \xi^{(i)} \ge 0, \; i \in \{1, \ldots, m\} \qquad (2)$$

is classified correctly (i.e. yðiÞ wTp xðiÞ =T [ 1=T), it can

provide some fault-tolerant space for other tasks that are easier to classify wrongly    before violating the overall constraint yðiÞ ; W/ xðiÞ H  1. 2.2

Output Core

The joint SVM can implicitly learn a linear output kernel and incorporate it into the model parameters W. Based on prior knowledge, the kernel function defined on the output y can be clearly defined to improve the representation ability of the model.   Similar to the input kernel, the kernel function defined on the output is Kw yðiÞ ; yð jÞ , the feature map is w : f1; þ 1gT ! Hw , and then (2) is modified to (3).

1676

P. Feng et al. m X 1 nðiÞ s ¼ arg min kWk2F þ C 2 Hw H/ w2R i¼1 D    E ðiÞ ðiÞ s:t: w y ; W/ x  1  nðiÞ ; ni  0; i 2 f1; . . .; mg

ð3Þ

 ðiÞ   ðiÞ T P . Similar to the SVM, the dual form of the joint SVM where W ¼ m / x i ai w y in (2) is as shown in (4). s ¼ arg min a1 ;a2 ;...;am

m X

ai 

i¼1

m X

    ai aj Kw yðiÞ ; yð jÞ K/ xðiÞ ; xð jÞ

i;j¼1

ð4Þ

s:t: 8i; 0  ai  C The output kernel is built on the output y, and it considers the correlation between labels yðiÞ ; yð jÞ . If the output kernel matrix is known, the computational complexity of multiple SVM learning jointly is near to that of a single SVM learning, so it is efficient. Given the sample ^x, the prediction wð^yÞ is defined by (5), m X

wð^yÞ ¼ W/ð^xÞ ¼

    ai w yðiÞ K/ xðiÞ ; ^ x

ð5Þ

i¼1

2.3

Optimal Solution

wð^yÞ can be obtained through the joint SVM, but ^y cannot be obtained through direct inverse mapping wð^yÞ. Therefore, the optimal solution ^ y can only be found in all T xÞ, as shown in (6). y 2 f þ 1; 1g , and its mapping on Hw is the closest to W/ð^ ^y ¼ arg max hwðyÞ; W/ð^xÞi y2f þ 1;1gT

¼ arg max y2f þ 1;1g

m X T

i¼1

    ai K/ xðiÞ ; ^x Kw yðiÞ ; yð jÞ |fflfflfflfflfflfflffl{zfflfflfflfflfflfflffl}

ð6Þ

bi

Usually, (6) has no closed solution, so approximate dynamic programming (ADP) is used to find the optimal solution ^y . There are a lot of labels in the dictionary, and an image usually has only a few of them, which means that most y in the {+1, −1}T space do not match the image. When the sample set is large enough, the solution of (6) is likely to approximate the output fygm i¼1 of a sample. Therefore, the approximate optimal solution ^y can be found by (7): 

^y ¼

K X k¼1

!, ðk Þ

y wk

K X k¼1

wk

ð7Þ

Research on an Improved SVM Training Algorithm

wj ¼

m X i¼1

  ai bi Kw yðiÞ ; yð jÞ

1677

ð8Þ

The possible optimal solution is calculated using the closest image labels with $K = 10$. Since $\alpha_i$ and $K_\psi\big(y^{(i)}, y^{(j)}\big)$ are calculated during the training phase, only $\{b_i\}_{i=1}^{m}$ need to be computed at test time. The computational complexity is $O(m)$.

3 Output Kernel Learning

The relevance of labels is a great help for label prediction, so the paper proposes two output kernels: a linear output kernel, implicitly learned during the joint SVM training process, and an odds ratio output kernel.

3.1 Linear Output Kernel

Assume that the pairwise statistical information of the labels is encoded into a $T \times T$ matrix $P$, and the output vector is mapped linearly as $\psi(y) = Py$. The output kernel is then

$$K_\psi^{Lin}\big(y^{(i)}, y^{(j)}\big) = y^{(i)T} \Omega \, y^{(j)} \qquad (9)$$

where $\Omega = P^T P = P P^T$. Let $U = P^T W$; then (3) can be written as (10):

$$s^* = \arg\min_{W} \; \frac{1}{2}\|W\|_F^2 + C \sum_{i=1}^{m} \xi^{(i)} \quad \text{s.t.} \;\; \big\langle y^{(i)}, U\phi\big(x^{(i)}\big)\big\rangle \ge 1 - \xi^{(i)}, \; \xi^{(i)} \ge 0, \; i \in \{1, \ldots, m\} \qquad (10)$$

In [15], additional regularization was performed on $\Omega$ and added to the objective function [16]. This paper takes a simpler strategy, using the regularization term $\tfrac{1}{2} W^T \Omega W$ to get (11):

$$s^* = \arg\min_{U} \; \frac{1}{2}\|U\|_F^2 + C \sum_{i=1}^{m} \xi^{(i)} \quad \text{s.t.} \;\; \big\langle y^{(i)}, U\phi\big(x^{(i)}\big)\big\rangle \ge 1 - \xi^{(i)}, \; \xi^{(i)} \ge 0, \; i \in \{1, \ldots, m\} \qquad (11)$$

By substituting W for U, (3) is converted to (11). This shows that a joint SVM without an explicit output kernel implicitly learns a linear output kernel and trains it into W during training.

3.2 Odds Ratio Output Kernel

In order to capture the pairwise label relevance, the paper also designs a kernel based on the odds ratio of the labels. The correlation reflects how the appearance of one label affects the probability of another. First, the probability that labels $w_r$ and $w_s$ appear together is estimated as shown in (12):

$$P(w_r, w_s) = \frac{1}{m}\sum_{i=1}^{m} \mathbb{1}\big\{y_r^{(i)} = 1 \text{ and } y_s^{(i)} = 1\big\} \qquad (12)$$

The counts come from the training data. The odds ratio is used to evaluate the relevance of a pair of labels and is calculated by (13):

$$O_{rs} = \frac{P(w_r, w_s)\, P(\bar{w}_r, \bar{w}_s)}{P(w_r, \bar{w}_s)\, P(\bar{w}_r, w_s)} \qquad (13)$$

where $\bar{w}_r$ is the complement of $w_r$. The ratio is made symmetric as in (14):

$$\tilde{O}_{rs} = \log(O_{rs}) \qquad (14)$$

A zero value means the two labels are independent, and a positive (or negative) value indicates that the two labels attract (or exclude) each other; the greater the absolute value, the more obvious this tendency. The kernel function on a pair of outputs is then calculated as

$$K_\psi^{Odd}\big(y^{(i)}, y^{(j)}\big) = y^{(i)T} Q \, y^{(j)} \qquad (15)$$
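A direct implementation of Eqs. (12)–(15) is sketched below in Python. The Laplace-style smoothing added to the empirical probabilities (to avoid division by zero or the logarithm of zero) is an assumption for numerical robustness, not part of the paper.

```python
import numpy as np

def odds_ratio_output_kernel(Y, eps=1e-3):
    """Build Q from Eqs. (12)-(14) and return K_psi(y_i, y_j) = y_i^T Q y_j.

    Y : (m, T) label matrix with entries in {-1, +1}.
    """
    m, T = Y.shape
    pos = (Y == 1).astype(float)          # indicator of label presence
    neg = 1.0 - pos
    Q = np.zeros((T, T))
    for r in range(T):
        for s in range(T):
            p_rs   = (pos[:, r] * pos[:, s]).mean() + eps     # Eq. (12), smoothed
            p_rns  = (pos[:, r] * neg[:, s]).mean() + eps
            p_nrs  = (neg[:, r] * pos[:, s]).mean() + eps
            p_nrns = (neg[:, r] * neg[:, s]).mean() + eps
            odds = (p_rs * p_nrns) / (p_rns * p_nrs)          # Eq. (13)
            Q[r, s] = np.log(odds)                            # Eq. (14)
    return Y @ Q @ Y.T                                        # Eq. (15) for all pairs

rng = np.random.default_rng(0)
Y = rng.choice([-1, 1], size=(50, 8))
K_psi = odds_ratio_output_kernel(Y)
print(K_psi.shape)   # (50, 50)
```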

4 Simulation

The datasets Corel5k, Espgame and Iaprtc12 with 15 common visual features were used in the simulation. The accuracy rate (P), recall rate (R) and F1-measure (F1) are used to evaluate the prediction results. The Corel5k dataset is selected for the experiment reported here. Both models use two common input kernels: the Gaussian kernel (Gau) and the quadratic polynomial kernel (Pol). During the learning stage, the optimization is solved by a coordinate descent method, and both models use cross-validation. The experiment recorded training and test time to assess the efficiency of both methods; a comparison of the two is shown in Fig. 1. As can be seen from Fig. 1, for the same experimental data the training time and test time of the joint SVM are significantly lower than those of multiple independently trained SVMs. The time of the independently trained SVMs grows significantly as the number of labels increases, while that of the joint SVM does not. In terms of accuracy, the joint SVM also exceeds the independently trained SVMs. In addition, of the two input kernels, the quadratic polynomial kernel is better than the Gaussian kernel on all indicators.


Fig. 1. Comparison of SVMs and joint SVMs. a Time comparison. b Performance comparison

5 Conclusion

In order to label images online and make them easier to query through text, this paper proposes a new SVM algorithm for image annotation, which improves the training efficiency of the traditional SVM and introduces an output kernel to improve the representation ability. The simulation results show that it is superior to the traditional SVM in terms of training time and prediction results. Future work is the application and improvement of the joint SVM in other multi-label learning areas to further improve its predictive performance and broaden its application scenarios.

Acknowledgements. This work is supported by the National High Technology Research and Development Program of China (2012AA120802), National Natural Science Foundation of China (61771186), Postdoctoral Research Project of Heilongjiang Province (LBH-Q15121), Undergraduate University Project of Young Scientist Creative Talent of Heilongjiang Province (UNPYSCT-2017125), Postgraduate Innovative Research Project of Heilongjiang University (NO. YJSCX2019-059HLJU), National Natural Science Foundation of China (61571162), and Heilongjiang Province Natural Science Foundation (F2016019).

References 1. Lavrenko V, Manmatha R, Jeon J (2004) A model for learning the semantics of pictures. In: Proceeding of advances in neural information processing systems, pp 553–560 2. Blei DM, Jordan MI (2003) Modeling annotated data. In: Proceeding of the 26th annual international ACM SIGIR conference on research and development information retrieval, pp 127–134 3. Feng SL, Manmatha R, Lavrenko V (2004) Multiple Bernoulli relevance models for image and video annotation. In: Proceeding of the 2nd IEEE computer society conference on computer vision and pattern recognition, vol 2. IEEE, pp II-1002–II-1009 4. Guillaumin M, Mensink T, Verbeek J et al (2009) TagProp: discriminative metric learning in nearest neighbor models for image auto-annotation. In: Proceeding of IEEE, international conference on computer vision. IEEE, pp 309–316


5. Makadia A, Pavlovic V, Kumar S (2010) Baselines for image annotation. Int J Comput Vis 90(1):88–105 6. Akgül YS, Kiliç MM. Ship location estimation from radar and optic images using metric learning. In: Proceeding of 26th signal processing and communications applications conference (SIU), Izmir, Turkey, pp 1–4 7. Nawaz S, Calefati A, Ahmed N et al (2018) Hand written characters recognition via Deep metric learning. In: Proceedings of 13th IAPR international workshop on document analysis systems (DAS), Vienna, Austria, pp 417–422 8. Wahlberg F (2018) Gaussian process classification as metric learning for forensic writer identification. In: Proceedings of 13th IAPR international workshop on document analysis systems (DAS), Vienna, Austria, pp 175–180 9. Elisseeff A, Weston J (2001) A kernel method for multi-labelled classification. In: Proceeding of international conference on neural information processing systems: natural and synthetic. MIT Press, pp 681–687 10. Guo Y, Schuurmans D (2013) Multi-label classification with output kernels. In: Proceeding of joint European conference on machine learning and knowledge discovery in databases. Springer, Berlin, Heidelberg, pp 417–432 11. Pan L, Li HC, Sun YJ et al (2018) Hyperspectral image reconstruction by latent low-rank representation for classification [J]. IEEE Geosci Remote Sens Lett 15(9):1422–1426 12. Hariharan B, Vishwanathan et al (2012) Efficient max-margin multi-label classification with applications to zero-shot learning. Mach Learn 88(1–2):127–155 13. Koda S, Zeggada A, Melgani F et al (2018) Spatial and structured SVM for multilabel image classification [J]. IEEE Trans Geosci Remote Sens 56(10):5948–5960 14. Argyriou A, Evgeniou T, Pontil M (2008) Convex multi-task feature learning [J]. Mach Learn 73(3):243–272 15. Dinuzzo F, Cheng SO, Gehler PV et al (2012) Learning output kernels with block coordinate descent [C]. In: Proceeding of international conference on machine learning, ICML 2011. Bellevue, Washington, USA, DBLP, pp 49–56 16. Zhang Y, Yeung DY (2013) Multilabel relationship learning [J]. ACM Trans Knowl Discov Data (TKDD) 7(2):7

Modeling for Coastal Communications Based on Cellular Networks

Yanli Xu(B)

College of Information Engineering, Shanghai Maritime University, NO.1550 Haigang Av., Pudong District, Shanghai, China
[email protected]

Abstract. Integrating cellular networks into coastal communications may significantly reduce communication cost and improve service quality for maritime users. However, particular ocean communication scenario brings new problem of modeling for cellular networks which are originally used for terrestrial communications. A tractably analytical model for coastal communications is proposed in this paper, through which the distribution of cellular link distances is investigated. Based on these analyses, network performance of different metrics can be obtained. As special cases, the network coverage, handover metric and resource partition are analyzed, which provide possible ways of evaluating and guaranteeing the network performance.

Keywords: Coastal networks · Modeling · Maritime communication · Cellular networks

1 Introduction

With the increase of human activities in marine environments, the development of maritime communications is attracting more attention. Currently, maritime communication mainly relies on very high frequency (VHF) radio and satellite communication systems. However, VHF is the communication platform of the ship automatic positioning system and only supports low-rate transmissions (about 300 kbps) [1]. Although satellite Internet is accessible in many ocean areas, it requires high maintenance cost and suffers long communication delay [2]. Furthermore, both of these communication networks need special communication devices which are not popular with maritime users. On the other hand, cellular networks support high-quality communication (up to 10 Gbps communication rate) [3] and seamless service for terrestrial communications. However, introducing terrestrial cellular networks into marine communication faces many challenges. One of the important problems is modeling communication scenarios for such a network, which is the basis of evaluating the


performance of coastal networks and judging whether cellular networks are beneficial for maritime communications or not. Different from the deployment of base stations (BSs) and user equipments (UEs) in terrestrial networks, BSs are deployed along the coastline and UEs are clustered and distributed along shipping routes, leading to different modeling problems. In our previous work, we mainly studied the architecture of coastline communications based on cellular networks [4]. In this paper, the distribution of link distances is analytically characterized for coastline cellular networks. Based on these analyses, the coverage of a BS and handover metrics are studied for seamless communication in coastal networks.

The remainder of this paper is organized as follows. In Sect. 2, the coastal network scenario is characterized and a communication model is proposed. In Sects. 3 and 4, network performance is analyzed. Simulation results are presented in Sect. 5. Finally, Sect. 6 concludes this paper.

2 System Model

As shown in Fig. 1, BSs are deployed along the coastline. To focus the transmission energy, a BS uses directional antennas to form a fan-shaped area directed to the sea surface. For example, each BS covers a half of circle area (called a sector of BS) and there are overlaps among sectors of BSs to support a seamless coverage for coastal networks. UEs outside the coverage of BSs access the network via nearby UEs in the coverage. For the resource allocation, resource is reused by different BS sectors, frequency partition is used for the overlapped areas of neighbor BSs and orthogonal resource is allocated to users served by the same BS for interference mitigation like that in terrestrial cellular networks [5,6].

Fig. 1. A scenario example for coastline networks

UEs are clustered on ships and ships are distributed on routes. The number of ships on a route (e.g. route A shown in Fig. 1, which can be seen as a line compared to the coverage distance of a BS sector) follows a homogeneous Poisson point process (PPP) Φ, which is effective for characterizing randomly deployed


nodes with arbitrary user density. Then the probability that there are N ships on a route of length l can be written as

$$N_l = \frac{(\rho l)^N}{N!} e^{-\rho l} \qquad (1)$$

where ρ is the density of ships. Then the locations of the UEs are modeled by a Poisson cluster process where cluster centers follows a homogeneous PPP Φ with density ρ [7].

3 Distribution of Cellular Link Distance

Compared to the distance between ships and a BS, the distance between UEs on the same ship can be ignored. Hence, we regard each ship as a node and focus on the distance between ships and BSs to investigate the distribution of cellular links. For a route partly covered by a BS j, the covered and uncovered parts are denoted by Ljin and Ljout , respectively. We focus on the analysis in a BS sector and omit the index j for simplification. The distribution of ships on the bounded area Lin can be modeled by a binomial point process (BPP) [8], which describes independently uniformly distributing N points in a compact set. Lemma 1. For a closed line segment σ in Lin , the probability that there are less than n points on this line segment can be expressed by n−1 k k N −k p (1 − p) N ! k=0 CN , (2) Pσ (n) = N (ρ |Lin |) e−ρ|Lin | where p =
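Before the analytical results, it can help to see the quantities involved numerically. The short Python sketch below draws the number of ships on the covered segment from a Poisson distribution, places them uniformly on the segment (the binomial point process used in the analysis), and computes their distances to a BS by the cosine law used later in the proof of Theorem 1. The specific values of ρ, |L_in|, d_oj and ψ_j are placeholders chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

rho, L_in = 0.002, 5000.0        # ship density (1/m) and covered route length (m), assumed
d_oj, psi_j = 1000.0, np.pi / 3  # BS-to-route-origin distance and angle, assumed

# Number of ships on the covered segment: Poisson with mean rho*|L_in| (Eq. (1))
N = rng.poisson(rho * L_in)

# Given N, ships are independently uniform on the segment (binomial point process)
l = rng.uniform(0.0, L_in, size=N)            # positions along the route from the origin o

# Distance from each ship to BS j by the cosine law (as in the proof of Theorem 1)
d = np.sqrt(d_oj**2 + l**2 - 2.0 * d_oj * l * np.cos(psi_j))

print(N, np.sort(d)[:5])                      # the few ships nearest to the BS
```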

|σ| and N is the total number of nodes in Lin . |Lin |

This Lemma can be easily obtained based on the definition of BPP, which is omitted due to limited space. Before analyzing the cellular link distance, we first study the distances among ships on the covered route. Lemma 2. For a BS sector with N nodes on Lin , the pdf of the distance between the nth nearest node to a reference node x (denoted by ln ) is N −1  l 1 1 N2 1 − × fln (l) = + |Lin | |Lin | |Lin | (3)  N −k−1   n−1  l k−1 Nl l k −k 1− CN . |Lin | |Lin | |Lin | k=1

Proof. The cdf of ln , Fln (l), can be calculated by Pr {ln ≤ l} according to the definition of cdf. Because Pr {ln > l} is the probability that there are less than n points in Ln (x, l), where Ln (x, l) is the line segment with center x and length 2l. Then the probability Pr {ln > l} can be written as Pr {ln > l} = PLn (x,l) (n) N|Lin | ,

(4)

1684

Y. Xu

where PLn (x,l) (n) is the probability Pr {ln > l} with N in Lin which can be obtained by Lemma 1. N|Lin | is the number of nodes on Lin , which can be obtained based on (1). Based on (1) and (2), (4) can be written as Pr {ln > l} = =

n−1 

k k CN p

N!

n−1 k=0

k k CN p (1 − p)

N −k

N

(ρ |Lin |) −ρ|Lin | e N!

N

(ρ |Lin |) e−ρ|Lin | N −k

(1 − p)

(5)

,

k=0

where p = |Ln (x, l)| / |Lin | = 2l/ |Lin |. Thus, Fln (l) can be written as follows. Fln (l) = 1 −

n−1 

N −k

k k CN p (1 − p)

.

(6)

k=0

Thus, the pdf fln (l) can be calculated by n−1 d  k k N −k CN p (1 − p) fln (l) = − dl

=

n−1  k=0



n−1  k=1



k=0

N −k−1

k (N − k) pk (1 − p) CN

 dp dl

(7)

  N −k dp k . kpk−1 (1 − p) CN dl

Including p = 2l/ |Lin | yields the conclusion of Lemma 2. Since a route location is pre-known at BSs, the location of a reference point is known by pre-designating a location such as the origin o of this route as shown in Fig. 1. Thereby, the distance between a BS j and o, doj , and the angle ψj are known. For a node Sn on Lin , the distance between it and BS j is given in Theorem 1. Theorem 1. The cdf and pdf of the distance between Sn to BS j (denoted by dSn j ) can be expressed by FdSn j (d) = Fln (εj ) ,

(8)

dfln (εj ) , εj

(9)

and fdSn j (d) =  where εj =

d2 − d2oj + 2doj ln cos ψj , d ≥ d0 and d0 =



d2oj − 2doj ln cos ψj .

Modeling for Coastal Communications Based on Cellular Networks

Proof. According to cosine law, we can calculate dSn j by  dSn j = d2oj + ln2 − 2doj ln cos ψj .

1685

(10)

Thus, FdSn j (d) can be derived by FdSn j (d) = Pr {dSn j ≤ d} 

= Pr ln < d2 − d2oj + 2doj ln cos ψj ,

(11)

= Fln (εj )  where εj =

d2 − d2oj + 2doj ln cos ψj , d ≥ d0 and d0 =



d2oj − 2doj ln cos ψj .

With the derivation of (11), we can obtain the expression of pdf. Remark. With the distribution of distances from BS to ships, the power distribution at each cluster center can be obtained. With this power distribution, we can analyze the receiving power at a cluster member (UE) on a ship by using the clustered PPP theory from which the distribution of distance between cluster member and the cluster center can be obtained. Then the power from the BS to a member on a ship at a route can be derived. Correspondingly, the power control, resource allocation, scheduling scheme can be designed for cellular-based maritime communications.

4

Coverage and Handover of Coastal Networks

To support a seamless communication service for maritime users, the coverage of a BS sector and handover mechanism need to be considered. For both the coverage and the handover, they depends on the receiving power at a user. Here, the coverage is evaluated by outage probability, that is, the cellular link is regarded to be outage if the receiving power is below a threshold ηth . If the average outage probability at a receiver is larger than a targeted service quality o , this receiver is regarded to be out of the coverage of BS. Denoting the Pout n , it can be written as outage probability at Sn as Pout n Pout = Pr {RSn j < ηth }   ηth dα Sn j . = 1 − EdSn j FH Ej

(12)

where RSn j = Ej HSn j d−α Sn j is the receiving power at Sn when the transmission power is Ej and channel fast fading power is HSn j . FH (·) is the cdf of HSn j . EdSn j [·] is expectation function of dSn j and α is the path loss factor. For the communication channel following exponential distribution with unit mean, (12) can be expressed by   ηth α n . d Pout = 1 − E 1 − exp − (13) E j Sn j

1686

Y. Xu

Including the cdf of dSn j in (11) into (13), we have  

R ηth ηth α α−1 =1− α x exp − x FdSn j (x) dx Ej Ej d0  

R ηth α ηth α xα−1 exp − x dx =1− Ej Ej d0  

n−1 R ηth  k ηth α k N −k − α CN xα−1 exp − x p (1 − p ) dx, Ej Ej d0

n Pout

(14)

k=0

where R is the radius of the sector of a BS and p = |Linj | . For the handover of serving BSs, only two neighbor BSs need to be considered due to the special deployment scenario of coastal BSs, which is different from that of terrestrial cellular networks where a circle of neighbor BSs are considered. Like the terrestrial cellular network, user selects the BS from which the receiving signal power is larger as the serving BS. Taking the handover between BS j and j − 1 as an example, this selection mainly depends on the path loss, i.e., the link distance when the transmission power of BSs is equal. Taking some affecting parameters such as the soft handover, available resources and number of ships into consideration, a redundancy variable δ is used here for the handover. Then the probability of Sn selecting BS j as its serving BS is 2ε

Qsj = Pr {dSn j−1 − dSn j ≥ δ}

R y−δ = fdSn j (x) fdSn j−1 (y) dxdy, 0

(15)

0

where fdSn j−1 can be written as follows based on (9). fdSn j−1 (y) =

dfln (εj−1 ) , εj−1

(16)

 and εj−1 = d2 − d2oj + 2doj−1 ln cos ψj−1 . Then the probability of Sn selecting BS j − 1 as its serving BS is Qsj−1 = 1 − Qsj . We observe that δ affects the probability of selecting a serving BS, as a result, the handover can be optimized by adjusting δ to guarantee some performance metrics such as capacity, energy consumption and time delay. Denoting the overlapped sector covered by j − 1 and j as Ωj−1,j , the number of nodes served by j − 1 and j in Ωj−1,j are ∞ 

Nj−1 Ωj−1,j =



Nj−1

NΩj j−1,j Qsj (n) ,

(17)

NΩj−1 Qsj−1 (n) , j−1,j

(18)

Nj−1 =0 n=1

NjΩj−1,j =

Nj ∞   Nj =0 n=1

Modeling for Coastal Communications Based on Cellular Networks

1687

(ρ|Ljin |) j e−ρ|Ωj−1,j | where NΩj j−1,j = . Nj ! The resource partition in the overlapped area is according to the average number of serving nodes of each BS, i.e., the resource allocation at overlapped area of j and j − 1 is proportional to the average number of serving nodes as follows. NjΩj−1,j Rj = j−1 . (19) Rj−1 NΩ N

j−1,j

5

Simulation Results and Discussions

In this section, simulation results are presented and parameters setting are listed as follows: path loss factor is α = 3, transmission power is Ej = 23 dBm, targeted receiving power is ηth = −107 dBm, BS radius R = 1e+4 m and the setting of these related parameters can be referred to [3]. Three random routes are selected to be evaluated and the shortest distances between these routes and the shore are 500, 1000 and 1500 m. Nodes are uniformly deployed on these routes. −1

10

Po =0.01 out

Outage probability

−2

10

o

−3

Pout=10

−3

10

doj=500, N=200 d =1000, N=200

−4

10

oj

doj=1500, N=200 doj=500, N=500

−5

10

1

5

10

15

20

25

The order of ships according to their distances to BS

Fig. 2. Comparison of outage probabilities under different transmission cases

As shown in Fig. 2, we find that the outage probability increases with the o , the distance from ship to BS. Thereby, for a certain coverage requirement Pout number of supported ships by a BS is limited. We observe that the support regions (enclosed by curves and coverage requirement lines shown in this paper) are different for these routes due to the difference of their distances to the BS. In practical system, we can adjust BS transmission such as number of antennas, transmission power or number of served UEs to satisfy a given coverage requirement for different routes with different number of UEs based on these results.

1688

6

Y. Xu

Conclusion

In this paper, we have proposed a cellular-based communication architecture to cover maritime communications. In addition, we have modeled the coastal scenario and formulated communication problems. We have derived some results for the distribution of communication link distances, based on which we study the BS coverage, handover metric and resource partition of coastal cellular networks. The modeling and related analyses provide tractable tools for performance evaluation and give an insight into possible ways for supporting high-quality communication service of coastal networks. Acknowledgements. This work was partially supported by the National Natural Science Foundation of China (61271283 and U1701265).

References 1. EEC Committee (2013) Information paper on VHF data exchange system (VDES). In: The 3rd meeting CPG PTC, CPGPTC (13) INFO 16, Bucharest, Hungary 2. Jiang S (2013) On the marine internet and its potential applications for underwater inter-networking. In: Proceedings of the eighth ACM international conference on underwater networks and systems, WUWNet’13, ACM, New York, NY, USA, pp 13:1–13:2 3. 3GPP technical report 36.843. Technical report, 3GPP WG RAN1 (2014). http:// www.3gpp.org/ftp/Specs/html-info/36843.htm 4. Xu Y, Liu F, Jiang S, Li XJ (2017) Coastal communications based on cellular networks with distributed antennas. In: 2017 9th international conference on wireless communications and signal processing (WCSP), pp 1–5 5. Fu W, Tao Z, Zhang J, Agrawal DP (2010) Differentiable spectrum partition for fractional frequency reuse in multi-cell OFDMA networks. In: IEEE wireless communication and networking conference, pp 1–6 6. Jeon WS, Kim J, Jeong DG (2014) Downlink radio resource partitioning with fractional frequency reuse in femtocell networks. IEEE Trans Veh Technol 63(1):308–321 7. Ganti RK, Haenggi M (2009) Interference and outage in clustered wireless ad hoc networks. IEEE Trans Inf Theory 55(9):4067–4086 8. Baccelli F, Blaszczyszyn B (2009) Stochastic geometry and wireless networks. Now Publishers Inc.

Research of Space Power System MPPT Topology and Algorithm Qing Du(&), Ning Xia, Bo Cui, Zhigang Liu, Yi Yang, Hao Mu, and Yi Zeng Beijing Institute of Spacecraft System Engineering, 10094 Beijing, China [email protected]

Abstract. To meet the growing energy demand of the new spacecraft such as high resolution and radar remote sensing satellite and deep space probes, a incremental conductance peak power tracking method based on S3MPR circuit is presented in this paper. The shortcomings of traditional MPPT topology that low mass and power ratio, low efficiency are overcome. The incremental conductance method can be used to reduce the oscillation near the maximum power point. The principle and control processes of the incremental conductance method are introduced. A co-simulation platform and semi-physical experimental platform is built to validate the circuit and control method. The results show, the incremental conductance peak power tracking method based on S3MPR circuit can achieve effective tracking of the maximum power point of the solar array, and has good tracking efficiency, the simulation and experiment platform are reasonable and effective. Keywords: Spacecraft

 MPPT  S3MPR  Incremental conductance method

1 Introduction With the continuous development of space technology, new spacecraft such as highresolution optical and radar remote sensing satellites and deep space detectors are increasingly demanding electrical energy, and their load power has reached tens of kilowatts [1]. Most spacecraft power conditioning systems adjust the voltage by setting a fixed reference operating point. Typically, the operating point voltage is lower than the maximum power output point voltage and the energy efficiency is low. The solar cell array maximum power point tracking technology can maximize the use of solar cell conversion energy, reduce the area of the solar wing under the premise of meeting the power requirements of the spacecraft, reduce weight, reduce heat consumption, and optimize spacecraft design [2, 3]. The traditional Maximum Power Point Tracking (MPPT) topology is to track the maximum output power of the solar array through the solar array output DC/DC converter. The topology have been successfully applied in US’s LANDSAT-4/5, ESA’s RADARSAT-2, the Italian Space Agency’s AGILE and other satellites [4, 5]. Since the DC-DC converter inevitably introduces magnetic components such as power semiconductor devices and inductor transformers, the power to mass ratio of the conventional MPPT topology and the efficiency are difficult to increase. In 2008, ESA first © Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 1689–1696, 2020 https://doi.org/10.1007/978-981-13-9409-6_203

1690

Q. Du et al.

proposed a new topology of Sequential Switching Shunt Maximum Power Regulator (S3MPR) on the BepiColombo Mercury Detector [6]. The topology is connected in series with the sequence switch at the output of the solar array. The Sequential Switching Shunt Regulator (S3R) circuit achieves maximum power point tracking with high power to mass ratio and transmission efficiency. At present, many researches have been carried out on MPPT technology at home and abroad, including disturbance observation method and conductance increment method [7–9], but most of the domestic research on spaceborne MPPT technology stays at the stage of principle analysis and simulation verification [9, 10], no engineering prototype was developed to achieve on-orbit applications. Based on engineering requirements and tracking of advanced power control technologies abroad, it is necessary to carry out research and implementation of spatial MPPT power system topology and control algorithms. In this paper, a conductivity incremental MPPT control method based on S3MPR topology is proposed. The S3MPR circuit has higher power to mass ratio, and the conductance incremental control method tracks the maximum power point accurately. The control process of the conductance increment method is introduced, and MatlabSaber co-simulation platform and semi-physical experimental platform is are built, and finally the simulation analysis and experimental verification results are given.

2 Space MPPT Power System Design 2.1

S3MPR Circuit Topology

The topological structure of the S3MPR circuit is shown in Fig. 1. SA1 to SAn are nstage solar arrays, and the output is connected in series with the S3R circuit. The main error controller MEA controls the turn-on and turn-off of the shunt Mosfet, thereby realizing the regulation of the bus voltage VBUS. Unlike the traditional S3R circuit, the voltage reference value VMPP of the S3MPR circuit is no longer a fixed value, but a variable that follows the maximum output power point of the solar array, which is generated by the MPP controller. The control mode of MEA is the same as that of the traditional constant voltage S3R circuit. Hysteresis control is adopted, and the hysteresis threshold voltage decreases linearly from SA1 to SAN. Assume that the MPP controller has tracked the current solar cell array maximum operating point voltage VMPP. When VBUS > VMPP, the MEA output error signal increases positively, so that the number of shunts is increased, and the current on the power bus capacitor array is reduced. The voltage drops; when VBUS < VMPP, the MEA output error signal decreases negatively, so that the number of shunts is reduced, the current on the power bus capacitor array increases, and the bus voltage rises; when VBUS is close to VMPP, the MEA outputs an error signal Maintaining a constant positive value, the solar cell array with the hysteresis threshold closest to it is in a regulated state, the shunt shunt with a high hysteresis threshold is closed, and the shunt shunt with a low hysteresis threshold is turned on. This stabilizes VBUS at the VMPP setting. The control mode of MEA is the same as that of the traditional constant voltage S3R circuit. Hysteresis control is adopted, and the hysteresis threshold voltage

Research of Space Power System MPPT Topology and Algorithm

1691

decreases linearly from SA1 to SAN. Assume that the MPP controller has tracked the current solar cell array maximum operating point voltage VMPP. When VBUS > VMPP, the MEA output error signal increases positively, so that the number of shunts is increased, and the current on the power bus capacitor array is reduced. The voltage drops; when VBUS < VMPP, the MEA output error signal decreases negatively, so that the number of shunts is reduced, the current on the power bus capacitor array increases, and the bus voltage rises; when VBUS is close to VMPP, the MEA outputs an error signal Maintaining a constant positive value, the solar cell array with the hysteresis threshold closest to it is in a regulated state, the shunt shunt with a high hysteresis threshold is closed, and the shunt shunt with a low hysteresis threshold is turned on. This stabilizes VBUS at the VMPP setting.

SA1

S3R-1

Adjustable DC bus C

SA2

S3R-2

VBUS

R

To Mosfet

+ MEA

-

VMPP Reference

SAn

S3R-n

Fig. 1. Topology of S3MPR

2.2

Conductivity Incremental Method Optimization Mechanism

At present, although there are many algorithms for MPPT, due to the special application conditions and high reliability requirements of spacecraft, there are few algorithms applied on spacecraft, and the disturbance observation method and its improved algorithm are still the main ones. In order to be closer to engineering applications, this paper uses the conductance increment method to track the maximum power point of the solar array. This method has been applied in civil fields such as photovoltaic power generation. The conductance increment method is based on the derivative of the solar cell array power curve at the maximum power Pmax of 0, positive to the left of Pmax, and negative

1692

Q. Du et al.

to the right of Pmax for the maximum power point of the solar array, as shown in Eq. (1). 8 at the maximum power point < dP=dV ¼ 0; dP=dV [ 0; ð1Þ On the left side of the maximum power point : dP=dV\0; On the right side of the maximum power point Due to

dP dV

dI DI ¼ dðVIÞ dV ¼ I þ V dV ffi I þ V DV , so Eq. (1) can be written as

8 < DI=DV ¼ I=V; DI=DV [  I=V; : DI=DV\  I=V;

at the maximum power point On the left side of the maximum power point On the right side of the maximum power point

ð2Þ

According to Eq. (2), the maximum power point can be tracked by comparing the instantaneous conductance and the conductance increment.

3 Simulation and Experiment 3.1

Simulation Analysis

As described in Sect. 2, the spatial MPPT power system can be divided into two parts: the S3MPR topology and the MPPT control algorithm. The former is the hardware circuit design and the latter is the control algorithm design. This requires the simulation environment to support the actual circuit simulation and Control algorithm simulation and real-time interaction of the two-part simulation results. This paper selected the joint simulation environment combining Matlab and Saber. Using SaberCosim interface, Saber as the main emulator, under the platform to build S3MPR topology, adjustment circuit and drive circuit, MPPT control algorithm built in Matlab, Simulink is automatically started in the simulation process, Saber transmits the sample voltage and current in S3MPR topology to Simulink in real time. After being processed by the MPPT control algorithm, Simulink outputs the error signal containing the power information to Saber. After the adjustment circuit and the drive circuit, the drive signal is generated to control the switch action in the S3MPR topology. The S3MPR circuit model built with Saber software is shown in Fig. 2. The 10-level solar array and S3R circuit are designed, and the step-by-step shunt is implemented according to the order of SA10 to SA1. The MPP bus voltage Vf is selected as the voltage feedback signal, and the output current if of the first-stage array is the current feedback signal, and the two are sent to the MPPT controller. After the MPPT algorithm, the target value Vref of the bus voltage is output, and the MPP bus voltage error signal empp is obtained after the difference with the feedback voltage, and then sent to the PI regulator to obtain the modulated error signal, and then sent to the respective levels bus hysteresis comparator drives, to turn-on and turn-off of the MOSFET in the S3R circuit.

Research of Space Power System MPPT Topology and Algorithm

S3MPR S3R-1

SA1

if

C

PI

KV

R

empp

S3R-2

SA2

Vf

MPP Bus Bus comp1

1693

+

Vref

-

Ki

MPPT controller

Bus comp2

S3R-10

SA10

Bus comp10

Fig. 2. Model of S3MPR circuit

Figure 3 shows the algorithm model based on Matlab. SaberCosim is the control that calls Saber in Matlab. First, the incoming bus voltage Vf in Saber and the output current If of the first-stage array are filtered, and then multiplied to obtain the output power Pf of the current solar array. The signal is a discrete signal that is passed through the sample holder to obtain a continuous power signal. This continuous power signal is sent to the MPPT algorithm module to complete the maximum tracking control of the output power. After the operation, the MPPT algorithm module will finally output the target bus voltage signal Vref and pass it back to Saber. The MPPT algorithm module is implemented by M function programming. Vf

Fig. 3. Model of the MPPT algorithm in Matlab


The solar array uses a combination of 128 cells in series and 3 such strings in parallel. The maximum power point output voltage is 58.112 V, the output current is 1.413 A, and the maximum power is 82.112 W. Figure 4 shows the simulation results of the incremental conductance method with a voltage adjustment step size ΔV of 0.2 V. It can be seen from Fig. 4 that the output power stabilizes at the maximum power point of 81.196 W after about 2 s, and the tracking efficiency reaches about 99%.

Fig. 4. Simulation results (solar array power/W versus time/s)

3.2 Test Verification

The semi-physical experiment hardware consists of the S3MPR circuit, an industrial computer, a solar array simulator and an electronic load. The solar array simulator is used instead of a real solar array; its control software allows setting of the array open-circuit voltage Voc, the short-circuit current Isc, the maximum power point output voltage Vmp and the maximum power point output current Imp. The S3MPR circuit includes the S3R part, the driving part, the control board and the capacitor array. The control board, on the one hand, converts the input target bus voltage Vref into a square-wave signal that drives the S3R circuit stage by stage, and on the other hand performs signal acquisition of the bus voltage Vf and the output current If of solar array 1. An NI data acquisition card is integrated in the industrial computer and can acquire and set analog signals of 0–10 V. A total of 10 S3R circuits have been designed; the single-channel shunting capability is 5 A/channel, and the bus voltage adjustment range is 20–41 V. The solar array simulator is set to a single-array open-circuit voltage Voc = 36 V, short-circuit current Isc = 1.2 A, maximum power point output voltage Vmp = 34 V and maximum power point output current Imp = 1 A, so the maximum power point power of each array is Pmp = 34 W; a total of 10 arrays are turned on. The electronic load is set to constant-current mode with a total load current of 2 A.


The control mode of the monitoring interface is set to MPPT mode, and Fig. 5 shows the experimental result of the incremental conductance method. It can be seen from the test curve that after MPPT mode is started, Vf quickly moves toward the maximum power point and gradually approaches it after a short adjustment. After about 3 s, a small oscillation forms near 34 V with an amplitude of about 0.25 V. The output current If is about 0.95 A, the tracked maximum power is Pmp = 32.3 W, and the tracking efficiency reaches about 95%.

Fig. 5. Results of incremental conductance method (voltage/V, current/A and output power/W versus time/s)

4 Conclusion

This paper designs a spatial MPPT power system based on the S3MPR topology and the incremental conductance method, and builds a joint Matlab–Saber simulation platform that exploits the strengths of Matlab in control algorithm simulation and of Saber in hardware circuit simulation. The S3MPR circuit and the incremental conductance method were simulated, and the peak power tracking efficiency reached 99%. A semi-physical experimental platform consisting of the S3MPR circuit, an industrial computer, a solar array simulator and an electronic load was then built, and the circuit and algorithm were tested and verified; the peak power tracking efficiency reached 95%. The simulation and experimental results show that the S3MPR circuit works stably and can effectively track the maximum power point of the solar array. The simulation platform and the test platform are reasonable and effective, and can serve as a reference for the design and engineering application of MPPT in subsequent spacecraft power systems.



Far-Field Sources Localization Based on Fourth-Order Cumulants Matrix Reconstruction

Heping Shi1(&), Zhiwei Guan1, Lizhu Zhang1, and Ning Ma2

1 School of Automobile and Transportation, Tianjin University of Technology and Education (TUTE), Tianjin 300222, China
[email protected]
2 School of Electronic Engineering, Tianjin University of Technology and Education (TUTE), Tianjin 300222, China

Abstract. An effective algorithm named the Toeplitz fourth-order cumulants orthonormal propagator rooting method (TFOC-OPRM) is proposed. Firstly, a reduced-dimension fourth-order cumulants (FOC) matrix is obtained by deleting the redundant information contained in the original FOC matrix, and the Toeplitz structure is then regained by utilizing a Toeplitz approximation technique. Finally, the DOAs of the incident source signals are estimated by the proposed orthonormal propagator rooting method (OPRM). The simulation results show that the proposed TFOC-OPRM algorithm can not only reduce the computational complexity significantly, but also achieve satisfying estimation performance.

Keywords: Direction-of-arrival (DOA) · Fourth-order cumulants (FOC) · Polynomial rooting · Toeplitz approximation



1 Introduction

Source localization is a major research issue in passive radar, sonar, medical signal processing, and wireless communications [1–3]. Various high-resolution algorithms [4, 5] have been proposed to estimate the directions-of-arrival (DOAs) of far-field sources. However, these subspace-based algorithms require prior knowledge of the noise. Besides, the total number of sources impinging on the array should be less than the number of sensors [6]. Therefore, the estimation performance of these subspace-based algorithms drops sharply, and may even become invalid, in practical complex environments. Fortunately, high-order cumulants (HOC) have been revealed to be a promising technology, whose advantage is that the noise covariance need not be known or estimated as long as the noise is normally distributed [7, 8]. Moreover, another key reason for choosing HOC is the ability to distinguish a greater number of sources than array elements [9]. Without using EVD or SVD, Marcos and co-workers [10] first proposed an orthonormal propagator method (OPM) that obtains the signal and noise subspaces by a linear-partition operation, which can decrease the computational



complexity effectively. In [11], a novel OPM-based denoising approach was proposed for DOA estimation and achieves satisfactory results. In this paper, a novel TFOC-OPRM algorithm is introduced. Firstly, the improved FOC matrix is obtained to reduce the computational complexity, and then the Toeplitz approximation is invoked to regain the Toeplitz structure of the improved FOC matrix. Compared with the FOC-OPM algorithm, the proposed TFOC-OPRM algorithm not only achieves better estimation performance, but also utilizes the polynomial rooting method instead of a spectral search, which results in much lower computational complexity.

2 Data Model

Consider M narrowband far-field sources $s_i(t)$, $i = 1, \ldots, M$, impinging on a uniform linear array (ULA) with N sensors, where the distance between adjacent sensors equals half the wavelength. Assume that the incoming sources are stationary and mutually independent. The noise is additive white noise that is statistically independent of the sources. Let the first sensor be the reference; then the observed data received at time t at the kth sensor can be expressed as

$$x_k(t) = \sum_{i=1}^{M} a_k(\theta_i)\, s_i(t) + n_k(t), \qquad k = 1, \ldots, N \tag{1}$$

where $s_i(t)$ is the ith source, $n_k(t)$ is the Gaussian noise at the kth sensor, and $a_k(\theta_i)$ is the response of the kth sensor to the ith source:

$$a_k(\theta_i) = \exp\!\left(j 2\pi (d/\lambda)\, k \sin\theta_i\right) \tag{2}$$

where $\lambda$ is the central wavelength and d is the spacing between two adjacent sensors. Assuming that the source signals are zero-mean stationary random processes, the FOC can be defined as

$$
\begin{aligned}
\mathrm{cum}(k_1, k_2, k_3^{*}, k_4^{*}) ={}& E\{x_{k_1}(t) x_{k_2}(t) x_{k_3}^{*}(t) x_{k_4}^{*}(t)\} - E\{x_{k_1}(t) x_{k_3}^{*}(t)\} E\{x_{k_2}(t) x_{k_4}^{*}(t)\} \\
& - E\{x_{k_1}(t) x_{k_4}^{*}(t)\} E\{x_{k_2}(t) x_{k_3}^{*}(t)\} - E\{x_{k_1}(t) x_{k_2}(t)\} E\{x_{k_3}^{*}(t) x_{k_4}^{*}(t)\}
\end{aligned}
\tag{3}
$$

with $k_1, k_2, k_3, k_4 \in [1, \ldots, N]$, where $x_{k_m}$ $(m = 1, 2, 3, 4)$ denotes the observed stochastic process. Apparently, cum(k1, k2, k3*, k4*) takes $N^4$ values as k1, k2, k3, k4 vary. For simplicity, Eq. (3) can be written in matrix form as the cumulant matrix C4, in which cum(k1, k2, k3*, k4*) appears in the [(k1 − 1)N + k2]th row and [(k3 − 1)N + k4]th column of C4.


$$C_4\big[(k_1 - 1)N + k_2,\ (k_3 - 1)N + k_4\big] = \mathrm{cum}(k_1, k_2, k_3^{*}, k_4^{*}), \qquad C_4 = B C_s B^{H} \tag{4}$$

where B and $C_s$ respectively represent the extended array manifold and the FOC matrix of the incident source signals, $B = A \otimes A$, and each column of B is $b(\theta) = a(\theta) \otimes a(\theta)$.
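For concreteness, a sample-average estimate of the N² × N² matrix C4 from L snapshots could be sketched as follows. This is an illustration rather than the authors' code: the function name, the brute-force four-fold loop, and the placement of the conjugates on the third and fourth indices (as in Eq. (3)) are our assumptions.

```python
import numpy as np

def sample_foc_matrix(X):
    """Sample estimate of the N^2 x N^2 fourth-order cumulant matrix C4 of Eqs. (3)-(4).

    X : complex array of shape (N, L) -- N sensors, L snapshots.
    Entry [(k1-1)N+k2, (k3-1)N+k4] (1-based) holds cum(k1, k2, k3*, k4*).
    """
    N, L = X.shape
    Xc = X.conj()
    R_xxc = X @ Xc.T / L                    # E{x_a x_b*}
    R_xx = X @ X.T / L                      # E{x_a x_b}
    C4 = np.zeros((N * N, N * N), dtype=complex)
    for k1 in range(N):
        for k2 in range(N):
            for k3 in range(N):
                for k4 in range(N):
                    m4 = np.mean(X[k1] * X[k2] * Xc[k3] * Xc[k4])
                    C4[k1 * N + k2, k3 * N + k4] = (
                        m4
                        - R_xxc[k1, k3] * R_xxc[k2, k4]
                        - R_xxc[k1, k4] * R_xxc[k2, k3]
                        - R_xx[k1, k2] * np.conj(R_xx[k3, k4])
                    )
    return C4
```

The cost of this direct estimate grows roughly as O(N⁴L), which matches the dominant snapshot-dependent term quoted for the FOC-based algorithms in Sect. 3.2.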

3 The Proposed Algorithms

3.1 The TFOC-OPRM Algorithm

An array of N arbitrary identical omni-directional sensors can be extended to at most $N^2 - N + 1$ virtual sensors, as proven in [12]. In order to eliminate the repetitive elements in C4, a $(2N - 1) \times (2N - 1)$ matrix R4 is defined. The 1st to Nth rows and all kNth (k = 2, …, N) rows of C4 are fetched out in sequence and stored as the 1st to (2N − 1)th rows of the new matrix R4; the same operation is performed on the 1st to Nth and all kNth (k = 2, …, N) columns of C4 to obtain the 1st to (2N − 1)th columns of R4. Similar to Eq. (4), R4 can be expressed as

$$R_4 = D C_s D^{H} \tag{5}$$

where D denotes the extended array manifold without redundancy, and each column of D has the form $d(\theta) = [1, \ldots, z^{2N-2}]^{T}$. Ideally, R4 is a Toeplitz matrix; however, due to the finite number of sampling snapshots, the Toeplitz property is destroyed in practical applications. Therefore, the key is how to recover the Toeplitz structure of the matrix R4. A new Toeplitz matrix $R_{4T}$ is reconstructed from R4 by solving the following optimization problem

$$\min_{R_{4T} \in S_T} \left\| R_{4T} - R_4 \right\| \tag{6}$$

where $S_T$ is the set of Toeplitz matrices, and the entries of the Toeplitz matrix $R_{4T}$ can be written as

$$c_h = (2N - 1 - h + 1)^{-1} \sum_{p=1}^{2N-1-h+1} r_{p(p+h-1)} \tag{7}$$

where the element $r_{p(p+h-1)}$ denotes the entry in the pth row and (p + h − 1)th column of R4, and $h \in [1, \ldots, 2N-1]$. Then $R_{4T}$ can be obtained by the following Toeplitization operator

$$R_{4T} = \mathrm{Toep}(c_1, \ldots, c_{2N-1}) \tag{8}$$

where Toep stands for the Toeplitization operator.
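A compact numerical sketch of these two reconstruction steps — removing the redundant rows/columns of C4 to form R4 (Eq. (5)) and averaging along diagonals to rebuild the Toeplitz structure (Eqs. (7)–(8)) — might look as follows. It is an illustration only: the 0-based index bookkeeping and the assumption that R4T is completed as a Hermitian Toeplitz matrix are ours, not statements from the paper.

```python
import numpy as np
from scipy.linalg import toeplitz

def reduce_foc_matrix(C4, N):
    """Keep rows/columns 1..N and kN (k = 2..N) of C4, giving the (2N-1)x(2N-1) R4 of Eq. (5)."""
    keep = list(range(N)) + [k * N - 1 for k in range(2, N + 1)]   # 0-based indices
    return C4[np.ix_(keep, keep)]

def toeplitz_approximation(R4):
    """Diagonal averaging of Eqs. (7)-(8): c_h averages the (h-1)-th superdiagonal of R4."""
    M = R4.shape[0]                                   # M = 2N - 1
    c = np.array([np.mean(np.diag(R4, k)) for k in range(M)])
    return toeplitz(np.conj(c), c)                    # Hermitian Toeplitz completion (assumed)
```

For the three-element array used in Sect. 4 (N = 3), the kept 1-based rows/columns are {1, 2, 3, 6, 9}, so R4 is 5 × 5.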


Although conventional algorithms can be applied to estimate the DOAs based on $R_{4T}$, the computational burden is much heavier due to the EVD or SVD involved. Here, we apply a rooting-based OPM to reduce the computation. Specifically, setting $z = \exp(j 2\pi (d/\lambda) \sin\theta)$, we have $d = d(z)$, where

$$d(z) = [1, \ldots, z^{2N-2}]^{T} \tag{9}$$

Based on the denominator of the spectral function, the M roots closest to the unit circle can be obtained in the finite-sample case. After obtaining these M roots {z1, …, zi, …, zM}, the DOAs of the sources can be estimated easily.
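The final mapping from the selected roots to angle estimates follows directly from the definition of z; a short sketch (the helper name and default half-wavelength spacing are assumptions, and the root-selection step is omitted):

```python
import numpy as np

def roots_to_doas(selected_roots, d_over_lambda=0.5):
    """Convert the M roots closest to the unit circle into DOA estimates in degrees.

    z_i = exp(j*2*pi*(d/lambda)*sin(theta_i))  =>  theta_i = arcsin(arg(z_i) / (2*pi*d/lambda))
    """
    phases = np.angle(np.asarray(selected_roots))
    return np.degrees(np.arcsin(phases / (2 * np.pi * d_over_lambda)))
```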

3.2 Complexity Analysis

The computational complexity of the FOC-OPM algorithm is $O(9N^4 L + M N^4 + (180/\Delta\theta) N^4)$, while that of the TFOC-OPRM algorithm is $O(9(2N-1)^2 L + 2(2N-1) - 1 + (2(2N-1)^2 - (2N-1)) + 2(2N-1) - 1 + M(2N-1)^2 + MN)$, where L and $\Delta\theta$ denote the number of snapshots and the scanning interval, respectively. From the analysis above, we can see that, compared with the FOC-OPM algorithm, the improved TFOC-OPRM algorithm substitutes polynomial rooting for the spectral search and thereby greatly reduces the computational complexity.

4 Simulation Results

In this section, simulation results are presented to test the performance of the proposed algorithm and the compared algorithm in a spatially-white noise environment. A three-element ULA (N = 3) with $\lambda/2$ spacing is employed. Consider three mutually independent far-field source signals (M = 3) coming from {−45°, 15°, 40°}. The estimated root-mean-square error (RMSE) is used to measure the performance:

$$\mathrm{RMSE} = \sqrt{\frac{1}{500 M} \sum_{i=1}^{500} \sum_{n=1}^{M} \big(\hat{\theta}_n(i) - \theta_n\big)^{2}} \tag{10}$$

where $\hat{\theta}_n(i)$ is the estimate of $\theta_n$ in the ith Monte Carlo trial. In the experiment, the number of snapshots is L = 2000, and the SNR varies from 8 to 24 dB. The RMSE against the input SNR is shown in Fig. 1, where the performance of the FOC-OPM algorithm is also plotted for comparison. It can be observed from Fig. 1 that the RMSEs of the two algorithms decrease monotonically as the SNR increases. Specifically, the proposed TFOC-OPRM algorithm achieves better estimation performance than the FOC-OPM algorithm; furthermore, it has much lower computational complexity than the FOC-OPM algorithm since the polynomial rooting method is involved.
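Equation (10) is simply the square root of the mean squared angular error over all trials and sources; for instance (array shapes assumed for illustration):

```python
import numpy as np

def rmse(estimates, true_doas):
    """Monte Carlo RMSE of Eq. (10).

    estimates -- array of shape (trials, M): estimated DOAs per trial (degrees)
    true_doas -- array of shape (M,): true DOAs (degrees)
    """
    err = np.asarray(estimates) - np.asarray(true_doas)
    return np.sqrt(np.mean(err ** 2))
```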


Fig. 1. RMSEs of the DOAs versus SNR

5 Conclusions

In this paper, a computationally efficient TFOC-OPRM algorithm has been proposed. Thanks to the extended effective array aperture, the proposed TFOC-OPRM algorithm can resolve a number of sources greater than or equal to the number of array elements. Moreover, compared with the FOC-OPM algorithm, the improved TFOC-OPRM algorithm obtains much better estimation performance and has a lower computational burden.

Acknowledgements. This work was supported by the Natural Science Foundation of Tianjin under Grant No. 18JCQNJC01500, by the National Key Research and Development Program of China under Grant No. 2017YFB0102501, by the Artificial Intelligence Science and Technology Support Planning Major Project of Tianjin under Grant No. 17ZXRGGX00070, by the Tianjin Municipal Science and Technology Innovation Platform, Intelligent Transportation Coordination Control Technology Service Platform under Grant No. 16PTGCCX00150, and by the Scientific Research Program of Tianjin Municipal Education Committee under Grant No. JWK1609.

References

1. Krim H, Viberg M (1996) Two decades of array signal processing research: the parametric approach. IEEE Signal Process Mag 13(4):67–94
2. Suleiman W, Parvazi P, Pesavento M, Zoubir AM (2018) Non-coherent direction-of-arrival estimation using partly calibrated arrays. IEEE Trans Signal Process 66(21):5776–5788
3. Zhang D, Zhang Y, Zheng G, Deng B, Feng C, Tang J (2018) Two-dimensional direction of arrival estimation for coprime planar arrays via polynomial root finding technique. IEEE Access 6:19540–19549
4. Schmidt RO (1986) Multiple emitter location and signal parameter estimation. IEEE Trans Antennas Propag 34(3):276–280
5. Roy R, Kailath T (1989) ESPRIT—estimation of signal parameters via rotational invariance techniques. IEEE Trans Acoust Speech Signal Process 37(7):984–995
6. Shan Z, Yum TP (2005) A conjugate augmented approach to direction-of-arrival estimation. IEEE Trans Signal Process 53(11):4104–4109
7. Fan Y, Wang J, Du R, Lv G (2018) Sparse method for direction of arrival estimation using denoised fourth-order cumulants vector. Sensors 18(6):1815
8. Zeng WJ, Li XL, Zhang XD (2009) Direction-of-arrival estimation based on the joint diagonalization structure of multiple fourth-order cumulant matrices. IEEE Signal Process Lett 16(3):164–167
9. Chevalier P, Albera L, Ferreol A, Comon P (2005) On the virtual array concept for higher order array processing. IEEE Trans Signal Process 53(4):1254–1271
10. Marcos S, Marsal A, Benidir M (1995) The propagator method for source bearing estimation. Sig Process 42(2):121–138
11. Sun M, Wang Y, Pan J (2019) Direction of arrival estimation by modified orthogonal propagator method with linear prediction in low SNR scenarios. Sig Process 156:41–45
12. Chevalier P, Ferreol A (1999) On the virtual array concept for the fourth-order direction finding problem. IEEE Trans Signal Process 47(9):2592–2595

ONENET-Based Greenhouse Remote Monitoring and Control System for Greenhouse Environment

Wei-tao Qian, Jiaqi Zhen(&), and Tao-tao Shen

College of Electronic Engineering, Heilongjiang University, Harbin 150080, China
[email protected]

Abstract. In order to realize diversity of transmission protocols for Internet of Things (IoT) cloud platforms and reliable data transmission and storage, this paper proposes an ONENET-based remote monitoring and control system for the greenhouse environment. By designing an STM32 MCU-based acquisition unit for air temperature and humidity, light intensity and soil temperature and humidity, together with a communication gateway, a relay control unit and the ONENET IoT cloud platform, remote monitoring and control of the greenhouse by computer or intelligent mobile terminal is realized. The experimental data show that the system has the advantages of high detection accuracy, simple structure and low cost, and can realize remote monitoring and control of the greenhouse environment.

Keywords: ONENET · STM32 MCU · Greenhouse environment · Remote monitoring and control

1 Introduction

In recent years, the Internet of Things (IoT) has developed rapidly. The agricultural IoT has gradually become a hotspot of agricultural science research at home and abroad. It is one of the core technologies of smart agriculture and has important scientific and practical significance for the development of agricultural informatization [1, 2]. Wang R. et al. designed a smart agricultural system based on ZigBee and Android technology, which realizes terminal applications such as environmental monitoring, data analysis and remote control; the system can process and control the growth process of crops, automatically improve the growth environment, and make agricultural production intelligent [3]. He P., Nan L. Y. et al. designed a greenhouse monitoring system based on the IoT and LabVIEW using ZigBee and GPRS technology, which realizes intelligent detection of the greenhouse environment [4]. Zhu J. C. designed an environmental monitoring system for agricultural greenhouses based on the Internet of Things, realizing real-time remote monitoring and management of environmental information in agricultural greenhouses [5].



Existing remote monitoring and control systems for the greenhouse environment all have shortcomings of one kind or another. Therefore, it is of great practical significance to develop a greenhouse monitoring and control system with accurate information collection, reliable data transmission and intelligent remote control. In the more popular smart-agriculture IoT solutions, most projects rewrite code and build a new IoT cloud platform, or spend huge sums expanding an existing one; differences in operating environment and software lead to low resource utilization, the large and complex hardware facilities require high investment and maintenance costs, and the server configuration is over-dimensioned when idle. In order to solve this situation and promote the leap-forward development of smart agriculture, ONENET, the open cloud platform independently developed by China Mobile for the IoT industry, can be combined with IoT technology to realize a remote monitoring and control system for smart agriculture. This article uses the ONENET cloud platform to associate with the system equipment [6, 7]. The system uses an STM32 as the main controller, combines all sensor sub-modules with the main controller, and then transmits data to the ONENET IoT cloud platform through an ESP8266 Wireless Fidelity (WIFI) module. Mobile users can connect to the ONENET cloud to control the sensors and devices in the module, which is convenient for agricultural managers to query information and manage the greenhouse in real time. The system can automatically control the environmental parameters and relays in the greenhouse, which can effectively reduce the development cost, shorten the development cycle and provide a solution for upgrading existing smart agricultural monitoring systems, so it has important practical significance.

2 System Overall Design

The ONENET-based greenhouse remote environment monitoring and control system avoids the cumbersome steps of designing a traditional IoT cloud platform. Using the ONENET IoT cloud storage technology, the data collected at different times by the sensor devices at different observation points in the greenhouse are displayed, stored and controlled in the cloud. The system can be divided into five parts: the sensor data acquisition module in the greenhouse, the ESP8266 wireless WIFI module, the router, the ONENET cloud platform and the mobile terminal. The sensor data acquisition module mainly includes a main controller, an information acquisition unit and a relay control unit. The information acquisition unit is composed of a DHT22 temperature and humidity sensor, a BH1750 illumination sensor, an MG811 carbon dioxide concentration sensor and a SHT-20X soil temperature and humidity sensor; the main controller STM32 controls the collection of each sensor's data and the switching of the relay. After data processing, the system connects to the router via the ESP8266 wireless WIFI module, accesses the ONENET server through the router, uploads the data, and establishes the webpage of the monitoring and control system on the ONENET cloud platform. The system obtains the data stream of each sensor to monitor the environmental data in the greenhouse, retrieves the real-time environmental data from the IoT cloud server through the Internet, analyzes and processes the acquired data, and finally displays it through intuitive real-time graphs or dashboards; the data packages are stored in the database of the ONENET cloud service platform [8].

3 System Hardware Design

The hardware design of the sensor data acquisition module mainly includes the main controller, the information acquisition unit, the relay control unit, the data transmission unit and the power conversion circuit, which realize the collection, processing and transmission of data such as light intensity, air temperature and humidity, and soil temperature and humidity. By controlling the relay switches, the fill light and the fan in the greenhouse can be switched on and off.

(1) Main controller. The acquisition node uses the STM32F103C8T6 single-chip microcomputer as the control core, which is responsible for acquiring the sensor data, communicating with the slave single-chip microcomputers, and transmitting data to the central monitoring station through the wireless communication module.
(2) Sensor data acquisition module. It mainly includes the illumination module, the air temperature and humidity module, the soil temperature and humidity module and the carbon dioxide concentration module; these sensors can timely and effectively monitor the environmental parameters in the greenhouse.
(3) Relay control module. A transformer converts the 220 V mains voltage into 12 V to supply the relay, and the relay controls the switching of the fill light and the fan in the greenhouse.
(4) Data transmission module. The ESP8266 wireless WIFI transmission module is used for data transmission; through it, the collected data can be uploaded to the ONENET cloud platform in real time, and the returned data are received in real time.
(5) Power conversion circuit. A DC 24 V/1 A power adapter provides external power to the acquisition node, mainly for the relay. An AMS1117 regulator with a 3.3 V output is used for voltage regulation: the STM32 MCU operates at 3.3 V, while other components in the circuit need a 5 V supply, so a voltage regulator is used to ensure the normal operation of the circuit.

4 System Software Design

Based on the above hardware circuit design, the software for each hardware unit is programmed. The hardware is designed with AD software and the firmware is programmed with KEIL, which supports online debugging of the system software and facilitates program migration, optimization and secondary development of the remote monitoring system. The software design mainly consists of four blocks: an initialization module, a sensor reading module, a wireless module and a relay control module. The initialization module is mainly responsible for initializing the internal peripherals of the microcontroller; the sensor reading module is mainly responsible for reading the data from the sensors; the wireless module transmits the data collected by the single-chip microcomputer to the network database; and the relay control module uses the GET method to obtain the server's data feedback, performs the relay switching operation, and sends the relay control commands to the network by the POST method [9]. The working flow chart of the remote monitoring system is shown in Fig. 1.

Fig. 1. Main program flow chart

The HyperText Transfer Protocol (HTTP) is a TCP-based application-layer protocol. IoT devices widely use HTTP to transmit data, and it is easy to deploy on an IoT platform. The protocol only specifies the packet format; the actual data transmission is implemented by TCP/IP, and it can be widely used in industry, logistics and agriculture. This paper uses the ONENET IoT platform as the storage and processing center for the cloud data. As the data relay and coordination center, the ONENET IoT platform is the core of data exchange between the upper management system and the device terminal. The device terminal uploads the collected sensor data to the platform through the Wireless Local Area Network (WLAN) in an HTTP packet format that the platform can recognize, and receives the platform's packet instructions; the upper-layer control system exchanges data with the platform through the APIKEY and the device ID. The system can remotely monitor the air temperature and humidity, soil temperature and humidity, light intensity and carbon dioxide data in real time, and remotely control the fill light and fan in the greenhouse.
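The device firmware itself is written in C for the STM32, so the following host-side Python sketch only illustrates the general shape of such an HTTP exchange. The endpoint URL, the header name and the JSON layout below are placeholders invented for illustration, not the documented ONENET API; the real URL, device ID, API key and datastream names come from the platform console.

```python
import requests

ONENET_URL = "https://onenet.example.com/devices/{device_id}/datapoints"  # placeholder URL
DEVICE_ID = "123456"                                                      # placeholder
API_KEY = "your-api-key"                                                  # placeholder

def upload_datapoints(air_temp, air_humi, soil_temp, soil_humi, lux, co2):
    """Package one round of sensor readings as JSON and POST it to the cloud platform."""
    payload = {
        "datastreams": [
            {"id": "air_temperature",  "datapoints": [{"value": air_temp}]},
            {"id": "air_humidity",     "datapoints": [{"value": air_humi}]},
            {"id": "soil_temperature", "datapoints": [{"value": soil_temp}]},
            {"id": "soil_humidity",    "datapoints": [{"value": soil_humi}]},
            {"id": "illuminance",      "datapoints": [{"value": lux}]},
            {"id": "co2",              "datapoints": [{"value": co2}]},
        ]
    }
    resp = requests.post(
        ONENET_URL.format(device_id=DEVICE_ID),
        json=payload,
        headers={"api-key": API_KEY},  # placeholder header name
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()
```

Relay control works in the opposite direction, as described above: the device obtains the latest command feedback with a GET request and switches the fill light or fan accordingly.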

5 Conclusion

This paper designs the hardware circuit of the terminal, adopts the HTTP transmission protocol, completes the software programming of the hardware circuit, creates the virtual device on the cloud platform, and builds the monitoring system page using the platform's data-visualization tools. The test results show that the ONENET-based greenhouse remote monitoring and control system has the advantages of low power consumption, low cost, high monitoring accuracy and a short development period. The system realizes remote monitoring of the greenhouse temperature and humidity; by adding sensors, more greenhouse environmental quantities can be monitored, and the platform can also be used to issue commands to control the equipment in the greenhouse, achieving remote monitoring and control of the greenhouse.

Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grant 61501176, the Natural Science Foundation of Heilongjiang Province F2018025, the University Nursing Program for Young Scholars with Creative Talents in Heilongjiang Province UNPYSCT-2016017, and the Postdoctoral Scientific Research Developmental Fund of Heilongjiang Province in 2017 LBH-Q17149.

References

1. Hu PJ, Jiang T, Zhao YD (2011) Soil moisture monitoring system based on ZigBee wireless network. Trans Chin Soc Agric Eng 27(4):230–234
2. Li Q (2012) Design and research of greenhouse automatic irrigation monitoring system based on wireless embedded technology. Zhejiang University, Hangzhou
3. Wang R (2019) Design of wisdom agriculture system based on ZigBee and Android technology. Electron Technol Softw Eng 26(6):93–94
4. He P, Nan LY (2016) Design of greenhouse monitoring system based on internet of things and LabVIEW. Chin J Agric Mechanization 37(9):218–222
5. Zhu JC, Zhang Q (2018) Design of environmental monitoring system for agricultural greenhouse based on internet of things. Chin J Agric Mechanization 39(9):76–80
6. Lu LJ (2015) Design and implementation of remote intelligent monitoring system for test greenhouse. University of Science and Technology of China, China
7. Cao FG (2007) Application of remote monitoring system in agricultural greenhouses. Liaoning University of Science and Technology, China
8. Zhou EH (2018) Design and implementation of IoT system in college wisdom classroom. Mod Electron Technol 41(2):30–33
9. Zhang DL, Li X, Dai M et al (2010) Design of low-temperature silkworm room temperature and humidity automatic control system based on DHT11. Mod Agric Sci Technol 26(18):14–15

Design of Multi-Node Wireless Networking System on Lunar

Panpan Zhan1(&), Yating Cao2, Lu Zhang1, Xiaofeng Zhang1, Xiangyu Lin1, and Zhiling Ye1

1 Beijing Institute of Spacecraft System Engineering, Beijing 100094, People's Republic of China
[email protected]
2 Beijing Shenzhou Aerospace Software Technology Co., Ltd., Beijing 100094, People's Republic of China

Abstract. At present, research on lunar communication mainly focuses on the analysis of individual communication nodes and links; there are few studies on multi-device broadband wireless networking and the corresponding protocols. The existing point-to-point communication mechanism poses challenges to multi-device missions on the lunar surface. Facing the increasingly complex mission requirements of lunar exploration and scientific research stations, it is urgent to build an efficient multi-node broadband wireless communication system to ensure the smooth implementation of follow-up lunar exploration missions. This paper analyses and compares current wireless communication protocol technologies, studies foreign lunar protocol schemes, and presents the design of a multi-device broadband wireless network architecture for the lunar surface. It focuses on the design of the networking protocol system, which is compatible with the existing Earth–Moon communication protocol system and supports broadband wireless communication between multiple devices. Through the research in this paper, efficient interconnection of multiple devices can be realized, providing a technical basis for the construction of broadband wireless networking infrastructure on the lunar surface.

Keywords: Lunar · Wireless networking · Multi-node · Protocol architecture · Efficient interconnection

1 Introduction

Deep space exploration is a frontier field in which spacecraft carry out solar system exploration, investigating the major planets and their satellites, asteroids, comets and other celestial bodies [1]. The recently and successfully implemented CE-4 mission achieved the world's first landing exploration on the far side of the moon and successfully established communication between the lander and the lunar rover. Countries all over the world are increasing their investment in deep space exploration. Building on the achievements of the lunar exploration project, our country will further explore the moon, establish a lunar scientific research station, or carry out manned lunar landing missions.


Data transmission between a small number of detectors has been accomplished in China's previous lunar exploration missions, but the data communication capability is insufficient when the number of detectors is large. The main problems are as follows:

(1) The point-to-point communication mode results in insufficient lunar networking capacity. The detectors on the lunar surface are not managed as a network; when a detector needs to communicate with other detectors, it must either add measurement and control equipment or disconnect the current transmission before communicating with the other detectors.
(2) Ground management and control is complex, making management and resource scheduling difficult. Communication and resource management between the devices are complicated and resource coordination is difficult, which also challenges the way the ground segment operates and is not suitable for managing multiple lunar detectors.

Networking of multiple lunar detectors, especially broadband wireless networking of landers, lunar rovers, robots and other detectors, can effectively solve the problems of rational utilization of resources, efficient transmission of information and cooperative processing of tasks in lunar exploration missions. According to the characteristics of lunar transmission, a multi-type information network and a hierarchical protocol system are established in this paper, focusing on the broadband wireless networking and protocol architecture between multiple lunar devices. The goal is to realize dynamic access of lunar detectors while remaining compatible with the traditional, reliable 1553B bus network, so as to support the dynamic access and data routing functions of various lunar detectors in the lunar network and to interconnect with the internal networks of the detectors.

2 Analysis of Wireless Communication Protocol

The U.S. plan for return to the moon was launched by NASA in December 2006 [2]. This plan aims, in the short term, to replace the space shuttle with the Orion spacecraft to achieve manned/cargo transportation between the ground and the International Space Station; in the medium term, to achieve manned landing on the moon and long-term stays by combining the Orion spacecraft with the Altair lander; and in the long term, to achieve the goal of landing on Mars. Figure 1 gives a schematic map of the lunar wireless communication network for the lunar return plan, which mainly uses the UHF, S, C and Ka bands. The UHF band is responsible for reliable communication of voice and data between EVA astronauts, the lunar module and stations, and can also be used for emergency communication on the lunar surface. The S band (TT&C communication) provides highly reliable transmission of voice, health parameters, instructions, and remote control and telemetry data at medium and low rates. The S band (WLAN) provides wireless access for portable devices, physiological parameter acquisition, and equipment location and status information acquisition for EVA astronauts, robots, lunar rovers, etc. NASA therefore plans to choose WLAN as part of its wireless communication protocol suite.


Fig. 1. Wireless communication network on lunar in plan for return to the moon

CCSDS, the Consultative Committee for Space Data Systems [3], has developed a series of standard protocols for deep space exploration applications. NASA's planned protocol system for future lunar exploration missions is shown in Fig. 2, which includes the protocol planning schemes for Earth–Moon links, Earth–Moon relay links and lunar networking links [4, 5]. NASA plans to use a wireless local area network (WLAN) based scheme for the lunar network links.

Fig. 2. Protocol stack of NASA’s future lunar exploration mission


Through the research and analysis of the above wireless communication protocols, the following insights are obtained for the future multi-node networking system on the lunar surface:

(1) The international mainstream space organizations have basically reached a consensus on lunar networking communication and plan to use a lunar LAN networking scheme based on WLAN technology.
(2) At present, international CCSDS recommendations are applied mainly at the data link layer and network layer and have not yet formed a complete deep-space information protocol system, especially for lunar wireless networking. It is therefore necessary to study the lunar wireless network architecture and protocol architecture on the basis of the existing network protocols.

3 Design of Wireless Networking

Based on the above research and analysis, and combined with the needs of the lunar networking task, a scheme combining a UHF-based low-speed MESH network with an 802.11n-based high-speed WLAN is designed. The UHF low-speed MESH network is an upgrade of the UHF point-to-point low-speed communication system that has already been used successfully on the lunar surface; it increases the network communication capability and provides stable and reliable low-speed communication links for lunar networking. The WLAN refers to NASA's standard requirements and provides a high-speed LAN system suitable for medium- and long-distance networking communication on the lunar surface. The design is shown in Fig. 3.

Fig. 3. Design of multi-node wireless networking system on lunar


The multi-node wireless networking system on the lunar surface takes the central controller node as its center: the UHF band is used as the signaling channel and minimum communication link, and the S band is used as the data channel. The UHF band has good channel stability and strong anti-multipath capability, and can realize highly reliable, low-rate networking communication between detectors on the lunar surface. When the initial link is established, the access node of each detector and the central controller node exchange reliable signaling data frames over the UHF band to establish routing links. While the detectors move, the central controller node always acts as the central backbone node, and the other detector nodes can adaptively change the network topology and access either the central controller node or other detector nodes. Once a detector access node and the central controller node have successfully established a link over UHF, the detector can send high-speed traffic data to the central controller node over the WLAN band. Each node can adaptively select its communication frequency and rate according to the current channel quality. Two special application modes are defined.

The collaborative work mode is the mode of data communication between detector nodes when they carry out scientific exploration tasks. The transmitted content includes data that requires cooperation between detector nodes, as well as telemetry and key business data that needs to be broadcast to the other nodes and backed up on multiple detector nodes. This mode mainly uses the UHF band.

The high-speed networking mode is the working mode in which a detector node sends scientific detection data and log data to the central controller node. This mode uses the S band as the data transmission channel: the central controller acts as the central node (base station) and the detector node acts as the access node (terminal). After the data transmission, the network resources are released promptly.

4 Design of Protocol Architecture

In order to design the lunar wireless networking protocol system, the international CCSDS protocol architecture, the Internet protocol architecture and the wireless communication protocol architecture are analyzed and selected; their characteristics are as follows.

(1) The CCSDS protocols adopt a layered architecture and have been widely used in spacecraft all over the world. CCSDS is particularly strong at the data link layer, mainly because of the high bit error rate of space communication channels. CCSDS has already considered compatibility with the Internet, supporting the transmission of the IP protocol over CCSDS data links, which plays a positive role in promoting the subsequent construction of space networks and deep space networks.
(2) Ground network protocols also adopt a layered architecture and have been widely used on the ground; TCP, UDP and IP are widely used in the Internet. Although the CCSDS protocols are relatively complete for space applications, they cannot be used directly to communicate with the ground network and suffer from the problem of protocol conversion. In order to realize interconnection between the satellite network and the ground network, ground network protocols are applied to the satellite network [6].
(3) Wireless Local Area Network (WLAN) technology is mainly used to solve the complicated wiring and maintenance problems of wired LANs. Within the signal coverage area of a WLAN, access points are not constrained by geographical location. The IEEE Standards Committee has issued the 802.11 series of standards, which has become the formal WLAN standard.

Combining the space communication protocols defined by CCSDS with IP from the Internet protocol architecture as an optional protocol at the network layer, the integrated ground–lunar network communication protocol architecture shown in Fig. 4 is established. The link layer can adopt the existing AOS [3] system or extend the 802.11 wireless communication protocol; 802.11 can be used at the link layer for lunar wireless network communication. Since CCSDS does not define the data format within the packet at the application support layer, the Packet Utilization Standard (PUS) [7] can be applied to packet telemetry and telecommand applications to standardize the data format within the packet. The DTN protocol will be used in the future.

Fig. 4. Communication protocol architecture of lunar-ground network


(1) Physical layer. At the physical layer, the inter-detector communication protocols include the radio frequency and modulation systems and the Proximity-1 physical layer; the onboard interface [8] communication protocols include 1553B, SpaceWire and so on.
(2) Data link layer. The data link layer is responsible for data transmission between directly connected nodes on physical links. At this layer, the inter-detector communication adopts the TC space data link protocol, the Advanced Orbiting Systems (AOS) protocol, the Proximity-1 space link protocol, and the Proximity-1 coding and synchronization sub-layer protocol defined by CCSDS. This layer is compatible with the existing lunar–ground communication protocol system and can support broadband wireless communication between multiple detectors on the lunar surface: the TC and AOS protocols are still used for lunar–ground communication, while the 802.11 wireless communication protocol can be used as the link-layer protocol for broadband wireless networking on the lunar surface.
(3) Network layer. The network layer is based on IP and provides unified addressing and routing services for upper-layer users in the network [9]; it can be used as an optional layer. It includes the IP protocol, the encapsulation service and the IP over CCSDS protocol. IP packets are encapsulated by IP over CCSDS with an IPE header added, and can then be transmitted over inter-detector links through the encapsulation service.
(4) Transport layer. The transport layer provides end-to-end data transmission services for upper-layer users. If IP is applied at the network layer, TCP and UDP can be used directly at the transport layer; the packets generated by TCP and UDP are passed directly to the IP processing at the network layer. There is then no difference from the ground network, which facilitates efficient integration with it. If delay/disruption tolerant networking (DTN) requirements are to be met, the LTP protocol can be used at this layer.
(5) Application layer. The application layer mainly concerns the spacecraft platform and payload applications. The protocol system provides standard services for lunar–ground operation and inter-detector operation by applying the PUS protocol and the SOIS application support layer services. The PUS standard is defined by the European Cooperation for Space Standardization (ECSS); it describes in detail how the ground uses these services to standardize operations, and defines the data formats of service requests (telecommand packets) and service reports (telemetry packets). If the delay/disruption tolerant networking requirement is to be met, the BP protocol can be used at this layer.

A packet-level sketch illustrating the space packet encapsulation shared by these layers is given below.
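As a hedged illustration of how application data is wrapped before being handed to the link layer (AOS on the lunar–ground link or 802.11 between detectors), the following Python sketch builds a CCSDS Space Packet primary header. It is a generic example of the Space Packet Protocol, not the mission's actual implementation; the APID, sequence count and payload bytes are invented.

```python
import struct

def space_packet(apid, seq_count, payload, packet_type=0, sec_hdr=0, seq_flags=0b11):
    """Build a CCSDS Space Packet: 6-byte primary header followed by the user data field.

    packet_type: 0 = telemetry, 1 = telecommand
    seq_flags:   0b11 marks an unsegmented user data unit
    The Packet Data Length field carries (length of the packet data field) - 1.
    """
    if not payload:
        raise ValueError("the packet data field must contain at least one byte")
    word1 = (0 << 13) | (packet_type << 12) | (sec_hdr << 11) | (apid & 0x7FF)
    word2 = (seq_flags << 14) | (seq_count & 0x3FFF)
    word3 = len(payload) - 1
    return struct.pack(">HHH", word1, word2, word3) + bytes(payload)

# Hypothetical example: a PUS-style report from a rover, handed to the link layer,
# which would then wrap it in an AOS frame (lunar-ground link) or an 802.11 frame
# (inter-detector link).
pkt = space_packet(apid=0x1AB, seq_count=42, payload=b"\x03\x19" + b"housekeeping-sample")
print(len(pkt), pkt[:6].hex())
```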


Access Node

Lunar Lander

Lunar Rover

Application Layer

PUS

PUS

Transport Layer

UDP

UDP

UDP

Network Layer

Space Packet Protocol or IP

Space Packet Protocol or IP

Space Packet Protocol or IP

Data Link Layer

AOS

WLAN 802.11

WLAN 802.11

Radio Frequency and Modulation Systems

Radio Frequency and Modulation Systems

Radio Frequency and Modulation Systems

Physical Layer

Ground

Fig. 5. Configuration of communication protocol between ground, lunar central controller and lunar access nodes

The protocol selection and tailoring can be carried out in the above protocol architecture. The typical protocol configuration for communication between ground, lunar central controller (lunar lander) and lunar access node (lunar rover) is shown in Fig. 5.

5 Conclusion

According to the development trend of lunar exploration, after unmanned lunar exploration some countries will carry out manned lunar exploration and lunar base construction. As one of the core objectives of lunar exploration, a lunar base supports long-term survival on the moon and provides support for engineering and scientific missions, with a flexible, maintainable and scalable infrastructure. Through the design of the multi-node wireless networking system presented here, the basic lunar broadband wireless networking infrastructure is constructed and the wireless networking technology and multi-node network protocols are studied, so that a unified lunar network can be formed among multiple detectors. This technology provides the network foundation for subsequent lunar base missions.

References

1. Hongshuo H, Jie C (2008) 21st century foreign deep space exploration development plans and their progresses. Spacecraft Eng 17(3):1–22
2. NASA (2006) 2006 NASA strategic plan. NASA 1:1–48
3. Hongshuo H, Jing L (2009) NASA's exploration technology development program. Aerosp China 8:34–39
4. Yanchun T (2006) US plan for return to the moon. Missiles Space Veh 2:27–31
5. CCSDS (2006) AOS space data link protocol, CCSDS 732.0-B-2, recommended standard, issue 2. Washington DC, USA
6. CCSDS (2005) Spacecraft onboard interface services—concepts and rationale, CCSDS 830.0-G-0.4, Green Book. Washington DC, USA
7. CCSDS (2013) 850.0-G-2 spacecraft onboard interface services. Washington DC, USA
8. European Cooperation for Space Standardization (2003) Space engineering: ground systems and operations—telemetry and telecommand packet utilization. ECSS-E-70-41A
9. Panpan Z, Xiongwen H, Zhigang L et al (2019) Research for data communications based on IPv6 in integrated space-ground network. Wireless Satell Syst 380–391

Algorithm Improvement of Pedestrians' Red-Light Running Snapshot System Based on Image Recognition

Zhiqiang Wang(&), Xiaodong Sun, Xiaoxu Zhang, Ti Han, and Fei Gao

School of Intelligence and Electronic Engineering, Dalian Neusoft University of Information, Dalian 116023, China
[email protected]

Abstract. Pedestrian red-light running snapshot systems are increasingly applied in traffic management. Snapshot technology based on face image recognition is widely used, but it suffers from defects such as a low capture success rate and a small number of simultaneously tracked targets. To improve the capture algorithm based on face recognition, an MHT tracking algorithm based on upper-body recognition is introduced to improve pedestrian tracking accuracy and system load performance, so that the tracking success rate is increased to 85% and the number of simultaneously tracked pedestrians reaches 25. At the same time, the image quality is judged by the image Laplacian variance, face size and angle, and the face capture algorithm is optimized to improve the quality of the captured face close-ups and the availability of the system.

Keywords: Human tracking · Face recognition · Image quality · Red-light running snapshot

1 Introduction

With the rapid development of China's economy and the increasing level of public civility, the government has paid special attention to the governance of pedestrians running red lights. In recent years, to cope with such behavior, high-tech means such as face recognition have been innovatively applied to the management of pedestrian red-light running, achieving significant social impact and governance effect. Along with the rapid development of artificial intelligence technology, various types of pedestrian red-light snapshot equipment are gradually moving beyond gimmicks toward the practical goal of precise and defect-free operation.

Most existing pedestrian recognition and snapshot techniques are based solely on face recognition. In reality, however, pedestrians running through red lights often look sideways to check for cars, which causes the face recognition algorithm to fail and the track to be lost, so the capture success rate is generally low [1, 2]. Even if the capture succeeds, because the pedestrians are in motion, the captured face images often suffer from blurring or a poor viewing angle. On the other hand, because the face target is small, the amount of computation in image detection is very large; especially in real-time processing of video images, the system load increases sharply as the number of faces increases, so existing systems often place a severe limit on the number of pedestrians that can be handled.

2 Algorithmic Improvement Analysis

In view of the many problems in the practical application of pedestrian red-light snapshot systems, this paper makes innovative designs and improvements to the application algorithm to make up for the shortcomings of pedestrian tracking and snapshot algorithms based purely on face recognition. Firstly, the human body tracking algorithm selects the larger target of the upper body for detection and tracking; in the image, the upper body covers about 9 times as many pixels as the face, so the input can be downsampled or the scanning step of the convolutional network increased to improve efficiency. Secondly, a tracking algorithm based on Multiple Hypothesis Tracking (MHT) [3] improves pedestrian tracking and reduces the rate of tracking failure in crowded scenes, thereby avoiding repeated snapshots of the same person. Thirdly, an image quality discrimination algorithm is added to the face recognition and snapshot algorithm so that the quality of the snapshot image can be continuously optimized while the pedestrian is moving, improving the accuracy of pedestrian face matching.

3 Pedestrian Tracking and Snapping Process Design

In order to meet the evidence requirements for traffic violations, this paper captures a total of four pictures of a pedestrian running a red light: a panoramic picture before running the red light, one at the start of running the red light, one at the end of running the red light, and a clear close-up picture of the face.

Fig. 1. Recognition region partition map


As shown in Fig. 1, in order to define the starting and ending positions of running a red light, the image is divided into three regions. When a pedestrian moves from region ① to region ③ during the red-light period, three photos are captured in succession. When the pedestrian moves in regions ② and ③, face recognition processing is turned on and a higher-quality face close-up is captured, which facilitates illegal-behavior warnings and subsequent identification by face recognition. The flow chart of the red-light running behavior discrimination algorithm is shown in Fig. 2.

Fig. 2. Identification flow chart

When a pedestrian enters the image, the tracking algorithm starts to work. When the system receives the state indicating that the signal light has turned red, the location information is first obtained from the pedestrian tracking sub-module; if the pedestrian has just entered the image, a pedestrian capture record is created. If the pedestrian position appears in region ①, it is further determined whether the first picture has been captured, and if not, the first picture is taken. If the pedestrian position appears in region ② and the first picture has been captured but the second has not, the second picture is captured. If the pedestrian position appears in region ③, and the first and second pictures have been captured but the third has not, the third picture is captured. When the pedestrian position appears in region ② or ③, the face image quality discrimination algorithm evaluates the face image quality and a close-up photo that satisfies the quality requirements is captured. If the pedestrian leaves the image region and all four pictures have been captured, the pedestrian tracking information is saved and the illegal-behavior forensics for that pedestrian is completed. Otherwise, the evidence is insufficient, the information of the pedestrian is discarded, and the identification of the pedestrian is ended.
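To make the capture bookkeeping described above concrete, the following is a minimal Python sketch of the per-pedestrian evidence logic; the region indices, the `face_quality` helper and the quality threshold are illustrative assumptions rather than the paper's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class CaptureRecord:
    """Per-pedestrian evidence collected while the light is red."""
    panoramas: dict = field(default_factory=dict)   # region index -> saved frame
    best_face: object = None
    best_score: float = float("-inf")

def update_capture(record, region, frame, face_img, face_quality, threshold=0.5):
    """One tracking update: save panoramas per region and keep the best face close-up.

    `region` is 1, 2 or 3 (the zones of Fig. 1); `face_quality` is any scoring
    function, e.g. one built around formula (5).
    """
    if region in (1, 2, 3) and region not in record.panoramas:
        record.panoramas[region] = frame            # 1st / 2nd / 3rd panorama
    if region in (2, 3) and face_img is not None:
        score = face_quality(face_img)
        if score >= threshold and score > record.best_score:
            record.best_score, record.best_face = score, face_img

def finish(record):
    """Keep the evidence only if all four photos exist, as required by the flow chart."""
    complete = len(record.panoramas) == 3 and record.best_face is not None
    return record if complete else None
```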

4 Pedestrian Tracking Algorithm The pedestrian upper body detection and tracking algorithm in the above flowchart is designed around the MHT multi-hypothesis tracking algorithm. This method was proposed by Reid in 1979 and has gradually been optimized for engineering applications; improved m-best optimal MHT techniques have been proposed to reduce the computational load [4, 5]. MHT establishes several candidate hypotheses and makes delayed association judgments based on subsequently detected data. Its implementation framework is shown in Fig. 3.

Fig. 3. MHT basic implementation process (new human position detection → data association → hypothesis formation and pruning → optimization → generation of human tracking trajectories)

For the new-observation part, a third-party upper body detection algorithm is used to detect upper body features against complex backgrounds and obtain the upper body contour position. Compared with face recognition and tracking, the upper body is a larger target, is less easily occluded and does not depend on the viewing angle, so target detection is more accurate. Because pedestrians move slowly and usually move in one direction while crossing the road, this paper adopts the nearest neighbor (NN) method for data association [6, 7]. The idea of this method is to set correlation gates to limit the number of potential decisions; when there are multiple data points inside a correlation gate, the point whose correlation measure is closest to the predicted value is used. Considering that the running speed is generally below 5 m/s, the distance moved within one frame interval is about 33 cm, which is close to the width of a human body. Therefore, the correlation gate is set to one human body width, and multiple data points inside the gate are associated by minimizing the ratio of the squared displacement to the squared body width according to formula (1).


$$\min\left(\frac{\Delta X^{2} + \Delta Y^{2}}{W^{2}}\right) \tag{1}$$
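As an illustration only, a minimal NumPy sketch of this gated nearest-neighbor association is given below; the detection format (centre x, centre y, body width) and the gate of one body width are assumptions taken from the description above, not the paper's exact interface.

```python
import numpy as np

def nn_associate(predicted, detections):
    """Gated nearest-neighbor association per formula (1).

    predicted:  (x, y, w) predicted box centre and body width of one track.
    detections: array of shape (N, 3) with rows (x, y, w) for the current frame.
    Returns the index of the associated detection, or None if nothing falls
    inside the one-body-width correlation gate.
    """
    px, py, pw = predicted
    dx = detections[:, 0] - px
    dy = detections[:, 1] - py
    cost = (dx**2 + dy**2) / pw**2        # squared displacement over squared width
    inside_gate = cost <= 1.0             # gate radius = one body width
    if not np.any(inside_gate):
        return None
    candidates = np.where(inside_gate)[0]
    return candidates[np.argmin(cost[candidates])]
```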

Furthermore, to deal with the Gaussian noise in the measured position and size of the human body, this paper uses the Kalman filtering algorithm [8, 9] to smooth the trajectory data and eliminate jitter in the pedestrian trajectory. The Kalman algorithm estimates the current state according to the optimal probability, based on the previous state prediction and the current measurement. The formula for the optimal estimate in the current state is shown in Formula (2), where X(k|k) is the optimal estimate in state k, Z(k) is the measurement in state k, H is the system parameter, and Kg(k) is the Kalman gain. Kg(k) should in principle be computed from the covariance, as shown in formula (3); to simplify the calculation, this paper instead computes the ratio of the squared displacement to the squared body width in state k − 1 and takes 0.5 as the optimum value of the measured system parameter H′, as shown in formula (4).

$$X(k|k) = X(k|k-1) + K_g(k)\bigl(Z(k) - H\,X(k|k-1)\bigr) \tag{2}$$

$$K_g(k) = \frac{P(k|k-1)\,H'}{H\,P(k|k-1)\,H' + R} \tag{3}$$

$$K_g(k) = \frac{H'\bigl(\Delta X(k)^{2} + \Delta Y(k)^{2}\bigr)}{W(k-1)^{2}} \tag{4}$$
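The short Python sketch below shows one way the simplified gain of formulas (2) and (4) can be applied to smooth a track; the state layout (x, y per frame), H = 1 and H′ = 0.5 follow the description above, while the clamping of the gain and the helper name are sketch-level assumptions.

```python
import numpy as np

def smooth_track(measurements, widths, h_prime=0.5):
    """Smooth a pedestrian trajectory with the simplified gain of formulas (2) and (4).

    measurements: array (K, 2) of measured (x, y) centres Z(k), one row per frame.
    widths:       array (K,)  of measured body widths W(k).
    The prediction X(k|k-1) is taken as the previous smoothed state (H = 1).
    """
    meas = np.asarray(measurements, dtype=float)
    widths = np.asarray(widths, dtype=float)
    est = meas.copy()
    for k in range(1, len(meas)):
        pred = est[k - 1]                                   # X(k|k-1)
        innovation = meas[k] - pred                         # Z(k) - H X(k|k-1)
        gain = h_prime * np.sum(innovation**2) / widths[k - 1]**2   # formula (4)
        gain = min(gain, 1.0)                               # clamp for stability (assumption)
        est[k] = pred + gain * innovation                   # formula (2)
    return est
```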

5 Face Image Quality Discrimination Algorithms When judging face image quality, not only the image sharpness but also the face size and angle should be considered, since all three factors affect the result of face matching. There are many mature no-reference image sharpness measures [10], such as gradient-function methods, transform-domain methods and entropy-function methods; this paper uses the Laplacian gradient function. Firstly, based on the upper body detection result, the face recognition library is used to detect the face position and crop the face close-up image. Then the Laplacian variance is used to judge image sharpness. Next, the face size and angle information is obtained from face recognition, and the face image quality score is obtained by a weighted combination of sharpness, face size and face angle. As shown in formula (5), V_laplacian is the Laplacian variance, V_size is the face image size, V_(pitch+yaw) is the face angle (comprising the up-down pitch and left-right yaw angles), and α, β and θ are weighting parameters. The larger the Laplacian variance and size values and the smaller the angle value, the better the image quality.

$$V_{quality} = \alpha\,V_{laplacian} + \beta\,V_{size} - \theta\,V_{(pitch+yaw)} \tag{5}$$
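A minimal OpenCV/NumPy sketch of this quality score is shown below; the weight values and the size normalisation are illustrative assumptions, not the paper's actual parameters.

```python
import cv2

def face_quality(face_bgr, pitch_deg, yaw_deg,
                 alpha=1.0, beta=0.5, theta=0.02):
    """Score a face crop following formula (5): sharper and larger is better,
    larger pitch/yaw angles reduce the score. Weights are illustrative only."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    v_laplacian = cv2.Laplacian(gray, cv2.CV_64F).var()    # sharpness term
    v_size = gray.shape[0] * gray.shape[1] / 1e4           # face area (normalised)
    v_angle = abs(pitch_deg) + abs(yaw_deg)                # pitch + yaw magnitude
    return alpha * v_laplacian + beta * v_size - theta * v_angle

# Example: keep the best close-up among candidate crops
# best = max(candidates, key=lambda c: face_quality(c.img, c.pitch, c.yaw))
```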


6 System Testing Software development is based on the Visual Studio C# environment on Windows. A multi-threaded design implements the user interface, image acquisition, tracking and snapping algorithms and other functions. The upper body recognition and face recognition algorithms are realized by calling third-party C++ dynamic link libraries from C#. The resulting system interface is shown in Fig. 4. The upper body tracking algorithm tracks the pedestrians in the image; when a pedestrian crosses the first warning line, the face capture algorithm is activated and the capture results are displayed in the list on the right. When the pedestrian crosses the second warning line, the capture ends. The position of the red warning lines is adjustable, and three capture regions can be set.

Fig. 4. System implementation interface

Compared with face recognition, tracking based on upper body recognition is clearly improved. Three functions are tested and analyzed below. Pedestrian Tracking Test The MHT tracking algorithm achieves a successful tracking rate of more than 85% when the pedestrians' upper bodies are not occluded, while the success rate of the same tracking algorithm based on face recognition is only about 65%. Besides the fact that the face angle easily causes face recognition to fail, another important factor is the improvement in detection speed: upper body detection is about four times faster than face detection, which improves the accuracy of NN data association and therefore the tracking accuracy.

Table 1. Pedestrian tracking test results comparison

Tracking mode          | Accuracy of occlusion-free tracking (%) | Max simultaneous tracking number | Frame rate when tracking 5 people
Upper body recognition | 85                                      | 25 people                        | 20 frames/sec
Face recognition       | 65                                      | 10 people                        | 6 frames/sec

In addition, the maximum number of pedestrians that can be tracked with the two methods was tested, as was the frame rate when tracking 5 people, as shown in Table 1. Face Capture Test For capturing the faces of offending pedestrians, the face image quality discrimination algorithm designed in this paper is more effective than capturing directly or performing a simple angle check. On the one hand, the captured face angle is better and the pixel count and sharpness are higher; on the other hand, since capture is limited to regions ② and ③, the capture processing is more efficient and the overall performance of the system improves. System Load Test As shown in Fig. 5, the relationship between the number of tracked pedestrians and the processor load was tested. The data confirm that CPU usage increases roughly linearly with the number of tracked pedestrians. Total memory usage also shows an overall upward trend, but since memory usage depends strongly on the runtime state, it exhibits considerable randomness. When the number of pedestrians reaches 25, the CPU usage reaches its limit.

Fig. 5. Load testing of system running (x-axis: number of pedestrians, 1–25; left y-axis: CPU utilization rate; right y-axis: RAM usage (MB))


If the maximum number of tracked pedestrians needs to be increased, either the CPU performance can be raised or the per-pedestrian tracking computation can be optimized; memory usage does not become a system bottleneck.

7 Conclusion This paper combines a human upper body detection library, the MHT tracking algorithm and an image quality assessment algorithm to improve the pedestrian red-light capture system. The main improvements are as follows:
(1) Introducing MHT tracking based on upper body detection improves the pedestrian tracking effect, avoids the tendency of face-based tracking to lose targets, and raises the tracking success rate to 85%.
(2) Through upper body detection and a multi-threaded concurrent design, the system load capacity is improved, and the number of simultaneously trackable pedestrians is increased to 25.
(3) A comprehensive face image quality judgment algorithm combining the Laplacian variance of the image with the face size and angle is proposed, improving the quality of the face captures.
The algorithm optimizations in this paper make the performance of the pedestrian red-light snapping system better satisfy the requirements of practical application, although there is still room for improvement in accuracy and algorithmic efficiency.

References
1. Wei Y, Wan X, Xu H, Shen B (2018) Design and implementation of pedestrians' red-light running evidence-collection system based on face tracking and recognition. Modern Electron Tech 41(19):36–39
2. Cao L (2015) Technology and application of face recognition and human action recognition. Electronic Industry Press
3. Shao J (2010) Multiple target tracking based on MHT. J Shanghai Univ Electr Power 26(01):82–85
4. Peng D, Shi Y (2011) Two improved m-best multiple hypothesis tracking algorithms. Fire Control Command Control 36(5):8–12
5. Wu H, Lv W (2014) Application of MHT arithmetic based on LAP on multi-target tracking. Mod Defense Technol 42(1):77–83
6. Li X, Bar-Shalom Y (1996) Tracking in clutter with nearest neighbor filters: analysis and performance. IEEE Trans Aerosp Electron Syst 32(3):995–1010
7. Zhang L, Wang YF (2018) Multi-target tracking data association algorithm based on greedy strategy. J Sichuan Univ Nat Sci Edn 55(1):56–60
8. Forsyth DA, Ponce J (2002) Computer vision: a modern approach. Pearson Education, pp 534–549
9. Lou TS, Yang N, Wang Y, Chen NH (2018) Target tracking based on incremental center differential Kalman filter with uncompensated biases. IEEE Access 6:66285–66292
10. Li Z, Li X, Ma L, Hu Y, Tang L (2011) Research of definition assessment based on no-reference digital image quality. Remote Sens Technol Appl 26(2):239–246

A Datacube Reconstruction Method for Snapshot Image Mapping Spectrometer Xiaoming Ding1,2 and Cheng Wang1,2(&) 1

Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin 300387, China [email protected], [email protected] 2 Department of Artificial Intelligence, College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin 300387, China

Abstract. The snapshot image mapping spectrometer (IMS) can acquire the datacube of a target in real time, and has the advantages of high light throughput, high temporal resolution, compact structure, etc. This paper proposes a datacube reconstruction method for the IMS, which is based on the geometric model of the IMS. The simulation results prove that the method is effective and efficient. Keywords: Snapshot imaging spectrometer

· Datacube · Reconstruction

1 Introduction An imaging spectrometer can obtain the two-dimensional (2D) spatial distribution and one-dimensional (1D) spectral intensity (together called a three-dimensional (3D) datacube) of a target, and has been used in many fields in recent decades, such as remote sensing [1], security [2], biomedicine [3] and environment monitoring [4]. The snapshot imaging spectrometer (SIS) can acquire the datacube in a single exposure, so snapshot instruments can observe dynamic objects in real time without any scanning component. The image mapping spectrometer (IMS) is a kind of SIS with the advantages of high optical throughput, high signal-to-noise ratio (SNR) and simple post-processing for datacube reconstruction [5]. In this paper, we propose a reconstruction method for the IMS based on its geometric model, and the simulation results prove that the method is efficient for data reconstruction.

2 General Principle of IMS The system structure is shown in Fig. 1. The fore optics (aperture stop and L1) transfer the light from the target to the image mapper, which is the key element of the system. It is composed of hundreds of strip mirrors with 2-D tilt angles. The mirrors are arranged in periods and reflect the light in different directions, simultaneously slicing the input image into different pieces. The collimating lens (L2) collects the light and forms a pupil array on

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 1727–1736, 2020 https://doi.org/10.1007/978-981-13-9409-6_208


the back focal plane. The prism disperses the light from the different sub-pupils, and then the re-imaging lens (L3) images the dispersed light onto the detector, where the spectral imaging data are measured.

Fig. 1. The structure of the IMS

The coordinates of each optical plane are noted in Fig. 1. All the optical elements are assumed to be ideal, which means that light propagation inside each element is not considered. The aperture stop of the system is placed at the front focal plane of L1 to ensure telecentricity in image space. The tilt angle between the image mapper and the primary image plane is denoted by θ. The parameters of the entire system are shown in Table 1.


Table 1. The parameters of IMS

Fore optics
  Diameter of aperture stop                              D_ape
  Focal length of L1                                     f_1
  Distance between object and aperture stop              z_1
  Distance between L1 and primary image                  z_2
Image mapper
  Length (y_i), width (x_i)                              l, w
  Width of single facets                                 b
  Number of mirrors in each block                        M
  Number of the blocks                                   N
  Number of the whole mirrors                            M_s
  Width of each block                                    c
  Tilt angle of each mirror                              (α_{m,n}, β_{m,n}), m = 1, …, M; n = 1, …, N
  Angle between the image mapper and primary image       θ
Imaging spectrometer
  Focal length of L2                                     f_2
  Diameter of each sub-pupil                             d_sub
  Wedge angle of prism                                   A
  Refractive index of prism                              n
  Focal length of L3                                     f_3
  Diameter of L3                                         D_L3
Detector
  Size of each pixel                                     d_pixel
  Number of pixels                                       M_d × N_d
  Duty ratio of pixel                                    1

3 Geometric Model of IMS The geometric model is derived to establish a relationship between the coordinates of the object plane and those of the detector plane, so that a remapping algorithm can be used to reconstruct the datacube. The relationship between the object plane and the image mapper plane is given by,

$$\begin{bmatrix} x_i \\ y_i \end{bmatrix} = \begin{bmatrix} -\dfrac{f_1}{z_1\cos\theta} & 0 \\ 0 & -\dfrac{f_1}{z_1} \end{bmatrix} \begin{bmatrix} x_o \\ y_o \end{bmatrix} \tag{1}$$

The 2D tilt angle of a single mirror facet on the image mapper is noted by (α_{m,n}, β_{m,n}), which is illustrated in Fig. 2.


Fig. 2. The tilt angle of a mirror facet

Taking into account the tilt angle between the normal of the image mapper and the optical axis of L2, the direction of the light reflected from the image mapper is denoted by (φ_{m,n}, γ_{m,n}), as shown in Fig. 3.

$$\begin{cases} \varphi_{m,n} = \arcsin\bigl[2\sin\alpha_{m,n}\cos(\beta_{m,n}+\theta)\cos\alpha_{m,n}\bigr] \\[4pt] \gamma_{m,n} = \arccos\dfrac{2\cos^{2}(\beta_{m,n}+\theta)\cos^{2}\alpha_{m,n}-1}{\cos\varphi_{m,n}} \end{cases} \tag{2}$$

The layout of the imaging spectrometer for a single sub-pupil is shown in Fig. 4. As the system uses a single prism to disperse the light, the angle between the optical axes of L2 and L3 is given as,


Fig. 3. The reflected light from image mapper

Fig. 4. The layout of the imaging spectrometer


$$\delta_o = 2\arcsin\left(n_o\sin\frac{A}{2}\right) - A \tag{3}$$

where λ_o is the central wavelength and n_o is the refractive index of the prism at λ_o. The normal direction of Facet #1 in Fig. 4 is,

$$\vec{N}_1 = \left(\sin\frac{\delta_o + A}{2},\; 0,\; \cos\frac{\delta_o + A}{2}\right) \tag{4}$$

where A is the wedge angle of the prism. The normal direction of Facet #2 is,

$$\vec{N}_2 = \left(-\sin\frac{A - \delta_o}{2},\; 0,\; \cos\frac{A - \delta_o}{2}\right) \tag{5}$$

According to the geometric theory, the vector S3 from the prism is given by,

$$\vec{S}_3 = \frac{n_\lambda}{n_o}\left[\vec{S}_2 - \bigl(\vec{S}_2\cdot\vec{N}_2\bigr)\vec{N}_2\right] + \vec{N}_2\sqrt{1 - \left(\frac{n_\lambda}{n_o}\right)^{2} + \left(\frac{n_\lambda}{n_o}\right)^{2}\bigl(\vec{S}_2\cdot\vec{N}_2\bigr)^{2}} \tag{6}$$

where the vector S_2 propagating inside the prism is,

$$\vec{S}_2 = \frac{n_o}{n_\lambda}\left[\vec{S}_1 - \bigl(\vec{S}_1\cdot\vec{N}_1\bigr)\vec{N}_1\right] + \vec{N}_1\sqrt{1 - \left(\frac{n_o}{n_\lambda}\right)^{2} + \left(\frac{n_o}{n_\lambda}\right)^{2}\bigl(\vec{S}_1\cdot\vec{N}_1\bigr)^{2}} \tag{7}$$

The vector S_1 is the direction of the ray propagating into the prism, which is given as,

$$\vec{S}_1 = \left(\frac{x_i''}{\sqrt{x_i''^2 + y_i''^2 + f_2^2}},\; \frac{y_i''}{\sqrt{x_i''^2 + y_i''^2 + f_2^2}},\; \frac{f_2}{\sqrt{x_i''^2 + y_i''^2 + f_2^2}}\right) \tag{8}$$

where,

$$x_i'' = x_i\cos\theta + x_i\sin\theta\tan\gamma_{m,n},\qquad y_i'' = y_i + x_i\sin\theta\tan\varphi_{m,n} \tag{9}$$

Writing S_3 = (e_1, e_2, e_3), the image point on the detector is given by,

$$\begin{cases} x_d = f_2\tan\varphi_{m,n} + \dfrac{e_1}{\sqrt{1 - e_1^2 - e_2^2}}\,f_3 \\[8pt] y_d = f_2\tan\gamma_{m,n} + \dfrac{e_2}{\sqrt{1 - e_1^2 - e_2^2}}\,f_3 \end{cases} \tag{10}$$

The Eq. (10) establishes the relationship between the coordinate of the object and that of the detector.
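To illustrate how Eqs. (3)–(10) can be chained into a forward mapping for the remapping-based reconstruction, a small Python sketch is given below. It is only a sketch of the stated geometry: the dispersion model n(λ), the facet-angle inputs and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def prism_normals(delta_o, A):
    """Facet normals of the prism, Eqs. (4)-(5); angles in radians."""
    n1 = np.array([np.sin((delta_o + A) / 2), 0.0, np.cos((delta_o + A) / 2)])
    n2 = np.array([-np.sin((A - delta_o) / 2), 0.0, np.cos((A - delta_o) / 2)])
    return n1, n2

def refract(s, n, ratio):
    """Vector refraction law used in Eqs. (6)-(7); `ratio` is n_in / n_out."""
    cos_i = np.dot(s, n)
    return ratio * (s - cos_i * n) + n * np.sqrt(1 - ratio**2 + ratio**2 * cos_i**2)

def map_to_detector(xi, yi, phi, gamma, theta, A, n_o, n_lambda, f2, f3):
    """Forward mapping from a point (xi, yi) on the image mapper, reflected into
    direction (phi, gamma), to the detector coordinates (xd, yd), per Eqs. (3)-(10)."""
    delta_o = 2 * np.arcsin(n_o * np.sin(A / 2)) - A                 # Eq. (3)
    N1, N2 = prism_normals(delta_o, A)                               # Eqs. (4)-(5)
    xi2 = xi * np.cos(theta) + xi * np.sin(theta) * np.tan(gamma)    # Eq. (9)
    yi2 = yi + xi * np.sin(theta) * np.tan(phi)
    s1 = np.array([xi2, yi2, f2]) / np.sqrt(xi2**2 + yi2**2 + f2**2) # Eq. (8)
    s2 = refract(s1, N1, n_o / n_lambda)                             # Eq. (7), into the prism
    e1, e2, _ = refract(s2, N2, n_lambda / n_o)                      # Eq. (6), out of the prism
    scale = f3 / np.sqrt(1 - e1**2 - e2**2)                          # Eq. (10)
    return f2 * np.tan(phi) + e1 * scale, f2 * np.tan(gamma) + e2 * scale

# A remapping reconstruction would evaluate this mapping for every facet and
# wavelength sample and gather the corresponding detector pixels back into
# the (x, y, lambda) datacube.
```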


4 Imaging and Reconstruction Simulations The simulations are conducted in MATLAB 2019a. The parameters used for the simulation are shown in Table 2.

Table 2. The system parameters for simulation

Fore optics:           z_1 = 120 mm; z_2 = 90 mm; f_1 = 60 mm; D_ape = 4.5 mm; spectral range 450–650 nm
Image mapper:          l × w = 8 mm × 11.385 mm; M × N = 23 × 3; b = 0.165 mm; θ = 22.5°
Imaging spectrometer:  f_2 = 50 mm; f_3 = 15 mm; prism ZF7L, A = 50.14°, n_d = 1.81
Detector:              d_pixel = 5.5 µm

The input datacube is chosen as data obtained by a push-broom hyperspectral imager (PHI) [6], which is shown in Fig. 5.

Fig. 5. Input datacube for simulation

The spectral imaging simulation result is given in Fig. 6. According to Eq. (10) and using the remapping algorithm, the reconstructed datacube shown in Fig. 7 is obtained. We arbitrarily choose three points on the datacube and plot their spectral curves, shown in Fig. 8. To evaluate the reconstructed spectral curves, the Relative spectral Quadratic Error (RQE) [7] and the Spectral Angle (SA) [8] are calculated, and the results are given in Table 3. According to the results, the recovered spatial-spectral information almost fits the original. Some degradation in the spectral images and curves is due to the non-linear dispersion of the single prism, the diffraction of the strip mirrors, etc. Overall, the imaging simulation results and the reconstructed datacube demonstrate that the reconstruction method based on the geometric model is effective.
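For reference, a short Python sketch of the two evaluation metrics mentioned above is given; the spectral angle follows its standard definition, while the RQE expression here is only an assumed relative-quadratic-error form and may differ from the exact definition used in [7].

```python
import numpy as np

def spectral_angle(ref, rec):
    """Spectral Angle (rad) between a reference and a reconstructed spectrum."""
    ref, rec = np.asarray(ref, float), np.asarray(rec, float)
    cos_sim = np.dot(ref, rec) / (np.linalg.norm(ref) * np.linalg.norm(rec))
    return float(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

def rqe(ref, rec):
    """Assumed relative quadratic error: ||ref - rec|| / ||ref||."""
    ref, rec = np.asarray(ref, float), np.asarray(rec, float)
    return float(np.linalg.norm(ref - rec) / np.linalg.norm(ref))
```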


Fig. 6. The simulation raw data

Fig. 7. The reconstructed data cube

5 Conclusion In this paper, we propose a datacube reconstruction approach for the IMS. Imaging simulations are conducted to show that the reconstruction results accord with the original datacube. In future research, we will perform more in-depth experiments to further verify the effectiveness of the reconstruction method.


Fig. 8. The spectral curves for three points in the datacube

Table 3. The evaluation of the reconstructed spectral curves

Object point | RQE    | SA (rad)
A            | 0.3636 | 0.0747
B            | 0.3832 | 0.0775
C            | 0.4009 | 0.0834

References
1. Vane G, Goetz AFH, Wellman JB (1984) Airborne imaging spectrometer: a new tool for remote sensing. IEEE Trans Geosci Remote Sens GE-22(6):546–549
2. Xiangli B, Yuan Y, Lü QB (2009) Spectral transfer function of the fourier transform spectral imager. Acta Phys Sin 58(8):5399–5405
3. Shou-Peng L, Lin-Yuan W, Bin Y et al (2012) A Compton scattering image reconstruction algorithm based on total variation minimization. Chin Phys B 21(10):108703
4. Lu-Lu Q, Qun-Bo L, Huang M et al (2015) Piecewise spectrally band-pass for compressive coded aperture spectral imaging. Chin Phys B 24(8):080703
5. Gao L, Wang LV (2015) A review of snapshot multidimensional optical imaging: measuring photon tags in parallel. Phys Rep 616:1–37
6. Shao H, Wang JY, Xue YQ (1998) Key technology of pushbroom hyperspectral imager. J Remote Sens 2(4):251–254
7. Aiazzi B, Alparone L, Baronti S et al (2004) Tradeoff between radiometric and spectral distortion in lossy compression of hyperspectral imagery. Proc SPIE 5208:141–152
8. Kruse FA, Lefkoff AB, Boardman JW et al (1993) The spectral image processing system (SIPS)—interactive visualization and analysis of imaging spectrometer data. Remote Sens Environ 44:145–163

LFMCW Radar DP-TBD for Power Line Target Detection Xionglan Chen1(&), Guanghe Chen1, and Zhanfeng Zhao2 1

2

Harbin Institute of Technology (Weihai), Weihai, China {2970377797,1782164656}@qq.com Zuse Institute Berlin, Takustr. 7, 14195 Berlin, Germany [email protected]

Abstract. This paper proposes a pre-detection tracking algorithm based on dynamic programming for UAV obstacle avoidance in power line inspection. The new method combines the basic principle of millimeter wave radar with the kinematic equations, models the target echo, and then simulates the algorithm based on the established model. Finally, the algorithm is verified on the wire echo data collected by the drone. The experimental results show that the pre-detection tracking algorithm can detect the power line better and more stably. Keywords: Power inspection · Millimeter wave radar · Weak target · DP-TBD

1 Introduction How to detect wires effectively and stably during the operation of a line-inspection drone and avoid collisions with high-voltage lines is a problem that must be solved for power inspection. The traditional vision-based method requires the wire to have high color contrast with the environment and is susceptible to light and weather [1]. Radar has been widely applied in the automotive industry and other fields due to its advantages of high range resolution, high precision, etc. [2]. A new solution is to apply 77 GHz millimeter wave radar to obstacle avoidance systems of small unmanned aircraft for power inspection [3]. The power line RCS is small and the signal-to-noise ratio is low. It is difficult for the radar beam to continuously and stably illuminate the wires, and the amplitude of the echo signal fluctuates greatly. In this case, the radar data of a single scan cannot achieve reliable detection of the target, and only a solution that accumulates energy over a long period of time can be used [4]. Long-term accumulation solutions generally require envelope alignment and correlation. For weak and strongly fluctuating target signals, envelope alignment and correlation are difficult when the target motion law is unknown a priori. This paper proposes a tracking algorithm based on dynamic programming for small and weak targets, and successfully solves the problem of reliable detection and tracking of wires. Owing to the limitation of the target movement, the target echo signals of adjacent data frames are statistically correlated, while the noise is random. The dynamic programming TBD algorithm uses the target amplitude information to construct the stage objective function [5]. The echo data of the power line in the real © Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 1737–1743, 2020 https://doi.org/10.1007/978-981-13-9409-6_209


scene is collected by the 77 GHz millimeter wave radar sensor and the target echo analysis and algorithm research are performed on the collected data.

2 LFMCW Radar Theory LFMCW radar performs target detection by linear frequency modulation of a continuous electromagnetic wave. Let the radar continuously transmit N Chirp signals, and let the number of samples of the intermediate frequency (IF) signal of each Chirp be M. As shown in Fig. 1, matrix 1 is an IF signal matrix of size N × M. By performing an FFT on this matrix, a one-dimensional range spectrum is obtained, as matrix 2 shows. Within one frame time the object moves only slightly, so the range spectrum peaks of the Chirps appear in the same range unit. Because the object moves approximately uniformly over a short time, the phase of the IF signal peak changes uniformly. From the one-dimensional range FFT result, the velocity information is obtained by performing an FFT on matrix 2.


Fig. 1. Ranging-velocity measurement method

The phase of the Range-Doppler map obtained from the one-dimensional uniform linear array also changes uniformly at the peak of the spectrum. The target peak values in the Range-Doppler maps of the four antennas are extracted to form a new vector; after zero-padding, an FFT of this vector yields the phase-difference information, from which the target angle information is obtained.
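The following NumPy sketch illustrates this range–Doppler(–angle) processing chain on a raw IF data cube; the array shapes, the non-coherent peak selection and the angle FFT size are illustrative assumptions rather than a specific radar configuration.

```python
import numpy as np

def range_doppler_angle(if_cube, angle_fft_size=64):
    """Compute a Range-Doppler map per antenna and a coarse angle spectrum.

    if_cube: complex array of shape (n_antennas, n_chirps, n_samples)
             holding the sampled IF signal of one frame.
    """
    # 1st FFT along the fast-time samples -> range spectrum ("matrix 2")
    rng = np.fft.fft(if_cube, axis=2)
    # 2nd FFT along the chirps -> Doppler (velocity) spectrum ("matrix 3")
    rd = np.fft.fftshift(np.fft.fft(rng, axis=1), axes=1)

    # Pick the strongest range-Doppler cell (non-coherent sum over antennas)
    power = np.sum(np.abs(rd) ** 2, axis=0)
    d_idx, r_idx = np.unravel_index(np.argmax(power), power.shape)

    # 3rd FFT across antennas at the peak cell, zero-padded -> angle spectrum
    peak_vec = rd[:, d_idx, r_idx]
    angle_spectrum = np.fft.fftshift(np.fft.fft(peak_vec, n=angle_fft_size))
    return rd, (r_idx, d_idx), angle_spectrum
```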

3 LFMCW Radar DP-TBD Algorithm Simulation

3.1 Target Motion Model In the Cartesian coordinate system, the radar is stationary at the origin, and the target state vector is formed from the target's position, velocity and acceleration to represent its motion across the radar scan frames. Assume that in the k-th scan frame of the radar, the target state vector is:

$$s_k = \left[x_k,\; v_{xk},\; a_{xk},\; y_k,\; v_{yk},\; a_{yk}\right]^{T} \tag{1}$$


In the formula, x_k, y_k, v_{xk}, v_{yk}, a_{xk}, a_{yk} respectively represent the position, velocity and acceleration in the x and y directions. Then, in the (k + 1)-th scan frame, the motion state of the object can be expressed as:

$$s_{k+1} = F_k s_k + G_k w_k \tag{2}$$

In the formula, F_k, G_k and w_k respectively represent the state transition matrix, the noise matrix, and Gaussian white noise with zero mean and variance Q.

3.2 Target Measurement Model

Let the number of sampling points of radar data per frame be N_R and the number of Chirps be N_D. The IF echo data frame is subjected to a two-dimensional FFT to obtain a Range-Doppler map, and each frame of data includes N_R × N_D units. The k-th observed data frame can then be expressed as:

$$z_k = \left\{ z_k^{(i,j)} \,\middle|\, i = 1,2,\ldots,N_R;\; j = 1,2,\ldots,N_D \right\},\qquad z_k^{(i,j)} = \left| z_{A,k}^{(i,j)} \right|^{2} \tag{3}$$

$$z_{A,k}^{(i,j)} = \begin{cases} A_k + v_k, & \text{target exists} \\ v_k, & \text{no target} \end{cases} \tag{4}$$

In the formula, z_{A,k}^{(i,j)}, A_k and v_k respectively represent the amplitude of the Range-Doppler map at unit (i, j), the echo signal amplitude and the noise amplitude in the k-th data frame.

3.3 DP-TBD Algorithm Simulation

n X

Jðsi ; uÞ ¼ optfJðsn ; uÞ þ f ðsn1 Þg

ð5Þ

i¼1

In this paper, the opt takes max, and the state marked on the Doppler map of the k frame is sk ¼ ðm; nÞ, m, n is the row and column number of the Range-Doppler matrix, and the target is based on the decision u decides to move from sk1 to the sk state. The transition of the state depends on the amplitude and spatial azimuth of the candidate unit. The higher the unit amplitude, the smaller the azimuth change indicates that the more likely it is the real target. According to the actual physical model, the phase objective function is defined as:

1740

X. Chen et al.

 Jðsk ; uÞ ¼ maxfbAsk g; b ¼ sk 2U

1 0

jhsk  hsk1 j  5 jhsk  hsk1 j [ 5

ð6Þ

Ask , hsk , U respectively represent the sk unit amplitude, azimuth and candidate unit set.

Distance

Doppler

Fig. 2. Range-Doppler search area

On the Range-Doppler map shown in Fig. 2, Doppler has positive and negative directions, the target always keeps moving close to or away from the radar in a short time. So the target may only appear in the semicircular area of Doppler which is affected by acceleration and velocity at the next moment. The process of single-target dynamic tracking based pre-detection tracking algorithm is as follows: (1) Tracking start: The target’s track is approximately straight in a short time, so it is possible to accumulate N point energy along a straight line in the Range-Dopplertime series image, and the points which pass the threshold V as the target possible track starting point. And initialize f ðs1 Þ ¼ Jðs1 ; uÞ ¼ 0. (2) State transition: When k  2, the unit that maximizes the phase objective function Jðsk ; uÞ ¼ maxsk 2U fbAsk g is selected from the candidate unit set U, and the transition from sk1 to sk state is realized, b ¼ 0 indicates that the azimuth change is out of the allowable range. We believe that the searched peak is not the target, give up this state transition, record Jðsk ; uÞ ¼ 0, sk ¼ sk1 , and wait for the next scan. The next scan reselects the candidate cell set U 1 according to the radial velocity and acceleration of the target and the motion time, and selects the cell from U 1 that maximizes the phase objective function Jðsk þ 1 ; uÞ ¼ maxsk þ 1 2U fbAsk þ 1 g, implementing State transition from sk to sk þ 1 . (3) Recursive accumulation: The track energy accumulation is performed according to the formula (5), and the track energy accumulation value f ðsk Þ in the sk state is calculated. (4) Threshold judgment: When 2  k  K, if f ðsk Þ  VT , then X ¼ 1, the target is judged to be the real target, and iterative optimization is continued; when k [ K,

LFMCW Radar DP-TBD for Power Line Target Detection

1741

if f ðsk Þ\VT , then X ¼ 0, the target is determined to be a false target, and the current optimization is ended. (5) Track backtracking: When X ¼ 1, the track of the real target is estimated to be Sk ¼ fs1 ; s2 ; . . .; sk g. Table 1. Simulation parameters Parameters Target 1 Target 2

(vx ,vy ) (m/s) (0, 2) (−2, 0)

(ax , ay ) (m/s)2 (2, 0) (0, −2)

(x, y) (m) (2, 0) (0, −2)

Set the simulation parameters of the two targets as shown in Table 1. Target 1 is close to the radar motion, target 2 is far from the radar, and the signal-to-noise ratio is 5 dB. The sampling period is set to 60 ms, and the total sampling duration is set to 3.9 s.

(a)

(b)

Fig. 3. DP-TBD algorithm simulation

The simulation results are shown in Fig. 3. The tracking trajectory and the direction angle curve are almost coincident with the actual moving trajectory and the directional angle curve of the target, indicating that the pre-detection tracking algorithm based on dynamic programming can effectively perform target tracking.

4 LFMCW Radar DP-TBD Algorithm Verification This experiment uses camera module, high-speed signal processing board, TI’s IWR1243 millimeter wave radar module and ZYNQ7020 core board data acquisition platform to collect the echo data of the wire. UAV airborne data acquisition platform are shown in Fig. 4.

1742

X. Chen et al.

Fig. 4. Data acquisition platform

(b)

(a)

(d) (c)

Fig. 5. Power line radar signal processing

Figure 5 neutron diagram a is the experimental scene of collecting wire echo data in this paper. The millimeter wave radar data acquisition experiment is carried out on the experimental scene. In order to ensure the safety of the experiment, the drone will rise to the same height as the wire and do the movement away from the wire. Subgraph b is an amplitude time diagram obtained by counting the Doppler amplitude of the electric wire. The echo signal amplitude fluctuates greatly because the radar beam pitch angle is

LFMCW Radar DP-TBD for Power Line Target Detection

1743

narrow, which is easily affected by the maneuvering performance of the drone during the line inspection. It is difficult for the radar beam to consistently align the wires. Subgraph c is obtained by processing the echo signals radially away from the wire in the scene using a pre-detection tracking algorithm based on dynamic programming. It can be seen that three motion trajectories are interlaced with each other, and subgraph d is a subgraph in a side view of c, it can be seen that the three trajectories are actually separated and approximately equidistantly distributed in parallel because the three sets of power lines are spatially separated and have an equidistant parallel relationship. The experimental results show that the pre-detection tracking algorithm based on dynamic programming can detect the power line better and more stably.

5 Conclusion The pre-detection tracking algorithm based on dynamic programming can solve the problem of strong fluctuation target detection when dealing with actual power line echo data, and can remove the influence of clutter false alarm and give the power line tracking result. In addition, based on the tracking results of power lines, it is found that the tracking trajectories of multiple power lines have good parallel and equidistant characteristics, which brings new ideas to identify power lines.

References 1. Shuai C, Wang H, Zhang W (2017) Power lines extraction and distance measurement from binocular aerial images for power lines inspection using UAV. In: 2017 9th international conference on intelligent human-machine systems and cybernetics (IHMSC), Hangzhou, pp 69–74 2. Yu H, Shi R, Xie Z (2011) Design of power transmission line detection scheme for MMW radar. In: 2011 4th international congress on image and signal processing, Shanghai, pp 1870–1874 3. Wu X, Zhang N, Zhang H, Hong W (2016) Frequency estimation algorithm for ranging of millimeter wave LFMCW radar. In: 2016 IEEE international conference on ubiquitous wireless broadband (ICUWB), Nanjing, pp 1–3 4. Hongbo Y, Guohong W, Qian C (2013) A novel unscented filter TBD algorithm FOR weak radar target. In: 2013 IEEE international conference on signal processing, communication and computing (ICSPCC 2013), KunMing, pp 1–5 5. Wang J, Yi W, Kong L (2016) Improved DP-TBD methods based on multiple hypothesis testing for target early detection. In: 2016 19th international conference on information fusion (FUSION), Heidelberg, pp 1406–1413

Review of ML Method, LVD and PCFCRD and Future Research for Noisy Multicomponent LFM Signals Analysis Jibin Zheng1,2(&), Kangle Zhu1,2(&), Hongwei Liu1,2(&), and Yang Yang1,2(&) 1

National Laboratory of Radar Signal Processing, Xidian University, Xi’an 710071, China [email protected], [email protected], hwliu@xidian. edu.cn, [email protected] 2 Collaborative Innovation Center of Information Sensing and Understanding, Xidian University, Xi’an, China

Abstract. Noisy multicomponent linear frequency modulated (LFM) signals analysis plays an important role in radar signal processing and many analysis methods have been proposed. The maximum likelihood (ML) method, Lv’s distribution (LVD) and parameterized centroid frequency-chirp rate distribution (PCFCRD) represent three research directions for noisy multicomponent signals analysis. This paper aims to theoretically review and analyze these three methods. One numerical simulation is given to validate theoretical analyses and several discussions are given for realistic applications and future research. Keywords: Linear frequency modulated signal  Maximum likelihood method  Lv’s distribution  Parameterized centroid frequency-chirp rate distribution

1 Introduction The noisy multicomponent linear frequency modulated (LFM) signals often appear in radar signal processing and its analysis is of significant importance. Based on how the instantaneous frequency (CF) and chirp rate (CR) vary with time, two kinds of methods are developed, i.e., time-frequency transforms and time-CR transforms. The short-time Fourier transform [1] and Wigner- Ville distribution [2] are two typical time-frequency transforms. The cubic phase function [3] and high-resolution time-frequency rate representation [4] are two typical time-CR transforms. The time-frequency transforms and time-CR transforms can serve as a basis for the signal synthesis, coding and detection. The time-frequency transforms and time-CR transforms are based on onedimensional energy integration and have the integration signal-to-noise ratio (SNR) gain loss. In radar signal processing, a high integration SNR gain is necessary. Aiming to weaken or resolve this problem, algorithms based on two-dimensional energy integration of time-frequency transforms and time-CR transforms are proposed, © Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 1744–1750, 2020 https://doi.org/10.1007/978-981-13-9409-6_210

Review of ML Method, LVD and PCFCRD and Future Research

1745

known as the CF-CR analysis technique (CFCRAT). The Wigner-Hough transform [5], Lv’s distribution (LVD) [6] and parameterized centroid frequency-chirp rate distribution (PCFCRD) [7] are typical CFCRATs. The LVD and PCFCRD are based on timefrequency transform and time-CR transform, respectively. Compared to other twodimensional energy integration methods, the LVD and PCFCRD introduce the constant delay to enhance the cross term suppression, resolution, peak-to-sidelobe level (PSL) and anti-noise performance. In recent years, the LVD and PCFCRD have been widely applied in the radar detection, imaging and ultrasonic. However, some theoretical analyses of these two methods are lost and unclear. As we know, the maximum likelihood (ML) method [8] is based on the direct linear integration of the LFM signal and its performance is known to be optimal. With the fast Fourier transform (FFT), the implementation of the ML method can be speeded up. Therefore, it is necessary to theoretically analyze the LVD, PCFCRD and ML method, and obtain a conclusion for their realistic applications. In this paper, the ML method, LVD and PCFCRD are theoretically analyzed from aspects of the cross term, computational cost, resolution, PSL, anti-noise performance, and several discussions are given for their realistic applications. The remainder of this paper is organized as follows. Section 2 gives brief reviews of the ML method, LVD and PCFCRD. Section 3 theoretically analyze these three methods. A numerical simulation and several discussions are given in Sect. 4. Section 5 includes the conclusion.

2 Review of the ML Method, LVD and PCFCRD Consider the noisy multicomponent LFM signals sn ðt Þ ¼

P X

sp ðtÞ þ nðtÞ; 

p¼1



T T t 2 2

  1 2 þ nð t Þ Ap exp j2p a1;p t þ a2;p t ¼ 2 p¼1 P X

ð1Þ

where sp ðtÞ and nðtÞ denote the pth LFM signal and additive zero mean complex white Gaussian noise of the power r2 , respectively. P is the number of signal components. Ap , a1;p and a2;p denote the amplitude, CF and CR of the pth LFM signal, respectively. In noisy environment, a very direct noisy multicomponent LFM signals analysis method is the ML method [8], which can be achieved by a two-dimensional maximization. Z h  r i ð2Þ MLðf ; r Þ ¼ sn ðtÞ exp j2p ft þ t2 dt 2 t

where f and r denote the CF domain and CR domain, respectively. The ML method in (2) is linear and does not has the cross term interference in noisy multicomponent LFM signals analysis. In order to reduce the computational cost, the

1746

J. Zheng et al.

time-frequency transforms and time-CR transforms are developed. The Wigner-Ville distribution and cubic phase function are representative time-frequency transform and time-CR transform, respectively. Z  s  s ð3Þ WVDðt; fIF Þ ¼ sn t þ sn t  expðj2pfIF sÞds 2 2 s

Z CPFðt; rICR Þ ¼ s

  s  s sn t þ sn t  exp j2prICR s2 ds 2 2

ð4Þ

where s and * denote the lag variable and complex conjugation, respectively. fIF denotes the instantaneous frequency. rICR denotes the instantaneous CR. The Wigner- Ville distribution and cubic phase function are based on onedimensional energy integration and have the integration SNR gain loss. In addition, the reference [6] has indicated that the constant delay introduction can reduce the noise correlation, increase the resolution and cross term suppression. Therefore, the LVD and PCFCRD are proposed as    sþ1  s1 sn t  exp½j2pf s  j2prðs þ 1Þtdtds sn t þ LVDðf ; r Þ ¼ 2 2 s t  Z XP

 T f  a K sinc exp j2p r  a2;p ðs þ 1Þt dt ¼ 1;p p¼1 p 2 t |fflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl} Z Z



the auto term

þ CLVD ðf ; r Þ þ nLVD ðf ; r Þ ð5Þ "     # sþh sþh sþh 2 PCFCRDðf ; r Þ ¼ sn t  exp j2pr sn t þ 2 2 2 t s   exp j2prt2 expðj2pftÞdsdt "   #  Z XP  sþh 2 T ds G sinc exp j2p r  a2;p ¼ f  a1;p p¼1 p 2 2 s |fflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl} Z Z



the auto term

þ CPCFCRD ðf ; r Þ þ nPCFCRD ðf ; r Þ ð6Þ where f and r denote the CF domain and CR domain, respectively. h denotes a constant delay and its selection criterion is h ¼ T. CLVD ðf ; r Þ and CPCFCRD ðf ; r Þ denote the cross terms. nLVD ðf ; r Þ and nPCFCRD ðf ; r Þ denote the noise. The ML method is based on the direct linear integration of the LFM signal. It is linear and avoids the cross term. By contrast, the LVD and PCFCRD can be

Review of ML Method, LVD and PCFCRD and Future Research

1747

implemented by matrix operations and provide more development space for the CFCRAT. In recent years, he LVD and PCFCRD have been widely applied in the radar detection, imaging and ultrasonic. The ML method, LVD and PCFCRD are of significant importance for noise multicomponent LFM signals analysis. However, some theoretical analyses of these two methods are lost and unclear. Therefore, it is necessary to theoretically analyze the LVD, PCFCRD and ML method.

3 Comparisons Based on Theoretical Analyses In this section, basing on theoretical analyses, we compare these three methods from the aspects of the cross term, computational cost, resolution, PSL and anti-noise performance. 3.1

Cross Term

The cross term suppression plays an important role in noisy multicomponent LFM signal analysis and is determined by two factors: the cross term strength and the energy concentration of the auto term. The ML method is linear and does not have the cross term interference. Here, we calculate cross terms of the LVD and PCFCRD, and comparisons are accomplished based on the cross term characteristic analysis. With sn ðtÞ in (1), we calculate the cross terms of the LVD and PCFCRD as CLVD ðf ; r Þ Z Z X P1 X P



 0 0 ¼ Alq Dlq ðt; sÞexp j2p f  ra1;lq s exp j2p r  ra2;lq ðs þ 1Þt dtds s

l¼1 q¼l þ 1

t

ð7Þ CPCFCRD ðf ; rÞ ( " #)  Z Z X P1 X P

  sþh 2 2 00 00 dsdt Alq Dlq ðt; sÞexp j2p f  ra1;lq t exp j2p r  ra2;lq þt ¼ 2 l¼1 q¼l þ 1 t

ð8Þ

s

n h iio 

h 0 00 where Dlq ðt; sÞ ¼ cos 2p Da1;lq t þ Da2;lq 2 t2 þ ððs þ 1Þ=2Þ2 . Dlq ðt; sÞ ¼   0 00 Da2;lq tÞ½ðs þ hÞ=2g. Alq and Alq denote amplitudes. cos 2p Da1;lq þ   ra1;lq ¼ a1;l þ a1;q 2, and ra2;lq ¼ a2;l þ a2;q 2, Da1;lq ¼ a1;l  a1;q Da2;lq ¼ a2;l  a2;q . 0 00 As long as a1;l 6¼ a1;q and a2;l 6¼ a2;q , Dlq ðt; sÞ and Dlq ðt; sÞ exist and cross terms cannot accumulate as their auto terms. However, compared to the ML method, the cross term does exist in these two algorithms and may influence the auto term detection. This is a drawback of the LVD and PCFCRD.

1748

3.2

J. Zheng et al.

Computational Cost

The ML method can be implemented by the FFT after the compensation of the Doppler spread and its computation cost is about N 2 log2 N (where N denotes the signal length). Basing on analyses in [6] and [7], we know that the LVD and PCFCRD can be speeded up by the chirp-z transform and NUFFT, respectively. Therefore, their computational costs are about 3N 2 log2 N. For comparisons, computational costs of the ML method, LVD and PCFCRD are also listed in Table 1. It is obvious the description here is different from those of other papers. Table 1. Computational cost

3.3

Algorithm

ML method

LVD

PCFCRD

Computational cost

N 2 log2 N

3N 2 log2 N

3N 2 log2 N

Resolution and PSL

As we know, the support interval determines the resolution. The resolutions and PSLs of the ML method, LVD and PCFCRD can be presented as  (

(

dML ð f Þ ¼ T2 dML ðr Þ ¼ T82

dLVD ð f Þ ¼ T2 dLVD ðr Þ ¼ ðT þ8 1Þ2 ;

dPCFCRD ð f Þ ¼ T2 dPCFCRD ðr Þ ¼ ðT þ8 hÞ2 ;



PSLML ð f Þ ¼ 13:3dB PSLML ðr Þ ¼ 8:785dB





ð9Þ

PSLLVD ð f Þ ¼ 26:6dB PSLLVD ðr Þ   17:57dB

ð10Þ

PSLPCFCRD ð f Þ ¼ 26:6dB PSLPCFCRD ðr Þ ¼ 17:57dB

ð11Þ

Obviously, the LVD and PCFCRD have higher resolutions than the ML method along the CR axis. When h [ 1, the resolution of the PCFCRD is higher than that of the LVD. The PSLs of the ML method and PCFCRD are better than that of the LVD. 3.4

Anti-noise Performance

This subsection focuses on the comparison of the integration SNR gain. With the discrete signal sn ðmÞ ¼ s1 ðmÞ þ nðmÞ (m ¼ ½N=2; ½N=2 þ 1; . . .; ½ðN  1Þ=2 and the sampling interval is Ts ), we obtain n o E jMLðf0 ; r0 Þj2 ¼ N 2 A21 þ Nr2

ð12Þ

Review of ML Method, LVD and PCFCRD and Future Research

E½LVDðf0 ; r0 Þ ¼

   N 2 A21 þ W r2 ; W r2 2 0; Nr2 2

E ½PCFCRDðf0 ; r0 Þ ¼

N 2 A21 h 2

1749

ð13Þ ð14Þ

Due to the introduction of the constant delay, noise correlation is reduced, which is not found in the previous research. The PCFCRD even does not have the noise influence in (14).

4 Simulations and Some Discussions The cross term, resolution and PSL jointly determine the adjacent LFM signals separation. Here, four LFM signals, denoted by Gu1, Gu2, Gu3 and Gu4, are considered. The signal parameters are set as follows: A1 ¼ 1, a1;1 ¼ 1 Hz, a2;1 ¼ 2 Hz/s for Gu1; A2 ¼ 0:8, a1;2 ¼ 1 Hz, a2;2 ¼ 2 Hz/s for Gu2, A2 ¼ 0:4, a1;2 ¼ 0:5 Hz, a2;2 ¼ 0:5 Hz/s for Gu3, and A3 ¼ 0:2, a1;3 ¼ 8 Hz, a2;3 ¼ 20 Hz/s for Gu4. Processing results by the ML method, LVD and PCFCRD are shown in Fig. 1, where the part marked with the red ellipse is zoomed.

(a) ML method

(b) LVD

(c) PCFCRD

Fig. 1. Comparisons of the ML method, LVD and PCFCRD under LFM signals with different amplitudes

The cross term, computational cost, resolution, PSL and anti-noise performance play important roles in the noisy LFM signals analysis. (1) The ML method is linear and has the advantage in the computational cost. The LVD and PCFCRD have the cross term interference, while they have advantages in the resolution and anti-noise performance due to the introduction of the constant delay. However, this is at the cost of more data. (2) The LVD and PCFCRD are based on the matrix operation, which may provide more space for the future improvement. In the future, we can try to use the window to suppress the cross term. (3) The advantages of the PCFCRD are obvious. In future research, are these advantages useful for realistic research?

1750

J. Zheng et al.

5 Conclusion The ML method, LVD and PCFCRD represent three research directions for noisy multicomponent signals analysis and play important roles in radar signal processing. This paper theoretically review and analyze these four methods, and then give several discussions for their realistic applications. These discussions are important for the future research. A numerical simulation is also given to validate theoretical analyses.

References 1. Hlawatsch F, Bourdeaux GF (1992) Linear and quadratic time-frequency signal representations. IEEE Signal Process Mag 9(2):21–67 2. Boashash B (2015) Time-frequency signal analysis and processing: a comprehensive reference, Wiley, Academic Press 3. O’Shea P (2004) A fast algorithm for estimating the parameters of a quadratic FM signal. IEEE Trans Signal Process 52(2):385–393 4. Zuo L, Li M, Liu Z, Ma L (2016) A high-resolution time-frequency rate representation and the cross-term suppression. IEEE Trans Signal Process 64(10):2463–2474 5. Barbarossa S (1995) Analysis of multicomponent LFM signals by a combined Wigner-Hough transform. IEEE Trans Signal Process 43(6):1511–1515 6. Lv X, Bi G, Wang C, Xing M (2011) Lv’s distribution: principle, implementation, properties, and performance. IEEE Trans Signal Process 59(8):3576–3591 7. Zheng J, Liu H, Liu QH (2017) Parameterized centroid frequency-chirp rate distribution for LFM signal analysis and mechanisms of constant delay introduction. IEEE Trans Signal Process 65(24):6435–6447 8. Abatzoglou TJ (1986) Fast Maximnurm likelihood joint estimation of frequency and frequency rate. IEEE Trans Aerosp Electron Syst 6:708–715

Research on Vision-Based RSSI Path Loss Compensation Algorithm Guangchao Xu, Danyang Qin(&), Ping Ji, Min Zhao, Ruolin Guo, and Pan Feng Key Lab of Electronic and Communication Engineering, Heilongjiang University, No. 74 Xuefu Road, Harbin, People’s Republic of China [email protected]

Abstract. In recent years, RSSI-based indoor positioning system has been widely used worldwide due to its low installation cost and wide coverage. However, due to the complex and varied indoor environment, the crowd is relatively dense, and the propagation of wireless signals is greatly disturbed. This leads to the fact that the RSSI-based indoor positioning cannot meet the requirements of people. In this study a wireless signal compensation model considering population density is proposed. This model can use image information to compensate the path loss of RSS signals to achieve accurate indoor positioning. Keywords: RSSI

 Image information  Dense crowd  Indoor positioning

1 Introduction Common radiofrequency location methods include fingerprint identification [1, 2] and trilateral positioning [3]. Indoor positioning using RF requires the establishment of a signal propagation model that converts the received signal strength (RSS) into the long distance between access points (APs) and user equipment (such as smartphones). However, the high population density in the room will affect the intensity of signal transmission, reduce the accuracy [4, 5]. Therefore, we use the population density as an influencing factor and combine the traditional indoor signal propagation model, and propose a new signal attenuation model based on two-dimensional information to compensate the propagation loss of wireless signals. In our model, the human detection method based on Convolutional Neural Network (CNN) can count the number of people in an indoor crowded scene; then, our model will calculate the relationship between the number of individuals and signal attenuation. Finally, the three-sided positioning principle is used to achieve indoor positioning.

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 1751–1757, 2020 https://doi.org/10.1007/978-981-13-9409-6_211

1752

G. Xu et al.

2 Indoor Crowded Scene Algorithm Based on RSSI Model 2.1

The Impact of the Human Body

We know that electromagnetic fields involve two aspects: electric and magnetic fields. The biological tissue of the human body will affect the strength of the two fields. When a wireless signal hits the human body, the electric charge in the electric field accumulates on the skin, changing the spatial distribution of the original electric field of the human body. Therefore, the power of the signal is reduced. Specific Absorption Rate (SAR) (W/kg) represents the electromagnetic power absorbed by human tissue [6]. Because different tissues and organs have different conductivity and permittivity, the calculation formula of the average SAR is as follows: SARAV where: r E ðvÞ q E ðvÞ ¼

2.2

qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi Ex2 þ Ey2 þ Ez2

R 1 rðvÞjE ðvÞj2 dv R ¼ 2 qðvÞdv

ð1Þ

Is the conductivity of each grid unit of the body organization, its unit is (S/m); S is Siemens, is the international conductivity standard unit. Indicates the electric field strength value of the biological tissue. Is the mass density of biological tissue, its unit is kg/m3. (V/m) and V are the derivation units of the potential, the potential difference, and the electromotive force, respectively.

Individual Quantity Detection

For detecting the number of individuals, we will use the Convolutional Neural Network (CNN) to process the image. This is because CNN-based image processing can create image descriptors based on scenes, making the calculations robust. The CNN-based method can achieve generalization of target detection: if the image covers part of the area, it is described by a signal word; if the image covers the entire area, it is described by a short sentence. The method is based on a Full Convolution Location Neural Network (FCLN) architecture, which contains the convolutional neural network, the dense localization model and the recursive neural network language model. However, to speeding up the processing, we delete the recursive neural network language model. The architecture of a convolutional neural network consists of the following components: sði; jÞ ¼ ðX  W Þði; jÞ þ b ¼

NX matrix k¼1

ðXk  Wk Þði; jÞ þ b

ð2Þ

Research on Vision-Based RSSI Path Loss Compensation Algorithm

1753

where: Nmatrix is the number of input matrices, Xk represents the kth input matrix, Wk is the kth convolution kernel, b is the offset, s(i, j) is the output matrix at position (i, j), and the output matrix s. It is also the feature map value. Convolution processing is the foundation of CNN, the core operation of performing training, which triggers neurons in the network through convolution processing. After offline training, FCLN is robust to the detection of individual numbers. The convolution network uses a visual geometry group 16 (VGG-16) structure to generate a feature map of the original image. The localized lower layer is responsible for receiving these spatial regions where activation, recognition features are significant, and separating a scale-invariant, representative feature from each region. The recognition network is a neural network that is tightly connected to the localized lower layer, which processes the regional features received from the localized lower layer. For each region, a value of size Dim ¼ wcon  hcon  vfeature is generated, which is obtained from the feature plane, where wcon is the width of the convolution kernel, hcon is the height of the convolution kernel, and vfeature is the dimension of the feature image.

Fig. 1 Human body detection flow chart

Figure 1 shows the human body detection process. Convolutional neural network first processes the original image. The positioning layer will generates a region suggestion and uses bilinear interpolation to extract the activation of the corresponding batch. Finally, the features will send to the recognition network and identify the people in the image. Finally, the computer can tell us the number of the people who on the signal propagation path. The most important layer of FCLN is the human body positioning layer. The human body positioning layer gives the area where the person may be located, and uses the method of cubic interpolation to extract the area and smoothly extract a corresponding batch of activation. The fully connected identification network will process the regions, and these regions will be described by using a recurrent neural network model [7]. The gradient descent method is used to train the model end to end [8]. In addition, we can determine that the region suggestion of the human body locating layer

1754

G. Xu et al.

is used as a candidate region for the area in which the person is located, from which an initialization region is generated. The acreage of each initialization region R is initialized to (W_img/w_con) × (H_img/h_con) (W_img is the width of the image, H_img is the height of the image), and the coordinates of each output region are described as follows:

$$x = x_{cen} + l_x W_R, \quad y = y_{cen} + l_y H_R, \quad w = W_R \exp(t_w), \quad h = H_R \exp(t_h) \qquad (3)$$

where the position (x, y) is the center of the output area, (x_cen, y_cen) is the center of the initialization area R, (l_x, l_y, t_w, t_h) are the scalars predicted by the FCLN, and W_R and H_R are the width and height, respectively:

$$W_R = \frac{1}{2}\cdot\frac{W_{img}}{w_{con}}, \qquad H_R = \frac{1}{2}\cdot\frac{H_{img}}{h_{con}} \qquad (4)$$

Based on the region score of the vector of length w, a confidence score for each output area can be calculated and used to locate each person. After training the FCLN with the database, we obtain a feature-map model that detects and identifies the human body. A user sends the photo to the server where the FCLN is installed, and the people in the photo are located: everyone in the image is positioned by the areas B' = {b'_1, b'_2, b'_3, ..., b'_n} produced by the person-positioning layer. Finally, we obtain the number of people on the path by counting the output regions.

2.3 Consider a New Indoor Transmission Model of the Human Body

The human penetration loss model is used to modify the path loss model to obtain the indoor wireless signal propagation model. This model compensates for the power loss caused by human bodies and obstacles on the received signal, resulting in a relatively accurate transmission power. The calculation formula of the new model is

$$RSS = RSS_{re} + PL_{human} + PL(d) \qquad (5)$$

where RSS is the propagation power and RSS_re represents the strength of the actually received signal (dBm). The transformation formula between dBm and W (the SI unit of power) is shown below:


$$PL_{human} = 10 \lg\frac{P \times 10^{3}\,(\mathrm{mW})}{1\,(\mathrm{mW})} = 30 + 10 \lg\left(human_{weight} \times n \times SAR_{AV}\right) \qquad (6)$$

where PL_human (dBm) is the signal power loss caused by the people, P (W) is the signal power dissipated by the human body, human_weight is the average weight of a human body in kg, SAR_AV is the average specific absorption rate, and n is the number of human bodies blocking the signal, which we obtain from the photos taken by the smartphone camera (Fig. 2).
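To make the compensation model concrete, the short Python sketch below combines Eqs. (5) and (6); it is only an illustration, and the default human weight and SAR_AV values are placeholder numbers, not values taken from the paper.

```python
import math

def human_penetration_loss(n_people, human_weight_kg=60.0, sar_av_w_per_kg=0.08):
    """PL_human in dBm per Eq. (6): 30 + 10*lg(weight * n * SAR_AV).

    human_weight_kg and sar_av_w_per_kg are illustrative placeholder values.
    """
    if n_people == 0:
        return 0.0
    p_watt = human_weight_kg * n_people * sar_av_w_per_kg  # power absorbed by the bodies
    return 30.0 + 10.0 * math.log10(p_watt)                # convert W to dBm

def compensated_rss(rss_received_dbm, n_people, path_loss_d_db):
    """RSS per Eq. (5): received strength plus human loss plus distance path loss."""
    return rss_received_dbm + human_penetration_loss(n_people) + path_loss_d_db

print(compensated_rss(-72.0, n_people=4, path_loss_d_db=40.0))
```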

3 Testing and Evaluation

The experiment was carried out at the Physics Laboratory Building of Heilongjiang University. The 709 laboratory is about 33 square meters and is mainly used to test the proposed algorithm. In this section, we test the effects of the human body on RSSI readings. Figure 2 shows the test environment of the 709 lab.

Fig. 2 709 laboratory test environment

Holding the spectrum analyzer, we collect the RSSI value of AP1 at 100 sampling points; then, with 4 people standing between AP1 and our measuring instrument, we collect the RSSI values of 200 sampling points at the same position; after they leave, we collect another 200 sampling points at the same location. Figure 3 shows the received signal strength as a function of human walking. As can be seen from Fig. 3, the RSSI curve is relatively stable when no one is present, but when people are present, the RSSI curve fluctuates greatly and the signal is weakened, causing the received signal strength curve to change drastically, as indicated by the arrows. Figure 4 shows the cumulative error distribution of the positioning results after compensating the wireless signal transmission path loss using different strategies. It can be seen from Fig. 4 that the positioning accuracy of the vision-based wireless signal path compensation algorithm is relatively high.

[Fig. 3 plots RSSI (dBm) against sampling points, with annotations marking where someone appears and where someone leaves.]

Fig. 3 Relationship between human body and receiving wireless signal strength

[Fig. 4 plots probability against position error (m) for the image-based, CS-based and FM-based methods.]

Fig. 4 Cumulative error distribution map

4 Conclusion

In this paper, we propose a method to compensate for the attenuation of RSS signals during transmission, as a means of generating high-precision indoor positioning results in crowded scenarios. Considering that the number of individuals has a great influence on signal fluctuations, we incorporate the presence of humans into the


signal attenuation compensation model, and combine it with the image-based method to compensate the wireless signal propagation loss. Based on the data measured by the smartphone, our proposed model can predict the precise distance between the wireless signal transmitting node and the receiving node. Then, we use trilateration to obtain the user's position. The experimental results show that the proposed method improves the positioning accuracy in crowded environments.

Acknowledgements. This work is supported by the National High Technology Research and Development Program of China (2012AA120802), National Natural Science Foundation of China (61771186), Postdoctoral Research Project of Heilongjiang Province (LBH-Q15121), Undergraduate University Project of Young Scientist Creative Talent of Heilongjiang Province (UNPYSCT-2017125), and Postgraduate Innovative Research Project of Heilongjiang University (NO. YJSCX2019-059HLJU).

References
1. He S, Chan SHG (2015) Wi-Fi fingerprint-based indoor positioning: recent advances and comparisons. IEEE Commun Surv Tutor 2015:1
2. Zhang P, Zhao Q, Li Y, Niu X, Zhuang Y, Liu J (2015) Collaborative WiFi fingerprinting using sensor-based navigation on smartphones. Sensors 15:17534–17557
3. Wang Y, Yang X, Zhao Y, Liu Y, Cuthbert L (2013) Bluetooth positioning using RSSI and triangulation methods. In: Proceedings of the consumer communications and networking conference, Las Vegas, NV, USA
4. Aguirre E, Arpon J, Azpilicueta L, Falcone F (2012) Evaluation of electromagnetic dosimetry of wireless systems in complex indoor scenarios with human body interaction. Prog Electromagn Res B 43:189–209
5. Gosselin MC, Vermeeren G, Kuhn S, Kellerman V, Benkler S, Uusitupa TMI, Joseph W, Gati A, Wiart J, Meyer FJC et al (2011) Estimation formulas for the specific absorption rate in humans exposed to base-station antennas. IEEE Trans Electromagn Compat 53:909–922
6. Johnson J, Karpathy A, Fei-Fei L (2016) DenseCap: fully convolutional localization networks for dense captioning. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)
7. Mikolov T, Karafiát M, Burget L, Cernocký J, Khudanpur S (2010) Recurrent neural network based language model. In: Proceedings of the Interspeech, Makuhari, Chiba, Japan
8. Nguyen DT, Li W, Ogunbona PO (2016) Human detection from images and videos: a survey. Pattern Recognit 51:148–175

Efficient Energy Power Allocation for Forecasted Channel Based on Transfer Entropy

Zhangliang Chen and Qilian Liang

Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
[email protected]

Abstract. In recent years, mobile network data traffic has grown explosively. To meet this demand and accelerate the development of new applications, the fifth generation (5G) of mobile communication networks has emerged. At present, the vision and requirements of 5G have been gradually clarified, and how to integrate existing technologies and various potential new technologies to realize 5G networks has become the next research and development focus. In econometrics, the Granger causality test is a standard autoregression-based analysis tool for time series data, but its use is not limited to that field; its information-theoretic generalization based on conditional mutual information, the Transfer Entropy (TE), is also widely used. In this paper, the Granger causality test is first applied to test the correlation between two 5G channels, and the transfer entropy algorithm is then applied to forecast the 5G channel coefficients. Based on the forecasted channel, the energy allocation of the channel is performed by the Inverse Water Filling (IWF) algorithm. Finally, we demonstrate the high energy efficiency of IWF for channel power allocation. The simulation further validates our theoretical results.

Keywords: Transfer entropy · Granger causality · Energy allocation · Inverse water filling

1 Introduction

In the past half century, mobile communication has achieved leap-forward advancement, which has dramatically improved people's quality of life, work efficiency and economic development. But people have not stopped pursuing higher-performance mobile communication networks. In 2012, the European Union initiated the METIS (mobile and wireless communications enablers for the 2020 information society) project [1] to research 5G mobile communication networks. To satisfy future mobile communication needs, 5G is defined as the next-generation wireless mobile communication network, which is required to have lower cost and power consumption and higher security and reliability. The transmission rate is also required to increase by 10–100 times, with a peak value of 10 Gbit/s. To support the development of the IoT, a millisecond-level point-to-point transmission delay should be reached, and the connected device density should increase by 10–


100 times [2, 3]. 5G will enable information communication to break through time and space restrictions, quickly realize the interconnection of people and things, and bring users an excellent interactive experience. The rest of this paper is organized as follows: the Granger causality test is applied in Sect. 2, and the simplified 5G channel forecasting based on TE is proposed in Sect. 3. In Sect. 4, the IWF algorithm is presented. Simulations are presented in Sect. 5. Finally, the conclusion is given in Sect. 6.

2 Granger Causality Test

The Granger causality test is frequently used to test the causality between time series variables statistically. For economic phenomena whose causality is unclear, this method provides a statistical test. On the other hand, a 5G channel can also be regarded as a time series, so for two 5G channel time series, Granger causality is a suitable method to test the correlation between the two channels. The principle of Granger causality is to determine whether including the lagged values of channel 1 significantly improves the prediction of channel 2 when regressing on the other variables (including their past values). If the forecasting of channel 2 can be significantly improved, then channel 1 is proved to be the Granger cause of channel 2. In the same way, one can test whether channel 2 is the Granger cause of channel 1 [4, 5]. Here, X represents the channel 1 series and Y the channel 2 series. The Granger causality test model can be defined as follows:

$$Y_t = a + \sum_{i=1}^{p} \left(a_i Y_{t-i} + b_i X_{t-i}\right) + \mu_t \qquad (1)$$

$$Y_t = a + \sum_{i=1}^{m} a_i Y_{t-i} + \mu_t \qquad (2)$$

In (1) and (2), μ_t is Gaussian noise, a_i and b_i are the coefficients, n represents the sample size, and p is the lag order of the X and Y variables. ESS_1 and ESS_0 are the sums of squared residuals of (1) and (2), respectively. The null and alternative hypotheses are H0: b_j = 0 and H1: b_j ≠ 0 (j = 1, 2, ..., p). Supposing the null hypothesis is true, we obtain

$$F = \frac{(ESS_0 - ESS_1)/p}{ESS_1/(n - 2p - 1)} \sim F(p,\; n - 2p - 1)$$

where the first degree of freedom of the F-distribution is p and the second is n − (2p + 1). H0 is rejected if the statistic F is greater than the critical value of the standard F-distribution, indicating that the change in channel 1 is the cause of the change in channel 2, that is, channel 1 is the Granger cause of channel 2. By the same process, one can decide whether channel 2 is the Granger cause of channel 1 [5].
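The test described by Eqs. (1), (2) and the F statistic can be sketched as follows; this is a minimal NumPy illustration with a fixed lag order p, not the authors' code.

```python
import numpy as np

def granger_f_test(x, y, p=2):
    """Test whether x Granger-causes y with lag order p.

    Returns the F statistic comparing the unrestricted model (1),
    which regresses y_t on lags of y and x, against the restricted
    model (2), which uses lags of y only.
    """
    n = len(y)
    rows = n - p
    Y = y[p:]
    lags_y = np.column_stack([y[p - i:n - i] for i in range(1, p + 1)])
    lags_x = np.column_stack([x[p - i:n - i] for i in range(1, p + 1)])
    ones = np.ones((rows, 1))

    X_restricted = np.hstack([ones, lags_y])            # model (2)
    X_unrestricted = np.hstack([ones, lags_y, lags_x])  # model (1)

    def ess(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid

    ess0, ess1 = ess(X_restricted), ess(X_unrestricted)
    return ((ess0 - ess1) / p) / (ess1 / (rows - 2 * p - 1))

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = 0.6 * np.roll(x, 1) + 0.3 * rng.standard_normal(500)  # y driven by lagged x
print(granger_f_test(x, y, p=2))
```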


In this paper, two simulated independent 5G channels are fed into the Granger causality test to check the correlation between them. Table 1 shows the test results.

Table 1. Granger causality test results

         F-value   Critical value   Confidence level
1 on 2   20.9127   0.229            0.95
2 on 1   97.0205   0.229            0.95

First, whether channel 1 Granger-causes channel 2 is tested. From Table 1 we can see the F value is 20.9127, while the critical value is 0.229. Since the former is clearly larger than the latter, we conclude that channel 1 is the Granger cause of channel 2. For the same reason, we also tested whether channel 2 Granger-causes channel 1. The F value from the table is 97.0205, while the critical value is still 0.229. The F value is again much larger than the critical value, so we conclude that channel 2 is also the Granger cause of channel 1. In our work, we found that the 5G channel coefficients obey Gaussian distributions. When a set of variables obeys the Gaussian distribution, Granger causality and TE are equivalent. Based on this characteristic, TE is suitable for forecasting 5G channel coefficients.

3 Channel Forecasting Based on Transfer Entropy

This section first introduces the basic concepts and interrelationships of information entropy theory, and then introduces the theory of TE. Information entropy is the amount of information needed to uniquely identify the sample space of a random variable [6], defined as

$$H_I = -\sum_{i} p(i)\log_2 p(i) \qquad (3)$$

where p(i) is the probability distribution of I, and the unit of information is the bit. If the definition is generalized to two discrete random variables, the entropy value can be represented by the joint entropy. Given that p(i, j) is the joint probability distribution of I and J, the joint entropy can be defined as

$$H_{IJ} = -\sum_{i,j} p(i,j)\log_2 p(i,j) \qquad (4)$$

That is, mutual information MIJ = HI + HJ − HIJ [7] can be obtained. It can be defined as

$$M_{IJ} = \sum_{i,j} p(i,j)\log_2 \frac{p(i,j)}{p(i)p(j)} \qquad (5)$$

In order to accurately measure the information transfer between random variables in a dynamic process to characterize the coupling relationship between random variables, the definition of TE can be obtained by using the transition probability based on conditional probability [8]. The relationship between the basic concepts in the information entropy theory is shown in Fig. 1. The relationship between two random variables can be defined in the spatial dimension, including joint entropy and mutual information. The average rate of change of entropy, i.e. the entropy rate, can be defined in the time dimension.

Fig. 1. The relationship between the basic concepts in the information entropy theory

If the Markov property of the random variable J is defined as $p(j_{n+1} \mid j_n, \ldots, j_{n-l+1}) = p(j_{n+1} \mid j_n, \ldots, j_{n-l})$ [9], then the transfer entropy from J to I is defined as

$$T_{J \to I} = \sum_{i,j} p\left(i_{n+1}, i_n^{(k)}, j_n^{(l)}\right)\log_2 \frac{p\left(i_{n+1} \mid i_n^{(k)}, j_n^{(l)}\right)}{p\left(i_{n+1} \mid i_n^{(k)}\right)} \qquad (6)$$

If there is no information transfer from J to I, the state of J has no effect on the transition probability of I, and the TE value is 0; thus the transfer entropy T_{J→I} is a measure of the information transferred between two random variables. Here J → I denotes the entropy passed from J to I, and the TE from I to J can be obtained by exchanging J and I in the formula. Unlike mutual information, the TE reflects the asymmetry of information and captures both the coupling strength and the coupling direction. In (6), J and I represent 5G channels 1 and 2, respectively. Based on this equation, we can use TE to forecast the channel coefficients. Simulations are shown in Sect. 5.
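A histogram-based estimator of the transfer entropy in Eq. (6) could look like the sketch below, assuming history lengths k = l = 1 and simple amplitude binning; it is an illustration rather than the implementation used in the paper.

```python
import numpy as np
from collections import Counter

def transfer_entropy(j_series, i_series, bins=8):
    """Estimate T_{J->I} of Eq. (6) with history lengths k = l = 1.

    Both series are discretized into `bins` amplitude bins and the
    required joint/conditional probabilities are estimated by counting.
    """
    j_d = np.digitize(j_series, np.histogram_bin_edges(j_series, bins))
    i_d = np.digitize(i_series, np.histogram_bin_edges(i_series, bins))

    triples = Counter(zip(i_d[1:], i_d[:-1], j_d[:-1]))   # (i_{n+1}, i_n, j_n)
    pairs_ii = Counter(zip(i_d[1:], i_d[:-1]))            # (i_{n+1}, i_n)
    pairs_ij = Counter(zip(i_d[:-1], j_d[:-1]))           # (i_n, j_n)
    singles_i = Counter(i_d[:-1])                         # i_n
    n = len(i_d) - 1

    te = 0.0
    for (i_next, i_now, j_now), c in triples.items():
        p_joint = c / n
        p_cond_both = c / pairs_ij[(i_now, j_now)]
        p_cond_i = pairs_ii[(i_next, i_now)] / singles_i[i_now]
        te += p_joint * np.log2(p_cond_both / p_cond_i)
    return te

rng = np.random.default_rng(2)
j = rng.standard_normal(5000)
i = 0.8 * np.roll(j, 1) + 0.2 * rng.standard_normal(5000)  # i influenced by past of j
print(transfer_entropy(j, i))
```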


4 IWF Algorithm

The IWF algorithm allocates the transmit power adaptively based on the state of the channel: the sum of the allocated signal power and the noise power spectrum is kept constant, so that the maximum total channel capacity is achieved. IWF can be modeled as the following optimization problem:

$$\max_{p_1, p_2, \ldots, p_N} C_{sum} = \sum_{n=1}^{N} \log\left(1 + \frac{P_n |h_n|^2}{N_0}\right) \qquad (7)$$

where C_sum represents the total channel capacity of the system, N is the number of channels, P_n is the power of the n-th channel, h_n is the channel gain of the n-th channel, and N_0 is the noise power spectral density. The Lagrangian multiplier method can be used to find the optimal solution:

$$L(\lambda, P_1, P_2, \ldots, P_N) = \sum_{n=1}^{N} \log\left(1 + \frac{P_n |h_n|^2}{N_0}\right) + \lambda\left(\sum_{n=1}^{N} P_n - P_{sum}\right) \qquad (8)$$

The optimal power allocation scheme is

$$P_n = \left(\frac{1}{\lambda} - \frac{N_0}{|h_n|^2}\right)^{+} \qquad (9)$$

where (·)^+ means that the value is truncated to be non-negative [10]. We also consider an Equal Gain (EG) allocation for the channel capacity and compare it with the IWF algorithm. Simulations are presented in Sect. 5.
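The allocation of Eq. (9) can be computed with a standard water-filling routine such as the NumPy sketch below (our own illustration; the water level 1/λ is found by bisection so that the powers sum to the total budget).

```python
import numpy as np

def water_filling(h_gains, n0, p_sum, iters=100):
    """Allocate P_n = (1/lambda - N0/|h_n|^2)^+ so that sum(P_n) = p_sum."""
    inv_snr = n0 / np.abs(h_gains) ** 2            # N0 / |h_n|^2 per channel
    lo, hi = 0.0, p_sum + inv_snr.max()            # bracket for the water level 1/lambda
    for _ in range(iters):
        level = 0.5 * (lo + hi)
        p = np.maximum(level - inv_snr, 0.0)
        if p.sum() > p_sum:
            hi = level
        else:
            lo = level
    return np.maximum(0.5 * (lo + hi) - inv_snr, 0.0)

h = np.array([1.2, 0.8, 0.3, 0.05])
p = water_filling(h, n0=0.1, p_sum=1.0)
print(p, p.sum())   # strong channels get more power; total stays at the budget
```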

5 Simulations and Analysis

First, we use TE to forecast the 5G channels and obtain the Root Mean Square Error (RMSE) between the real channels and the forecasted channels. Then we compare the RMSE of two channel forecasting algorithms, TE and Box-Jenkins. Here we simulated 1000 pairs of 5G channels and take the average value. Figure 2 shows the simulation results.


Fig. 2. RMSE between real 5G channels and forecasted channels

From the figure we can see that when the SNR is low, the RMSE of the two methods shows little difference. As the SNR increases, the RMSE of TE becomes smaller than that of the Box-Jenkins method, which proves that the TE algorithm is more accurate than Box-Jenkins. Figures 3 and 4 are two sample simulations of the forecasted 5G channel energy allocation based on IWF.

Fig. 3. Forecasted 5G channel 1 energy allocation based on IWF


Fig. 4. Forecasted 5G channel 2 energy allocation based on IWF

Fig. 5. Average forecasted channel capacity comparison based on IWF and EG

In Fig. 5, the comparison of the forecasted channel capacity based on IWF and EG is presented. We can observe that at the same SNR, the channel capacity of IWF is greater than that of EG, which demonstrates the energy allocation efficiency of the IWF algorithm on the forecasted 5G channels.


6 Conclusion

In this paper, our work can be summarized in three parts. First, we tested Granger causality for two 5G channels to make sure they are Granger causes of each other. Second, based on the equivalence of Granger causality and TE under Gaussian variables, we proposed TE to forecast the 5G channel. Last, based on the forecasted 5G channels, the IWF algorithm is applied to channel energy allocation. From this work we conclude that the TE algorithm has high accuracy for forecasting 5G channel coefficients, and that IWF is an efficient method for channel energy allocation.

Acknowledgements. The 5G channel data used in this paper is provided by the New York University Wireless Communication Center open source. The work in this paper is funded in part by NSFC under Grants 61771342, 61731006, 61372097, and 61711530132.

References
1. Wang Z, Luo Z, Wei K (2014) 5G service requirements and progress on technical standards. ZTE Technol J 20(2):1–4
2. Baldemair R, Dahlman E, Parkvall S, Selen Y, Balachandran K, Irnich T, Fodor G, Tullberg H (2013) Future wireless communications. In: Vehicular technology conference (VTC Spring), 2013 IEEE 77th. IEEE, pp 1–5
3. Liu S, Wu J, Koh CH, Lau VK (2011) A 25 Gb/s(/km2) urban wireless network beyond IMT-advanced. IEEE Commun Mag 49(2):122–129
4. Shojaie A, Michailidis G (2010) Discovering graphical Granger causality using the truncating lasso penalty. Bioinformatics 26(18):i517–i523
5. Granger CW (1980) Testing for causality: a personal viewpoint. J Econ Dyn Control 2:329–352
6. Schreiber T (2000) Measuring information transfer. Phys Rev Lett 85(2):461
7. Aulogiaris G, Zografos K (2004) A maximum entropy characterization of symmetric Kotz type and Burr multivariate distributions. Test 13(1):65–83
8. Zografos K, Nadarajah S (2005) Expressions for Rényi and Shannon entropies for multivariate distributions. Stat Probab Lett 71(1):71–84
9. Mendel JM (1995) Lessons in estimation theory for signal processing, communications, and control. Pearson Education
10. Palmer RD (2008) Fundamentals of radar signal processing. Bull Am Meteor Soc 89(7):1037

A Modular Indoor Air Quality Monitoring System Based on Internet of Thing

Liang Zhao¹, Guangwen Wang¹, Liangdong Ma², and Jili Zhang²

¹ Key Laboratory of Intelligent Control and Optimization for Industrial Equipment of Ministry of Education, School of Control Science and Engineering, Dalian University of Technology, Dalian 116024, China
[email protected]
² Institute of Building Energy, Dalian University of Technology, Dalian 116024, China

Abstract. As environmental pollution becomes more and more serious, people pay increasing attention to indoor air quality, because more than 80% of their time is spent in indoor environments. This paper designs an indoor air quality monitoring system based on an Internet of Things cloud platform, which adopts a modular design idea and realizes rapid development of the system. The architecture of the perception layer, network layer and application layer of the system is given. Sensors can be connected to the communication gateway as long as they conform to the Modbus communication protocol. The WYSIWYG (What-You-See-Is-What-You-Get) method is used to develop the interface of the monitoring system. Finally, the system is tested and analyzed. The test results show that the system runs stably and accurately, and supports remote monitoring through both web and mobile versions.

Keywords: Indoor air quality · IoT · Sensors · Gateway

1 Introduction

With the rapid development of the economy and the remarkable improvement of people's living standards, the problem of air pollution has become increasingly prominent. Air pollution has become a major problem that urgently needs to be solved in China. Indoor air quality is one of the most critical factors affecting human health, because people spend more than 80% of their time indoors every day [1, 2]. A safe and comfortable living environment has become an urgent need, so an indoor air quality monitoring system is particularly important.

This work is supported by National Key Research and Development Project of China No. 2017YFC0704202, and supported by National Natural Science Foundation of China (61803067).


Tsang et al. [3] implemented an IAQ monitoring system based on a ZigBee wireless sensor network, but the trunk system was only evaluated through simulation scenarios in OPNET. Kim et al. [4] designed an indoor environment monitoring system based on the MSP430 microcontroller and ZigBee technology, in which sensor nodes communicate with a computer through a serial interface. Firdhous et al. [5] proposed an IAQM system based on the Internet of Things, but the system is limited to monitoring ozone concentration in offices near photocopiers. Chen et al. [6] described a system for monitoring carbon dioxide in indoor environments, in which users can obtain real-time information on indoor CO2 concentration through a mobile app. In this paper, we propose a modular indoor air quality monitoring system based on the Internet of Things, which integrates environmental parameter sensor nodes (temperature, humidity, CO2 and PM2.5), an advanced embedded communication gateway and an Internet of Things cloud platform server. Real-time data are stored and published on the server, and users can check indoor air quality anytime and anywhere through a mobile app or by logging into web pages.

2 IoT Structure and Sensor Selection

The proposed system structure is shown in Fig. 1. The Internet of Things system is divided into three levels.
(1) The sensing layer is responsible for data acquisition and includes the temperature sensor, humidity sensor, CO2 sensor and PM2.5 sensor. The data collector collects data from the individual sensors via the RS485 fieldbus, and the communication parameters of all sensors should be consistent.
(2) The network layer is mainly composed of the data collector, which collects the data of each sensor periodically and then packages them according to the MQTT format. The data collector transmits the information to the application layer through a GPRS module, so the data can be uploaded to the cloud platform as long as there is a cellular network signal.
(3) The application layer adopts server load balancing and cluster technology to ensure stable operation of the services under high concurrency and heavy traffic. The database uses a master-and-backup architecture; the system automatically backs up data and supports disaster recovery both locally and in a different city, maximizing the security of user data. The platform provides functions for setting upper and lower alarm limits, viewing historical data and exporting report data.
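As an illustration of how the data collector might package one acquisition cycle into an MQTT message, the sketch below uses the common paho-mqtt client (1.x-style API); the broker address, topic and payload field names are assumptions, not taken from the paper.

```python
import json
import time
import paho.mqtt.client as mqtt   # paho-mqtt 1.x style client API assumed

BROKER_HOST = "cloud.example.com"   # placeholder broker address
TOPIC = "iaq/office709/readings"    # placeholder topic

def package_readings(temperature_c, humidity_rh, pm25_ugm3, co2_ppm):
    """Bundle one acquisition cycle into a JSON payload (field names assumed)."""
    return json.dumps({
        "timestamp": int(time.time()),
        "temperature": temperature_c,
        "humidity": humidity_rh,
        "pm25": pm25_ugm3,
        "co2": co2_ppm,
    })

client = mqtt.Client()
client.connect(BROKER_HOST, 1883)
client.loop_start()
client.publish(TOPIC, package_readings(26.4, 38.2, 22, 452), qos=1)
client.loop_stop()
```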


Fig. 1. Structure of IoT system

Table 1 shows the specific parameters of the sensors selected for the system. The temperature and humidity sensor is the SHT30, which has high reliability and long-term stability; it is fully calibrated and linearized, offers a wide supply voltage range from 2.4 to 5.5 V, and communicates with the STM32 over I2C. The dust sensor is the PMS5003, a universal particle concentration sensor based on the laser scattering principle that measures particulate concentration values for PM1.0, PM2.5 and PM10; it has high precision and can be used without calibration. Finally, the CO2 sensor is the S8-0053, a non-dispersive infrared sensor; its response time is about 2 min, high accuracy is obtained without calibration, and it communicates with the STM32 over UART TTL.

Table 1. Basic information for the sensors used in this paper

Parameter     Sensor    Resolution   Repeatability   Range
Temperature   SHT30     0.015 °C     ±0.2 °C         −40–125 °C
Humidity      SHT30     0.01% RH     ±2% RH          0–100% RH
Dust          PMS5003   1 µg/m³      ±10 µg/m³       ≤1000 µg/m³
CO2           S8-0053   1 ppm        ±40 ppm         400–2000 ppm


3 Platform Software Development

The gateway configuration interface is shown in Fig. 2. It mainly handles the acquisition and configuration of the sensors attached to the sensing layer, including the time interval of data acquisition and the delay time after a communication failure. If a sensor drops off, the gateway sends the acquisition instruction three times in a row; if no data is returned, the module is considered faulty and the gateway continues with the next sensor. The configuration parameters of each sensor include the communication address, Modbus function code, communication start address, data length, etc. After the configuration is completed, the gateway collects the sensors in rotation and sends the data to the cloud platform server in real time.
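The polling and retry behaviour described above could be sketched as follows; read_sensor is a hypothetical stand-in for the actual Modbus register read, and the sensor list and timing values are illustrative.

```python
import time

SENSORS = [
    {"address": 1, "name": "SHT30-temperature"},
    {"address": 2, "name": "PMS5003-dust"},
    {"address": 3, "name": "S8-0053-co2"},
]

def read_sensor(address):
    """Stand-in for the actual Modbus register read; returns None on timeout."""
    return {1: 26.4, 2: 22, 3: 452}.get(address)

def poll_all(retries=3, retry_delay_s=0.5):
    """Poll each sensor in rotation; after `retries` failed reads, skip to the next one."""
    readings = {}
    for sensor in SENSORS:
        value = None
        for _ in range(retries):
            value = read_sensor(sensor["address"])
            if value is not None:
                break
            time.sleep(retry_delay_s)   # wait before re-sending the acquisition instruction
        if value is None:
            continue                    # sensor considered faulty for this cycle
        readings[sensor["name"]] = value
    return readings

print(poll_all())
```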

Fig. 2. Data acquisition configuration interface of gateway

After receiving the data uploaded by the gateway, the cloud platform server parses the data one by one according to the communication protocol of each instrument. The system interface is developed with the WYSIWYG method, including numerical displays, dashboards and real-time data curves, which enables users to visually view the indoor air quality in real time. In addition, the application layer configuration also includes history queries, upper and lower alarm settings, data export, user privilege classification, and so on.


Fig. 3. Web page version of indoor air quality monitoring system

Fig. 4. Mobile version of interface of indoor air quality monitoring system

The working interface of the indoor air quality monitoring system is shown in Figs. 3 and 4, which show the web page and mobile monitoring interfaces, respectively.


4 Experimental Results and Analysis

The test scenario is an 18 m² office room at Dalian University of Technology. The test period is from May 30 to June 15, 2019; the data storage period is 2 min and the total test time is 17 days. The monitoring curves during the test period are given in Fig. 5, and the statistical information of the monitoring data is given in Table 2. From Fig. 5 we can see that the temperature fluctuation presents a certain periodic characteristic: the daytime temperature gradually rises and the night temperature gradually decreases, and the fluctuation of CO2 also presents a certain periodicity. As can be seen from Table 2, the average concentration of CO2 is 437.4 ppm, which indicates that the air quality is fresh and suitable for office work.

Table 2. Statistical values of air quality monitoring

Parameter        Max    Min   Avg
Temperature/°C   31.1   25.3  28.28
Humidity/%       51.4   13    36.52
Dust/ppm         91     1     24.93
CO2/ppm          995    390   437.4

[Fig. 5 consists of four panels plotting PM2.5 (ppm), CO2 (ppm), temperature (°C) and humidity (%) against date from May 30 to Jun 17.]

Fig. 5. Historical curve of air quality monitoring


From the humidity curve, it can be seen that the humidity value increased gradually from May 31 to June 6, which is the result of continuous rainy and cloudy days. During the same period, the temperature showed a slight downward trend. Finally, the maximum monitored PM2.5 value is 91 ppm, the minimum is 1 ppm, and the average is 24.93 ppm. According to the standard, the air quality of the office environment is “excellent”.

5 Conclusion

Based on Internet of Things (IoT) cloud platform monitoring technology, a rapid indoor air quality monitoring system is built using modular configuration programming. Various types of sensor modules are connected to the system through the RS485 bus. The IoT gateway collects sensor data periodically and packages them for the cloud platform server. Users can access and monitor the indoor air quality remotely at any time and from anywhere, and can issue commands to control air quality equipment such as the ventilation system and humidifier, enabling automatic adjustment of air quality parameters. The system adopts a modular design with a short development cycle, low cost and strong scalability. It is suitable for long-term real-time collection of indoor air quality data, provides an effective solution for remote monitoring of indoor air quality, and can be widely used in hospitals, hotels, shopping malls, office buildings and other settings.

References
1. De Nazelle A et al (2013) Improving estimates of air pollution exposure through ubiquitous sensing technologies. Environ Pollut 176:92–99
2. Klepeis NE, Nelson WC, Ott WR et al (2001) The national human activity pattern survey (NHAPS): a resource for assessing exposure to environmental pollutants. J Expo Anal Environ Epidemiol 11:231–252
3. Tsang KF, Chi HR, Fu L et al (2016) Energy-saving IAQ monitoring ZigBee network using VIKOR decision making method. In: Proceedings of the 2016 IEEE international conference on industrial technology (ICIT), pp 14–17
4. Kim JY, Chu CH, Shin SM (2014) ISSAQ: an integrated sensing systems for real-time indoor air quality monitoring. IEEE Sens J 14:4230–4244
5. Firdhous M, Sudantha B, Karunaratne P (2017) IoT enabled proactive indoor air quality monitoring system for sustainable health management. In: Proceedings of the 2nd IEEE international conference on computing and communications technologies (ICCCT), pp 216–221
6. Chen RC, Guo HY, Lin MP et al (2014) The carbon dioxide concentration detection using mobile phones combine Bluetooth and QR code. In: Proceedings of the 6th IEEE international conference on awareness science and technology (iCAST), pp 29–31

Performance Analysis for Beamspace MIMO-NOMA System

Qiuyue Zhu, Wenbin Zhang, Lingzhi Liu, Bowen Zhong, and Shaochuan Wu

Communication Research Center, Harbin Institute of Technology, Harbin, China
[email protected]

Abstract. At present, research on the beamspace MIMO-NOMA system focuses mostly on the mathematical model, which is difficult to apply in practice. To solve this problem, we decompose the system into four parts: beam selection, clustering, power allocation and precoding. We then discuss the impact of beam selection and clustering on spectral efficiency (SE) and energy efficiency (EE) with a specified power allocation and precoding algorithm. The simulation results show that no single system scheme outperforms the others in SE and EE in every environment.

Keywords: Beamspace · MIMO-NOMA · Beam selection · Clustering

1 Introduction

Due to its high spectral utilization and low interference, massive MIMO for millimeter-wave plays an important role in 5G communication [1]. However, many problems remain to be solved, the two most serious of which are the high hardware complexity and power consumption resulting from a large number of radio-frequency (RF) chains [2]. Beam selection based on the beamspace is proposed to solve these two problems: the number of orthogonal transmit beams equals the number of base station (BS) transmit antennas, and each beam requires an RF chain, so the number of RF chains can be reduced by selecting only a few beams, which may lead to obvious performance loss. However, proper beams can be selected by a beam selection algorithm to achieve a tradeoff between hardware complexity and performance. To further improve the SE of the system, non-orthogonal multiple access (NOMA) was proposed, which allows different users to use the same time-frequency resources [3]. To separate the signals of these users, the transmitter and receiver adopt superposition coding and successive interference cancellation (SIC), respectively, so NOMA improves SE at the cost of computational complexity [4].


Recently, the beamspace MIMO-NOMA system was proposed [5,6], which makes it possible for the maximum number of users to no longer be limited by the number of RF chains. Considering that NOMA distinguishes users in the power domain, it is important for these users to properly select their own transmit powers. The authors in [5] mainly propose a dynamic power allocation scheme to maximize the achievable sum rate by solving the joint power optimization problem. The above-mentioned research focuses on the general mathematical model of beamspace MIMO-NOMA rather than its implementation. In this paper, we elaborate the four parts of this system: beam selection, clustering, digital precoding and power allocation. By combining different algorithms or schemes, we can obtain different system structures. Next, we take SE and EE as metrics to evaluate the performance of the various structures. The rest of this paper is organized as follows: Sect. 2 details the system model and channel model. The relevant algorithms and methods are introduced in Sect. 3. Simulation results and conclusions are presented in Sects. 4 and 5, respectively.

2 System Model

In this paper, we consider the downlink of a beamspace MIMO-NOMA system, shown in Fig. 1. We assume that channel state information is available at the transmitter. The BS is equipped with an N-element ULA, and each of the K users has a single antenna.

Fig. 1. The downlink of beamspace MIMO-NOMA system

2.1 Channel Model

For simplicity, we consider the Saleh-Valenzuela (S-V) channel model [7] for mmWave communication, which is regarded as a time-invariant channel. The channel vector of the k-th user is denoted as

$$h_k = \beta_{k,0}\, a(\theta_{k,0}) + \sum_{i=1}^{N_p} \beta_{k,i}\, a(\theta_{k,i}) \qquad (1)$$

Performance Analysis for Beamspace MIMO-NOMA System

1775

where θ_{k,i} and β_{k,i} represent the spatial angle and complex gain of the i-th propagation path for the k-th user, respectively. The first term in Eq. (1) is the LoS component, and the second term represents the NLoS components with N_p paths. The channel matrix of the K users is denoted as H. The N × 1 steering vector a(θ) is a discrete complex spatial sinusoid, defined as

$$a(\theta) = \left[e^{-j2\pi\theta i}\right]_{i \in \Gamma(N)} \qquad (2)$$

where Γ(N) = {ℓ − (N − 1)/2 : ℓ = 0, 1, …, N − 1} is a symmetric set of indices centered around zero. The spatial direction is defined as θ = 0.5 sin(φ), which is related to the physical angle φ ∈ [−π/2, π/2]. Then, we can obtain the beamspace representation of the channel matrix by a unitary transformation. The unitary matrix used is given by

$$U = \frac{1}{\sqrt{N}}\left[a(i\Delta\theta_0)\right]_{i \in \Gamma(N)} \qquad (3)$$

where the fixed interval is Δθ_0 = 1/N. The columns of U are steering vectors at N fixed spatial angles. Therefore, the beamspace representation of the S-V channel matrix is written as

$$H_b = [h_{b1}, \ldots, h_{bK}] = U^H H = [u_1, \ldots, u_N]^H [h_1, \ldots, h_K] \qquad (4)$$

where H_b is an N × K matrix, and each element of it corresponds to a projection of a column of H onto a column of U.
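The unitary beamspace transformation of Eqs. (2)–(4) can be illustrated with the short NumPy sketch below (our own example on randomly generated channels).

```python
import numpy as np

def steering_vector(theta, n):
    idx = np.arange(n) - (n - 1) / 2            # symmetric index set Gamma(N)
    return np.exp(-1j * 2 * np.pi * theta * idx)

def beamspace_channel(H):
    """Return H_b = U^H H, with U built from N steering vectors at spacing 1/N."""
    n = H.shape[0]
    angles = (np.arange(n) - (n - 1) / 2) / n   # i * delta_theta_0 with delta = 1/N
    U = np.column_stack([steering_vector(a, n) for a in angles]) / np.sqrt(n)
    return U.conj().T @ H

rng = np.random.default_rng(3)
N, K = 81, 4
H = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
Hb = beamspace_channel(H)
print(Hb.shape)   # (81, 4): one row per beam, one column per user
```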

2.2 Beamspace MIMO-NOMA System Model

Let s = [s_1, s_2, …, s_K]^T denote the data streams for the K users. These streams are grouped by a clustering method, and the dimension of the streams becomes Q × 1, where Q denotes the number of clusters. It is worth noting that NOMA is adopted within each cluster. S_q, q ∈ {1, 2, …, Q}, denotes the q-th cluster; S_i ∩ S_j = ∅ for i ≠ j means that no user belongs to two clusters, and Σ_{q=1}^{Q} |S_q| = K means that each user must belong to some cluster. Next, power is allocated to each of the Q clusters and digital precoding is implemented. Users in a cluster share a precoding vector, and an N_RF × 1 vector g_q denotes the precoding vector of the q-th cluster. After precoding, the signals pass through N_RF RF chains and are transmitted by N_RF beams. Here N_RF equals the number of beams, which depends on the beam selection algorithm. Once the RF chains are determined, the dimension of H_b in Eq. (4) is reduced to N_RF × K. The N_RF × 1 vector h_{m,q} and |h^H_{m,q} g_q| denote, respectively, the channel vector between the BS and the m-th user in the q-th cluster and the equivalent channel gain. The users in the same cluster are listed in descending order of |h^H_{m,q} g_q|. The received signal of the m-th user in the q-th cluster can be expressed as


$$y_{m,q} = \underbrace{h_{m,q}^H g_q \sqrt{p_{m,q}}\, s_{m,q}}_{\text{desired signal}} + \underbrace{h_{m,q}^H \sum_{j \neq q} g_j \sum_{i=1}^{|S_j|} \sqrt{p_{i,j}}\, s_{i,j}}_{\text{inter-cluster interferences}} + \underbrace{h_{m,q}^H g_q \sum_{i=1}^{m-1} \sqrt{p_{i,q}}\, s_{i,q} + h_{m,q}^H g_q \sum_{i=m+1}^{|S_q|} \sqrt{p_{i,q}}\, s_{i,q}}_{\text{intra-cluster interferences}} + \underbrace{n_{m,q}}_{\text{noise}} \qquad (5)$$

where s_{m,q} and p_{m,q} denote the transmitted signal and power for the m-th user in the q-th cluster, respectively, and n_{m,q} is noise following the distribution CN(0, σ²). Equation (5) includes the desired signal, inter-cluster interferences, intra-cluster interferences and noise. According to (5), we can write the SE of the system as

$$R_{sum} = \sum_{q=1}^{Q}\sum_{m=1}^{|S_q|} \log_2\left(1 + \frac{\left|h_{m,q}^H g_q\right|^2 p_{m,q}}{D}\right) \qquad (6)$$

where D is further expressed as

$$D = \left|h_{m,q}^H g_q\right|^2 \sum_{i=1}^{m-1} p_{i,q} + \left|h_{m,q}^H g_q\right|^2 \sum_{i=m+1}^{|S_q|} p_{i,q} + \sum_{j \neq q} \left|h_{m,q}^H g_j\right|^2 \sum_{i=1}^{|S_j|} p_{i,j} + \sigma^2$$

Accordingly, the EE can be written as

$$\varepsilon = \frac{R_{sum}}{P_{max} + N_{RF} \times P_{RF}} \qquad (7)$$

where P_max denotes the transmitted power of the system and P_RF denotes the power consumed in the components of each RF chain, excluding P_max.

3 Relevant Algorithms and Methods

3.1 Beam Selection Algorithm

In theory, N beams can be supported by the ULA, but using all of them leads to high computational complexity and power consumption. Fortunately, selecting only a few beams does not incur obvious performance loss owing to the sparsity of the mmWave channel. Three beam selection algorithms are mainly used: Maximum Magnitude Selection (MM-S), Maximization of SINR Selection (MS-S) and Maximization of the Capacity Selection (MC-S) [8]. MM-S selects the beams corresponding to the elements of H_b with large amplitude, while MS-S and MC-S search over the possible beam combinations to maximize the SINR or capacity, for which sub-optimal algorithms are proposed in [8]. The performance of MS-S and MC-S is related to the Channel Power Captured (CPC), written as η = tr(Ĥ_b Ĥ_b^H)/tr(H_b H_b^H), where the N_RF × K matrix Ĥ_b = [ĥ_1, …, ĥ_i, …, ĥ_K] denotes the result of applying the decremental selection algorithm to H_b. Before selecting beams, we can determine the number of beams by a fixed CPC or by the number of users.
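One simple reading of the MM-S rule, keeping the beams that carry the most channel power, is sketched below; this is our own illustration, not the selection code used in the paper.

```python
import numpy as np

def mm_s_beam_selection(Hb, n_rf):
    """Keep the n_rf beams (rows of Hb) carrying the most channel power."""
    beam_power = np.sum(np.abs(Hb) ** 2, axis=1)       # power of each beam over all users
    selected = np.sort(np.argsort(beam_power)[::-1][:n_rf])
    return selected, Hb[selected, :]

rng = np.random.default_rng(4)
Hb = rng.standard_normal((81, 8)) + 1j * rng.standard_normal((81, 8))
beams, Hb_hat = mm_s_beam_selection(Hb, n_rf=8)
cpc = np.sum(np.abs(Hb_hat) ** 2) / np.sum(np.abs(Hb) ** 2)   # channel power captured
print(beams, round(cpc, 3))
```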

3.2 Clustering Method

Users in each cluster adopt NOMA to share the same time-frequency resource. Different users are selected to form a cluster by different clustering methods. Nevertheless, exhaustive search is infeasible in an actual communication system due to its high computational complexity, so we introduce two sub-optimal clustering methods. In this paper we consider no more than two users per cluster.

Method 1: Amplitude-Clustering. After beam selection, each column of Ĥ_b corresponds to one user. First, we define the following set for each user:

$$M(k) = \left\{(i^*, i^{**}) \;\middle|\; i^* = \arg\max_{i \in \{1,\ldots,N_{RF}\}} |h_{i,k}|^2,\;\; i^{**} = \arg\max_{i \neq i^*,\, i \in \{1,\ldots,N_{RF}\}} |h_{i,k}|^2\right\} \qquad (8)$$



we defined  Next,

ˆ   ˆ  hi − hj . We set a threshold ρ ∈ [0, 1]. When Corr (i, j) > ρ, we will 2

2

calculate d (i, j). Two users corresponding to the largest d (i, j) will be chosen into a cluster. Similarly, remaining users form a cluster respectively. 3.3
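An illustrative greedy implementation of correlation-clustering under the threshold ρ is sketched below; it is a simplified pairing of the rule above, not the authors' implementation.

```python
import numpy as np

def correlation_clustering(Hb_hat, rho=0.8):
    """Pair users whose beamspace channels are highly correlated (Corr > rho)."""
    K = Hb_hat.shape[1]
    norms = np.linalg.norm(Hb_hat, axis=0)
    unpaired = set(range(K))
    clusters = []
    for i in range(K):
        if i not in unpaired:
            continue
        best_j, best_gap = None, -1.0
        for j in range(i + 1, K):
            if j not in unpaired:
                continue
            corr = abs(Hb_hat[:, i].conj() @ Hb_hat[:, j]) / (norms[i] * norms[j])
            gap = abs(norms[i] - norms[j])          # gain difference d(i, j)
            if corr > rho and gap > best_gap:
                best_j, best_gap = j, gap
        if best_j is None:
            clusters.append([i])                    # remaining user forms a cluster alone
        else:
            clusters.append([i, best_j])
            unpaired.discard(best_j)
        unpaired.discard(i)
    return clusters

rng = np.random.default_rng(5)
Hb_hat = rng.standard_normal((8, 6)) + 1j * rng.standard_normal((8, 6))
print(correlation_clustering(Hb_hat))
```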

3.3 Power Allocation of Intra-cluster and Inter-cluster

The joint intra-cluster and inter-cluster power allocation can find the globally optimal SE at the cost of high computational complexity. To achieve a tradeoff between performance and complexity, we adopt equal power allocation between clusters and a suboptimal fractional transmit power control (FTPC) [10] within each cluster. The power allocation of the two users in the i-th cluster is

$$p_i(1) = \frac{2P/K \cdot \left(g_i(1)/n_i(1)\right)^{-\alpha_{ftpa}}}{\sum_{j \in |S_i|} \left(g_i(j)/n_i(j)\right)^{-\alpha_{ftpa}}} \;(\text{user 1}), \qquad p_i(2) = \frac{2P/K \cdot \left(g_i(2)/n_i(2)\right)^{-\alpha_{ftpa}}}{\sum_{j \in |S_i|} \left(g_i(j)/n_i(j)\right)^{-\alpha_{ftpa}}} \;(\text{user 2}) \qquad (9)$$

where P stands for the total transmitted signal power, n_i(j) denotes the interference plus noise suffered by the j-th user in the i-th cluster, and g_i(j) = |h^H_{j,i} g_{j,i}|² and α_ftpa denote the squared equivalent channel gain and the power distribution factor, respectively. Thus the difference between g_i(1)/n_i(1) and g_i(2)/n_i(2) satisfies the requirements of both SIC at the receiver and the transmit powers of the two users at the transmitter.
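For a two-user cluster, the FTPC rule of Eq. (9) reduces to the short sketch below (our own illustration; the gains and noise terms are example numbers).

```python
import numpy as np

def ftpc_two_user(g, n, p_total, n_users_total, alpha=0.2):
    """Fractional transmit power control for one two-user cluster, Eq. (9).

    g, n: equivalent channel gains and interference-plus-noise of the two users.
    The cluster budget 2P/K is split in proportion to (g/n)^(-alpha), so the
    weaker user receives the larger share.
    """
    g, n = np.asarray(g, float), np.asarray(n, float)
    weights = (g / n) ** (-alpha)
    budget = 2.0 * p_total / n_users_total
    return budget * weights / weights.sum()

print(ftpc_two_user(g=[4.0, 0.5], n=[1.0, 1.0], p_total=1.0, n_users_total=40))
```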

3.4 Precoding Matrix

To remove inter-cluster interferences, we adopt zero-forcing (ZF) precoding based on singular value decomposition (SVD) [5], assuming that channel state information is available at the transmitter. After beam selection and clustering, the precoding matrix can be expressed as G = αF = [g_1, …, g_q, …, g_Q], where α = sqrt(ρ / tr(F Λ_s F^H)) is the normalization factor chosen so that tr(E{(Gs)(Gs)^H}) = ρ, Λ_s = E{s s^H} denotes the diagonal matrix of signal powers, ρ is the signal power, and F = Ĥ_b (Ĥ_b^H Ĥ_b)^{−1} denotes the result of ZF precoding.
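The ZF precoder and its power normalization could be computed as in the following sketch, assuming unit-power streams for Λ_s; it is an illustration, not the code used in the paper.

```python
import numpy as np

def zf_precoder(Hb_hat, rho=1.0):
    """ZF precoding F = Hb_hat (Hb_hat^H Hb_hat)^(-1), scaled to signal power rho."""
    F = Hb_hat @ np.linalg.inv(Hb_hat.conj().T @ Hb_hat)
    lambda_s = np.eye(F.shape[1])                       # assume unit-power streams
    alpha = np.sqrt(rho / np.trace(F @ lambda_s @ F.conj().T).real)
    return alpha * F

rng = np.random.default_rng(6)
Hb_hat = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
G = zf_precoder(Hb_hat)
print(np.round(np.abs(Hb_hat.conj().T @ G), 3))         # near-diagonal: interference removed
```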

4 Simulation Results

In this section, we take the algorithms in Sects. 3.3 and 3.4 as the power allocation and precoding schemes, respectively, and analyze the effect of beam selection and clustering on the SE and EE of the system by simulation. The main parameters used in the simulations are listed in Table 1.

Table 1. Simulation parameters

Parameter                                   Value
Number of antennas N                        81
Number of NLoS components Np                2
Rician factor of Rayleigh fading KRice      5
Power of one RF chain PRF                   34.4 mW [8]
Transmitted power of the system Pmax        31.6 mW (15 dBm)
Power distribution factor αftpa             0.2

4.1 System with Various Beam Selection Algorithms and Amplitude-Clustering

In this part, we discuss the impact of various beam selection algorithms on the SE and EE of the system with amplitude-clustering. Figure 2 shows the SE against the number of users K, where the SNR is 15 dB. We observe that MC-S and MS-S with η = 95% achieve higher SE than the other algorithms. Furthermore, MM-S outperforms MS-S and MC-S with N = K, since this performance mainly depends on the number of beams at the transmitter.


Fig. 2. SE against the number of users K, where SNR = 15 dB

Fig. 3. EE against the number of users K, where SNR = 15 dB

The EE against the number of users is shown in Fig. 3, where the SNR is also set to 15 dB. The curves of MS-S, MC-S with N = K and MM-S increase with the number of users K; in contrast, those of MS-S and MC-S with η = 95% decrease with it. Next, fixing K = 40, we discuss the impact of the SNR on the SE of the system with various beam selection algorithms. Figure 4 shows that MS-S and MC-S with η = 95% achieve higher SE than the other algorithms.

4.2 System with Various Beam Selection Algorithms and Correlation-Clustering

In this subsection, we explore the effects on SE of the correlation-clustering and amplitude-clustering methods elaborated in Sect. 3.2, while the other parts of the system and the simulation parameters remain unchanged.


First, we determine by simulation the correlation threshold ρ corresponding to the highest SE; the result is ρ = 0.8. We consider K = 40 and MS-S with N = K. Figure 5 shows the SE against the number of users for the two clustering methods. Compared with amplitude-clustering, correlation-clustering achieves higher SE.

Fig. 4. SE against SNR, where K = 40

Fig. 5. SE against the number of users of MS-S, where K = 40

In summary, the above simulation results confirm that the beam selection algorithm and the clustering method play an important role in the power consumption of the transmitter. No system scheme achieves the highest SE and EE in every communication scenario, so a proper scheme should be selected to meet the requirements in practice.

5 Conclusions

In this paper, we elaborate the four parts of the beamspace MIMO-NOMA system, which consist of beam selection, clustering, power allocation and precoding. We select ZF precoding based on SVD, equal power allocation between clusters and FTPC within clusters, and discuss the impact of beam selection and clustering on the SE and EE of the system. Simulation results show that no system scheme outperforms the others in SE and EE in every environment. In the future, we will focus on finding the nearly optimal scheme for any specified communication scenario.

Acknowledgements. The work presented in this paper was supported by National Natural Science Foundation of China under Grant No. 61671173.

References
1. Rappaport TS et al (2013) Millimeter wave mobile communications for 5G cellular: it will work!. IEEE Access 1:335–349
2. Rangan S, Rappaport TS, Erkip E (2014) Millimeter-wave cellular wireless networks: potentials and challenges. Proc IEEE 102(3):366–385
3. Amadori P, Masouros C (2015) Low RF-complexity millimeter-wave beamspace-MIMO systems by beam selection. IEEE Trans Commun 2212–2222
4. Ding Z, Peng M et al (2015) Cooperative non-orthogonal multiple access in 5G systems. IEEE Commun Lett 19(8):1462–1465
5. Wang B, Dai L, Wang Z, Ge N, Zhou S (2017) Spectrum and energy-efficient beamspace MIMO-NOMA for millimeter-wave communications using lens antenna array. IEEE J 35(10):2370–2382
6. Ali S, Hossian E, In Kim D (2017) Non-orthogonal multiple access (NOMA) for downlink multiuser MIMO systems: user clustering, beamforming, and power allocation. IEEE Access 5(42):565–577
7. Saleh AAM, Valenzuela R (1987) A statistical model for indoor multipath propagation. IEEE J Select Areas Commun
8. Amadori PV, Masouros C (2015) Low RF-complexity millimeter-wave beamspace-MIMO systems by beam selection. IEEE Trans Commun 63(6):2212–2223
9. Kimy B, Lim S et al (2013) Non-orthogonal multiple access in a downlink multiuser beamforming system. In: 2013 IEEE military communications conference, pp 1278–1283
10. Saito Y, Benjebbour A, Kishiyama Y, Nakamura T (2013) System-level performance evaluation of downlink non-orthogonal multiple access (NOMA). In: IEEE international symposium on personal, indoor, mobile radio communications, pp 611–615

A Novel Low-Complexity Joint Range-Azimuth Estimator for Short-Range FMCW Radar System

Yong Wang, Yanchun Li, Xiaolong Yang, Mu Zhou, and Zengshan Tian

School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
[email protected]

Abstract. Aiming at the problems of the high complexity of conventional joint parameter estimation algorithms and the heavy clutter interference in measured data, this paper proposes a novel joint range and azimuth estimator using multi-chirp coherent accumulation (MCA) for short-range frequency modulated continuous wave (FMCW) radar. Specifically, we combine the fast Fourier transform (FFT) and multiple signal classification (MUSIC) with MCA to estimate range and angle. We verify the performance of the proposed algorithm on measured data, and the results show that our algorithm greatly improves the estimation accuracy of measured data while reducing the complexity.

Keywords: Short-range FMCW radar · Joint range and azimuth estimation · Multi-chirp coherent accumulation (MCA) · FFT-MUSIC

1 Introduction

With the development of gesture recognition and vital sign detection [1, 2], short-range frequency modulated continuous wave (FMCW) radar [3] is widely used because of its characteristics such as contactless operation, insensitivity to light and high resolution. Therefore, the accuracy and real-time performance of target parameter estimation are key points in the research of short-range FMCW radar. The parameter information of targets from radar is meaningful when it includes accurate range and azimuth angle, and a two-dimensional parameter estimator is usually used to improve the accuracy. In this case, the conventional 2D Discrete Fourier Transform (DFT) suffers from spectrum leakage and the fence effect, resulting in low parameter estimation accuracy. To solve this problem, 2D-ESPRIT [4] and 2D multiple signal classification (MUSIC) [5] have been adopted to estimate the parameters of multiple targets. However, they require covariance matrix construction and eigenvalue decomposition, which lead to high complexity. To reduce the algorithm complexity, in Ref. [6], the authors propose



to extract the noise subspace directly from the correlation of the received signals without singular value decomposition (SVD) or eigenvalue decomposition (EVD). Unfortunately, the computation of the two-dimensional peak search is still huge. Hence, an algorithm using DFT for time-of-arrival (TOA) and ESPRIT for direction-of-arrival (DOA) [7, 8] was proposed, but due to clutter interference in the measured data, its estimation accuracy is not high. In this paper, we propose a novel low-complexity algorithm that combines the fast Fourier transform (FFT) and multiple signal classification (MUSIC) using multi-chirp coherent accumulation (MCA). In particular, we use the FFT with MCA to obtain the range spectrum, which suppresses clutter and reduces noise, and the angle is then estimated by MUSIC. The algorithm is verified on measured data.

2 FMCW Radar Signal Model

We assume that there are K targets. The sampled intermediate frequency (IF) signal is expressed as

$$s_{cf}[l] = A\, e^{j\frac{2\pi q d \sin\theta_k}{\lambda}}\, e^{j2\pi\left(\frac{2\xi R_k}{c}\cdot\frac{l}{f_s} + \frac{2 R_k f_0}{c}\right)} + w(l), \qquad l = 0, \ldots, L-1 \qquad (2.1)$$

where A is the amplitude of the IF signal, f_0 is the initial operating frequency, ξ = B/T_c is the slope of the linear FMCW waveform, T_c is the chirp duration, R_k is the range of the k-th target, d is the antenna spacing, θ_k is the angle of the k-th target, w(l) is Gaussian white noise with variance σ² and constant mean, q = 0, 1, …, Q − 1 is the antenna index, and L is the number of samples. According to Eq. (2.1), the relationship between the frequency offset and the target range is f_b = 2Rξ/c, and the steering vector can be built as

$$A = [a_1(t), a_2(t), \ldots, a_Q(t)]^T \qquad (2.2)$$

3 Proposed Algorithm

In this paper, we use the FFT with MCA to obtain the range spectrum, and the angle is then estimated by MUSIC. The block diagram of the proposed algorithm is shown in Fig. 1.

Fig. 1. Block diagram of the proposed algorithm: the sampled signal of each chirp from receivers RX_1 to RX_Q is processed by FFT to obtain the range spectrum, which is then fed to MUSIC

First, we apply the FFT to each chirp. For the time-domain signal of the i-th chirp, the range spectrum vector calculated by the FFT is

$$p_i = \sum_{n=1}^{N_s} p(i, n) \qquad (3.1)$$

From Eq. (3.1) we obtain the range spectrum matrix P_r = [p_1, p_2, …, p_{N_c}]^T over all chirps. Then, the dot product of the range spectrum matrix, P_dot = P_r · P_r, is calculated and the peak information of each receiving antenna is extracted. The snapshot vector at time t_{N_s}, composed of the range-spectrum peaks of the Q receiver elements, can be written as

$$S(t_{N_s}) = [P_{r1}(t_{N_s}), P_{r2}(t_{N_s}), \ldots, P_{rQ}(t_{N_s})]^T \qquad (3.2)$$

Finally, the covariance matrix of Eq. (3.2) is calculated and its EVD is performed, which splits the signal into the noise subspace U_n and the signal subspace U_s. Then, according to the orthogonality relationship between the noise matrix E_n = U_n U_n^H and the steering vector a(θ), the spatial spectral function can be constructed as

$$P_{music} = \frac{1}{a(\theta)\, E_n\, a^H(\theta)} \qquad (3.3)$$

The angle estimation is then realized by a spectral peak search.
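A compact NumPy sketch of the azimuth step in Eq. (3.3) is given below; it builds the MUSIC pseudo-spectrum from synthetic Q-element snapshots of a half-wavelength ULA, so the geometry and signal values are assumptions for illustration only.

```python
import numpy as np

def music_spectrum(snapshots, n_sources=1, grid=np.linspace(-90, 90, 361)):
    """MUSIC pseudo-spectrum from Q-element snapshots (Eq. (3.3))."""
    Q = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]      # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, : Q - n_sources]                             # noise subspace
    spectrum = []
    for theta in grid:
        a = np.exp(-1j * np.pi * np.arange(Q) * np.sin(np.deg2rad(theta)))
        spectrum.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
    return grid, np.array(spectrum)

# synthetic example: one target at 20 degrees seen by Q = 4 receivers, 64 snapshots
rng = np.random.default_rng(7)
Q, L = 4, 64
a_true = np.exp(-1j * np.pi * np.arange(Q) * np.sin(np.deg2rad(20.0)))
signal = rng.standard_normal(L) + 1j * rng.standard_normal(L)
noise = 0.1 * (rng.standard_normal((Q, L)) + 1j * rng.standard_normal((Q, L)))
snapshots = np.outer(a_true, signal) + noise
grid, spec = music_spectrum(snapshots)
print(grid[np.argmax(spec)])   # estimated azimuth, close to 20 degrees
```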


4 Experimental Results and Analysis

In this paper, the measured data are collected using the AWR1642 radar chip, designed by Texas Instruments (TI), with a working frequency of 76–81 GHz. The radar parameters [2] are set to 128 chirps and 64 sampling points, and a sampling rate of 2 MHz works best. The antenna has 2 transmitters and 4 receivers. For the experiments, a single corner reflector target is placed separately at X1 = [25 cm, 0°] and X2 = [20 cm, 20°], and the results of the joint range-angle estimation using different algorithms are shown in Figs. 2 and 3.

Fig. 2. Range-angle results at X1: (a) 2D-MUSIC; (b) ours

Fig. 3. Range-angle results at X2: (a) 2D-MUSIC; (b) ours

Figures 2 and 3 show that, compared with the other algorithm, the proposed algorithm can effectively suppress clutter and reduce noise; the real target is marked by the box. The algorithm performance comparison is shown in Table 1.

Table 1. Comparison of algorithm performance

            Range RMSE (cm)   Angle RMSE (°)   CPU time (s)
2D-MUSIC    2.0               1.3              131.89
Proposed    3.2               0.9              3.02

Table 1 shows that although the range RMSE of the proposed algorithm is 1.2 cm larger than that of 2D-MUSIC, its angle RMSE is 0.4° lower. More importantly, the CPU time of the proposed algorithm is more than 40 times shorter than that of 2D-MUSIC.

5 Conclusion

In this paper, we have proposed a low-complexity joint range and azimuth estimator using MCA for short-range FMCW radar. The proposed method uses an FFT over multiple chirps to obtain the range spectrum matrix and MUSIC for the azimuth. The experimental results show that the proposed method greatly improves the estimation accuracy on measured data while reducing the complexity.

References
1. Ahmad A, Roh JC, Wang D et al (2018) Vital signs monitoring of multiple people using a FMCW millimeter-wave sensor. In: 2018 IEEE radar conference (RadarConf18). IEEE, pp 1450–1455
2. Wang Y, Wang S, Zhou M et al (2019) TS-I3D based hand gesture recognition method with radar sensor. IEEE Access 7:22902–22913
3. Hyun E, Oh W, Lee JH (2011) Design and implementation of automotive 77 GHz FMCW radar system based on DSP and FPGA. In: IEEE international conference on consumer electronics. IEEE, pp 517–518
4. Zoltowski DM, Wong TK (2000) ESPRIT-based 2-D direction finding with a sparse uniform array of electromagnetic vector sensors. IEEE Trans Signal Process 48(8):2195–2204
5. Belfiori F, van Rossum W, Hoogeboom P (2013) 2D-MUSIC technique applied to a coherent FMCW MIMO radar. In: The IET international conference on radar systems. IET, pp 1–6
6. Oh D, Lee J (2015) Low-complexity range-azimuth FMCW radar sensor using joint angle and delay estimation without SVD and EVD. IEEE Sens J 15(9):4799–4811
7. Kim S, Oh D, Lee J (2015) Joint DFT-ESPRIT estimation for TOA and DOA in vehicle FMCW radars. IEEE Antennas Wirel Propag Lett 14:1710–1713
8. Bongseok K, Sangdong K, Jonghun L (2018) A novel DFT-based DOA estimation by a virtual array extension using simple multiplications for FMCW radar. Sensors 18(5):1560–1577

Comparative Simulation for Nonlinear Effect of Hybrid Optical Fiber-Links in High-Speed WDM Systems

Zhan-Heng Dai1, Wei-Feng Chen2, Li-Min Li3, Ruo-Fei Ma1(B), Bo Li1, and Gongliang Liu1

1 Department of Communication Engineering, Harbin Institute of Technology, Weihai 264209, China
[email protected]
2 TOEC Technology Co., Ltd., Tianjin 300221, China
3 China Academy of Electronics and Information Technology, Beijing 100041, China

Abstract. With the continuous development of high-speed wavelength division multiplexing (WDM) systems, the transmission speed of light signals in optical fiber channels has dramatically increased, which has made the fiber nonlinear effect more and more prominent. Since previous methods to resist this influence rely to some extent on extra devices, optimizing the design of connection relationships in hybrid optical fiber networks might be a promising way to further improve the resistance to the fiber nonlinear effect. To prove this, a simple WDM system including only one transmission path formed by two different types of optical fiber cables, i.e., G.652 and G.655, is established, based on which comparative simulations are performed to analyze the end-to-end nonlinear effect when the two types of optical fiber cables are connected in different orders. Simulation results indicate that different connection orders of G.652 and G.655 cables correspond to different optical signal to noise ratios (OSNR) at the receiver side, demonstrating the difference in nonlinear effect accumulation in the end-to-end transmission and the importance of fiber link design in WDM networks.

Keywords: WDM system · Optical fiber · Nonlinear effect · OSNR

1 Introduction

High-speed wavelength division multiplexing (WDM) systems, as an indispensable part of the communication field, have developed considerably. In particular, the single-wavelength transmission rate of the WDM system has been increased from 10 to 40 to 100 Gb/s, which expands the capacity of the WDM system. On the other hand, the fiber nonlinear effect of the fiber link has also become more and more significant in the WDM system [1].


At present, the major methods for resisting or suppressing the fiber nonlinear effect can be summarized as follows: first, using optical signal phase modulation devices to inhibit the fiber nonlinear effect [2]; second, reasonably controlling the incident light power to restrain the fiber nonlinear effect [3]; third, suppressing the fiber nonlinear effect by destroying the formation conditions of the four-wave mixing effect through the polarization characteristics of light waves [4]. Obviously, these methods rely heavily on additional devices. Nevertheless, when operators deploy their backbone communication networks, an end-to-end optical fiber transmission path usually consists of different types of optical fiber cables, and different optical fiber paths might correspond to different end-to-end fiber nonlinear effects. Hence, how to reduce the influence of the nonlinear effect of a fiber transmission path by carefully designing the deployment of each fiber cable should also be investigated. This is the motivation of this paper. We want to show, via simulations, the influence of the connection relationship between two different types of fiber cables on the fiber nonlinear effect of the entire optical transmission path, which can be used as a reference for the deployment of backbone optical fiber networks. In this paper, hybrid-optical-fiber-group networks are discussed by comparing and analyzing fiber-link schemes in the WDM system. Due to the physical characteristics of the fibers, including the mode field diameter, the optical signal induces different fiber nonlinear effects in the fiber links, which can be exploited to optimize the optical signal to noise ratio (OSNR) in the WDM system. To provide a quantitative analysis, a simple optical fiber communication network consisting of two types of optical fiber cables, i.e., G.652 and G.655, is established to simulate and analyze the influence of the nonlinear effect on the end-to-end transmission in terms of the receive OSNR. The simulation results validate the different receive OSNR performances for the two different connection orders, indicating the importance of optimizing the connections of optical fiber cables with different parameters. The remainder of this paper is organized as follows. Section 2 presents the system model for the hybrid optical fiber group network. Problem analysis and description are given in Sect. 3. Section 4 provides the simulation results and corresponding analysis, followed by conclusions in Sect. 5.

2 System Model

The relationship between the fiber nonlinear effect coefficient η and the major parameters of an optical fiber cable can be approximately expressed as [5]

$$\eta \propto \frac{2\pi \cdot n_2 \cdot L_{\mathrm{eff}} \cdot P_k}{\lambda \cdot A_{\mathrm{eff}}}, \quad (1)$$

where $n_2$ is the nonlinear refractive index determined by the physical properties of the fiber material, $L_{\mathrm{eff}}$ is the effective length of the optical fiber link, $P_k$ is the peak power of the optical signal, $\lambda$ is the wavelength of the light, and $A_{\mathrm{eff}}$ is the effective mode field cross-sectional area of the light.
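As a quick worked example of what Eq. (1) implies, the relative nonlinear coefficient of two links with identical $n_2$, $L_{\mathrm{eff}}$, $P_k$ and $\lambda$ reduces to the inverse ratio of their effective areas. The sketch below compares G.652 (80 µm², Table 1) with one illustrative G.655 case (60 µm², one of the values simulated later); the numbers and variable names are only an illustration.

```python
# Relative fiber nonlinear-effect coefficient from Eq. (1):
# eta ∝ (2*pi*n2*Leff*Pk) / (lambda*Aeff), so with everything else equal
# eta(G.655)/eta(G.652) = Aeff(G.652)/Aeff(G.655).
A_eff_g652 = 80.0   # um^2, Table 1
A_eff_g655 = 60.0   # um^2, one of the 20-60 um^2 cases assumed for G.655

eta_ratio = A_eff_g652 / A_eff_g655
print(f"eta(G.655)/eta(G.652) = {eta_ratio:.2f}")   # about 1.33: stronger nonlinearity in G.655
```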


As indicated by (1), the nonlinear effect of an optical fiber link is inversely proportional to the effective mode field cross-sectional area $A_{\mathrm{eff}}$ and is proportional to the peak power of the optical signal $P_k$ and the length of the fiber link $L_{\mathrm{eff}}$; i.e., the nonlinearity is not obvious when the optical signal power is low [6]. Since different optical fibers correspond to different parameters, e.g., dispersion coefficients and effective mode field diameters, the fiber nonlinear effect may differ when the connection relationships of different types of optical fiber cables are changed. To verify the influence of connection relationships between different optical fiber cables on the end-to-end nonlinear effect of a WDM network, we establish a simplified system model, which consists of a transmitter device, two types of optical fiber links, an optical amplifier (OA) with a dispersion compensation system, and a receiver device, as shown in Fig. 1. The two types of optical fiber links are connected via the OA, and a 40 Gb/s optical signal is transmitted from the transmitter node to the receiver node by passing through the first type of optical fiber cable, the OA, and the second type of optical fiber cable sequentially.

Fig. 1. Schematic diagram for a simplified optical fiber communication system (Transmitter → first type of optical fiber cable → OA → second type of optical fiber cable → Receiver)

3 Problem Description and Analysis

In this paper, two types of optical fibers widely used in engineering practice are selected for the research purpose mentioned above. From (1), we can easily conclude that when G.652 optical fiber is applied to both of the two links in the system shown in Fig. 1, the end-to-end resistance to the nonlinear effect might be better than when G.655 optical fiber is applied to both links in the same system, because G.652 optical fiber has overall better characteristics for reducing the influence of the nonlinear effect than G.655 optical fiber, as shown in Table 1. However, the end-to-end nonlinear effect of a hybrid system cannot be easily figured out. Preliminary deduction suggests that the end-to-end nonlinear effect of the hybrid fiber link in a WDM system should differ when the connection relationships are changed. When the system is built and the major parameters are fixed, the magnitude of the nonlinear effect on an optical


fiber link can be reflected by the noise power level at the receiver side; thus, the receive OSNR can be used as an indicator of the nonlinear effect level of an optical fiber link. Since the OSNR is selected as the indicator of the nonlinear effect, the receive OSNR under two connection relationships between G.652 and G.655 optical fibers, when both are applied in the system shown in Fig. 1, will be simulated and compared. The two connection relationships, called Scheme A and Scheme B, are shown in Figs. 2 and 3, respectively. Before performing simulations, we can make a predictive analysis of the two connection schemes. Specifically, Scheme B is expected to yield a better OSNR performance at the receiver side than Scheme A, because G.655 optical fiber might introduce more noise caused by the nonlinear effect due to its overall characteristics, and applying it as the first-hop transmission link might lead to more noise accumulation in the second-hop transmission link. Such a prediction will be verified via simulation in the next section.

Fig. 2. System model corresponding to connection Scheme A (Transmitter → G.655 → OA → G.652 → Receiver)

Fig. 3. System model corresponding to connection Scheme B (Transmitter → G.652 → OA → G.655 → Receiver)

4 Simulation and Analysis

To validate the theoretical prediction, the two hybrid-link optical transmission models shown in Figs. 2 and 3 are established in a 16-channel WDM system with OptiSystem (version 13). As commonly deployed in operators' networks, both the G.652 and G.655 optical fiber cables are set at a length of 25 km.


Table 1. Major parameters used in performance evaluation.

Parameters                                               G.652   G.655
Dispersion (ps/km/nm)                                    17      5
Dispersion slope (ps/km/nm²)                             0.075   −0.3
Effective mode field diameter (µm)                       10.5    4–8
Light effective mode field cross-sectional area (µm²)    80      20–60

The data rate of the WDM system is set at 40 Gb/s and the non-return-to-zero (NRZ) coding technique is adopted. Furthermore, the incident light power at the transmitter side is set to 1 dBm, the noise figure of the OA is 6 dB, and the attenuation of each optical fiber link is 0.2 dB/km. Other major parameters used in the simulation are listed in Table 1 [8]. In Table 1, the light effective mode field cross-sectional area of the G.655 fiber cable is selected sequentially from 20 to 60 µm² with a fixed interval, giving seven different cross-sectional area values with a start value of 20 µm² and an end value of 60 µm². These cross-sectional area values correspond to Case_A_1, Case_A_2, …, Case_A_7 in Fig. 4 when connection Scheme A is used, and to Case_B_1, Case_B_2, …, Case_B_7 in Fig. 5 when connection Scheme B is used.
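Assuming the "fixed interval" means uniform spacing between 20 and 60 µm², the seven G.655 effective-area cases would be the following (this spacing is an assumption; the paper does not state the exact values):

```python
import numpy as np

# Seven uniformly spaced effective-area cases (Case_*_1 ... Case_*_7), in um^2.
aeff_cases = np.linspace(20, 60, 7)
print(aeff_cases)   # approximately [20, 26.7, 33.3, 40, 46.7, 53.3, 60]
```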

Fig. 4. Receive OSNR of connection Scheme A (OSNR in dB versus optical frequency for Case_A_1 through Case_A_7)

Based on the parameters introduced above, the receive OSNR performance of the entire transmission path under the two different connection relationships, i.e., connection Scheme A and connection Scheme B, can be simulated. Figures 4 and 5 show the receive OSNR performance under the two connection schemes while the effective cross-sectional area and the optical signal frequency are varied.

Fig. 5. Receive OSNR of connection Scheme B (OSNR in dB versus optical frequency for Case_B_1 through Case_B_7)

Fig. 6. Receive OSNR difference between connection Schemes A and B (OSNR of Scheme B minus the corresponding OSNR of Scheme A, i.e., Case_B_i − Case_A_i for i = 1, …, 7, versus optical frequency)

From Figs. 4 and 5, it is found that the fiber nonlinear effects have different influences on the receive OSNR performance when different effective mode field diameters are adopted under the two connection schemes. In most cases, at the same light signal frequency, the maximum OSNR difference over the different cross-sectional areas for connection Scheme B is smaller than that for connection Scheme A. This indicates that connection Scheme A might be more sensitive to the cross-sectional area parameter than connection Scheme B. Comparing connection Schemes A and B, it is found that in most cases the OSNR value of connection Scheme B is higher than that of Scheme A at the same frequency and cross-sectional area. The comparison result is depicted in Fig. 6 in terms of the OSNR difference, which is defined as the OSNR value of Scheme B minus the corresponding OSNR value of Scheme A. In Fig. 6, Case_B_1−Case_A_1, Case_B_2−Case_A_2, …, and Case_B_7−Case_A_7 represent the OSNR differences corresponding to the seven selected cross-sectional area values of G.655, respectively. This comparison validates that connection Scheme B is superior to connection Scheme A in its capability of resisting the nonlinear effect. When fixing the cross-sectional area of the G.655 fiber cable at 60 µm², i.e., Aeff,G655 = 60 µm², the power spectral densities of the received light signals at the receiver side under connection Scheme A and connection Scheme B are illustrated in Fig. 7 (the left subfigure corresponds to Scheme A and the right subfigure to Scheme B). The noise power under connection Scheme B is about 2.4 dB lower than that under connection Scheme A, which proves that connection Scheme B performs better in resisting fiber nonlinear noise in the WDM system.

Fig. 7. Spectra of the received signals at the receiver node under connection Schemes A and B

5 Conclusion

In this paper, the fiber nonlinear effect in hybrid optical fiber links of a high-speed WDM system was investigated. We established a simplified optical transmission system including two fiber links connected by an OA. Different types of optical fibers, G.652 and G.655, were applied to form the two links in order to study the influence of different connection relationships on the end-to-end nonlinear effect of the entire optical transmission path. Two connection schemes were formed and compared, with the receive OSNR selected as the performance indicator. Simulation results showed that changing the connection order of two different types of optical fibers can influence the end-to-end nonlinear effect of the entire optical link, and that a well-designed connection relationship in a hybrid optical communication network might be able to reduce the noise caused by the nonlinear effect.


Acknowledgements. This work was supported partially by Shandong Provincial Natural Science Foundation, China (Grant No. ZR2019QF003), the Joint-Construction Project for Universities at Weihai City, China, and the Fundamental Research Funds for the Central Universities, China (Grant No. HIT.NSRIF.2019081).

References

1. Lee J, Kaneda N, Chen YK (2017) Nonlinear equalizer for 112-Gb/s SSB-PAM4 in 80-km dispersion uncompensated link. In: Optical fiber communications conference and exhibition. IEEE
2. Igarashi K, Kikuchi K (2008) Optical signal processing by phase modulation and subsequent spectral filtering aiming at applications to ultrafast optical communication systems. IEEE J Select Top Quantum Electron 14(3):551–565
3. Hansen PB, Nielsen TN, Stentz AJ (2001) Method of optical signal transmission with reduced degradation by non-linear effects. United States Patent 6323993
4. Inoue K (2002) Arrangement of orthogonal polarized signals for suppressing fiber four-wave mixing in optical multichannel transmission systems. IEEE Photonics Technol Lett 3(6):560–563
5. Agrawal GP (2013) Nonlinear fiber optics, 5th edn. Academic Press
6. Asobe M, Kanamori T, Kubodera K (1993) Applications of highly nonlinear chalcogenide glass fibers in ultrafast all-optical switches. IEEE J Quantum Electron 29(8):2325–2333
7. ITU-T: Series G: Transmission system and media, digital system and network—Transmission media characteristics—Characteristics of optical components and subsystems—Optical interfaces for multichannel systems with optical amplifiers (Recommendation G.692). ITU Standard
8. Alvarez L, Sauceda A, Xiao M (2003) Optical transmission of a subwavelength aperture: size and fiber parameter dependence of near-field resolution. Opt Commun 219:9–14

POI Recommendation Based on Heterogeneous Network

Yan Wen1(B), Jiansong Zhang1, Geng Chen2, Xin Chen2, and Ming Chen3

1 College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
[email protected]
2 College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao 266590, China
3 State Grid Shandong Electric Power Company, Qingdao Power Supply Company, Qingdao 266500, China

Abstract. With the development of wireless networks and positioning technologies, location-based social networks (LBSN) have gained popularity. More and more people share experiences about points of interest (POI) through "check-in" behavior. Mining the check-in data can help people discover the POI they are interested in. However, the data sparsity of user check-in records and the cold-start problem for users and POI pose serious challenges. In addition, POI recommendation needs to consider the impact of multiple factors. In order to solve the above problems, we propose a POI recommendation method based on heterogeneous network representation learning, called HRPR. First, we propose to use a meta-path based weighted random walk method to generate node sequences and learn the representation vectors of users and POI by means of the skip-gram model. Then, we design a POI recommendation framework based on a deep neural network. The experimental results on the real-world Yelp dataset demonstrate the effectiveness of our framework.

Keywords: Points of interest · Heterogeneous network · Recommendations · Neural network

1 Introduction

With the development of the Mobile Internet and smart terminals, many location-based social networks have emerged. Currently popular LBSN platforms include Yelp, Gowalla, Foursquare, Facebook, etc. In recent years, POI recommendation based on LBSN has become a hot research topic [1]. Different from traditional recommendation tasks, POI recommendations are based on contextual information and location-aware personalized recommendations [2]. POI recommendation aims to mine the potential relationship between users and POI


based on the user’s behavior data [3]. It not only helps people discover locations of interest, but also helps businesses discover potential customers [4]. So the research on POI recommendation is of great significance. There are many challenges in the POI recommendation. First, the user’s check-in data sparseness makes it impossible for traditional Recommendation algorithm to perform [5]. Cold-start problem [6], cold-start is a problem that has always existed in the field of recommended systems, and it also plagues POI recommendations. POI recommendations are not only related to user preferences, but also to geographic factors, social factors [7], etc. Users have different POI preference at different times and locations. In this paper, we propose a new POI recommendation method based on heterogeneous network representation learning, called HRPR. It can effectively solve the problems of data sparsity and cold-start. Specifically, we capture user preferences, geographic impact, and social impact separately by using different meta-paths in heterogeneous networks. Then we sample different node sets of sequences by different meta-paths based weighted random walks. We use the skip-gram model to learn the embedding of the user and POI nodes in the sequence in each set. Due to the heterogeneity of the data in the heterogeneous network, we optimize the negative sampling in the skip-gram method. We fuse the user and POI embedding vectors in multiple sequence sets into separate user vectors and POI vectors through the vector fusion method. Finally, the user and POI vector are spliced together as a feature vector and input to the deep neural network for the final recommendation task. The experiments on Yelp datasets demonstrate the effectiveness of the method.

2 Representation Learning Model Based on Heterogeneous Network

In this section, we introduce the method based on heterogeneous network representation learning. First, we use the meta-path based weighted random walk strategy to sample node sequences, and then use the skip-gram model in the word2vec method to perform node representation learning, with a heterogeneous negative sampling method in the skip-gram optimization. Next, we explain the method in detail. Different meta-paths contain different semantic relationships. For example, the meta-path UU contains the influence of social relationships, UPU contains the user's preference for POI, and PRP contains the geographical influence of POI. The key to generating a meaningful sequence of nodes is to design an effective walk strategy that captures the complex semantics in the HIN. In the HIN literature, the meta-path is an important concept for characterizing the semantic patterns of a HIN [8]. Therefore, we use a meta-path based weighted random walk method to generate node sequences. Given a heterogeneous network $G = (V, E)$ and a meta-path $A_1 \xrightarrow{R_1} A_2 \xrightarrow{R_2} A_3 \cdots \xrightarrow{R_t} A_{t+1}$, where $A_t$ denotes the $t$-th node type and $R_t$ denotes the relationship between the $t$-th node and the $(t+1)$-th node, the path is generated by the following probability distribution:

$$p(n_{t+1} = x \mid n_t = v) = \begin{cases} \dfrac{\exp(w_{vx})}{\sum_{i \in N^{A_{t+1}}(v)} \exp(w_{vi})}, & (v, x) \in E,\ \phi(x) = A_{t+1}; \\ 0, & \text{otherwise}; \end{cases} \quad (1)$$

where $n_t$ is the $t$-th node in the walk, the node type of $v$ is $A_t$, $N^{A_{t+1}}(v)$ represents the neighbor nodes of $v$ under the meta-path constraint, $w_{vx}$ is the edge weight that determines the transition probability, and $\phi(x)$ is the node-type function. The weighted random walk strategy based on the meta-path ensures that the semantic relationships between different types of nodes can be correctly merged into the skip-gram model.

Example 1. Given two meta-paths UPU and UU, starting with the user node Jack we can generate two paths: (1) Jack_user → Cafe_poi → Tom_user and (2) Jack_user → Jeason_user → Mary_user. It can be intuitively seen that these meta-paths generate node sequences with different semantic relationships.

The negative sampling method in the traditional skip-gram model samples negative nodes from all nodes. This is applicable in homogeneous networks, but in heterogeneous networks it may draw negative sample nodes whose types differ from the positive sample node type, which causes problems. We therefore use a heterogeneous negative sampling method, which samples only nodes of the same type as the positive sample and then performs the final optimization. In other words, the conditional probability $p(c_t \mid v; \theta)$ is normalized over a specific node type:

$$p(c_t \mid v; \theta) = \frac{\exp(X_{c_t} \cdot X_v)}{\sum_{u_t \in V_t} \exp(X_{u_t} \cdot X_v)} \quad (2)$$

where $V_t$ is the set of nodes of type $t$, $c_t$ is a first-order neighbor node of type $t$, and a separate multinomial distribution is assigned to each type of neighborhood in the output layer of the skip-gram model. In the traditional skip-gram model, the dimension of the output multinomial distribution equals the number of nodes in the network, whereas with heterogeneous negative sampling the dimension of the multinomial distribution for a node of type $t$ is determined by the number of nodes of that type. We sample different sets of sequences based on the different meta-paths and then perform embedding learning for each set. We construct the neighborhood $N(u)$ of node $u$ based on co-occurrence within a fixed window size, and learn the representation vectors of the nodes by optimizing the following objective:

$$\arg\max_{\theta} \sum_{v \in V} \sum_{c \in N(v)} p(c \mid v; \theta) \quad (3)$$

where $N(v)$ is the neighborhood of node $v$ in network $G$, i.e., the set of first-order neighbor nodes of $v$, and $p(c \mid v; \theta)$ is the conditional probability of the context node $c$ given node $v$. We use stochastic gradient descent to optimize this objective and learn the skip-gram embedding model. The main difference between the traditional method and ours is the construction of $N(u)$: our method uses a meta-path based weighted random walk to select neighbor nodes.
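A minimal sketch of the sampling step of Eq. (1) is given below. It generates one meta-path-guided weighted walk over a heterogeneous graph; the data structures and names are illustrative assumptions, not the paper's code.

```python
import math
import random

def metapath_weighted_walk(graph, weights, node_types, start, metapath, walk_len=40):
    """One meta-path-guided weighted random walk (sketch of Eq. (1)).

    graph:      dict node -> list of neighbor nodes
    weights:    dict (u, v) -> edge weight w_uv
    node_types: dict node -> type label, e.g. 'U', 'P', 'R', 'T'
    metapath:   recurring type pattern, e.g. ['U', 'P'] for the meta-path UPU
                (the pattern repeats as U-P-U-P-...).
    """
    walk, step = [start], 1
    while len(walk) < walk_len:
        cur = walk[-1]
        wanted = metapath[step % len(metapath)]
        # Keep only neighbors whose type matches the next type on the meta-path.
        cands = [v for v in graph.get(cur, []) if node_types[v] == wanted]
        if not cands:
            break
        # Softmax over edge weights gives the transition probabilities of Eq. (1).
        expw = [math.exp(weights.get((cur, v), 0.0)) for v in cands]
        total = sum(expw)
        r, acc = random.random() * total, 0.0
        for v, w in zip(cands, expw):
            acc += w
            if acc >= r:
                walk.append(v)
                break
        step += 1
    return walk
```

The resulting walks can then be fed to any standard skip-gram implementation (for example, gensim's `Word2Vec` with `sg=1`, `vector_size=128`, `window=3`); note that such off-the-shelf implementations use homogeneous negative sampling, whereas the heterogeneous negative sampling of Eq. (2) additionally restricts negatives to nodes of the same type as the positive sample.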

3 Recommendation Framework Based on Deep Neural Network

Heterogeneous network representation learning provides a unified way to extract useful information from heterogeneous networks. We now study how to extract and represent useful information from the LBSN for recommendation based on the heterogeneous network. Our framework consists of two parts: vector fusion and a deep neural network. Through heterogeneous network representation learning, we obtain user representation vectors and POI representation vectors; next, we study how to use these embedded vectors for POI recommendation. The user representation vectors and the POI representation vectors obtained from the different sequence sets are fused by a vector summation average, which combines the vectors organically and integrates the influence of the different factors:

$$S_u = \frac{\sum_{i=1}^{l_u} e_i^u}{l_u} \quad (4)$$

where $l_u$ is the number of vectors of user $u$, $e_i^u$ represents the $i$-th vector of user $u$, and $S_u$ represents the merged user vector. The merged POI vector is obtained in the same way through the vector summation average; the fusion of POI vectors is consistent with that of user vectors. We then combine the fused user vector and POI vector to form a feature vector:

$$S_{u,p} = [S_u, S_p] \quad (5)$$

We build our score predictor using a deep neural network (DNN). The feature vector is input into the input layer of the deep neural network, which performs the following calculation:

$$a^{(2)} = f(W^{(1)} S_{u,p} + b^{(1)}) \quad (6)$$

where $S_{u,p}$ is the feature vector of user $u$ and location $p$. Each hidden layer performs the following calculation:

$$a^{(l+1)} = f(W^{(l)} a^{(l)} + b^{(l)}) \quad (7)$$


where $l$ is the layer index and $f(\cdot)$ is the activation function, for which the ReLU function is usually used; $a^{(l)}$, $b^{(l)}$, and $W^{(l)}$ are the activations, the bias, and the weights of the $l$-th layer, respectively. The output layer is a linear function that produces the final score prediction:

$$\hat{r}_{u,p} = W^{(n)} a^{(n)} + b^{(n)} \quad (8)$$

We input the feature vectors fused by the vector summation average into the final deep neural network to learn the parameters of the proposed model in a unified way. The objective function can be expressed as follows:

$$L = \sum_{(u,p,r_{u,p}) \in R} (r_{u,p} - \hat{r}_{u,p})^2 + \lambda \sum_{j=1}^{m} \left\| W^{(j)} \right\|^2 \quad (9)$$

where $\hat{r}_{u,p}$ is the score predicted using Eq. (8), $\lambda$ is a regularization parameter, and $\lambda \sum_{j=1}^{m} \| W^{(j)} \|^2$ is a regularization term used to prevent over-fitting of the model. We use stochastic gradient descent (SGD) to optimize this final objective efficiently.
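The following PyTorch sketch ties Eqs. (4)–(9) together: it averages a user's (and a POI's) embeddings over the meta-path sequence sets, concatenates them, and feeds the result to a ReLU MLP with a linear output trained by squared error plus L2 regularization. The class name, layer sizes, and weight-decay value are assumptions for illustration, not the paper's settings.

```python
import torch
import torch.nn as nn

class HRPRScorer(nn.Module):
    """Sketch of the fusion + DNN score predictor described above."""
    def __init__(self, emb_dim=128, hidden=(256, 64)):
        super().__init__()
        layers, in_dim = [], 2 * emb_dim           # [S_u, S_p] concatenation, Eq. (5)
        for h in hidden:
            layers += [nn.Linear(in_dim, h), nn.ReLU()]   # hidden layers, Eqs. (6)-(7)
            in_dim = h
        layers.append(nn.Linear(in_dim, 1))        # linear output layer, Eq. (8)
        self.mlp = nn.Sequential(*layers)

    def forward(self, user_embs, poi_embs):
        # user_embs / poi_embs: (batch, n_metapaths, emb_dim) from representation learning.
        s_u = user_embs.mean(dim=1)                # vector-summation average, Eq. (4)
        s_p = poi_embs.mean(dim=1)
        return self.mlp(torch.cat([s_u, s_p], dim=-1)).squeeze(-1)

model = HRPRScorer()
# Squared error plus L2 regularization (Eq. (9)); weight_decay plays the role of lambda.
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, weight_decay=1e-4)
loss_fn = nn.MSELoss()
```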

4 Experiments

In this section, we demonstrate the effectiveness of the algorithm by experimenting on the real-world Yelp dataset and comparing it with other recommendation methods. The Yelp dataset records user ratings for local businesses, including corporate social relationships and attribute information; it contains 16,239 users, 14,282 POI locations, and 198,397 ratings in the United States. Among the compared methods, those based on heterogeneous network representation learning require specifying the meta-paths to be used; we use the meta-path set {UPU, UU, PUUP, UPTPU, UPTPU, UPRPU, PRP}. The parameter settings for our model are: embedding dimension d = 128, random walk length 40, and window size 3. The parameters of the recommendation model are set as follows: the learning rate is 0.005, the mean square error loss function is used, the optimizer is stochastic gradient descent (SGD), the number of iterations is 50, and early stopping halts the training when the loss does not decrease for 10 consecutive rounds. We use the widely used mean absolute error (MAE) and root mean square error (RMSE) to measure the recommendation performance of the different models. We consider the following methods for comparison:

• PMF [9]: A classical probabilistic matrix factorization model that explicitly decomposes the rating matrix into two low-dimensional matrices.
• FM-HIN [10]: A context-aware factorization machine that can take advantage of various auxiliary information. In our experiments, we extract heterogeneous information as contextual features and incorporate them into the factorization machine for rating prediction.


• DSR [11]: An MF-based recommendation method with dual similarity regularization, which imposes constraints on users and items with high and low similarities.
• HERec [12]: A recommendation model based on the metapath2vec method in heterogeneous information networks. In the effectiveness experiments, it uses a personalized nonlinear fusion function to fuse vectors.

We divided the Yelp dataset into a training set and a test set. We set five relatively large training ratios, namely 0.9, 0.8, 0.7, 0.6, and 0.5; for each ratio we randomly generate a training set and use the remaining data as the test set, and we average the results for the final performance. The results are shown in Fig. 1. The main findings of the experiments are summarized as follows:

Fig. 1. Performance comparison of different methods on the Yelp dataset: (a) MAE, (b) RMSE.

We experimented with the parameters defined above, using MAE and RMSE as the evaluation indicators.

(1) Among the baselines, the HIN-based method HERec performs better than the traditional MF-based method PMF as well as FM-HIN and DSR, indicating the usefulness of adding heterogeneous information. It is worth noting that HERec performs best apart from our model. The reason is that there is a wealth of auxiliary information in the HIN, which makes the features expressed by the embedded vectors richer; feeding these features into the recommendation model improves the fit to the data and thus the recommendation quality.

(2) HRPR is our approach, and it is consistently better than all comparison models. Compared with other HIN-based methods, HRPR adopts a more unified approach to integrate all influencing factors: it uses a deep neural network and the vector summation average to improve the recommendation system based on HIN representation learning, thereby providing better information extraction through the HIN embedding model. Furthermore, the superiority of the proposed HRPR becomes more significant as the training data decrease.

We also examine how our model behaves under different degrees of cold start. We first divided the "cold-start" users into three groups based on the number of rating records; the first group is the most sparse because its users have the fewest ratings. Here, we only compare against the baselines that use HIN-based information, namely DSR and FM-HIN. The performance comparison of the different methods is shown in Fig. 2; for convenience, we report the improvement rate over PMF. In general, all comparison methods are superior to PMF, and the method presented in this paper performs best among all methods. For users with few ratings, it is important to reduce the sparseness of the data by adding other supporting factors. The results show that HIN-based information effectively improves the recommendation performance, and the proposed HRPR method can utilize the LBSN and other auxiliary information in a more efficient manner to alleviate the problems caused by cold start.

Fig. 2. Performance comparison of different methods for cold-start prediction on the Yelp dataset: (a) MAE, (b) RMSE.

5 Conclusion

In this paper, we proposed performing POI recommendation based on heterogeneous network representation learning, capturing user preferences, geographic influences, and social impacts through different meta-paths. Users and POIs are embedded into a shared low-dimensional space to cope with the data sparsity and cold-start problems in LBSN. In addition, to make the embedded vectors express rich semantics, we use the vector summation average to organically combine the user vector and the POI vector. The embedded vector of a location the user has not visited and the embedded vector of the target user are then spliced and fed into the model, and POIs are recommended to the user according to the ranking of the final scores.

Acknowledgements. This work was supported by Natural Science Foundation of China (No. 61701284, 61702306, 6160227), MOE (Ministry of Education in China) Project of Humanities and Social Sciences (No. 17YJCZH187), and Qingdao Philosophy and Social Sciences Planning Project (QDSKL1801131).

References

1. Bao J, Zheng Y, Mokbel MF (2012) Location-based and preference-aware recommendation using sparse geo-social networking data. In: Proceedings of the 20th international conference on advances in geographic information systems. ACM, pp 199–208
2. Feng W, Wang J (2012) Incorporating heterogeneous information for personalized tag recommendation in social tagging systems. In: Proceedings of the 18th ACM SIGKDD international conference. ACM, pp 1276–1284
3. Gao H, Tang J, Hu X, Liu H (2015) Content-aware point of interest recommendation on location-based social networks. In: Twenty-ninth AAAI conference on artificial intelligence
4. Lian D, Zhao C, Xie X, Sun G (2014) GeoMF: joint geographical modeling and matrix factorization for point-of-interest recommendation. In: Proceedings of the 20th ACM SIGKDD international conference. ACM, pp 831–840
5. Rendle S (2012) Factorization machines with libfm. ACM Trans Intell Syst Technol (TIST) 3:57
6. Shi C, Hu B, Zhao WX, Philip SY (2018) Heterogeneous information network embedding for recommendation. IEEE Trans Knowl Data Eng 31:357–370
7. Shi C, Philip SY (2017) Heterogeneous information network analysis and applications. Springer
8. Sun Y, Han J, Yan X, Yu PS, Wu T (2011) Pathsim: meta path-based top-k similarity search in heterogeneous information networks. Proc VLDB Endowment 4:992–1003
9. Wang H, Shen H, Ouyang W, Cheng X (2018) Exploiting POI-specific geographical influence for point-of-interest recommendation. In: IJCAI, pp 3877–3883
10. Wang H, Terrovitis M, Mamoulis N (2013) Location recommendation in location-based social networks using user check-in data. In: Proceedings of the 21st ACM SIGSPATIAL international conference. ACM, pp 374–383
11. Yin H, Cui B (2016) Spatio-temporal recommendation in social media. Springer
12. Zheng J, Liu J, Shi C, Zhuang F (2017) Recommendation in heterogeneous information network via dual similarity regularization. Int J Data Sci Anal 3:35–48

A Survey on Named Entity Recognition

Yan Wen1(&), Cong Fan1, Geng Chen2, Xin Chen2, and Ming Chen3

1 College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
[email protected]
2 College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao 266590, China
3 State Grid Shandong Electric Power Company, Qingdao Power Supply Company, Qingdao 266500, China

Abstract. Natural language processing is an important research direction and research hotspot in the field of artificial intelligence. Named entity recognition is one of its key tasks, which is to identify entities with specific meanings in text, such as names of people, places, institutions, proper nouns, etc. Traditional named entity recognition methods are mainly based on rules, dictionaries, and statistical learning. In recent years, with the rapid expansion of Internet text data and the rapid development of deep learning technology, a large number of deep-neural-network-based methods have emerged, which have greatly improved recognition accuracy. This paper summarizes the traditional methods and the latest research progress in the field of named entity recognition, and analyses its main models, algorithms, and applications. Finally, the future development trend of named entity recognition is discussed.

Keywords: Dictionary mechanism · CRF · LSTM · Transfer learning · Attention

1 Introduction

Natural Language Processing (NLP) is an important research direction in the field of computer science and artificial intelligence. It mainly studies theories and methods that enable effective communication between humans and computers in natural language. NLP contains multiple subtasks such as word segmentation, named entity recognition, information extraction, text summarization, machine translation, sentiment analysis, speech recognition, and more. Named Entity Recognition (NER) is one of the key tasks. Named entities were originally proposed at the 6th MUC (Message Understanding Conference) in 1995 [1], mainly referring to words or phrases with specific names in text. They generally consist of three major categories (entity classes, time classes, and number classes) and seven subclasses (person name, place name, institution name, time, date, currency, and percentage) [1, 2]. NER is designed to identify these proper nouns in text and classify them into appropriate categories. NER is the basis for many advanced tasks in natural language processing, such as syntax analysis, text summarization, information retrieval, automatic question answering, and so on.

2 Rule-Based and Dictionary-Based Methods

Rule-based and dictionary-based methods, the earliest methods used in NER, generally analyze the characteristics of an entity and then manually construct rules for matching. The rule templates rely on the establishment of knowledge bases and dictionaries [3]. In 1991, Rau [4] published a paper on "Extracting and Identifying Company Names" at the 7th IEEE Conference on Artificial Intelligence Applications, which mainly used heuristic algorithms and manually written rules. In 1997, Zhang and Wang [5] used rule-based methods to identify Chinese university names, with accuracy and recall rates of 97.3% and 96.9%, respectively. In 2000, Farmakiotou et al. [6] proposed a rule-based identification method for named entities in Greek financial texts. In 2002, Wang et al. [7] of the Hong Kong Polytechnic University used a rule-based approach for efficient name recognition. For text with many regular features, rule-based methods are simple and effective. However, these rules often depend on a specific language, domain, etc., so they have poor portability and high maintenance costs [3].

3 Statistical Learning Based Methods

Statistical methods build statistical learning models, such as the Maximum Entropy Model (MEM), the Hidden Markov Model (HMM), and the Conditional Random Field (CRF), on a manually labelled corpus and a number of selected features, and the models are then used to extract new named entities.

3.1 HMM

The HMM is a classic and very widely used statistical model. In an HMM, $\{x_1, x_2, \ldots, x_{n+1}\}$ is the sequence of hidden states of the Markov transition process, and $\{y_1, y_2, \ldots, y_{n+1}\}$ is the output, i.e., the observation sequence. The HMM is expressed as

$$\lambda = (A, B, \pi) \quad (1)$$

where A is the state transition probability matrix, B is the observation probability matrix, and π is the initial state probability distribution [8]. In 1999, Bikel et al. [9, 10] proposed an English named entity recognition method based on the HMM-based IdentiFinder™ system, selecting number symbols and special character sets as features. On the MUC-6 test set, the recognition accuracies for English place names, institution names, and person names reached 97%, 94%, and 95%, respectively, and the recall rates reached 95%, 94%, and 94%, respectively. In 2009, the HMM constructed by Liu [11] could take contextual feature information into account in the process of word segmentation and labeling, using the context to obtain the best state sequence. Thanks to the efficiency of the Viterbi algorithm, HMMs are relatively simple and fast to train. However, due to the output independence assumption, the model has relatively poor performance.
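For reference, the Viterbi decoding step mentioned above can be written in a few lines. The following is a generic textbook implementation for an HMM λ = (A, B, π), shown only to illustrate how HMM-based NER taggers recover the best label sequence; it is not taken from any of the surveyed systems (log-space would be used in practice for numerical stability).

```python
import numpy as np

def viterbi(obs, A, B, pi):
    """Most likely hidden-state path for an HMM lambda = (A, B, pi).

    obs: sequence of observation indices; A: (N, N) state-transition matrix;
    B: (N, M) observation matrix; pi: (N,) initial state distribution.
    """
    N, T = A.shape[0], len(obs)
    delta = np.zeros((T, N))            # best path score ending in each state
    psi = np.zeros((T, N), dtype=int)   # back-pointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A          # (prev_state, state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    # Backtrack the optimal state sequence.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```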

3.2 CRF

The CRF is also a classic supervised learning model for sequence labelling; the linear-chain conditional random field was the mainstream NER model before the prevalence of deep-neural-network-based models. Let $X = (X_1, X_2, \ldots, X_n)$ and $Y = (Y_1, Y_2, \ldots, Y_n)$ be sequences of random variables represented by linear chains. Given the random variable sequence $X$, the conditional probability distribution $P(Y \mid X)$ of the random variable $Y$ constitutes the CRF, and the Markov property is satisfied [8]. The model is expressed as follows:

$$P(Y \mid X) = \frac{1}{Z(x)} \exp\!\left( \sum_{i,k} \lambda_k t_k(y_{i-1}, y_i, x, i) + \sum_{i,l} \mu_l s_l(y_i, x, i) \right) \quad (2)$$

$$Z(x) = \sum_{Y} \exp\!\left( \sum_{i,k} \lambda_k t_k(y_{i-1}, y_i, x, i) + \sum_{i,l} \mu_l s_l(y_i, x, i) \right) \quad (3)$$

where $t_k$ and $s_l$ are feature functions, $\lambda_k$ and $\mu_l$ are the corresponding weights, and $Z(x)$ is a normalization factor. In 2006, Krishnan and Manning [12] used the CRF model to extract global constraint information for NER. The paper trained two CRF models: the first used local features for prediction, while the second extracted local information and features from the output of the first; the F1 value of the second CRF model reached 87.24%. The performance of the CRF is better than that of the HMM: it does not normalize every node individually but normalizes all features globally, so the global optimum can be obtained. However, training a CRF model is more complicated, with slow convergence and long training time. Other statistical methods not discussed above include the following. In 1998, Borthwick et al. [13] proposed an MEM-based NER method for English and Japanese, and Sekine et al. [14] proposed a Japanese named entity recognition method based on decision trees. In 2002, Takeuchi and Collier [15] used a support-vector-machine-based approach to identify entities in the field of molecular biology. In 2003, De Meulder and Daelemans [16] proposed a named entity recognition method based on memory-based learning. Although statistical methods are effective, they need well-selected features and require more complex training. Moreover, these methods rely heavily on corpora, but the labelled corpora that can be used to construct and evaluate named entity recognition systems are relatively small [3].
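To illustrate the "well-selected features" that a linear-chain CRF tagger of the kind in Eqs. (2)–(3) relies on, the sketch below uses the freely available sklearn-crfsuite package (an assumption of this survey's editor, not a tool used by the cited papers) with a deliberately tiny, hand-crafted feature set and a one-sentence toy corpus in BIO format.

```python
import sklearn_crfsuite

def token_features(sent, i):
    """Very small hand-crafted feature set for token i of a tokenized sentence."""
    word = sent[i]
    feats = {
        'word.lower': word.lower(),
        'word.istitle': word.istitle(),
        'word.isdigit': word.isdigit(),
        'suffix3': word[-3:],
        'BOS': i == 0,
        'EOS': i == len(sent) - 1,
    }
    if i > 0:
        feats['prev.lower'] = sent[i - 1].lower()
    return feats

# X: list of sentences, each a list of per-token feature dicts; y: BIO label sequences.
train_sents = [['John', 'lives', 'in', 'Qingdao', '.']]
y_train = [['B-PER', 'O', 'O', 'B-LOC', 'O']]
X_train = [[token_features(s, i) for i in range(len(s))] for s in train_sents]

crf = sklearn_crfsuite.CRF(algorithm='lbfgs', c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```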


4 Hybrid Methods

Hybrid methods [3] generally perform filtering and pruning in advance through a set of rules on top of traditional models. In 2006, Zhou et al. [17] proposed Chinese named entity recognition based on a cascaded conditional random field model (CCRF). The authors first used the N-shortest-path segmentation strategy to segment the text, then used a low-level CRF model to identify ordinary and non-nested names and place names in the segmented result set, and finally used a high-level CRF model to identify institution names. The recall rate of the model was 90.05% and the accuracy rate was 88.12%, performing better than other Chinese institution-name recognition algorithms, although over-fitting can arise. Also in 2006, Yu et al. [18] proposed Chinese named entity recognition based on a cascaded hidden Markov model (CHMM), internally cascading and fusing statistical learning methods. In 2009, Liao and Veeramachaneni [19] proposed a semi-supervised learning algorithm based on a conditional random field model for NER, which utilized unlabelled data to mitigate the impact of insufficient labelled data. Since natural language is not an entirely random process, using statistical methods alone makes the state search space extremely large, and it is necessary to perform filtering and pruning in advance with the help of rules [3]. At present, almost all NER systems combine statistical models with rule-based knowledge.

5 Deep Learning Based Approaches

In recent years, with the rapid expansion of Internet text data and the rapid development of deep learning technology, a large number of new NER methods based on deep neural networks have emerged, which greatly improve recognition accuracy. The models include the Recurrent Neural Network (RNN), the Convolutional Neural Network (CNN), the Long Short-Term Memory network (LSTM), the LSTM-CRF model, etc. In 2011, Collobert et al. [20] used neural networks for NER and added a CRF model to the objective function (commonly described as stacking a CRF layer on top). Drawing on this idea, a series of works combining RNN structures with a CRF layer for NER appeared around 2015. Huang et al. [21] proposed a series of sequence labelling models based on the LSTM, including the LSTM model, the bidirectional LSTM model (BI-LSTM), the LSTM model with a CRF layer (LSTM-CRF), and the BI-LSTM model with a CRF layer (BI-LSTM-CRF). The BI-LSTM-CRF model first feeds the words into the BI-LSTM, then feeds all the scores predicted by the BI-LSTM block to the CRF layer, and finally selects the tag sequence with the highest predicted score as the best labels. Its accuracies were as high as 94.27% and 97.46%, respectively. The experimental results showed that the BI-LSTM-CRF model reached or exceeded the CRF model and inherited the advantages of the deep learning method without the need for a large number of manually defined features.


Compared with traditional models, deep learning based methods can exploit context information and avoid the laborious feature selection required by models such as the CRF. Further stacking a CRF layer on top of the neural network can achieve even better results. However, methods based on deep neural networks generally require larger amounts of training data.

6 Latest Methods

Recently, in research on neural-network-based named entity recognition, two new prevalent trends are worth noting: one is the use of the attention mechanism; the other is models based on a small amount of labeled training data, including transfer learning and semi-supervised learning.

6.1 Attention Mechanism

The attention mechanism was first proposed in the field of visual images, and its essence comes from the human visual attention mechanism. In 2014, Bahdanau et al. [22] used an attention-based mechanism to simultaneously translate and align in machine translation tasks; this work is generally recognized as the first to apply the attention mechanism to the NLP field. Attention in deep learning can be broadly interpreted as a vector of importance weights. In 2016, Rei et al. [23] focused on the combination of word vectors and character vectors based on the RNN-CRF model structure. The algorithm first decomposes a word into single characters to obtain the corresponding character embeddings $(c_1, \ldots, c_R)$, which are fed into a BI-LSTM model; the resulting hidden vector is used to obtain a character-level word vector $m$, and finally a weighted sum of the vector $m$ and the word embedding $x$ is computed, where the weight is predicted by a hidden layer of a traditional neural network. The experimental results showed that this model is better than the original concatenation method, with F1 values on the CoNLL-2003 dataset of 84.09% and 83.37%, respectively. However, introducing new parameters to compensate for the fitting ability of a certain aspect may not always outperform the original method, and it may cause over-fitting and increase computational complexity.
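A minimal sketch of this kind of attention-style gating, in the spirit of Rei et al. [23], is shown below; the exact parameterization of the gate (a single sigmoid layer over the concatenation) is an assumption for illustration, not the paper's formulation.

```python
import torch
import torch.nn as nn

class GatedCharWordCombiner(nn.Module):
    """Gated combination of a word embedding x and a character-level vector m."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, x, m):
        # z in (0, 1) decides, per dimension, how much to trust the word
        # embedding versus the character-level representation.
        z = self.gate(torch.cat([x, m], dim=-1))
        return z * x + (1.0 - z) * m
```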

6.2 Transfer Learning

Transfer learning in NLP was initially popularized by pre-trained word embedding models. An important challenge for sequence tagging is how to transfer knowledge from one task to another, which is often referred to as transfer learning [24]. Transfer learning can be used in several settings, notably for low-resource languages [25, 26] and low-resource domains such as biomedical corpora [27] and Twitter corpora [28]. In order to improve the performance on a target task by joint training with a source task, in 2017 Yang et al. [29] extended the basic sequence labelling model (the NN-CRF model) and proposed three transfer learning architectures: the T-A, T-B, and T-C models. If the two domains have mutually mapped label sets, the T-A model is used, which shares all model parameters and feature representations in the neural network and then performs a label mapping step on top of the CRF layer to complete the cross-domain transfer. If the two domains have different label sets, the T-B model is used, which removes the parameter sharing in the CRF layer of the T-A model; the T-B model can also be adopted for cross-domain transfer learning. For cross-language transfer, the T-C model is proposed, which mainly performs transfer learning by sharing word vectors and word-level layers between different languages. The experimental results showed that the proposed transfer learning models achieved significant improvements on various datasets under low-resource conditions and better results on some benchmark tests.

6.3 Semi-supervised Learning

Labelling collected data costs a great deal of time and labour, while unlabelled data can provide information about the data distribution, so semi-supervised algorithms exploit this information. In the NLP field, the basic idea of semi-supervised learning is to use model assumptions about the data distribution to build a learner that labels unlabelled samples. In 2017, Peters et al. [30] proposed a semi-supervised method for sequence tagging tasks. The paper used a massive unlabelled corpus to train a bidirectional neural network language model (LM embedding), used this model to obtain the language-model vector of the current word to be annotated, and finally added this vector as a feature to the original bidirectional RNN-CRF model. The experimental results showed that adding this language-model vector can greatly improve sequence labelling performance when only a small amount of annotated data is available.

7 Summary and Outlook

Named entity recognition has broad application prospects in many fields, such as biomedicine, medical text, and finance. In the biomedical field, biomedical named entity recognition mainly identifies named entities such as genes, proteins, disease names, drug names, and organization names in the biomedical literature [31]. In the field of medical texts, medical entity recognition can fully exploit the value of the information and is an important foundation for medical knowledge mining, medical intelligent robots, medical clinical decision support systems, and other applications [32]. With increasing international exchange and the rapid development of the Internet, communication between different languages is becoming more and more important, so the application of named entity recognition in machine translation also has broad development space. This paper has introduced the existing methods for identifying named entities, including rule- and dictionary-based methods, HMM-based methods, CRF-based methods, deep learning based methods, and so on. At present, research on NER is relatively mature, but NER still has problems such as a tendency to over-fit, good performance only in individual fields, and a lack of generality. At the same time, due to the large number of Chinese named entities, their complex structure, and the ambiguity problem, the Chinese named entity recognition task is more complicated than English named entity recognition. In recent years, semi-supervised and unsupervised learning algorithms have been introduced to try to solve the problem of insufficient corpora. NER will integrate developments from a more open range of fields and lay a solid foundation for the further development of natural language processing.

Acknowledgements. This work was supported in part by National Natural Science Foundation of China (No. 61701284, No. 61702306, No. 61602278), Ministry of Education Humanities and Social Sciences Research Youth Fund Project (17YJCZH187) and Qingdao Philosophy, Social Science Planning Project (QDSKL1801131).

References

1. Chinchor N (1995) MUC-6 named entity task definition (version 2.1). In: Proceedings of the 6th conference on message understanding, Columbia, Maryland
2. Chinchor N, Robinson P (1997) MUC-7 named entity task definition. In: Proceedings of the 7th conference on message understanding, Columbia, Maryland
3. Sun Z, Wang H (2010) Overview on the advance of the research on name entity recognition. Data Anal Knowl Disc 26(6):42–47 (in Chinese)
4. Rau LF (1991) Extracting company names from text. In: Proceedings of the seventh IEEE conference on artificial intelligence applications. IEEE
5. Zhang X, Wang L (1997) Identification and analysis of Chinese organization and institution names. J Chin Inf Process 11(4):22–33 (in Chinese)
6. Farmakiotou D, Karkaletsis V, Koutsias J et al (2000) Rule-based named entity recognition for Greek financial texts. In: Proceedings of the workshop on computational lexicography and multimedia dictionaries (COMLEX 2000), pp 75–78
7. Wang N, Ge R, Yuan C et al (2002) Company name identification in Chinese financial domain. Chin J Inf Sci 16(2):1–6 (in Chinese)
8. Li H (2012) Statistical learning method. Tsinghua University Press (in Chinese)
9. Bikel DM, Miller S, Schwartz R et al (1998) Nymble: a high-performance learning name-finder. arXiv preprint cmp-lg/9803003
10. Bikel DM, Schwartz R, Weischedel RM (1999) An algorithm that learns what's in a name. Mach Learn 34(1–3):211–231
11. Liu J (2009) Chinese named entity recognition algorithm based on improved hidden Markov model. J Taiyuan Normal Univ Nat Sci Ed 1:80–83 (in Chinese)
12. Krishnan V, Manning CD (2006) An effective two-stage model for exploiting non-local dependencies in named entity recognition. In: Proceedings of the 21st international conference on computational linguistics and the 44th annual meeting of the ACL (ACL'06), Sydney, Australia. Association for Computational Linguistics, pp 1121–1128
13. Borthwick A, Sterling J, Agichtein E et al (1998) NYU: description of the MENE named entity system as used in MUC-7. In: Seventh message understanding conference (MUC-7): proceedings of a conference held in Fairfax, Virginia, April 29–May 1998
14. Sekine S, Grishman R, Shinnou H (1998) A decision tree method for finding and classifying names in Japanese texts. In: Sixth workshop on very large corpora
15. Takeuchi K, Collier N (2002) Use of support vector machines in extended named entity recognition. In: Proceedings of the 6th conference on natural language learning, vol 20. Association for Computational Linguistics, pp 1–7
16. De Meulder F, Daelemans W (2003) Memory-based named entity recognition using unannotated data. In: Proceedings of the seventh conference on natural language learning at HLT-NAACL 2003, vol 4. Association for Computational Linguistics, pp 208–211
17. Zhou J, Dai X, Yin C et al (2006) Automatic recognition of Chinese organization name based on cascaded conditional random fields. Chin J Electron 34(5):804–809 (in Chinese)
18. Yu H, Zhang H, Liu Q et al (2006) Automatic recognition of Chinese organization name based on cascaded conditional random fields. Trans Commun 2 (in Chinese)
19. Liao W, Veeramachaneni S (2009) A simple semi-supervised algorithm for named entity recognition. In: Proceedings of the NAACL HLT 2009 workshop on semi-supervised learning for natural language processing. Association for Computational Linguistics, pp 58–65
20. Collobert R, Weston J, Bottou L et al (2011) Natural language processing (almost) from scratch. J Mach Learn Res 12(Aug):2493–2537
21. Huang Z, Xu W, Yu K (2015) Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991
22. Bahdanau D, Cho K, Bengio Y (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473
23. Rei M, Crichton GKO, Pyysalo S (2016) Attending to characters in neural sequence labeling models. arXiv preprint arXiv:1611.04361
24. Pan SJ, Yang Q (2009) A survey on transfer learning. IEEE Trans Knowl Data Eng 22(10):1345–1359
25. Zirikly A, Hagiwara M (2015) Cross-lingual transfer of named entity recognizers without parallel corpora. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing, vol 2: Short papers, pp 390–396
26. Wang M, Manning CD (2014) Cross-lingual projected expectation regularization for weakly supervised learning. Trans Assoc Comput Linguist 2:55–66
27. Kim JD, Ohta T, Tateisi Y et al (2003) GENIA corpus—a semantically annotated corpus for bio-textmining. Bioinformatics 19(suppl_1):i180–i182
28. Ritter A, Clark S, Etzioni O (2011) Named entity recognition in tweets: an experimental study. In: Proceedings of the conference on empirical methods in natural language processing. Association for Computational Linguistics, pp 1524–1534
29. Yang Z, Salakhutdinov R, Cohen WW (2017) Transfer learning for sequence tagging with hierarchical recurrent networks. arXiv preprint arXiv:1703.06345
30. Peters ME, Ammar W, Bhagavatula C et al (2017) Semi-supervised sequence tagging with bidirectional language models. arXiv preprint arXiv:1705.00108
31. Zheng Q, Liu Q, Wang Z et al (2010) Research and development on biomedical named entity recognition. J Comput Appl 27(3) (in Chinese)
32. Zhang F, Wang M (2017) Medical text entities recognition method base on deep learning. Comput Technol Autom 36(1):123 (in Chinese)

A Hybrid TWDM-RoF Transmission System Based on a Sub-Central Station

Anliang Liu1, Haichao Wei1, Zhenyu Na1, and Hongxi Yin2

1 School of Information Science and Technology, Dalian Maritime University, Dalian 116026, Liaoning, China
[email protected]
2 School of Information and Communication Engineering, Dalian University of Technology, Dalian 116026, Liaoning, China

Abstract. In this paper, a full-duplex time- and wavelength-division multiplexing radio-over-fiber (TWDM-RoF) system that can support hybrid transmission of wired and wireless data is proposed based on an additional sub-central station (SCS). For the downlink, TWDM technology is employed to transmit wired and wireless services from a central station (CS) to the SCS in baseband data formats. For the uplink, one upstream optical carrier can simultaneously support both wired and wireless signals to achieve upstream transmission. Better system compatibility, wavelength utilization and dispersion tolerance for the bidirectional transmission links can be achieved in the proposed system. Finally, a demonstration system with one 10-Gbps wired signal and two 2.5-Gbps wireless signals carried by a 28-GHz radio frequency (RF) signal is established. We validate the feasibility of this system based on the results of the bit error rate (BER) curves for both downlink and uplink.

Keywords: Radio-over-fiber · TWDM · Hybrid transmission · Full-duplex

1 Introduction

The demand for services from terminal users has increased rapidly, and the maximum peak rate in the fifth-generation mobile communication system (5G) will be at the Gbps level [1]. Higher radio-frequency (RF) carriers should be employed due to their broad bandwidth and abundant spectrum resources, especially in 5G high-band communications [2]. However, the propagation distance of a high RF signal is significantly limited by atmospheric attenuation, which also means a denser cellular structure (such as micro-cells and pico-cells) is necessary to ensure the quality-of-service (QoS) requirements of mobile users [3, 4]. Radio-over-fiber (RoF) technology, which has many advantages, such as low loss, high bandwidth, low cost, simple structure, centralized management and flexible mobility, has become a suitable candidate solution for the enhanced mobile broadband (eMBB) scenario [5–7]. Numerous optical fiber links are required to guarantee the connection between the central station (CS) and a large number of remote base stations (BSs). The passive optical network (PON), an optical access network technology, has been used widely and maturely [8]. A hybrid time- and wavelength-division multiplexing PON (TWDM-PON) was identified by the full-service access network (FSAN) community in 2012 as the formal standard for the next-generation passive optical network stage 2 (NG-PON2) [9]. The existing optical fiber link resources can be effectively utilized by combining the RoF system with the TWDM-PON [10, 11], which would not only significantly reduce the system costs but also realize a hybrid access network of wireless (RoF) and wired (NG-PON2) services and thereby satisfy the high bandwidth and mobility requirements of end users.

In this paper, a TWDM-RoF system is proposed based on a sub-central station (SCS), which can realize a hybrid transmission of both wired data and wireless data with efficient wavelength resource utilization, good system compatibility and dispersion tolerance. A 28-GHz full-duplex TWDM-RoF system with one 10-Gbps wired signal and two 2.5-Gbps wireless signals over a 25-km single-mode fiber (SMF) is demonstrated, and the reliable and practical performance of this system is verified based on results of the bit error rate (BER) curves for both downstream and upstream signals.

2 Architecture of the TWDM-RoF System

The proposed TWDM-RoF system is shown in Fig. 1, where the CS is compatible with the optical line terminal (OLT) and the optical distribution network (ODN) function is included in the SCS. BSs and optical network units (ONUs) are the receiving terminals for the wireless and wired data signals, respectively. For the downlink, the original wireless data is transmitted from the CS to the SCS in a baseband format, and centralized modulation of multiple wireless signals is performed at the SCS. Thus the wireless transmitters (TXs) can be effectively compatible with the existing transmitters in the OLT. Moreover, better dispersion tolerance can be achieved than in an RF-over-fiber system. TWDM technology is employed in this system, which means wired and wireless services can share different time slots on individual wavelengths via medium access control (MAC) layer functionalities in the OLT/CS. Several laser diodes (LDs) are utilized as the upstream optical carriers for each BS to be economic and colorless. All these different wavelengths are combined by a WDM multiplexer (MUX) and transferred to the SCS over a long-distance fiber link.

Fig. 1. Architecture of the proposed TWDM-RoF system (OLT/CS with wired/wireless TXs at fdown and an upstream LD at fup; SCS with splitter, MZM driven by fLO, EDFA and DEMUX; BSs with PD, PA, antenna and E/O modulator; ONUs reached via the ODN)


At the SCS, the received TWDM signals are broadcast to each ONU after an optical splitter. At each ONU, a de-multiplexer (DEMUX) is tuned to the corresponding wavelength and a wired signal receiver (RX) is employed to realize the downstream transmission, the same as in the traditional PON structure. One of the splitter output channels is fed into an MZM biased at Vπ and driven by a local oscillator (LO) signal at the frequency fLO to achieve centralized optical carrier suppression (OCS) modulation. Each wavelength of fdown and fup then generates an upper sideband and a lower sideband; the optical spectrum is shown at point A of Fig. 1. To ensure sufficient optical power of the signals received by the BSs, these modulated TWDM signals are amplified by an erbium-doped fiber amplifier (EDFA) and broadcast to each BS after an optical splitter. The modulated RF subcarriers suffer from a negligible dispersion effect due to the short-distance transmission from the SCS to each BS. At each BS, the downstream signal fdown is filtered by a programmable wavelength-selective switch (WSS) acting as a DEMUX; the output spectrum of the WSS is shown at point B of Fig. 1. Then, an RF carrier signal at frequency 2fLO can be detected by the photo-detector (PD) through its beating function. The detected RF signal is amplified by a power amplifier (PA) and broadcast to mobile terminals by an antenna to complete the downstream transmission of the wireless signals. For the uplink, the wireless signal received from mobile terminals by each RF antenna is amplified by an electric PA and sent into an external electrical-to-optical (E/O) modulator. The newly generated first-order upper and lower sidebands (with frequencies fup + fLO and fup − fLO, respectively), as shown at point C of Fig. 1, can both be used to realize OCS modulation for the uplink. All the upstream wired signals from different BSs are coupled and transmitted back to the CS. In this TWDM-RoF system, two new upstream optical carriers are obtained by OCS modulation at the SCS with a single light source at the CS, which supports a higher number of BSs or more upstream traffic. In addition, the optical carrier fup can be reused for wired upstream signals at the ONUs, indicating that the utilization efficiency of the centralized light sources is improved. All upstream signals from BSs and ONUs are separated by the DEMUX in the OLT/CS and then recovered by wired RXs and wireless RXs to complete the upstream transmission.
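As a quick numerical illustration of the OCS modulation step described above, the following Python sketch (an illustrative baseband model, not the authors' OptiSystem setup) drives an ideal push-pull MZM biased at Vπ with a sinusoidal LO and checks via an FFT that the optical carrier is suppressed while ±fLO sidebands appear; the drive amplitude and sampling parameters are assumptions.

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the paper's OptiSystem model)
f_lo = 14e9          # LO frequency driving the MZM (Hz)
v_pi = 4.0           # MZM half-wave voltage (V), matching the 4-V bias in Sect. 3
v_m = 0.8 * v_pi     # LO drive amplitude (V), chosen arbitrarily
fs = 112e9           # sampling rate of the baseband model (Hz)
n = 4096             # number of samples (f_lo falls exactly on an FFT bin)

t = np.arange(n) / fs
drive = v_pi + v_m * np.sin(2 * np.pi * f_lo * t)   # bias at V_pi plus the LO signal

# Ideal push-pull MZM field transfer: E_out = E_in * cos(pi * V / (2 * V_pi)),
# with the CW optical carrier represented as E_in = 1 in its own rotating frame.
e_out = np.cos(np.pi * drive / (2 * v_pi))

spec = np.abs(np.fft.fftshift(np.fft.fft(e_out))) / n
freq = np.fft.fftshift(np.fft.fftfreq(n, 1 / fs))

carrier = spec[np.argmin(np.abs(freq))]             # residual amplitude at the carrier
sideband = spec[np.argmin(np.abs(freq - f_lo))]     # first-order sideband at +f_lo
print(f"carrier/sideband amplitude ratio: {carrier / sideband:.2e}")
print("sideband separation: 2*f_lo =", 2 * f_lo / 1e9, "GHz")  # 28 GHz, as in the paper
```

Biasing at the transmission null is what removes the carrier: the output contains only odd harmonics of the LO, so the two first-order sidebands spaced by 2fLO dominate.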

3 Simulation and Results

A hybrid TWDM-RoF system with one wired signal at 10 Gbps and two wireless signals at 2.5 Gbps is evaluated and demonstrated based on the simulation tools OptiSystem and Matlab to verify the performance of the proposed system. At the OLT/CS, an LD with the central frequency f1 = 187.1 THz (λ1 = 1602.31 nm) is used to transmit the 10-Gbps wired downstream signal, and the central frequencies of the wireless TXs at f2 = 187.2 THz (λ2 = 1601.46 nm) and f3 = 187.3 THz (λ3 = 1600.60 nm) are chosen for the two 2.5-Gbps wireless downstream signals. In addition, another light source with the central frequency f4 = 187.4 THz (λ4 = 1599.75 nm) is distributed as an upstream optical carrier. All baseband data signals are modulated onto their corresponding optical carriers through directly modulated lasers (DMLs) with 10-dBm output power. The modulated signals and upstream light sources are multiplexed into an SMF via a MUX. The fiber length between the CS and the SCS is 25 km with an attenuation of 0.25 dB/km.

At the SCS, the received signal is divided into two channels by a 1:2 splitter; the first channel signal is sent to ONU1, and the second channel signal is modulated by a 14-GHz LO through a DMZM, whose DC bias voltage is set to Vbias = Vπ = 4 V. The electrical phase shift of the inserted LO signals between the two arms of the DMZM is 180° to achieve OCS modulation. Figure 2 shows the spectrum of the modulated signal, where the central frequencies of the optical carriers are suppressed, and the spacing between the newly generated first-order upper sidebands and the first-order lower sidebands is 28 GHz. After being amplified by an EDFA with a 20-dB gain, the modulated signals are separated by a 1:2 splitter and sent to BS1 and BS2, respectively.
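To make the carrier plan and power budget above easier to follow, the short calculation below (a back-of-the-envelope sketch; the splitter/insertion losses are illustrative assumptions, only the quoted launch power, fiber loss and EDFA gain come from the text) lists the OCS sidebands produced for each carrier and tallies the CS-to-BS optical loss.

```python
# Carrier plan and rough power budget for the simulated link (values from the text
# where quoted; the 3.5-dB splitter losses are illustrative assumptions).
F_LO = 14e9                                  # DMZM drive frequency (Hz)
carriers_thz = {"f1": 187.1, "f2": 187.2, "f3": 187.3, "f4": 187.4}

for name, f_thz in carriers_thz.items():
    lower = f_thz - F_LO / 1e12              # first-order lower sideband (THz)
    upper = f_thz + F_LO / 1e12              # first-order upper sideband (THz)
    print(f"{name}: {lower:.3f} THz / {upper:.3f} THz "
          f"(spacing {(upper - lower) * 1e3:.0f} GHz)")   # 28 GHz for every carrier

launch_dbm = 10.0                            # DML output power (dBm)
fiber_loss_db = 25 * 0.25                    # 25 km at 0.25 dB/km = 6.25 dB
splitter_loss_db = 3.5                       # assumed loss of each 1:2 split
edfa_gain_db = 20.0                          # EDFA gain at the SCS

power_at_bs = (launch_dbm - fiber_loss_db - splitter_loss_db
               + edfa_gain_db - splitter_loss_db)
print(f"approximate optical power reaching a BS: {power_at_bs:.1f} dBm")
```

For f4 = 187.4 THz the listed sidebands are 187.386 THz and 187.414 THz, matching the upstream carriers quoted later in this section.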

Fig. 2. Optical spectrum after OCS modulation at the SCS

In order to simulate the receiving process of the mobile terminal and verify the quality of the wireless signals received at the BSs, a frequency mixer with a 28-GHz LO and a low-pass filter (LPF) are employed to recover the original wireless baseband data; the structure of the measurement setup at the BS is shown in Fig. 3. Taking BS1 as an example, a 1 × 2 WSS is utilized to separate the downstream data signal from the upstream optical carrier. The central frequency of output port 1 of the WSS is set to 187.2 THz with a bandwidth of 32 GHz, which exceeds the 28-GHz frequency of the required RF signal. Then, a PD is exploited to realize the optical-to-electrical conversion and to obtain the 28-GHz RF signal through the beating function of the PD. The generated RF signal is down-converted by a 28-GHz LO. After an LPF, the 2.5-Gbps wireless baseband data can be recovered by a 3R receiver and measured by a BER tester (BERT). At BS2, by setting the central frequency of output port 1 of the WSS to 187.3 THz, the wireless signal can be recovered and the BER performance can be measured.
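The mixer-plus-LPF recovery step at the BS can be sketched in a few lines; the code below is a simplified baseband illustration (an assumed 2.5-Gbps NRZ pattern, an ideal multiplier as the mixer and a Butterworth LPF from SciPy), not the OptiSystem receiver chain used by the authors.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
FS = 100e9                # sampling rate of the toy model (Hz)
RB = 2.5e9                # wireless bit rate (bps)
F_RF = 28e9               # detected RF carrier (Hz)
SPB = int(FS / RB)        # samples per bit

bits = rng.integers(0, 2, 64)
baseband = np.repeat(bits, SPB).astype(float)          # NRZ waveform
t = np.arange(baseband.size) / FS

rf = baseband * np.cos(2 * np.pi * F_RF * t)           # RF signal after the PD
mixed = rf * np.cos(2 * np.pi * F_RF * t)              # mix with the 28-GHz LO

b, a = butter(5, (0.7 * RB) / (FS / 2))                # LPF cutoff ~0.7*Rb (assumed)
recovered = 2 * filtfilt(b, a, mixed)                  # remove the 2*F_RF mixing product

decisions = (recovered[SPB // 2::SPB] > 0.5).astype(int)[:bits.size]
print("bit errors in the toy run:", int(np.sum(decisions != bits)))
```

Mixing the 28-GHz carrier with an equal-frequency LO produces the baseband term plus a 56-GHz product, which the LPF removes before mid-bit sampling.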

Fig. 3. Structure of the measurement setup at the BS (downstream: WSS port 1 → PD → PA → 28-GHz LO mixer → LPF → RX → BERT; upstream: 2.5-Gbps data TX mixed with a 28-GHz LO → MZM on WSS port 2)

The measured BER curves of the three downstream signals are shown in Fig. 4, where a slight power penalty of 0.1 dB is observed between BS1 and BS2. Therefore, the performances of BS1 and BS2 are almost equivalent, and the receiver sensitivities of the two wireless downstream signals are −25 dBm at a BER of 10^−3, which can be reduced to below 10^−10 if forward error correction (FEC) is exploited. The measured received power of the wired downstream signal at ONU1 is −23.5 dBm at a BER of 1.4 × 10^−3. To achieve the same BER as the wireless downstream signals, a 1.5-dB power penalty for the wired downstream signal is required. This finding is attributed to the higher data rate of the wired signal, which suffers from the impact of chromatic dispersion over the 25-km fiber-optic link.

Fig. 4. BER performance of the wired and wireless downstream signals (downstream BER versus received optical power in dBm for ONU1, BS1 and BS2)
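For readers who want to relate the quoted BER values to Q-factors, the following snippet (a generic Gaussian-noise approximation, not part of the authors' measurement) converts the downstream BER figures into Q and restates the 1.5-dB wired-versus-wireless power penalty.

```python
from math import sqrt
from scipy.special import erfcinv

def q_from_ber(ber: float) -> float:
    """Q-factor under the Gaussian approximation BER = 0.5*erfc(Q/sqrt(2))."""
    return sqrt(2) * erfcinv(2 * ber)

for label, ber in [("wireless (BS1/BS2) at -25 dBm", 1e-3),
                   ("wired (ONU1) at -23.5 dBm", 1.4e-3)]:
    print(f"{label}: Q = {q_from_ber(ber):.2f}")

# Power penalty of the 10-Gbps wired channel relative to the 2.5-Gbps wireless ones
penalty_db = (-23.5) - (-25.0)
print(f"wired-signal power penalty: {penalty_db:.1f} dB")
```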


For the upstream transmission, an upstream wired signal at 10 Gbps with a pseudo-random binary sequence (PRBS) of word length 2^31 − 1 is modulated by an external MZM and transmitted back to the OLT via the 25-km SMF. At the BSs, two new sidebands with frequencies of 187.414 and 187.386 THz are generated after the OCS modulation. To verify the transmission performance of the first-order sideband signals, the upper sideband at 187.414 THz is selected by setting the central frequency of the WSS to 187.414 THz; the lower sideband at 187.386 THz is similarly selected by another WSS in BS2. The wireless data at 2.5 Gbps are mixed with a 28-GHz LO and fed into an MZM for electrical-to-optical conversion. After the 25-km transmission, the wireless signal is recovered in the CS with the same structure as in BS1. The input optical powers of the MZMs at the two BSs and at ONU1 are very important for the reliability of the upstream signals in the backhaul. The BER performances of the wired and wireless upstream signals at different input optical power levels are shown in Fig. 5. For a wireless signal received in the CS with a BER below 10^−3, the input optical power should be at least −12.5 dBm, and the BER of the upstream wired signal is 1.3 × 10^−3 when its input optical power is −15 dBm. To obtain the same BER, the input optical powers at the BSs must be 2.5 dB higher than the input optical power at an ONU. The performance degradation of the wireless system is due to the RF transmission over the optical fiber link, which suffers from the dispersion effect of the fiber. To solve this issue, the DEMUX at the CS is employed to filter out one of the two sidebands that contain the original baseband data signal; then, optical down-conversion is performed to obtain the baseband wireless signal. Since only one sideband is utilized, the power of the other sideband is wasted, and a larger input power at the BS MZMs is essential to guarantee the received power of the single-sideband signal.

Fig. 5. BER performance of the wired and wireless upstream signals (upstream BER versus launched optical power in dBm for ONU1, BS1 and BS2)


4 Conclusions

In this paper, we propose a full-duplex TWDM-RoF system that can provide hybrid transmission of wired and wireless signals based on the SCS structure. Compared with a traditional RoF system, this system has many advantages, such as a seamless fusion between the CS and the OLT and a reduction of CS cost owing to the sharing of optical fiber links and transceivers in the existing optical network. The impact of the chromatic dispersion effect on RF signals caused by long-distance transmission in an optical fiber is eliminated by using the baseband data format. Bidirectional communication is implemented in the TWDM-RoF system through the centralized allotment of upstream optical carriers in the CS. The light sources provided for the ONUs and the upper/lower sideband signals generated by OCS modulation are adopted by the BSs in the RoF system, which can support more BSs and improve the utilization of the central light sources. Finally, a 28-GHz TWDM-RoF system with two wireless signals at 2.5 Gbps and one wired signal at 10 Gbps for bidirectional transmission over 25 km is demonstrated via simulation. The reliability and practicality of this system are verified based on the results of the BER curves of the bidirectional transmission of wired and wireless signals, which makes this system a promising candidate for future 5G mobile communications.

Acknowledgements. This work is supported in part by the China Postdoctoral Science Foundation (2019M651095) and the Fundamental Research Funds for the Central Universities under Grant 3132019210 and Grant 3132019220.

References 1. Soldani D, Manzalini A (2015) Horizon 2020 and beyond: on the 5G operating system for a true digital society. IEEE Veh Technol Mag 10:32–42 2. Alliance N (2015) 5G white paper. Next generation mobile networks, white paper, pp 1–125 3. Li R, Zhao Z, Zhou X et al (2017) Intelligent 5G: when cellular networks meet artificial intelligence. IEEE Wirel Commun 24(5):175–183 4. Chen H, Li Y, Bose SK, Shao W, Xiang L, Ma Y, Shen G (2016) Cost-minimized design for TWDM-PON-based 5G mobile backhaul networks. J Opt Commun Netw 8:B1–B11 5. Sauer M, Kobyakov A, George J (2007) Radio over fiber for picocellular network architectures. J Lightwave Technol 25:3301–3320 6. Ying C, Lu H, Li C, Chu C, Lu T, Peng P (2015) A bidirectional hybrid lightwave transport system based on fiber-IVLLC and fiber-VLLC convergences. IEEE Photon J 7:1–11 7. Mahmood NH, Lauridsen M, Berardinelli G et al (2016) Radio resource management techniques for eMBB and mMTC services in 5G dense small cell scenarios. In: Vehicular technology conference, Montreal, Canada, pp 1–5


8. Mitchell JE (2014) Integrated wireless backhaul over optical access networks. J Lightwave Technol 32:3373–3382 9. Luo Y, Zhou X, Effenberger F, Yan X, Peng G, Qian Y, Ma Y (2013) Time- and wavelength-division multiplexed passive optical network (TWDM-PON) for next-generation PON stage 2 (NG-PON2). J Lightwave Technol 31(4):587–593 10. Nesset D (2015) NG-PON2 technology and standards. J Lightwave Technol 33:1136–1143 11. Wey JS, Nesset D, Valvo M, Grobe K, Roberts H, Luo Y, Smith J (2016) Physical layer aspects of NG-PON2 standards—part 1: optical link design. J Lightwave Technol 8:33–42

Optimal Subcarrier Allocation for Maximizing Energy Efficiency in AF Relay Systems

Weidang Lu1, Shanzhen Fang1, Yiyang Qiang1, Bo Li2, and Yi Gong3

1 College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
[email protected]
2 School of Information and Electrical Engineering, Harbin Institute of Technology, Weihai 264209, China
3 Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen 518055, China

Abstract. In this paper, we propose a new energy efficiency maximization algorithm for an AF relay OFDM-SWIPT system. Specifically, we divide the transmission time into two parts. In the first part, the base station transmits information to the user and the relay node, and the relay node uses different subcarriers to implement SWIPT. In the second part, the relay node forwards information to the user. We formulate the above process and then obtain the maximum energy efficiency through a series of derivations.

Keywords: Energy efficiency · AF relay · SWIPT · OFDM · Subcarrier allocation

1 Introduction

With the deepening of the concept of green communication, energy efficiency has become an important indicator for evaluating communication systems. Because of this, how to effectively improve energy efficiency has become an urgent problem in the field of communications. In [1], the authors consider the problem of maximizing energy efficiency in both single-user and multi-user situations, where the signal is transmitted directly from the base station to the user. Amplify-and-forward (AF) is an important protocol of cooperative communication, which can effectively improve the signal transmission quality. In [2], the authors apply AF technology to smart cities and derive the maximum achievable rate. The transmission-rate optimization problem of a two-way AF relay network is studied in [3], where the authors prove that the relay location also affects the system performance. Some scholars have innovatively applied the AF protocol to improve the energy efficiency of networks. In [4], the authors study the energy efficiency optimization problem of full-duplex AF relay channels, where they also compare the energy efficiency of direct transmission, half-duplex and full-duplex operation. In [5], power and time allocation are jointly optimized for maximizing the energy efficiency. A new energy-efficiency maximization (EEM) algorithm for a multi-user multi-carrier AF relay system is proposed in [6]; in addition, the authors also consider a sub-optimal algorithm with lower complexity.

2 System Model and Problem Formulation

2.1 System Model

We consider the traditional three-node system, i.e., a base station (BS), a relay node (RN) and a user (U), as shown in Fig. 1. We define the set of subcarriers as $\mathcal{N} = \{1, 2, \ldots, N\}$. $h_{1,n}$ and $h_{2,n}$ represent the channel coefficients from BS to U and from BS to RN over subcarrier $n$ in the first time part, respectively, while $h_{1,n'}$ and $h_{3,n'}$ denote the channel coefficients from BS to U and from RN to U over subcarrier $n'$ in the second time part. We let $\sigma^2_{i,n}$, $i \in \{1, 2, 3\}$, denote the noise power at BS, RN and U, respectively, and $P$ is the total transmission power of the BS.

Fig. 1. System model (the BS serves the user U directly over $h_{1,n}$ / $h_{1,n'}$ and the relay node RN over $h_{2,n}$; the RN splits its subcarriers between information decoding ($S_I$) and energy harvesting ($S_P$) in the first part and forwards to U over $h_{3,n'}$ in the second part)

According to the Shannon formula and maximal ratio combining (MRC) theory, we can get the total rate at U after the two time parts:
$$R_{total}(S) = R^{R}_{B,U} + R^{1}_{B,U} + R^{2}_{B,U} \tag{1}$$
where
$$R^{1}_{B,U} = \frac{1}{2}\sum_{n \in S_P} \ln\left(1 + \frac{|h_{1,n}|^2 p_{e,n}}{\sigma^2_{1,n}}\right) \tag{2}$$
$$R^{2}_{B,U} = \frac{1}{2}\sum_{n \in S_I}\sum_{n'=1}^{N} (1 - q_{nn'}) \ln\left(1 + \frac{|h_{1,n'}|^2 p_{i,n'}}{\sigma^2_{1,n'}}\right) \tag{3}$$
$$R^{R}_{B,U} = \frac{1}{2}\sum_{n \in S_I}\sum_{n'=1}^{N} q_{nn'} \ln\left(1 + \frac{|h_{1,n}|^2 p_{i,n}}{\sigma^2_{1,n}} + \frac{\dfrac{|h_{2,n}|^2 p_{i,n}}{\sigma^2_{2,n}} \cdot \dfrac{|h_{3,n'}|^2 p_{r,n'}}{\sigma^2_{3,n'}}}{\dfrac{|h_{2,n}|^2 p_{i,n}}{\sigma^2_{2,n}} + \dfrac{|h_{3,n'}|^2 p_{r,n'}}{\sigma^2_{3,n'}}}\right) \tag{4}$$

where $p_{i,n}$ and $p_{e,n}$ denote the power for receiving information and for harvesting energy from BS to RN in the first time part, respectively, $p_{r,n'}$ is the power from RN to U, and $p_{i,n'}$ is the power from BS to U in the second time part. $q_{nn'} \in \{0, 1\}$ represents the subcarrier pairing between the first and second time parts: if the subcarrier $n$ used for receiving information in the first time part is paired with the subcarrier $n'$ used for transmitting information in the second time part, then $q_{nn'} = 1$; otherwise $q_{nn'} = 0$. The power consumed by the whole system is expressed as
$$U_{total}(S) = \sum_{n \in S_I} p_{i,n} + \sum_{n \in S_P} p_{e,n} + \sum_{n \in S_I}\sum_{n'=1}^{N} (1 - q_{nn'}) p_{i,n'} + \sum_{n \in S_I}\sum_{n'=1}^{N} q_{nn'} p_{r,n'} - Q \tag{5}$$
where $Q = \sum_{n \in S_P} \left(p_{e,n} |h_{2,n}|^2 + \sigma^2_{2,n}\right)$ represents the energy harvested at the RN. So the energy efficiency [7, 8] of the system can be written as
$$E_{eff}(S) = \frac{R_{total}(S)}{U_{total}(S)}. \tag{6}$$

2.2 Problem Formulation

We aim to maximize the energy efficiency by optimizing the subcarrier allocation under several constraints. The optimization problem and constraints are as follows:
$$\begin{aligned}
P1: \max_{S} \; & E_{eff}(S) \\
\text{s.t. } C1: \; & R_{total} \ge R_T \\
C2: \; & \sum_{n \in S_I} p_{i,n} + \sum_{n \in S_P} p_{e,n} \le P \\
C3: \; & \sum_{n \in S_I}\sum_{n'=1}^{N} q_{nn'}\, p_{r,n'} \le Q \\
C4: \; & \sum_{n \in S_I}\sum_{n'=1}^{N} (1 - q_{nn'})\, p_{i,n'} \le P
\end{aligned} \tag{7}$$
where $R_T$ is the target rate.
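As a sanity check on the notation of (1)–(7), the following sketch (toy random channels and a fixed, non-optimized allocation; the per-subcarrier powers are assumptions) evaluates $R_{total}$, $U_{total}$ and $E_{eff}$ for one small instance.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
sigma2 = 1e-3

# Toy Rayleigh-fading channel gains |h|^2 for the three links
g1 = rng.exponential(1.0, N)    # BS -> U, first part
g2 = rng.exponential(1.0, N)    # BS -> RN
g3p = rng.exponential(1.0, N)   # RN -> U, second part

# Fixed (non-optimized) allocation: even subcarriers decode information (S_I),
# odd subcarriers harvest energy (S_P); S_I subcarriers are paired one-to-one.
S_I = np.arange(0, N, 2)
S_P = np.arange(1, N, 2)
p_i, p_e, p_r = 0.5, 0.2, 0.5   # watts per subcarrier (illustrative)

snr_relay = (g2[S_I] * p_i / sigma2) * (g3p[S_I] * p_r / sigma2) / (
    g2[S_I] * p_i / sigma2 + g3p[S_I] * p_r / sigma2)          # AF two-hop term of (4)
R_relay = 0.5 * np.sum(np.log(1 + g1[S_I] * p_i / sigma2 + snr_relay))
R_harvest_slot = 0.5 * np.sum(np.log(1 + g1[S_P] * p_e / sigma2))   # Eq. (2)
R_total = R_relay + R_harvest_slot            # all S_I subcarriers are paired, so (3) = 0

Q = np.sum(p_e * g2[S_P] + sigma2)            # harvested energy at the RN
U_total = len(S_I) * p_i + len(S_P) * p_e + len(S_I) * p_r - Q    # Eq. (5)
print(f"R_total = {R_total:.2f} nats, U_total = {U_total:.2f} W, E_eff = {R_total / U_total:.2f}")
```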

3 Optimal Solution

We introduce the parameter $q$ to transform the objective function in (7), and then write the Lagrangian of the new objective function as
$$\begin{aligned}
L(S) = {} & (R_{total} - q U_{total}) + \beta_1 \left[ R_{total} - R_T \right] + \beta_2 \left[ P - \sum_{n \in S_I} p_{i,n} - \sum_{n \in S_P} p_{e,n} \right] \\
& + \beta_3 \left[ Q - \sum_{n \in S_I}\sum_{n'=1}^{N} q_{nn'}\, p_{r,n'} \right] + \beta_4 \left[ P - \sum_{n \in S_I}\sum_{n'=1}^{N} (1 - q_{nn'})\, p_{i,n'} \right]
\end{aligned} \tag{8}$$

This expression is very complicated, so in order to simplify the calculation, we allocate the power of the first time part evenly, while the power of the second time part is allocated by the water-filling algorithm. The subcarrier pairing is based on sorting the channel gains of the two slots from the highest to the lowest. $\beta = \{\beta_1, \beta_2, \beta_3, \beta_4\}$ are the dual variables, which can be updated by sub-gradient iteration to obtain the optimal $\beta$. After the powers of the first and second time parts and the subcarrier pairing coefficients $q_{nn'}$ are known, (8) can be rewritten as
$$\begin{aligned}
L(S) = {} & \sum_{n \in S_I} F_{n,n'} + \sum_{n'=1}^{N} \left[ \frac{1+\beta_1}{2} \ln(1 + p_{i,n'}\gamma_{1,n'}) - (q + \beta_4)\, p_{i,n'} \right] \\
& + \sum_{n=1}^{N} \left[ \frac{1+\beta_1}{2} \ln(1 + p_{e,n}\gamma_{1,n}) + (\beta_3 + q)\left(p_{e,n}\sigma^2_{2,n}\gamma_{2,n} + \sigma^2_{2,n}\right) - (q + \beta_2)\, p_{e,n} \right] \\
& - \beta_1 R_T + (\beta_2 + \beta_4) P
\end{aligned} \tag{9}$$
where
$$\gamma_{j,n} = \frac{|h_{j,n}|^2}{\sigma^2_{j,n}} \; (j = 1, 2), \qquad \gamma_{m,n'} = \frac{|h_{m,n'}|^2}{\sigma^2_{m,n'}} \; (m = 1, 3),$$
$$F_{n,n'} = F1_{n,n'} + (q + \beta_2)(p_{e,n} - p_{i,n}) - (\beta_3 + q)\left(p_{e,n}\sigma^2_{2,n}\gamma_{2,n} + \sigma^2_{2,n}\right) - \frac{1+\beta_1}{2}\ln(1 + p_{e,n}\gamma_{1,n})$$
and
$$F1_{n,n'} = \frac{1+\beta_1}{2}\ln\left(1 + p_{i,n}\gamma_{1,n} + \frac{p_{i,n}\gamma_{2,n}\, p_{r,n'}\gamma_{3,n'}}{p_{i,n}\gamma_{2,n} + p_{r,n'}\gamma_{3,n'}}\right) - (\beta_3 + q)\, p_{r,n'} - \frac{1+\beta_1}{2}\ln(1 + p_{i,n'}\gamma_{1,n'}) + (q + \beta_4)\, p_{i,n'}.$$
So we get the optimal set of subcarriers in an iteration:
$$S_I = \arg\max_{S_I} \sum_{n \in S_I} F_{n,n'} \tag{10}$$
$$S_P = \mathcal{N} - S_I \tag{11}$$
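The simplification steps just described (equal power in the first slot, water-filling in the second, pairing by sorted gains, and keeping the subcarriers whose metric is positive) can be sketched as follows; this is an illustrative Python outline with toy channels and a simplified stand-in for $F_{n,n'}$, not the authors' full sub-gradient procedure.

```python
import numpy as np

def water_filling(gains: np.ndarray, p_total: float) -> np.ndarray:
    """Classic water-filling over parallel channels with noise-normalized gains."""
    order = np.argsort(gains)[::-1]
    g = gains[order]
    alloc = np.zeros_like(g)
    for k in range(len(g), 0, -1):
        level = (p_total + np.sum(1.0 / g[:k])) / k   # candidate water level
        p = level - 1.0 / g[:k]
        if p[-1] >= 0:                                # all k candidate powers feasible
            alloc[:k] = p
            break
    out = np.zeros_like(alloc)
    out[order] = alloc
    return out

rng = np.random.default_rng(2)
N, sigma2, P = 8, 1e-3, 2.0
g1 = rng.exponential(1.0, N) / sigma2    # gamma_{1,n}
g2 = rng.exponential(1.0, N) / sigma2    # gamma_{2,n}
g3 = rng.exponential(1.0, N) / sigma2    # gamma_{3,n'}

p_first = np.full(N, P / N)              # equal power in the first time part
p_second = water_filling(g3, P)          # water-filling in the second time part

# Pair first-slot and second-slot subcarriers by descending channel gain
first_order = np.argsort(g2)[::-1]
second_order = np.argsort(g3)[::-1]

# Simplified stand-in for the metric F_{n,n'}: relayed-link value minus a power cost
# (q and beta_3 are fixed here purely for illustration)
q, beta3 = 0.5, 0.1
F = np.array([
    0.5 * np.log(1 + g1[n] * p_first[n]
                 + (g2[n] * p_first[n] * g3[m] * p_second[m])
                 / (g2[n] * p_first[n] + g3[m] * p_second[m] + 1e-12))
    - (beta3 + q) * p_second[m]
    for n, m in zip(first_order, second_order)
])
S_I = np.sort(first_order[F > 0])        # keep subcarriers with positive metric
S_P = np.setdiff1d(np.arange(N), S_I)
print("S_I:", S_I, "  S_P:", S_P)
```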

After multiple iterations until convergence, we can get the optimal solution of the problem P1.
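The fractional objective is handled by the q-update loop referred to here (set q to the previous iteration's energy efficiency and re-solve); a minimal sketch of this Dinkelbach-style outer loop, with the inner resource allocation abstracted into a placeholder function and toy numbers, is given below.

```python
from typing import Callable, Tuple

def dinkelbach(solve_inner: Callable[[float], Tuple[float, float]],
               tol: float = 1e-4, max_iter: int = 20) -> float:
    """Outer loop for max R/U: q_{k+1} = R(S_k)/U(S_k) until R - q*U ~ 0.

    solve_inner(q) must return (R_total, U_total) of the allocation that
    maximizes R - q*U for the given q; here it is a user-supplied placeholder.
    """
    q = 0.0                                   # first iteration uses q = 0
    for _ in range(max_iter):
        r_total, u_total = solve_inner(q)
        if abs(r_total - q * u_total) < tol:  # convergence of the parametric problem
            break
        q = r_total / u_total                 # next q = current energy efficiency
    return q

# Toy inner solver: pretend the best achievable pair (R, U) depends mildly on q.
def toy_inner(q: float) -> Tuple[float, float]:
    r = 10.0 - 0.5 * q                        # illustrative numbers only
    u = 4.0 + 0.1 * q
    return r, u

print(f"converged energy efficiency: {dinkelbach(toy_inner):.4f}")
```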

4 Simulation Results

Fig. 2. Total transmission power versus energy efficiency

In the simulation, we set the noise at the BS, RN and U to be the same. In Fig. 2, we set different noise parameters and find that the noise power has an important impact on the energy efficiency. In addition, we notice that the energy efficiency reaches its maximum at P = 1.6 W, which proves the effectiveness of the proposed algorithm.


Fig. 3. Number of iterations versus energy efficiency

First, we set the parameter q equal to 0 and obtain the energy efficiency of the first iteration; we then let q equal the energy efficiency value obtained in the first iteration and obtain the energy efficiency of the next iteration. As shown in Fig. 3, the energy efficiency converges to the maximum value after five iterations.

Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grant 61871348, in part by the Shenzhen Science and Technology Program under Grant JCYJ20170817110410346, in part by Peng Cheng Laboratory under Grant PCL2018KP002, in part by the Project funded by the China Postdoctoral Science Foundation under Grant 2019T120531, and in part by the Fundamental Research Funds for the Provincial Universities of Zhejiang under Grant RF-A2019001.

References 1. Lu W, Fang S, Hu S, Liu X, Li B, Na Z, Gong Y (2018) Energy efficiency optimization for OFDM based 5G wireless networks with simultaneous wireless information and power transfer. IEEE Access 6:75937–75946 2. Lu W, Gong Y, Liu X, Wu J, Peng H (2018) Collaborative energy and information transfer in green wireless sensor networks for smart cities. IEEE Trans Ind Inform 14(4):1585–1593 3. Lu W, Zhao W, Hu S, Liu X, Li B, Gong Y (2018) OFDM based SWIPT for two-way AF relaying network. IEEE Access 6:73223–73231 4. Ma J, Huang C, Cui S, Li Q (2019) Energy efficiency of amplify-and-forward full-duplex relay channels. IEEE Wireless Commun Lett


5. Tan F, Lv T, Huang P (2018) Global energy efficiency optimization for wireless-powered massive MIMO aided multiway AF relay networks. IEEE Trans Signal Process 66(9):2384– 2398 6. Gupta A, Singh K, Sellathurai M (2019) Time-switching EH-based joint relay selection and resource allocation algorithms for multi-user multi-carrier AF relay networks. IEEE Trans Green Commun Networking 3(2):505–522 7. Liu D, Zhao M, Zhou W (2018) Energy efficiency optimization in energy harvesting incremental relay system. In: 10th international conference on wireless communications and signal processing (WCSP). Hangzhou, pp 1–6 8. Kuang Z, Liu G, Li G, Deng X (2019) Energy efficient resource allocation algorithm in energy harvesting-based D2D heterogeneous networks. IEEE Internet Things J 6(1):557–567

A Study on D2D Communication Based on NOMA Technology

Xiumei Wang1, Kai Mao2, Huiru Wang3, and Yin Lu3,4

1 College of Electronic and Optical Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
2 School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
3 Jiangsu Key Laboratory of Wireless Communications, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
[email protected]
4 Engineering Research Center of Health Service System Based on Ubiquitous Wireless Networks, Nanjing University of Posts and Telecommunications, Ministry of Education, Nanjing, China

Abstract. Considering the spectrum utilization of the cellular system and the overall communication capacity of the system, a device-to-device (D2D) communication system model based on non-orthogonal multiple access (NOMA) technology is established, and the concept of a "D2D group", which allows one D2D transmitter to communicate with multiple D2D receivers simultaneously, is introduced. In this model, D2D groups use NOMA technology for transmission, and multiple D2D groups can reuse the same sub-channel. Aiming at the co-channel interference caused by spectrum resources shared between cellular users and D2D groups, and at the power allocation problem based on the NOMA principle within the D2D groups, an optimization problem of maximizing the total rate of the cellular network is handled in two steps. First, an effective resource allocation scheme based on a weighted bipartite graph algorithm is proposed; then the power allocation scheme of each D2D group is obtained by treating the power allocation factor as an unknown quantity. The simulation results show that the proposed algorithm can greatly increase the number of admitted D2D user groups and the total rate of the cellular system.

Keywords: Non-orthogonal multiple access · Device-to-device communication · Resource allocation · Power allocation

1 Introduction

D2D technology provides a controllable and stable working environment [1]. Compared with traditional orthogonal multiple access (OMA) technology, NOMA can achieve higher spectral efficiency and support more connections [2]. Combining these two technologies can better meet the deployment requirements of mobile communication networks.

A communication system in which D2D users coexist with cellular users can improve spectrum utilization and quality of service (QoS) and reduce communication delay [3] compared with cellular-only systems. However, applying D2D technology to cellular networks brings problems such as resource scheduling, real-time performance, reliability and interference suppression. How to suppress interference is a difficult and critical issue. Fodor et al. [4] propose a solution for cellular mobile devices and D2D users sharing spectrum resources, thereby improving the spectral efficiency and energy efficiency of traditional cellular networks. Doppler et al. [5] describe the session and management mechanism of D2D communication and how to limit the interference generated by D2D communication. Kim [6] proposes a D2D link radio resource allocation strategy for the interference problem between D2D users and cellular users. The authors of [7, 8] propose wireless resource allocation schemes to minimize the interference between users caused by the coexistence of D2D users with cellular users and by the combination of D2D technology with multi-user MIMO (MU-MIMO) technology. NOMA can allocate one resource to multiple users to improve spectrum utilization [9]. Reasonable allocation of power can reduce mutual interference between users and further increase the overall system rate. Zhao [10] proposes to use the power-domain multiple access technology of NOMA to distinguish users, so that users can simultaneously transmit signals at the same frequency, which greatly improves the system capacity.

In this paper, in view of the shortage of spectrum resources and the large number of users in mobile cellular networks, a resource allocation and power allocation algorithm for D2D user groups based on NOMA technology is proposed to maximize the total rate of the cellular network system. First, for the co-channel interference caused by the spectrum resources shared by cellular users and D2D user groups and for unreasonable channel allocation, an effective resource allocation scheme based on a weighted bipartite graph algorithm is proposed. Then, using the Karush-Kuhn-Tucker conditions, the optimal power allocation scheme based on NOMA is calculated within each D2D user group.

2 System Model

The D2D groups use NOMA technology to communicate, which is different from the traditional communication method. In the new communication system, a D2D transmitter can communicate with multiple D2D receivers through NOMA technology. This paper focuses on the uplink communication scenario with one cell, as shown in Fig. 1. There are one base station (BS), M cellular users and N D2D user groups in the cell. It is assumed that all cellular users and D2D user groups are evenly distributed in the cell, which is centered on the base station, and the D2D receivers are evenly distributed in a circle with radius r centered on their D2D transmitter. The cellular users are represented by the set C, where C = {CU1, …, CUM}; each cellular user is assigned one of the mutually orthogonal subchannels. The subchannels are represented by the set SC, where SC = {SC1, …, SCM}. The D2D user groups are represented by the set D, where D = {D1, …, DN}; there are one transmitter DTn and K receivers DRn,k, k ∈ {1, 2, …, K}, in each D2D user group. The communication channels follow Rayleigh fading, and each channel gain is a fixed value. Since the channels are mutually orthogonal and each cellular user is assigned one subchannel, there is no inter-channel interference.


Fig. 1. System model (cellular users CU1, …, CUM on orthogonal subchannels SC1, …, SCM; D2D groups D1, …, Dn, each with a transmitter DTn and receivers DRn,1, …, DRn,K, reusing the subchannels)

The signal received by the base station from the cellular user CUm in the uplink communication is:
$$y_m = \sqrt{P_{cu}}\, h_{m,BS}\, x_m + \sum_{n} \lambda_{m,n} \sqrt{P_d}\, g_{n,BS}\, t_n + n_m, \tag{1}$$
where $x_m$ and $t_n$ are the transmitted signals of the cellular user CUm and the D2D transmitter DTn, respectively. $n_m$ is additive white Gaussian noise (AWGN) with variance $\sigma^2$. $P_{cu}$ and $P_d$ represent the transmit powers of CUm and DTn. $h_{m,BS}$ is the gain of the communication link between the cellular user and the BS, and $g_{n,BS}$ is the gain of the communication link between the D2D transmitter and the BS. $\lambda_{m,n}$ is an access factor: when the user group Dn is allowed to access the subchannel SCm, $\lambda_{m,n}$ is 1, otherwise it is 0. From Eq. (1), the signal-to-interference-plus-noise ratio (SINR) of the cellular user CUm received by the BS is:

$$\gamma_m = \frac{P_{cu} |h_{m,BS}|^2}{\sum_{n} \lambda_{m,n} P_d |g_{n,BS}|^2 + \sigma^2}. \tag{2}$$

It is assumed that the user group Dn shares the subchannel SCm with the cellular user CUm, and the channel gains of the receivers of Dn can be ordered as |fn,1| ≥ |fn,2| ≥ … ≥ |fn,K|. According to the NOMA transmission mechanism, linear superposition is performed after each user is allocated a different power at the transmitter, and the user with the larger channel gain is allocated the lower power. Decoding is performed in the order of decreasing signal strength (ascending channel gain), where the weaker signals within the received superimposed signal are treated as interference by the receiver when decoding the stronger signal. For the kth receiver in the nth D2D user group, the interference therefore comes from the receivers whose indices are greater than k. The received signal of the kth user in Dn is:
$$x^m_{n,k} = f_{n,k} \sum_{k'=k}^{K} \sqrt{\alpha_{n,k'} P_d}\, s_{n,k'} + \sum_{n' \ne n} \lambda_{m,n'} \sqrt{P_d}\, g_{n',n,k}\, s_{n'} + h_{m,n,k} \sqrt{P_{cu}}\, x_m + n_{n,k} \tag{3}$$

where $f_{n,k}$ is the channel gain between DTn and DRn,k, $h_{m,n,k}$ is the channel gain between CUm and DRn,k, and $g_{n',n,k}$ is the channel gain between DTn′ and DRn,k. $\alpha_{n,k'}$ represents the power allocation coefficient of DRn,k′, and $s_{n,k'}$ is the signal from DTn to DRn,k′. $n_{n,k}$ is the AWGN received by DRn,k. When the user group Dn′ multiplexes the same subchannel SCm as the user group Dn, $\lambda_{m,n'} = 1$; otherwise $\lambda_{m,n'} = 0$. According to Eq. (3), the SINR of the kth receiver of Dn is:
$$\gamma^m_{n,k} = \frac{|f_{n,k}|^2 P_d\, \alpha_{n,k}}{I^{in}_{n,k} + I^{out}_{n,k} + I^{cu}_{n,k} + \sigma^2}. \tag{4}$$

When the SINRs of a cellular user and of a D2D user are greater than the thresholds $\gamma^{C}_{min}$ and $\gamma^{D}_{min}$, respectively, the user can access the system. According to Shannon's theorem and Eqs. (2) and (4), the communication rates of the cellular user CUm on the subchannel SCm and of a D2D user group Dn that accesses the channel can be calculated respectively as:
$$R_m = \log_2\left(1 + \frac{P_{cu}|h_{m,BS}|^2}{\sum_{n}\lambda_{m,n} P_d |g_{n,BS}|^2 + \sigma^2}\right) = \log_2(1 + \gamma_m), \quad \text{s.t. } \gamma_m \ge \gamma^{C}_{min} \tag{5}$$
$$R^m_n = \sum_{k=1}^{K-1} \log_2\left(1 + \frac{|f_{n,k}|^2 P_d\, \alpha_{n,k}}{I^{in}_{n,k} + I^{out}_{n,k} + I^{cu}_{n,k} + \sigma^2}\right) + \log_2\left(1 + \frac{|f_{n,K}|^2 P_d\, \alpha_{n,K}}{I^{in}_{n,K} + I^{out}_{n,K} + I^{cu}_{n,K} + \sigma^2}\right) = \sum_{k=1}^{K-1}\log_2(1 + \gamma_{n,k}) + \log_2(1 + \gamma_{n,K}), \quad \text{s.t. } \gamma^m_{n,k} \ge \gamma^{D}_{min} \tag{6}$$


The total rate on the subchannel SCm is:
$$R_{SC_m} = R_m + \sum_{n=1}^{N} \lambda_{m,n} R^m_n. \tag{7}$$
The total rate of the communication system is:
$$R_{sum} = \sum_{m=1}^{M} R_{SC_m}. \tag{8}$$

3 Problem Description

For simplicity, we assume that the number of receivers in every D2D user group is 2, i.e., each D2D user group has two receivers. The user with the weaker channel gain is referred to as the first user, and the stronger one as the second user. During demodulation, the signal of the first user is detected first because its power is higher; after subtracting the multiple-access interference caused by the first user from the superimposed signal, the signal of the second user can then be detected. Therefore, the second user is not subject to interference from the first user within the same D2D user group. If Dn and CUm share the same channel resource, the SINR of the first user in the group is:

$$\gamma^m_{n,1} = \frac{|f_{n,1}|^2 \alpha_{n,1} P_d}{I^{in}_{n,1} + I^{out}_{n,1} + I^{cu}_{n,1} + \sigma^2}. \tag{9}$$
The SINR of the second user in the group is:
$$\gamma^m_{n,2} = \frac{|f_{n,2}|^2 \alpha_{n,2} P_d}{I^{out}_{n,2} + I^{cu}_{n,2} + \sigma^2}. \tag{10}$$

So, Eq. (6) can be rewritten as:
$$R^m_n = \log_2(1 + \gamma^m_{n,1}) + \log_2(1 + \gamma^m_{n,2}), \quad \text{s.t. } \gamma^m_{n,k} \ge \gamma^{D}_{min}, \; k = 1, 2. \tag{11}$$
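The two-user case in (9)–(11) is easy to compute directly; the short sketch below (toy channel and interference values chosen arbitrarily) evaluates both SINRs and the group rate for a given power split, which is also the quantity searched over in Sect. 4.

```python
import math

def d2d_group_rate(f1_sq: float, f2_sq: float, p_d: float, alpha2: float,
                   i1: float, i2: float, sigma2: float) -> tuple:
    """Two-user NOMA group: first user gets power (1 - alpha2), second gets alpha2.

    i1/i2 collect all interference seen by receiver 1/2 other than the intra-group
    term, which for receiver 1 is the second user's superimposed signal.
    """
    alpha1 = 1.0 - alpha2
    sinr1 = f1_sq * alpha1 * p_d / (f1_sq * alpha2 * p_d + i1 + sigma2)
    sinr2 = f2_sq * alpha2 * p_d / (i2 + sigma2)      # SIC removed the first user
    rate = math.log2(1 + sinr1) + math.log2(1 + sinr2)
    return sinr1, sinr2, rate

# Illustrative numbers only (not from the paper's simulation)
s1, s2, r = d2d_group_rate(f1_sq=0.3, f2_sq=0.9, p_d=0.126,  # 21 dBm ~ 0.126 W
                           alpha2=0.4, i1=1e-6, i2=1e-6, sigma2=1e-9)
print(f"SINR1 = {s1:.1f}, SINR2 = {s2:.1f}, group rate = {r:.2f} bit/s/Hz")
```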

The system capacity of the D2D user groups and the total system rate are related to the access factor $\lambda_{m,n}$ and the power allocation coefficients of the users in each D2D user group:
$$\max_{\lambda_{m,n},\, \alpha_{n,1},\, \alpha_{n,2}} \; R_{sum} \tag{12}$$
$$\text{s.t. } \gamma_m \ge \gamma^{C}_{min}, \quad \forall m \in \{1, 2, \ldots, M\} \tag{12.1}$$
$$\gamma^m_{n,k} \ge \gamma^{D}_{min}, \quad \forall n \in \{1, 2, \ldots, N\},\; k = 1, 2 \tag{12.2}$$
$$\lambda_{m,n} \in \{0, 1\}, \quad \forall m \in \{1, 2, \ldots, M\},\; n \in \{1, 2, \ldots, N\} \tag{12.3}$$
$$\sum_{m=1}^{M} \lambda_{m,n} \le 1, \quad \forall n \in \{1, 2, \ldots, N\} \tag{12.4}$$
$$\alpha_{n,1} + \alpha_{n,2} = 1, \quad \forall n \in \{1, 2, \ldots, N\} \tag{12.5}$$
$$\alpha_{n,1} > \alpha_{n,2} > 0, \quad \forall n \in \{1, 2, \ldots, N\} \tag{12.6}$$

Here, Eqs. (12.1) and (12.2) represent the SINR restrictions of the cellular users and of the receivers in the D2D user groups, respectively. Equation (12.3) gives the value range of the access factor. Equation (12.4) indicates that each D2D user group can be assigned to at most one channel. Equation (12.5) ensures that the total power allocated to the users equals the power of the D2D transmitter. According to the NOMA criterion, Eq. (12.6) allocates the larger power to the user with the weaker channel gain and the smaller power to the stronger user.

4 Resource and Power Allocation Algorithm

The main idea of the resource allocation algorithm is to check whether the SINR of each user's received signal satisfies the threshold, to maximize the total system rate, and to enable as many D2D user groups as possible to access the cellular network; a suboptimal solution is obtained with a weighted bipartite graph algorithm. All candidate D2D user groups and all cellular users form the two vertex sets of the bipartite graph, respectively. According to the NOMA principle, the power allocation coefficient of the first user is initially set to 0.6 and that of the second user to 0.4. For the multiplexing combinations between the D2D user groups and the cellular users, an M × N matrix G can be constructed, as shown in Table 1:

Table 1. Reuse combination
        D1       D2       …   Dn       …   DN
SC1     G(1,1)   G(1,2)   …   G(1,n)   …   G(1,N)
SC2     G(2,1)   G(2,2)   …   G(2,n)   …   G(2,N)
…       …        …            …            …
SCm     G(m,1)   G(m,2)   …   G(m,n)   …   G(m,N)
…       …        …            …            …
SCM     G(M,1)   G(M,2)   …   G(M,n)   …   G(M,N)

The element G(m, n) of the matrix G stores the total capacity $R^n_{SC_m}$ of the channel, the access factor $\lambda_{m,n}$, and the permission factor $e_{m,n}$ when the D2D user group Dn shares the subchannel SCm with the cellular user CUm, as follows:
$$G(m, n) = \left\{ e_{m,n},\; \lambda_{m,n},\; R^n_{SC_m} \right\}, \quad \forall m \in \{1, 2, \ldots, M\},\; \forall n \in \{1, 2, \ldots, N\}. \tag{13}$$
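One way to realize the weighted bipartite graph step is the assignment formulation sketched below; it uses SciPy's linear_sum_assignment on a weight matrix of achievable subchannel rates, assumes a single D2D group per subchannel in one matching round, and the random weights stand in for the $R^n_{SC_m}$ values stored in Table 1.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
M, N = 15, 10                     # subchannels (cellular users) and D2D groups

# Stand-in for Table 1: achievable total rate R^n_{SC_m} if group n reuses SC_m,
# set to -inf where the SINR thresholds (12.1)/(12.2) would be violated.
rates = rng.uniform(1.0, 6.0, size=(M, N))
feasible = rng.random((M, N)) > 0.2
weights = np.where(feasible, rates, -np.inf)

# Maximum-weight matching: linear_sum_assignment minimizes cost, so negate.
cost = np.where(np.isfinite(weights), -weights, 1e9)
rows, cols = linear_sum_assignment(cost)

lam = np.zeros((M, N), dtype=int)            # access factors lambda_{m,n}
for m, n in zip(rows, cols):
    if np.isfinite(weights[m, n]):           # discard infeasible pairings
        lam[m, n] = 1

print("admitted D2D groups:", int(lam.sum()))
print("total matched rate :", float((lam * np.where(np.isfinite(weights), weights, 0)).sum()))
```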

After this calculation, we know whether each D2D user group shares the channel resources of a cellular user, which channel resource is allocated to it, and the value of the access factor $\lambda_{m,n}$. Therefore, the rate of the cellular user in each subchannel is determined, and the interference imposed on each D2D user group by the other user groups is also determined, which ensures that the total rate of each D2D user group can be maximized. The power allocation problem of the D2D user group is considered next. First, the power allocation factor of the D2D user group is treated as an unknown quantity: the power allocation factor of the second user is $\alpha$, and that of the first user is $1 - \alpha$. The objective of Eq. (14) is to obtain the $\alpha$ that maximizes the total rate of each D2D user group. In order to ensure the communication quality of the D2D receivers, their SINRs should be greater than the threshold:

$$\begin{aligned}
f(\alpha) = {} & \log_2\left(1 + \frac{|f_{n,1}|^2 (1 - \alpha) P_d}{|f_{n,1}|^2 \alpha P_d + I_{n,1}}\right) + \log_2\left(1 + \frac{|f_{n,2}|^2 \alpha P_d}{I_{n,2}}\right) = \log_2(1 + \gamma^m_{n,1}) + \log_2(1 + \gamma^m_{n,2}) \\
& \text{s.t. } 0 \le \alpha \le 0.5, \quad \gamma^m_{n,1} \ge \gamma^{D}_{min}, \quad \gamma^m_{n,2} \ge \gamma^{D}_{min}
\end{aligned} \tag{14}$$

where $I_{n,k}$, $k = 1, 2$, represents all interference received by the corresponding receiver except the intra-group interference.
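Because (14) is a one-dimensional problem in α, a simple grid search over [0, 0.5] that discards points violating the SINR thresholds already gives a usable solution; the sketch below does exactly that with illustrative channel and interference values (the closed-form KKT solution mentioned in the introduction is not reproduced here).

```python
import numpy as np

def best_alpha(f1_sq, f2_sq, p_d, i1, i2, gamma_min, steps=2000):
    """Grid search of f(alpha) in (14) subject to both SINR constraints."""
    alphas = np.linspace(1e-4, 0.5, steps)
    sinr1 = f1_sq * (1 - alphas) * p_d / (f1_sq * alphas * p_d + i1)
    sinr2 = f2_sq * alphas * p_d / i2
    rate = np.log2(1 + sinr1) + np.log2(1 + sinr2)
    ok = (sinr1 >= gamma_min) & (sinr2 >= gamma_min)
    if not ok.any():
        return None, None                      # no feasible power split
    idx = np.argmax(np.where(ok, rate, -np.inf))
    return alphas[idx], rate[idx]

# Illustrative values only; gamma_min = 1.8 dB as in Table 2
gamma_min = 10 ** (1.8 / 10)
alpha, rate = best_alpha(f1_sq=0.3, f2_sq=0.9, p_d=0.126,
                         i1=1e-6, i2=1e-6, gamma_min=gamma_min)
print(f"alpha* = {alpha:.3f}, group rate = {rate:.2f} bit/s/Hz")
```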

5 Simulation and Analysis

In this paper, we consider the uplink communication scenario of a single cell. The system simulation parameters are shown in Table 2.

Table 2. System simulation parameters
Parameter                                               Value
Radius of the cell (m)                                  500
Maximum distance between users in a D2D group (m)       20
Maximum transmit power of a cellular user (dBm)         24
SINR threshold of a cellular user (dB)                  2.6
Maximum transmit power of a D2D user group (dBm)        21
SINR threshold of a D2D user group (dB)                 1.8
Number of cellular users                                15
Path loss factor                                        4
Number of simulations                                   10,000

Unless otherwise specified, the D2D user groups in all algorithms are based on the NOMA transmission mechanism.


Figure 2 shows the relationship between the number of D2D user groups allowed to access the system and the number of D2D user groups requesting access, for different resource allocation algorithms. It can be seen from Fig. 2 that, as the number of D2D user groups requesting access increases, the number of D2D user groups allowed to access the system increases for both the proposed algorithm and the greedy algorithm.

Fig. 2. Number of allowed D2D groups under different algorithms

Figure 3 shows the relationship between the total system rate and the number of D2D user groups requesting access for the different algorithms. Referring to Figs. 2 and 3, it can be seen intuitively that the total system rate increases as the number of D2D user groups admitted to the system increases. When a large number of D2D user groups request access, the total system rate of the proposed algorithm and of the greedy algorithm is much higher than that of the one-to-one matching algorithm.

Fig. 3. Total system rate under different algorithms


6 Conclusion

This paper proposes a resource allocation and power allocation algorithm for D2D user groups based on NOMA technology. First, the power allocation factors of each D2D user group are fixed and an effective resource allocation scheme is proposed using a weighted bipartite graph algorithm. Then, the power allocation factor is treated as an unknown quantity, and the optimal power allocation scheme based on NOMA is calculated within each D2D user group. The simulation results show that the proposed algorithm allows more users to access the system. The D2D user group communication framework based on NOMA technology can obtain a higher total system rate than that based on OMA technology.

Acknowledgements. This work was supported by the National Natural Science Foundation of China (No. 61271236), Major Projects of Natural Science Research of Jiangsu Provincial Universities (No. 17KJA510004), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX17_0763 and No. KYCX18_0907).

References 1. Wang F, Zhou B, Xu J et al (2011) An efficient retransmission scheme for data sharing in D2D assisted cellular networks. In: 2011 Global mobile congress (GMC). pp 1–6 2. Wu Z, Lu K, Jiang C et al (2018) Comprehensive study and comparison on 5G NOMA schemes. IEEE Access 6:18511–18519 3. Peng C, Li C (2011) Application of D2D technology in LTE-A system. Modern Information Technol 41(Z1):92–95 4. Fodor G, Dahlman E, Mildh G et al (2012) Aspects of network assisted device-to-device communications. IEEE Commun Mag 50(3):170–177 5. Doppler K, Rinne M, Wijting C et al (2009) Device-to-device communication as an underlay to LTE-advanced networks. IEEE Commun Mag 47(12):42–49 6. Kim H, Na J, Cho E (2014) Resource allocation policy to avoid interference between cellular and D2D links in mobile networks. In: The international conference on information networking 2014 (ICOIN2014). pp 588–591 7. Li JC, Lei M, Gao, F (2012) Device-to-device (D2D) communication in MU-MIMO cellular networks. In: 2012 IEEE global communications conference (GLOBECOM). pp 3583–3587 8. Tsai AH, Wang LC, Huang JH et al (2012) Intelligent resource management for device-todevice (D2D) communications in heterogeneous networks. In: International symposium on wireless personal multimedia communications. pp 75–79 9. Saito Y, Benjebbour A, Kishiyama Y et al (2013) System-level performance evaluation of downlink non-orthogonal multiple access (NOMA). In: 2013 IEEE 24th annual international symposium on personal, indoor and mobile radio communications. pp 611–615 10. Zhao R (2015) Research on power allocation in non-orthogonal multiple access. Beijing University of Posts and Telecommunications

Research on Deception Jamming Methods of Radar Netting

Xiaoqian Lu1, Hu Shen2, Wenwen Gao2, and Xiaoyu Zhong2

1 Information Center, University of Electronic Science and Technology of China, Chengdu 611731, China
[email protected]
2 School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China

Abstract. The difficulty of reconnaissance and jamming has greatly increased as a result of the extensive use of radar netting systems, so how to apply effective jamming to a radar netting system has recently become an important problem in the field of electronic countermeasures. In this paper, we first briefly introduce the concept and the characteristics of radar netting; second, we establish a distributed radar netting system composed of three radar models; finally, we analyze a track-jamming algorithm against the netted radars and give the corresponding formulas together with the processing steps of the algorithm. The effectiveness of the algorithm is verified by simulation.

Keywords: Radar netting · Distributed · Track · Jamming · Modeling

1 Introduction

Radar netting [1] is the appropriate layout of radars based on a number of different systems, different frequencies and different polarization modes; the radar information within the network is collected and transmitted through the "net", and comprehensive processing [2], control and management are carried out by a central station, so that the network forms a unified organic whole. The emergence of radar netting has posed a powerful challenge to radar countermeasures. However, radar netting is not perfect, and every kind of radar system has its weaknesses. Aiming at the defects of radar netting, radar countermeasures have also produced some new methods, such as networked reconnaissance systems, distributed jamming and jamming of the radar net communication links, which are effective ways of confronting a distributed radar netting system.

2 Radar Netting Modeling

If we want to jam a radar netting system [3], we first need to establish a model of the radar netting. In this section, we establish the radar netting system that is the object of the jamming research in this paper.


Distributed radar netting systems are widely used. Here, we set up a distributed radar net in which, after the track processing of the detection data by each radar, the track data and the newly detected plot data are transmitted to the host for track correlation and data fusion [4]. We set radar A as the main station: all of the data fusion happens at radar A, which makes the final judgment. The radar parameters are shown in Table 1.

Table 1. Settings of radar parameters
Parameter name                    Radar A         Radar B         Radar C
Work frequency (MHz)              4000–4200       4180–4320       4400–4600
Pulse width (µs)                  10              4               8
Pulse repetition frequency (Hz)   300             300             300
Antenna revolving rate (r/min)    3               4               4
Antenna revolving mode            Circular scan   Circular scan   Circular scan
Beam width                        Azimuth 10°     Azimuth 10°     Azimuth 10°
Antenna gain                      100             2500            1000
Transmitter power (kW)            200             400             400
Side-lobe level (dB)              −25             −35             −25
Receiver noise (dB)               5               5               5
Receiver bandwidth (MHz)          0.5             1               1

Three radars’ stations take the triangular layout, and the radar is located in an equilateral triangle’s vertices. The location of radar A is on the Y axis, and the other two radar’s location is on the X axis, as shown in Fig. 1.

Fig. 1. The layout of Radar Netting’ schematic diagram

In Fig. 1, the distance of two radars is 60 km, the direction of Y axis is north.


3 Track Jamming of Netted Radar

3.1 Parameter Settings of Track Deception Jamming

3.1.1 Interference Power
The power required by a false-target deception jammer against a warning/guidance radar has nothing to do with the radar's radiated power; it is mainly related to the radar antenna gain and side-lobe gain, and to the noise level of the radar receiver [5], as shown in (1):
$$P_j G_j = \frac{(4\pi)^2 R_0^2 P_r \Delta F}{\lambda^2 G_t(\theta)\, \Delta f_0} \tag{1}$$
In this formula, $R_0$ is the distance between the radar and the jammer, $\Delta F$ is the jammer's spectrum width, $\lambda$ is the radar's working wavelength, $G_t(\theta)$ is the side-lobe level of the radar antenna, and $P_r$ is the input noise power of the radar receiver. $P_r$ can be represented as $P_r = K T_0 \Delta f_0 N$, where $K$ is the Boltzmann constant, $T_0$ is the absolute temperature, $\Delta f_0$ is the receiver bandwidth, and $N$ is the input noise figure of the receiver. Substituting this expression into (1) gives (2):
$$P_j G_j = \frac{(4\pi)^2 R_0^2 K T_0 N \Delta F}{\lambda^2 G_t(\theta)} \tag{2}$$

From (2), the interference power of the jammer can be determined. 3.1.2 Time Delay on Distance To produce the false targets on the radar’s track points, we must know the distance between the radar and the active false target, and then we can calculate the time delay ðMtÞ about the radar relative to the false target point. After the time delay ðMtÞ, the received signal of the jammer will sent, then it can produce a false target of distance deception. If we want to generate a false target point in the distance of Rf that is relative to the radar, R0 is the distance between the radar and the jammer, so the signal transmission time from this point to the radar is Mtf ¼ Rf =c, and the signal transmission time from the jammer to the radar is Mt0 ¼ R0 =c, c is the speed of light. The time delay on distance can be shown in (3): 
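For example, plugging representative values into (2) gives the required effective radiated power of the jammer; the numbers below reuse Table 1 where possible (radar A: −25-dB side lobes, 5-dB noise figure) and assume the remaining quantities (stand-off range, jammer bandwidth), so the result is only indicative.

```python
import math

K_BOLTZ = 1.380649e-23   # Boltzmann constant (J/K)
T0 = 290.0               # absolute temperature (K)

def required_erp(r0_m, wavelength_m, sidelobe_gain_db, noise_fig_db, jam_bw_hz):
    """Required P_j * G_j from Eq. (2), returned in watts."""
    g_t = 10 ** (sidelobe_gain_db / 10)      # side-lobe gain as a linear ratio
    n = 10 ** (noise_fig_db / 10)
    num = (4 * math.pi) ** 2 * r0_m ** 2 * K_BOLTZ * T0 * n * jam_bw_hz
    return num / (wavelength_m ** 2 * g_t)

# Radar A: ~4.1 GHz carrier -> wavelength ~0.073 m; -25 dB side lobes; NF 5 dB.
# Assumed: 40-km jammer stand-off and a 10-MHz jamming bandwidth.
erp_w = required_erp(r0_m=40e3, wavelength_m=3e8 / 4.1e9,
                     sidelobe_gain_db=-25, noise_fig_db=5, jam_bw_hz=10e6)
print(f"required ERP: {erp_w:.3e} W ({10 * math.log10(erp_w * 1000):.1f} dBm)")
```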

Mt ¼ 2ðMtf  Mt0 Þ Mt ¼ 2ðMtf  Mt0 Þ þ Tp

Rf [ R0 Rf \R0

ð3Þ

Tp is the radar’s repetition frequency. 3.1.3 Time Delay on Bearing If the jamming power of the radar’s side lobe is greater than or equal to the minimum power that the radar can detect, then the direction of the received jamming signal’s instantaneous main lobe will be mistaken as the target location, which is the principle of radar azimuth deception. If the distance deception joins up to the azimuth deception, then we can generate the random track of the false target.

1838

X. Lu et al.

If the radar’s antenna main lobe aimed at the jammer, it also needed to turn an angle ðuÞ, then it can be aimed to the location of the false target, as shown in Fig. 2.

Fig. 2. The angle of deception diagram

Fig. 3. The generation of the false target track

The rotation direction and angular velocity $V_\theta$ of the antenna are known, so the time delay on bearing is given by (4):
$$\Delta t_x = \frac{\varphi}{V_\theta} \tag{4}$$

3.1.4 The Doppler Frequency Shift
Assume that the radar is stationary (a ground radar) and the jammer moves at a certain speed, so that the Doppler frequency of the jammer is $f_d$. The false-target signal transmitted by the jammer also carries Doppler information. To give the false signal the intended velocity at the predetermined bearing and distance, the jammer must, on the one hand, cancel out its own Doppler frequency and, on the other hand, generate the Doppler frequency of the false target. The frequency of the radar signal received by the jammer is:


$$f_1 = f_0 + f_d / 2 \tag{5}$$
The frequency of the jamming signal is:
$$f = f_1 - f_d + f_d' \tag{6}$$

3.1.5 The Jamming Signal Form
Assume the radar transmits a linear frequency modulation (LFM) signal. The mathematical expression of the signal is:
$$s(t) = g(t) \exp(j\pi k t^2) \exp(j 2\pi f_0 t) \tag{7}$$
In this formula, $g(t)$ is a rectangular pulse, $k$ is the frequency modulation slope of the LFM signal, and $f_0$ is the carrier frequency, so the radar signal received by the jammer can be expressed as (8):
$$s_r(t) = A \cdot s(t - \Delta t_0) = A \cdot g(t - \Delta t_0) \exp\!\left[j\pi k (t - \Delta t_0)^2\right] \exp\!\left[j 2\pi (f_0 + f_d/2)(t - \Delta t_0)\right] \tag{8}$$
where $A$ is the gain of the received radar signal. The analysis of (3) and (4) shows that, to generate a false target at distance $R_f$ from the radar and at azimuth angle $\varphi$ relative to the jammer, the jammer should transmit, $\Delta t_x$ after receiving the radar signal, a jamming signal of the form:
$$s'(t) = B \cdot g(t - \Delta t_0 + \Delta t) \exp\!\left[j\pi k (t - \Delta t_0 + \Delta t)^2\right] \exp\!\left[j 2\pi f (t - \Delta t_0 + \Delta t)\right] \tag{9}$$

where $B$ is the amplitude of the jamming signal transmitted by the jammer. The above derivation assumes that both the jammer and the false target are moving; a stationary false target is easy for the radar to eliminate. During the bearing-delay time while the radar antenna is turning, the false target also moves by a certain amount. If the velocity of the false target is low, the error in the delay given by formula (3) is small, but if the velocity of the false target is high, the bearing-delay error becomes obvious, and a false target will be produced at a position different from the preset one [6]. To reduce the error caused by this movement, we can use a multiple-iteration algorithm. The schematic diagram of the track of the first active false target on the radar display, obtained with the multiple-iteration algorithm, is shown in Fig. 3. The flow chart of the algorithm is shown in Fig. 4.
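Equations (7)–(9) translate directly into a few lines of signal generation; the baseband sketch below builds the LFM pulse and a delayed, frequency-shifted copy standing in for s'(t), with all numeric parameters (pulse width and sweep roughly matching radar A, sample rate, delays, Doppler offset) chosen for illustration.

```python
import numpy as np

FS = 10e6                 # sample rate of the baseband model (Hz)
TAU = 10e-6               # pulse width, as for radar A (s)
BW = 0.5e6                # swept bandwidth ~ receiver bandwidth of radar A (Hz)
K_SLOPE = BW / TAU        # LFM slope k

def lfm_pulse(delay_s: float, freq_offset_hz: float, n_samples: int) -> np.ndarray:
    """Baseband version of g(t-d)*exp(j*pi*k*(t-d)^2)*exp(j*2*pi*f_off*(t-d))."""
    t = np.arange(n_samples) / FS - delay_s
    gate = ((t >= 0) & (t < TAU)).astype(float)            # rectangular envelope g(.)
    return gate * np.exp(1j * np.pi * K_SLOPE * t**2) * np.exp(1j * 2 * np.pi * freq_offset_hz * t)

n = int(FS * 500e-6)                                       # a 500-us observation window
radar_tx = lfm_pulse(delay_s=0.0, freq_offset_hz=0.0, n_samples=n)     # s(t), Eq. (7)

dt0 = 40e3 / 3e8           # one-way radar-to-jammer delay for a 40-km stand-off
dt = 133.3e-6              # retransmission delay from Eq. (3) for a 60-km false target
jam_rx = 0.8 * lfm_pulse(delay_s=2 * dt0 + dt, freq_offset_hz=2e3, n_samples=n)  # ~ s'(t) at the radar

apparent_range_km = 3e8 * (2 * dt0 + dt) / 2 / 1e3
print(f"false target appears at ~{apparent_range_km:.1f} km")
```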


Fig. 4. Flow chart of the algorithm for generating the false track (determine the radar scan cycle, scan mode and jammer azimuth; set the function expression of the false-target track; calculate the time t at which the radar scans the false target and read the false-target location at t; calculate the false-target position when the radar scans the jammer; calculate the azimuth error; if the error variance is not smaller than the preset approximation error, repeat, otherwise the false-target location satisfies the track-point condition)
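The iteration summarized in Fig. 4 is essentially a fixed-point search for the false target's position at the instant the rotating main lobe reaches it; a compact Python sketch of that idea, with an assumed straight-line false-target track and radar A's scan rate, is given below (it is an interpretation of the flow chart, not the authors' exact implementation).

```python
import numpy as np

OMEGA = 3 * 360 / 60          # radar A scan rate: 3 r/min = 18 deg/s

def false_track(t: float) -> np.ndarray:
    """Assumed preset false-target track: straight line, 100 m/s eastward."""
    return np.array([60e3 + 100.0 * t, 20e3])

def azimuth_deg(p: np.ndarray) -> float:
    return np.degrees(np.arctan2(p[0], p[1])) % 360.0     # azimuth measured from north

def bearing_delay(radar_pos, jammer_az_deg, t_scan_false, tol_deg=1e-3, max_iter=50):
    """Iterate the false-target position until the residual azimuth error is small."""
    t = t_scan_false
    for _ in range(max_iter):
        target = false_track(t) - radar_pos
        az = azimuth_deg(target)
        dt_x = ((az - jammer_az_deg) % 360.0) / OMEGA      # Eq. (4) with phi in degrees
        t_new = t_scan_false + dt_x                        # antenna reaches the target later
        if abs(t_new - t) * OMEGA < tol_deg:               # residual azimuth error (deg)
            return dt_x, false_track(t_new)
        t = t_new
    return dt_x, false_track(t)

radar_a = np.array([0.0, 51.96e3])                         # radar A on the Y axis (assumed)
dt_x, pos = bearing_delay(radar_a, jammer_az_deg=150.0, t_scan_false=0.0)
print(f"bearing delay = {dt_x:.3f} s, false-target position = {pos / 1e3} km")
```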

3.2 The Track Jamming of Netted Radar

3.2.1 Algorithm Analysis
Suppose the active false track is S, located within the coverage of the three radars, denoted P1, P2 and P3. Because the radar antennas rotate, the times at which their main lobes intersect the preset track are different. As a result, the radars obtain different discrete points on this false-target track, but in the absence of error, these points should all lie on the track S [7], as shown in Fig. 5.


Fig. 5. The related track of these radars

Fig. 6. The jamming of the Radar Netting

The generation of the track points along this route can be treated as a subroutine: by substituting the parameters of each radar and controlling the working frequency and beam direction, the false-target track can be produced in several radars at the same time [8]. If the calculation is carried out for the same route, then after the data received by the multiple radars are processed, the resulting tracks must be spatially correlated, that is, the false-target tracks all lie on the same track.

3.2.2 The Simulation Experiment
In the Radar Netting model the radar frequencies are different but close to each other and overlapping, so a single jammer can meet the frequency requirement. In the simulation, one jammer therefore continuously forwards the signals of the three radars to form the jamming. The parameter settings are as follows:


The initial position of the jammer is 40 km from radar A, due north of radar A, that is, 91.9 km from the origin, and it moves at a speed of 100 m/s along the horizontal direction. The initial azimuth $\theta$ of the preset jamming track relative to the radar is 60°, the initial distance R is 60 km, and the trajectory is an asymptote. Without considering errors, the simulation results are shown in Fig. 6. The straight line in this figure represents the trajectory of the jammer, the symbol "*" represents the deception track, and the symbol "o" represents the locations of the three radars. From Fig. 6 we can see that the track lies on the preset route. After a period of flight the jammer leaves the detection range of radar B, so it no longer retransmits the signal of radar B and only forwards the signals of radar A and radar C. The figure also shows that the false-target track passes from the main detection airspace of radar B into that of radar C; during this handover the track remains smooth with no breakpoint. This indicates that radar A and radar C have a common scan space and that the common area has no blind zone.

Fig. 7. The error analysis of the jamming (left noise variance is 100, right noise variance is 200)

If the noise variance changes, the jamming error also changes. Assuming a noise variance of 200 and repeating the above simulation, the resulting deception jamming error is also shown in Fig. 7. Comparing the left and right plots, when the noise variance increases, the fused track-point error at the radar net's host station also increases. With a noise variance of 200, most track points have errors around 150 m and some exceed 600 m. It can be concluded that as the noise increases, the effectiveness of the deception jamming decreases.


4 Conclusion
In this paper, a Radar Netting system composed of three radars is set up and taken as the target, and track jamming is used to interfere with it. The results show that deception track jamming can generate false-target tracks that are essentially consistent across the radars of the Radar Netting system. The features of these false targets are similar to those of a real target, so the radar net does not eliminate them, which raises the false-alarm probability of the radar net and can disrupt its command decisions and fire control.

Acknowledgements. This work was supported by the Science and Technology Department of Yibin under Grants 2018ZSF001, 2018SF020 and the Science and Technology Department of Sichuan Province under Grants 2017GZYZF0014, 2018JZ0050.

References
1. Williamson FR, Brooks R, Greneker EF, Currie NC, Williamson J, McGee MC (1992) Radar as part of a netted surveillance system-a problem revisited. IEEE
2. Rorborts JBG (1988) Highly parallel processors in military systems. IEE Proc 135:202–207
3. Jinlei J, Hongbin R et al (1998) A preliminary research into the netted radar seekers for anti-stealth aircraft. IEEE
4. Parkinson GC, Xue DP, Faroop M (1997) Registration in a distributed multi-sensor environment. IEEE, pp 993–996
5. Yougguang C, Xicheng L, Hua Q, Xiaojun J (1997) On study of the application of ATM switches in netted-radar systems. IEEE, pp 970–974
6. Paradowski L (1996) Position estimation from netted radar systems with sensor position uncertainty. IEEE, pp 609–613
7. Que W, Peng Y, Lu D, Hou X (1997, October 14–16) A new approach to data fusion for stealth targets tracking. Radar 97, Publication No. 449, pp 657–661
8. Weiyan Q, Zongying S, Wenli X, Yingning P, Dajin L (1998) Data fusion for tracking small targets with randomly sampling data. IEEE, pp 762–763

Cluster Feed Beam Synthesis Network Calibration

Zhonghua Wang1, Yaqi Wang2, Chaoqiong Fan3, Bin Li3, and Chenglin Zhao3

1 Thirty-Eighth Research Institute, China Electronics Technology Group Corporation, Hefei, China [email protected]
2 Beijing Space Information Relay and Transmission Technology Research Center, Beijing, China
3 The School of Information and Communication Engineering (SICE), Beijing University of Posts and Telecommunications (BUPT), Beijing, China

Abstract. The cluster feed beamforming network uses a ground-based beamforming technique. In view of the complicated link transmission in ground-based beamforming networks, the channel calibration of the forward/backward links of such networks is presented in this paper. To address the inconsistency of channel amplitude and phase in the beamforming network, two solutions are given: a point-frequency coherent amplitude-phase detection method and a code-division coherent amplitude-phase detection method. Because a continuous-wave signal resists wideband interference poorly and cannot resolve delay ambiguity, a channel amplitude-phase detection method based on a composite orthogonal code sequence is proposed, and the beam synthesis performance after channel calibration is verified on a prototype, which proves the validity of the method.

Keywords: Cluster feed beamforming · Channel calibration · Coherent detection · Composite orthogonal sequence

1 Introduction
A cluster feed beam synthesis network consists of dozens or even hundreds of channels. Because ground-based beamforming technology is used, the complex beamforming equipment is moved to the ground, and the wireless link to the satellite is up to 37,000 km long. Signal transmission is affected by clouds, rain, fog and the ionosphere, causing attenuation or distortion, so the amplitude and phase (referred to as the amplitude-phase) consistency varies greatly with the surrounding environment. Each satellite-ground cable channel also sits at a different ambient temperature, and the amplitude-phase inconsistency between channels varies with temperature. In addition, installation errors and thermal distortion of the antenna system contribute as well. These environmental and self-induced variations affect the amplitude and phase of each channel. As a result, the synthetic gain of


ground-based beam forming decreases. Therefore, calibration is the first problem to be solved in ground-based beam synthesis. The cluster feed reflector antenna system calibration [1–3] mainly includes two aspects, one is to correct the pointing deviation caused by the deformation of the reflective surface, and the other is to correct the beamforming network. Here, the correction method of cluster feed beam synthesis network is mainly studied.

2 Beam Synthesis Network Calibration
Beam synthesis networks include forward link channels and backward link channels. Both forward and backward link channels contain wireless link channels, which are vulnerable to interference from space environment factors. Therefore, the selection of calibration schemes is crucial.

2.1 Point Frequency Amplitude-Phase Detection

The point-frequency amplitude-phase detection scheme adopts different-frequency transmission. By accurately estimating the Doppler frequency shift of the reference signal and of the transmitted correction signal, amplitude and phase detection of the correction channel can be realized.

(I) Transmitter Implementation Architecture
Let the frequency and phase of the local oscillator signal on the satellite be $f_0$ and $a_0$ respectively. The reference signal $S_1$ and the correction signal $S_2$ are:

$$S_1 = A_1 \cos\!\left[(N_0+N_1)(2\pi f_0 t + a_0)\right], \qquad S_2 = A_2 \cos\!\left[(N_0+N_2)(2\pi f_0 t + a_0) + a_{caj}\right] \qquad (1)$$

The coordinates of the array antenna element to be calibrated are:

$$\vec{R}(t) = \vec{R}_0 + \vec{v}\,t \qquad (2)$$

where $\vec{R}_0$ represents the position of the antenna element relative to the central point. The radial velocity is then:

$$v_j = \vec{v}\cdot\frac{\vec{R}_0}{\left|\vec{R}_0\right|} \qquad (3)$$

The Doppler shift factor is:

$$D \;\overset{\mathrm{def}}{=}\; 1 - \frac{v_j}{c} \qquad (4)$$


(II) Receiver Implementation Architecture
The receiver in this scheme mainly includes a filter, a quadrature mixer, a local oscillator, an analog-to-digital converter (A/D), a frequency estimator and an amplitude-phase estimator. Let the local oscillator signal be:

$$S_{vco} = \cos\!\left(2\pi f_{0g}\, t + a_{0g}\right) \qquad (5)$$

where $f_{0g}$ is nominally the same as the on-board local oscillator frequency; in practice, because the two oscillators are in different environments, their values fluctuate randomly. Then:

$$S_4 = B_4 \cos\!\left[N_0\left(2\pi f_0 t + a_{0g}\right)\right] \qquad (6)$$

After filtering and mixing with $S_4$, the center-frequency signal and the channel correction signal are obtained:

$$S_5 = B_5 \cos\!\left[2\pi\left(f_1 - N_0 f_{vco}\right)t + \phi_1 - N_0 a_{vco}\right] \qquad (7)$$

$$S_6 = B_6 \cos\!\left[2\pi\left(f_2 - N_0 f_{vco}\right)t + \phi_2 - N_0 a_{vco}\right] \qquad (8)$$

$S_5$ and $S_6$ are each quadrature-mixed to obtain their complex signals:

$$S_7(n) = B_7 \exp\!\left\{ j\left[2\pi(N_1+N_0)\left(f_0 D - f_{0g}\right) nT + \phi_1 - (N_1+N_0)\, a_{0g}\right]\right\}$$
$$S_8(n) = B_8 \exp\!\left\{ j\left[2\pi(N_2+N_0)\left(f_0 D - f_{0g}\right) nT + \phi_2 - (N_2+N_0)\, a_{0g}\right]\right\} \qquad (9)$$

When the satellite is approximately geostationary, its radial velocity relative to the earth is less than 1 m/s, so:

$$\frac{v_j}{c} < 3\times 10^{-9} \qquad (10)$$

If the correction time is much shorter than the coherence time over which the transmission environment parameters change, the phase of the sampled data can be regarded as a constant during the correction process. Therefore, since $N_1$, $N_2$ and $N_0$ are known, if $(N_1+N_0)\left(f_0 D - f_{0g}\right)$ can be measured accurately, then $f_0 D - f_{0g}$ can be measured accurately. The correction signal is calculated using an offset constant:

$$U \;\overset{\mathrm{def}}{=}\; (N_0+N_2)\left(a_0 - a_{0g} - 2\pi f_0 \,\frac{R_0 + \partial r_j/2}{c}\right) \qquad (11)$$

The real and imaginary parts of $S_8(n)$ are denoted $S_{8I}(n)$ and $S_{8Q}(n)$. The amplitude and phase estimates are then:

$$\hat{B}_8 = \frac{1}{M}\sum_{n=1}^{M}\sqrt{S_{8I}^{2}(n) + S_{8Q}^{2}(n)}$$
$$a_{caj} + U = \frac{1}{M}\sum_{n=1}^{M}\tan^{-1}\frac{S_{8Q}(n)}{S_{8I}(n)} - 2\pi(N_0+N_2)\left\langle f_0 D - f_{0g}\right\rangle_{est} T\,\frac{M+1}{2} \qquad (13)$$

where $M$ is the number of accumulated samples. If $s$ denotes the coding interval of the correction signal and $N_{tj}$ the number of transmissions, the whole correction time is:

$$T_j = N_{tj}\left(s + MT\right) \qquad (14)$$

Therefore, the phase measurement error is:

$$e_a = 2\pi(N_0+N_2)\left(f_0 D - f_{0g} - \left\langle f_0 D - f_{0g}\right\rangle_{est}\right) N_{tj}\left(s + \frac{(M+1)T}{2}\right) \qquad (15)$$

Let $(N_0+N_2)\approx 10^{4}$ and $f_0 \approx 1\ \mathrm{MHz}$. Since the maximum quantization error of the 5-bit phase shifter is $\pi/32$, the accumulation time must satisfy:

$$N_{tj}\left(s + MT\right) \le \frac{1}{64\,(N_0+N_2)\left| f_0 D - f_{0g} - \left\langle f_0 D - f_{0g}\right\rangle_{est}\right|} \qquad (16)$$

For the geostationary satellite system under consideration, the accumulation time needed to reach the required SNR is $T_j = N_{tj}(s+MT) \approx 100\ \mathrm{ms}$. The frequency estimation error then satisfies:

$$e_t \;\overset{\mathrm{def}}{=}\; (N_0+N_1)\left(f_0 D - f_{0g} - \left\langle f_0 D - f_{0g}\right\rangle_{est}\right) < \frac{1}{64}\cdot\frac{N_0+N_1}{N_0+N_2}\cdot\frac{1}{N_{tj}\left(s+MT\right)} \approx 0.16 \qquad (17)$$

The parameters used in the computer simulation are:

$$\mathrm{SNR} = 20\ \mathrm{dB}, \quad f_s = 200\ \mathrm{kHz}, \quad f_1 = 151.134\ \mathrm{kHz}, \quad f_2 = 163.134\ \mathrm{kHz} \qquad (18)$$

The frequency can be measured using the periodogram method. With a coherent sampling time of 2.6214 s, the mean and standard deviation of the frequency estimation error obtained by Monte Carlo simulation are:

$$\mathrm{mean}(e_{t1}) = 0.07713, \quad \mathrm{std}(e_{t1}) = 2\times 10^{-5}; \qquad \mathrm{mean}(e_{t2}) = 0.002684, \quad \mathrm{std}(e_{t2}) = 2\times 10^{-6} \qquad (19)$$
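For reference, a minimal Python sketch of periodogram-based frequency estimation of a complex tone, in the spirit of the Monte Carlo test described above, is given below; the tone frequency, observation time and noise level are illustrative values, not the exact simulation settings of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 200e3            # sample rate (Hz), as in Eq. (18)
f_true = 151.134e3    # example tone frequency (Hz)
T_obs = 0.1           # observation time (s), illustrative
snr_db = 20.0

n = np.arange(int(fs * T_obs))
x = np.exp(1j * 2 * np.pi * f_true / fs * n)
noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)
x += noise_std * (rng.standard_normal(x.size) + 1j * rng.standard_normal(x.size))

# Periodogram: pick the FFT bin with maximum power (zero-padding refines the grid)
nfft = 8 * x.size
spectrum = np.abs(np.fft.fft(x, nfft)) ** 2
freqs = np.arange(nfft) * fs / nfft          # frequency grid over [0, fs)
f_est = freqs[np.argmax(spectrum)]

print(f"estimated frequency: {f_est:.3f} Hz, error: {f_est - f_true:.3f} Hz")
```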


It can be seen that the accuracy of the frequency correction meets the requirements. However, when the integrated satellite-ground transmission link is very long, the point-frequency signal suffers from time-delay ambiguity, and its ability to resist broadband interference is poor.

2.2 Code Division Amplitude-Phase Detection

To address the poor resistance of a continuous-wave signal to broadband interference [5] and the inability of a single-frequency continuous wave to resolve time-delay ambiguity, amplitude-phase detection using orthogonal codes is proposed, which offers anti-interference capability, high detection accuracy and no range ambiguity. Assume that $P_1(t)$ and $P_2(t)$ are pseudo-code sequences generated on the satellite, with code length $N$:

$$P_j(t) = \sum_{n=1}^{N} c_j(n)\left[U\!\left(t-\frac{(n-1)T}{N}\right) - U\!\left(t-\frac{nT}{N}\right)\right], \quad j = 1, 2 \qquad (20)$$

where $c_j(n) = \pm 1$, $j = 1, 2$, $n = 1, 2, \ldots, N$, $\sum_{n=1}^{N} c_j(n)c_k(n) = N\,\delta(j-k)$, $j, k = 1, 2$, and $T$ is the duration of the code sequence. Assume that the local oscillator signal $S_s(t)$ produced by the crystal oscillator is:

$$S_s(t) = \cos\!\left(2\pi f_s t + \phi_s\right) \qquad (21)$$

where $\phi_s$ represents the initial phase of the local oscillator signal. After the band-pass filter, the correction signal and the reference signal take the form:

$$S_1(t) = A_1(t)\cos\!\left(2\pi f_s t + \phi_s + \theta_1\right) \qquad (22)$$

$$S_2(t) = A_2(t)\cos\!\left(2\pi f_s t + \phi_s + \theta_2\right) \qquad (23)$$

The theoretical relative amplitude and phase of the correction signal with respect to the reference signal are:

$$A_{rel} = \frac{A_1}{A_2}, \qquad \theta_{rel} = \theta_1 - \theta_2 \qquad (24)$$

If $A_{tj}(f)$ and $\theta_{tj}(f)$ denote the amplitude change and phase shift of the signal from satellite to ground, the satellite-to-ground delay within a single symbol period can be modeled as a first-order function of time:

$$\tau = \frac{1}{c}\left(R_0 + v_j t\right) \qquad (25)$$


Here $R_0$ denotes the distance, $v_j$ the relative velocity between the satellite and the ground station, and $c$ the speed of light. The signal received by the ground station is:

$$\begin{aligned} R(t) &= A_{tj}(f_s D)\,A_1 P_1(t-\tau)\cos\!\left(2\pi f_s D\, t - 2\pi f_s \frac{R_0}{c} + \theta_{tj}(f_s D) + \phi_s + \theta_1\right) \\ &\quad + A_{tj}(f_s D)\,A_2 P_2(t-\tau)\cos\!\left(2\pi f_s D\, t - 2\pi f_s \frac{R_0}{c} + \theta_{tj}(f_s D) + \phi_s + \theta_2\right) \end{aligned} \qquad (26)$$

where $D = 1 - \dfrac{v_j}{c}$. The local oscillator signal of the ground station is:

$$S_g(t) = \cos\!\left(2\pi f_g t + \phi_g\right) \qquad (27)$$

where $\phi_g$ is the unknown initial phase of the local oscillator at $t = 0$ and $f_g$ is the local frequency, close to $f_s D$. The mixed signal passes through a low-pass filter to produce the I and Q signals, whose analytical expression is:

$$\begin{aligned} R_g(t) &= A_{tj}(f_s D)\,A_1 P_1(t-\tau)\, e^{\,j\left(2\pi(f_s D - f_g)t - 2\pi f_s \frac{R_0}{c} + \theta_{tj}(f_s D) + \phi_s - \phi_g + \theta_1\right)} \\ &\quad + A_{tj}(f_s D)\,A_2 P_2(t-\tau)\, e^{\,j\left(2\pi(f_s D - f_g)t - 2\pi f_s \frac{R_0}{c} + \theta_{tj}(f_s D) + \phi_s - \phi_g + \theta_2\right)} \end{aligned} \qquad (28)$$

$R_g(t)$ is despread with the delayed pseudo-code sequences $P_1(t-\hat{\tau})$ and $P_2(t-\hat{\tau})$:

$$B_j = a A_1 e^{j\theta_1}\,\frac{1}{T}\int_{\hat{\tau}}^{T+\hat{\tau}} P_j(t-\hat{\tau})\,P_1(t-\tau)\,e^{\,j2\pi(f_s D - f_g)t}\,dt \;+\; a A_2 e^{j\theta_2}\,\frac{1}{T}\int_{\hat{\tau}}^{T+\hat{\tau}} P_j(t-\hat{\tau})\,P_2(t-\tau)\,e^{\,j2\pi(f_s D - f_g)t}\,dt \qquad (29)$$

where the complex constant $a = A_{tj}(f_s D)\, e^{\,j\left(-2\pi f_s \frac{R_0}{c} + \theta_{tj}(f_s D) + \phi_s - \phi_g\right)}$. The delay error and the frequency compensation error are written as:

$$\Delta\tau = \hat{\tau} - \tau, \qquad \Delta\omega = 2\pi\left(f_s D - f_g\right) \qquad (30)$$

Then:

$$B_j = a\, e^{\,j\Delta\omega\hat{\tau}}\left(A_1 e^{j\theta_1} G_{j1} + A_2 e^{j\theta_2} G_{j2}\right), \quad j = 1, 2 \qquad (31)$$

where:


$$G_{jk}(\Delta\tau, \Delta\omega) = \frac{1}{T}\int_{0}^{T} P_j(\xi)\,P_k(\xi - \Delta\tau)\,e^{\,j\Delta\omega\,\xi}\,d\xi \qquad (32)$$

The ratio of the two despread signals can be calculated:

$$b = \frac{B_1}{B_2} = \frac{A_1}{A_2}\, e^{\,j(\theta_1-\theta_2)}\,\frac{G_{11}(\Delta\tau,\Delta\omega)}{G_{22}(\Delta\tau,\Delta\omega)}\cdot \frac{1 + \dfrac{A_2\, G_{12}(\Delta\tau,\Delta\omega)}{A_1\, G_{11}(\Delta\tau,\Delta\omega)}\, e^{\,j(\theta_2-\theta_1)}}{1 + \dfrac{A_1\, G_{21}(\Delta\tau,\Delta\omega)}{A_2\, G_{22}(\Delta\tau,\Delta\omega)}\, e^{\,j(\theta_1-\theta_2)}} \qquad (33)$$

Therefore, the relative amplitude and phase measurements can be obtained as:

$$\hat{A} = |b|, \qquad \hat{\theta} = \operatorname{angle}(b) \qquad (34)$$

When both the delay error and the frequency compensation error are zero, $G_{jk}(0,0) = \delta_{jk}$, and accurate measurement is achieved. In that case:

$$b = \frac{A_1}{A_2}\, e^{\,j(\theta_1-\theta_2)} \qquad (35)$$

Receiver noise can be suppressed by coherent accumulation to further ensure the accuracy of correction [6].
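As a rough numerical illustration of Eqs. (29)-(35), the sketch below despreads a simulated two-code baseband signal and reads off the relative amplitude and phase from the ratio of the despread outputs; the code length, amplitudes, phases and noise level are made-up example values, and random bipolar codes stand in for the actual pseudo-code sequences.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 1024                               # code length
c1 = rng.choice([-1.0, 1.0], N)        # stand-ins for the two pseudo-code sequences
c2 = rng.choice([-1.0, 1.0], N)

# Example channel: correction signal vs. reference signal
A1, A2 = 0.8, 1.0                      # amplitudes
th1, th2 = np.deg2rad(35.0), 0.0       # phases (rad)

r = A1 * np.exp(1j * th1) * c1 + A2 * np.exp(1j * th2) * c2
r += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))  # receiver noise

# Despreading (Eq. (29) with zero delay/frequency error): correlate with each code
B1 = np.mean(r * c1)
B2 = np.mean(r * c2)

b = B1 / B2                             # Eq. (33)/(35)
print("relative amplitude:", abs(b))                      # close to A1/A2
print("relative phase (deg):", np.degrees(np.angle(b)))   # close to th1 - th2
```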

3 Amplitude-Phase Detection of Ground-Based Beamforming Network
A ground-based beamforming scheme is adopted in this paper. The signal transmission channel of the beam synthesis network lies partly on the ground and partly on the satellite, and the calibration signal is generated by ground equipment. In forward calibration, the forward-channel amplitude-phase calibration equipment generates a compound-code calibration signal, which passes in turn through the forward-channel beam synthesis network and the power division network. After the channel amplitude and phase detection is completed in the forward-channel calibration equipment, a compensation table is generated and sent to the beam synthesis equipment [7, 8]. Forward-channel amplitude-phase detection requires codes with good autocorrelation and low cross-correlation. In this paper, an improved composite orthogonal code composed of a small m-sequence and a Walsh sequence is used [9]. The small m-sequence is a pseudo-random sequence with good autocorrelation but poor cross-correlation, while the Walsh sequence has poor autocorrelation but good cross-correlation. For this reason, the two sequences are combined after modification.


(I) Improved autocorrelation and cross-correlation properties of the m-sequence
Small m-sequences are generated by generator polynomials, with period $N = 2^r - 1$ ($r$ is the degree of the generator polynomial). A sequence of this length cannot be multiplied with the Walsh sequence directly, so the small m-sequence is modified by appending a 0 after its last symbol. The period of the improved small m-sequence then becomes $2^r$, while its autocorrelation and cross-correlation characteristics remain close to those of the original small m-sequence.

(II) Autocorrelation and cross-correlation of Walsh sequences
The Walsh sequence is derived from the Hadamard matrix [4]. The rows (and columns) of a Hadamard matrix are mutually orthogonal, that is, the element-wise product of any two rows or columns sums to 0, and the matrix elements are +1 and −1. The Hadamard matrix has the following properties:

$$H^{T} = H \qquad (36)$$

Formula (36) shows that the Hadamard matrix is symmetric. Furthermore:

$$H^{-1} = \frac{1}{N}H \qquad (37)$$

Formula (37) shows that the inverse of the Hadamard matrix is the matrix itself scaled by the coefficient $1/N$. The rows (or columns) of the Hadamard matrix are binary sequences composed of "+1" and "−1", with order $N = 2^r$, so the Hadamard matrix consists of $N$ sequences of $N$ elements each. These sequences form a set of orthogonal sequences, in which the inner product of any two distinct sequences is 0; the cross-correlation of two sequences $l$ and $\nu$ in the set at zero shift is:

$$h_{l\nu}(0) = \sum_{j=0}^{N-1} l_j\,\nu_j = 0 \qquad (38)$$

These are the Walsh sequences, which have poor autocorrelation but good cross-correlation.

(III) Compound orthogonal sequences
Compound orthogonal sequences are obtained by combining the improved m-sequence and the Walsh sequence, which now have the same period. First, the m-sequence and the Walsh sequence are both mapped from "−1"/"+1" to "0"/"1"; the two sequences are then added modulo 2, and the result is mapped back to "−1"/"+1". The resulting composite orthogonal sequence has better cross-correlation characteristics than the small m-sequence and better autocorrelation characteristics than the Walsh sequence. In this paper, a composite orthogonal sequence with code length 1024 is selected for channel amplitude and phase detection. During calibration, the code sequences of 30 channels are generated at the same time and assigned to the forward channels of


different beam synthesis networks. After frequency-division synthesis and separation, the code sequences are sent to the forward channels of the on-board beam synthesis network. In this way, the calibration signal of each channel carries the amplitude and phase information of that channel and is returned to the forward-channel calibration equipment through the coupled calibration channel, thus realizing the channel calibration.

(IV) Calibration results of channel amplitude and phase
In this paper, the amplitude and phase of 30 forward-link channels are tested. The composite orthogonal sequence has code length 1024 and the SNR is 20 dB. The correlation peak of channel 1 is obtained by sliding correlation between the compound orthogonal code held by the forward-channel calibration device and the received correction signal, and the channel inconsistency is computed from the amplitude and phase of the correlation peaks. The calibration results satisfy the requirements of amplitude error ≤ ±0.5 dB and phase error ≤ ±5°.
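The construction described in (I)-(III) can be sketched in Python as below; the generator polynomial for the m-sequence and the row index chosen from the Hadamard matrix are arbitrary example choices, not the ones used in the prototype.

```python
import numpy as np

def m_sequence(taps, r):
    """Generate a small m-sequence of length 2**r - 1 from LFSR feedback taps."""
    state = [1] * r
    seq = []
    for _ in range(2**r - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

r = 10
m = m_sequence(taps=(10, 3), r=r)          # example primitive taps for r = 10
m_improved = np.append(m, 0)               # append a 0 so the period becomes 2**r = 1024

# Walsh sequences: rows of a Sylvester-type Hadamard matrix, mapped to {0, 1}
H = np.array([[1]])
while H.shape[0] < 2**r:
    H = np.block([[H, H], [H, -H]])
walsh_bits = (1 - H[37]) // 2              # arbitrary row index as the channel code

# Composite orthogonal sequence: modulo-2 sum, then map back to {-1, +1}
composite_bits = np.bitwise_xor(m_improved, walsh_bits)
composite = 1 - 2 * composite_bits          # 0 -> +1, 1 -> -1
print(composite[:16])
```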

4 Simulation Analysis
The cluster feed array of the beam synthesis network has $N$ elements, with channel signal vector $(T_1, T_2, \ldots, T_n)$. Assume the digital baseband beam signals are $B_1$-$B_n$, expressed as the vector $(B_1, B_2, \ldots, B_n)$. The beamforming matrix is:

$$V = \begin{pmatrix} V_{11} & \cdots & V_{1n} \\ \vdots & \ddots & \vdots \\ V_{m1} & \cdots & V_{mn} \end{pmatrix} \qquad (39)$$

The matrix entry $V_{ij}$ represents the amplitude and phase adjustment weight of element channel $i$ in beam signal $j$. The beam signal of path $j$ is:

$$B_j = \sum_{i=1}^{M} T_i \cdot V_{ij} \qquad (40)$$
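A minimal NumPy sketch of the weighted synthesis in Eqs. (39)-(40), with made-up element signals and weights (the real weights come from the calibration compensation table), is shown below.

```python
import numpy as np

rng = np.random.default_rng(2)

M, n_beams, n_samp = 30, 4, 256           # 30 element channels, 4 beams (example sizes)

# Element-channel signals T_i(t): rows are channels
T = rng.standard_normal((M, n_samp)) + 1j * rng.standard_normal((M, n_samp))

# Beamforming weights V_ij (amplitude/phase adjustment of channel i for beam j)
V = np.exp(1j * rng.uniform(0, 2 * np.pi, (M, n_beams)))

# Eq. (40): B_j(t) = sum_i T_i(t) * V_ij, evaluated for every beam j at once
B = V.T @ T                               # shape (n_beams, n_samp)
print(B.shape)
```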

As shown in Fig. 1, beam synthesis is tested by using 30 backward units. The synthetic gain is 14.5 dB, which is only 0.2 dB from the theoretical value of the synthesis. It demonstrates that the ground-based beam synthesis scheme is feasible (Fig. 2).


Fig. 1. Single channel SNR detection

Fig. 2. Multi-channel beamforming SNR detection

5 Conclusions
First, the channel calibration of the forward/backward links in the ground-based beamforming network is presented. Then, to address the inconsistency of channel amplitude and phase, two solutions are given: coherent amplitude-phase detection in the point-frequency


channel and coherent amplitude-phase detection in the code-division channel. On this basis, a forward-channel detection method based on a composite orthogonal code sequence is proposed, which overcomes the poor cross-correlation of the small m-sequence and greatly improves the accuracy of channel amplitude and phase detection. Finally, the weighting model of beam synthesis is given, and the calibration results are verified using the ground beam synthesis network equipment. The results show that the synthesized beam gain is close to the theoretical value and that the calibration compensation meets the requirements of beam synthesis.

References
1. Chaudhary S, Samant A (201) Characterization and calibration techniques for multi-channel phase-coherent systems. IEEE Instrum Measurement Mag 9(4)
2. Agrawal A, Jablon A (2003) A calibration technique for active phased array antennas. IEEE Int Symp Phased Array Syst Technol, pp 223–228
3. Li W, Lin J, Wang W, Wang Y, Chen Z (2015) Method of multi-channel calibration for digital array radar. In: 2015 European radar conference (EuRAD)
4. Pratt WK, Kane J (1969) Hadamard transform image coding. Proc IEEE 57(1):58–68
5. Wang Z, Zhou W (2017) An improved adaptive method of power amplifier nonlinearity simulation. In: 17th International symposium on communications and information technologies (ISCIT)
6. Drotar P, Gazda J, Kocur D, Galajda P (2008) MC-CDMA performance analysis for different spreading codes at HPA Saleh model. In: 2008 18th International conference Radioelektronika, pp 1–4
7. Jason D, Peter M (2010) A review of adaptive beamforming techniques for wideband smart antennas. In: International conference on wireless communications, pp 1–5
8. Angeletti P, Alagha N (2009) Space/ground beamforming techniques for emerging hybrid satellite terrestrial networks. In: 27th International communications satellite systems conference (ICSSC 2009), Edinburgh, Scotland, pp 1–4
9. Angeletti P, Gallinaro G, Lisi M, Vernucci A (2001) On-ground digital beamforming techniques for satellite smart antennas. In: Proceedings of the 19th AIAA International communications satellite systems conference (ICSSC 2001), Toulouse, France

Design and Optimization of Cluster Feed Reflector Antenna

Zhonghua Wang1, Yaqi Wang2, Chaoqiong Fan3, Bin Li3, and Chenglin Zhao3

1 The Thirty-Eighth Research Institute, China Electronics Technology Group Corporation, Hefei, China [email protected]
2 Beijing Space Information Relay and Transmission Technology Research Center, Beijing, China
3 The School of Information and Communication Engineering (SICE), Beijing University of Posts and Telecommunications (BUPT), Beijing, China

Abstract. Based on the cluster-fed reflector antenna, a low-coupling cluster-fed reflector antenna design scheme is proposed to solve the feed occlusion and beam scanning problems of the forward-fed reflector antenna. The accuracy test and feed coupling test of the prototype are carried out, and the test results meet the design requirements. We then optimize the beams of the cluster-fed reflector, transforming the large-scale beam-forming problem into an optimization problem, and propose an improved genetic algorithm for target beam optimization that adopts the envelope method to model the target beam, thereby reducing the control difficulty and the computational complexity. On the basis of the original genetic algorithm, an optimal preservation strategy, a pseudo-parallel strategy and a sharing niche algorithm are introduced to improve the genetic algorithm, ensuring the convergence of the optimization, increasing the optimization speed, and enabling the algorithm to approximate the Pareto optimal solution.

Keywords: Cluster feed · Array feed bias reflection surface · Genetic algorithm · Envelope sampling

1 Introduction
In recent years, domestic multi-beam antennas have developed rapidly, and array-fed reflector multi-beam technology has matured and found wide application. As one of the most important antenna types, the reflector antenna can be used to construct larger-aperture antennas. Compared with an array antenna, a reflector antenna performs better when space is sufficient and the scanning speed is low [1]. The array-fed multi-beam single-aperture reflector antenna has good spot-beam characteristics [2] and is increasingly used in multi-beam communication. Jason Duggan [3] optimizes the multiple beams of a cluster-fed reflector antenna using the closed-form equation of the directivity coefficient of the cluster-fed


reflector antenna. Klein CA [4] also uses a Minmax method to synthesize normal and scanned spot-beam patterns in the case of a large focal-length-to-diameter ratio. In this paper, based on the cluster-fed reflector antenna, a low-coupling cluster-fed reflector antenna design scheme is proposed to solve the feed occlusion and beam scanning problems of the forward-fed reflector antenna. The accuracy test results of the prototype and the feed coupling test results meet the design requirements. Furthermore, to optimize the cluster-fed reflector beams, the large-scale beam-forming problem is modelled as an optimization problem and an improved genetic algorithm for target beam-forming is proposed. The envelope method is used to model the target beam-forming, which reduces the control difficulty and the amount of computation. Compared with the traditional optimization algorithm, the proposed algorithm can better approximate the Pareto optimal solution.

2 Reflector Beam Optimization of Cluster Feed

2.1 Envelope Method for Optimization Objective Modeling
When constructing the objective function for the optimization, either an analytical method or an envelope method can be used for mathematical modeling. The analytical method fits the target beam cut with a high-order function, while the envelope method constructs the target beam cut from discrete points. The basic idea of the envelope method is: on the basis of a given contour, sampling points on the contour are selected according to certain criteria in the main-lobe and side-lobe regions, and the values at the sampling points are substituted directly into the objective function to calculate the fitness. In this paper, an optimization method based on the characteristic envelope of the target is adopted. The envelope method collects data on the target envelope according to certain rules on the basis of the given beam envelope: the position and amplitude of the main lobe are taken in the main-lobe region, a limited number of sampling points are taken on the transition curve between the main lobe and the side lobes, and a certain number of sampling points are collected from the side-lobe region. In this way the characteristic-envelope sampling is realized.

2.2 Optimum Design of the Target Beam
In this paper, the beam envelope method, which has fewer control parameters and less computation, is used to construct the target beam, and the genetic algorithm is used to optimize the complex excitation coefficients. The objective function is constructed by discrete sampling of the beam characteristic parameters on the main cut of the beam; it is a weighted combination of the beam maximum, the sidelobe level, the beam width and the beam pointing parameters. After the optimal coefficients are obtained, the composite beam is calculated again.
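A schematic of such an envelope-sampled objective function in Python is shown below; the toy element response, the sampling regions, the weights and the target values are invented placeholders, since the paper does not list its exact settings.

```python
import numpy as np

def pattern_cut(coeffs, theta):
    """Toy composite pattern on one beam cut from complex excitation coefficients.

    Hypothetical element responses; a real implementation would use the measured
    or simulated embedded feed patterns of the cluster.
    """
    n = len(coeffs)
    element = np.exp(1j * np.outer(np.arange(n), np.sin(theta)) * np.pi)
    return 20 * np.log10(np.abs(coeffs @ element) + 1e-12)

def objective(coeffs, theta):
    """Weighted combination of beam maximum, sidelobe level, width and pointing."""
    p = pattern_cut(coeffs, theta)
    main = np.abs(theta) < np.deg2rad(1.0)          # assumed main-lobe region
    peak = p[main].max()
    sidelobe = p[~main].max()
    pointing_err = abs(theta[np.argmax(p)])
    width = np.deg2rad(2.0) - np.ptp(theta[main][p[main] > peak - 3])  # 3 dB width gap
    # Example weights; a smaller objective value is better
    return -1.0 * peak + 0.5 * max(sidelobe - (peak - 20), 0) \
           + 10.0 * pointing_err + 5.0 * abs(width)

theta = np.linspace(np.deg2rad(-10), np.deg2rad(10), 801)
coeffs = np.ones(8, dtype=complex) / np.sqrt(8)
print("fitness of uniform excitation:", objective(coeffs, theta))
```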


3 Beam Optimization by an Improved Genetic Algorithm
The beam synthesis process is shown in Fig. 1.

3.1 Coding
Conventional binary coding is adopted, with the correspondence:

00000000…00000000 = 0 → $U_{min}$
00000000…00000001 = 1 → $U_{min} + \delta$
11111111…11111111 = $2^{l}-1$ → $U_{max}$

The encoding precision is then:

$$\delta = \frac{U_{max} - U_{min}}{2^{l} - 1} \qquad (1)$$

Assume that the encoding of a number $x$ is $X:\, b_l b_{l-1} b_{l-2}\ldots b_2 b_1$. The corresponding decoding formula can be expressed as:

$$x = U_{min} + \left(\sum_{i=1}^{l} b_i \cdot 2^{\,i-1}\right)\frac{U_{max} - U_{min}}{2^{l} - 1} \qquad (2)$$

Choice

In the process of genetic algorithm optimization, it is necessary to select the optimized individuals and select them according to the evaluation results of the fitness of the optimized individuals. This avoids the loss of genes and improves the convergence performance. Proportional operator is a common method. Setting the initial number of optimized individuals as M, the relationship between selection probability and individual fitness is as follows. Pis ¼ Fi =

M X

Fi

ði ¼ 1; 2; . . .; MÞ

ð3Þ

i¼1

In this paper, the selection operator method which combines the optimal preservation strategy with the proportional selection method is adopted. 3.3

Crossing

Individual pairing is the precondition of crossover. In this paper, random pairing method is used to cross-operate each individual. The use of crossover is related to specific problems, and a balance should be found between maintaining the original characteristics and new individuals. The main task of crossover is to find the crossover

1858

Z. Wang et al.

Fig. 1. Flow chart of software mechanism

Design and Optimization of Cluster Feed Reflector Antenna

1859

location and corresponding genes. Commonly used crossover operators are single-point crossover and multi-point crossover. In this paper, a single-point crossover operator is used to pair all individuals, and then randomly set the position of crossover points, and exchange them according to the crossover probability to generate new individuals. 3.4

Variation

3.4 Mutation
Mutation creates new individuals by randomly changing genes at given locations. New individuals are mainly generated by crossover; mutation serves as an auxiliary means of introducing new individuals and thus maintaining diversity. Crossover searches over a wide range, reflecting global search ability, while mutation provides local search ability. Their combination extends both the depth and the breadth of the genetic algorithm's search and ensures its good performance.

3.5 Treatment of Constraints



FðxÞ; x satisfy the constraints FðxÞ  PðxÞ; x not satisfy the constraints

ð4Þ

In formula 4, F(x) is the original fitness, F′(x) is the new fitness and P(x) is the penalty function. The application of search space limitation and penalty function method to optimization process not only improves the accuracy of optimization, but also reduces the fitness of bad individuals, speeds up the optimization speed and improves the optimization performance.

1860

Z. Wang et al.

4 Simulation Analysis 4.1

Test Results of Reflector Principle Prototype

The minimum diameter of the reflector antenna prototype is 28 m. Because the antenna is too large to test its electrical performance, this paper only tests the diameter and surface accuracy of the antenna. The minimum diameter of the tested antenna is 28.2 m, and the root mean square accuracy of the antenna surface is 5 mm (design requirements: less than 7 mm). The surface accuracy test point cloud is shown in Fig. 2 and the error distribution is shown in Fig. 3. In order to display the error distribution more clearly, the error is magnified 1000 times. It can be seen that the edge error is larger and the middle error is smaller.

Fig. 2. The channel selection probability evolution

Fig. 3. The average utility of system for different number of users

4.2

Optimum Design and Simulation

According to the requirement of satellite EIRP value 56.65 dBW, each beam is allocated 7 dBW power. Considering 1.5 dB scanning loss, 1.5 dB beam overlap loss and 3 dB system margin, the antenna gain is 49.65 dBi for the edge beam of reflector antenna. Considering the error loss of antenna deformation, the diameter of antenna is

Design and Optimization of Cluster Feed Reflector Antenna

1861

28 M. In order to satisfy the requirement of C/I of the same frequency beam of the system (>14 dB), the sidelobe level ( C(8, 6), so the OFDM-IM system with n = 8, k = 4 activation mode can carry more index bits, and this part of the bits has a lower probability of error at medium to high SNR.

Fig. 2. Simulation results of 4-QAM modulation

Figure 3 shows the simulation results of the OFDM-IM system and the traditional OFDM system using BPSK modulation. In the figure, the OFDM-IM system uses two different activation modes: n = 4, k = 2 and n = 8, k = 4. The spectral efficiency of the "OFDM-IM (n = 8, k = 4), BPSK" scheme is 1.25 bps/Hz, while the spectral efficiency of the other simulation schemes is 1 bps/Hz. It can be seen that, with the activation mode n = 4, k = 2, the BER performance of the OFDM-IM system is about 6 dB better than that of the traditional OFDM system at a bit error rate on the order of 10^-4. The significant


improvement in BER performance is due to the low error probability, at medium and high signal-to-noise ratio (SNR), of the index bits carried by the index combination of activated subcarriers in the OFDM-IM system. When the total transmission power of OFDM-IM is the same as that of the traditional OFDM system, the power of each active subcarrier in the OFDM-IM system is larger than that of a subcarrier in the traditional OFDM system. In addition, for both activation-mode configurations of the OFDM-IM system, the theoretical BER curve matches the simulated curve more and more closely as the SNR increases.

Fig. 3. Simulation results using BPSK modulation

4 Concluding Remarks
In this paper, through modeling and analysis of the OFDM and OFDM-IM systems, it is found that subcarrier index modulation (OFDM-IM) is a new physical-layer transmission technique that achieves better spectral efficiency and BER performance while retaining the advantages of traditional OFDM. There are still shortcomings and considerable room to improve the algorithm; since MIMO is one of the core technologies of next-generation mobile wireless communication, combining OFDM-IM with MIMO is a promising focus for future research.



A GEO Satellite Positioning Testbed Based on Labview and STK

Yunfeng Liu1, Qi Zhang2, Shuai Han2, and Deyue Zou1

1 School of Information and Communication Engineering, Dalian University of Technology, Linggong. 2, Dalian 116024, China [email protected], [email protected]
2 School of Electronics and Information Engineering, Harbin Institute of Technology, Yikuang. 2, Harbin 150080, China [email protected], [email protected]

Abstract. With the increasing use of high-orbit satellites, using the global navigation satellite system (GNSS) for high-orbit satellite positioning has become a hot research topic. In this paper, a simulation and verification system for a GNSS-based high-orbit spacecraft orbit determination algorithm is built, and the system state obtained by system-level simulation is verified in real time. In traditional STK simulation, parameter changes must be adjusted manually, so this article uses Labview to control STK at the bottom layer and set the parameters automatically, which facilitates the related research and allows the specific values and dynamic trends of the number of visible satellites and the DOP to be verified.

Keywords: High-orbit satellite · GNSS · Labview · STK

1 Introduction
Geostationary Earth Orbit (GEO) satellites and Highly Elliptical Orbit (HEO) satellites are important communications and meteorological satellites. With the development of satellite navigation and positioning systems, and in the context of the increased use and launch of Low Earth Orbit (LEO) and high Earth orbit satellites, the use of navigation satellites to determine the orbits of other satellites has gradually developed [1–3]. At present, the orbits of high-orbit satellites are determined mainly by the measurement and control of ground stations. Ground station construction is constrained by geographical location and is expensive, so it cannot provide sufficient positioning resources. Therefore, GNSS is used for high-orbit satellite positioning [4–7]. Since parameter changes in traditional STK simulation must be made manually, Labview is used to control STK and set the parameters automatically, which facilitates verification of the simulation results. In this paper, a simulation verification system in which Labview drives STK is realized. At the bottom layer, a DLL is written in C for the Labview side to call, so that the STK-side simulation demonstration can be driven from the Labview side. In addition, Labview is powerful enough to realize information transmission between two terminals through the serial port.


The second part of this article introduces STK's CON module and how to create an STKCONNECT.dll file that can be called by Labview; when the main program is large, the serial port is used for information transmission. The third part introduces the main design and implementation of the simulation verification system, and the whole verification system is realized through a program architecture of six different state-machine states.

2 CON Module and Serial Port Transmission

2.1 STK CON Module
STK/CON is the connection module between STK and external programs. It includes a series of functions that open a UNIX or TCP/IP interface to STK, send commands to STK, receive the data returned by STK, and close the interface after communication is completed. Connect can provide information and output error and diagnostic information in a variety of user-specified ways, and the user can also suppress this information if desired. To use Connect, the connection name and port must be provided so that STK can open the interface. An STKCONNECT.dll is created so that the Labview main program can use STK. When creating STKCONNECT.dll, the functions listed in Table 1 are combined in C++ into one overall function following the order: initialize, open the connection, send the instruction, close the connection. The resulting DLL is used through the library function node in the Labview main program; the specific call configuration is shown in Fig. 1.

Function Initialize the underlying STK Open the connection between the third party and the STK Send instructions from the third party to STK Close the connection between the third party and the STK

Fig. 1. Library function call node configuration diagram

2.2 Serial Transmission

When the simulation verification system and the entity main program run on different terminals, or when the entity program is relatively large and runs independently on the same terminal, the serial port is needed to transfer information between them. The VISA series VIs in Labview are used for this. The front panels of the VIs used at the data sender and the data receiver are shown in Fig. 2.

Fig. 2. VISA serial port front panel

When setting up, it is very important that the settings at the two ends of the link are consistent, otherwise data cannot be sent and received normally. The sender connects the command demonstration part seen on the front panel directly to the output data of the sender's main program and selects the send function; the receiving end correspondingly opens the data-read function. The read buffer shows the portion of the data currently read, and the response window shows the result of concatenating the data read so far. At both the sending and receiving ends, the VISA write VI and VISA read VI are placed inside loops, so that sending and reading each run continuously. However, there is a waiting time at the reading end, and if the reading end waits longer than the specified timeout threshold, an error appears at the reading end. When the output timing of the sender program cannot be controlled, the receiver adds a wait before each reception to avoid a read timeout.
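The same send/receive logic, written with pyserial rather than the LabVIEW VISA VIs used in the paper, might look like the sketch below; the port names, baud rate and message framing are example assumptions.

```python
import serial  # pyserial

# Both ends must use identical settings (port names and baud rate are examples)
TX_PORT, RX_PORT, BAUD = "COM3", "COM4", 115200

def send_visible_sats(sat_ids):
    """Sender loop body: write one comma-separated record of visible satellite IDs."""
    with serial.Serial(TX_PORT, BAUD, timeout=1) as tx:
        tx.write((",".join(map(str, sat_ids)) + "\n").encode())

def receive_visible_sats():
    """Receiver loop body: wait briefly, then read one record; empty means timeout."""
    with serial.Serial(RX_PORT, BAUD, timeout=2) as rx:   # timeout avoids blocking forever
        line = rx.readline().decode().strip()
        return [int(s) for s in line.split(",")] if line else []

send_visible_sats([3, 7, 12, 19])
print(receive_visible_sats())
```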

3 Design and Implementation of the Simulation Verification System
The key task is to design the main program of the simulation verification system. The sender's data are converted into instructions that the STK side can receive, and the instructions are sent to STK to control the graphical presentation. The demonstration-end


information updates the presentation in real time as the main program changes, avoiding time synchronization problems. The simulation verification system adopts a simple state-machine architecture, because the number of states is limited and there is no need to respond to front-panel events. The state machine implements three functions: receiving the message from the serial port, converting the message into commands that STK can accept, and sending the instructions to STK for demonstration and verification. The state machine is designed to switch among six states. Instructions are generated and written to documents, and different documents can be converted and extracted, which makes the program easier to understand and lowers the operating error rate. The state-machine structure of the simulation verification program is shown in Fig. 3.

Fig. 3. Simulation verification system state machine schematic (states: write instruction document; write to documents 1 and 2; write instruction document 3; read instruction document 1; simulation verification waiting; read instruction document 3)
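A bare-bones Python rendering of this six-state loop is sketched below purely to show the control flow; the actual implementation in the paper is a LabVIEW state machine, and the handler functions here are hypothetical placeholders.

```python
from enum import Enum, auto

class State(Enum):
    WRITE_DOC = auto()        # state 1: turn received data into STK instructions
    WRITE_DOC_1_2 = auto()    # state 2: filter into instruction documents 1 and 2
    WRITE_DOC_3 = auto()      # state 3: build the "clear previous demo" document 3
    READ_DOC_1 = auto()       # state 4: send document 1 commands to STK
    WAIT = auto()             # state 5: show visible-satellite count and DOP
    READ_DOC_3 = auto()       # state 6: clear the last result, loop back

TRANSITIONS = {
    State.WRITE_DOC: State.WRITE_DOC_1_2,
    State.WRITE_DOC_1_2: State.WRITE_DOC_3,
    State.WRITE_DOC_3: State.READ_DOC_1,
    State.READ_DOC_1: State.WAIT,
    State.WAIT: State.READ_DOC_3,
    State.READ_DOC_3: State.WRITE_DOC,   # loop for the next epoch
}

def run(handlers, epochs=3):
    """Drive the state machine; 'handlers' maps each state to a placeholder action."""
    state = State.WRITE_DOC
    for _ in range(epochs * len(State)):
        handlers[state]()
        state = TRANSITIONS[state]

run({s: (lambda s=s: print("entering", s.name)) for s in State})
```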

The six states and their sub-VIs are introduced in detail below.
State 1: write instruction document. This state stores locally the visible-satellite information, obtained by the simulation at a given time, that is transmitted over the serial port, and then uses Labview to generate instructions and send them to the STK side for demonstration and verification. The state contains two VIs: one reads from the serial port, the other writes standard instructions into the instruction document. Standard instructions are formed by appending the received information to fixed strings. The serial-port settings at the receiving and transmitting ends are exactly the same, so both ends can send and receive smoothly. The read parameter is set to the maximum number of characters to be read each time, which guarantees that at least one complete set of valid information can be extracted from each read. After the complete valid information is extracted, it is converted into the STK standard instruction format: according to this format, the received array of satellite numbers controlling the demonstration is converted into corresponding satellite demonstration commands. The instructions are written to the document shown in Fig. 4a.


Fig. 4. Instruction document: (a) written instructions; (b) modified instructions

This document contains one or more complete sets of valid information, so it needs to be filtered.
State 2: write to documents 1 and 2. This state filters the instruction document generated by state 1. In the loop, the contents between two adjacent animation-start instructions are intercepted as one complete instruction, and an end mark such as "End" is appended to the document, which makes it easy to find the end when the STK side reads the instructions line by line. While instruction document 1 is generated, it can also be intercepted; the user can set the interception condition of instruction document 1 according to his own needs and generate a different instruction document 2.
State 3: write instruction document 3. State 3 locally modifies the instruction document 2 generated in the previous state to produce instruction document 3. When the status is updated at each moment, the demonstration of the previous state needs to be cleared, for example closing the objects opened in the last demonstration, changing an object's color, or changing an object's position. State 3 completes this task. The instruction document 3 corresponding to instruction document 2 is shown in Fig. 4b.
State 4: read instruction document 1. Reading the command document controls the demonstration: through the STK 2D and 3D animation, the visible satellites obtained by the computation are demonstrated. The


basic constellation needs to be established during the preparation of the demo, including the establishment of the main objects, such as satellites, locations, basic settings of these objects, link establishment, setting of the entire animation time period, setting of the animation operation mode. The established constellation diagram is shown in Fig. 5.

Fig. 5. Constellation building

It mainly includes the 31 satellites of the GPS constellation, the signal transmitters on each satellite, the GEO satellite, the receiver, and the links between the receiver and the satellites. In the simulation verification process, the demonstration of the links between the visible satellites and the GEO satellite is mainly controlled, and the four satellites chosen by the satellite-selection result are shown in red. After state 4, the demonstration produced by reading instruction document 1 in STK once the animation starts running is shown in Fig. 6.

Fig. 6. Read instruction document 1 dynamic presentation


State 5: simulation verification waiting. While the 3D and 2D animations in STK started in the previous state are running, the number of visible satellites at the receiver, the DOP value of the visible constellation and the satellite-selection result are demonstrated synchronously. The demonstration interface is shown in Fig. 7.

Fig. 7. Simulation demo waiting for Labview front panel and sync STK animation

Pressing the Clear button while the program is running clears all the curves in the numerical verification chart on the right; the curves on the right correspond in real time to the number of visible satellites and the DOP value shown on the left.
State 6: read instruction document 3. This state is consistent with the functions implemented in state 5. After instruction document 3 is read, the color changes and object-closing operations are carried out: the verification result of the previous moment is cleared, and the verification result of the next moment is shown, as in Fig. 8. In this way a looping dynamic demonstration and verification is achieved.

Fig. 8. Read instruction document 3 dynamic presentation


4 Conclusion
The main contribution of this paper is to drive the underlying functions of STK through Labview so as to control the STK-side demonstration of system-level simulation results, with real-time synchronization of data such as DOP values and visible satellites on the Labview front panel. By putting the instructions into text files, the system can flexibly control the STK demonstration, and can freely store, filter and modify the instructions. While controlling the 3D and 2D animation on the STK side, the system synchronously demonstrates the positioning DOP value and the number of visible satellites, which makes up for the fact that the STK terminal can generate data as reports but cannot provide a real-time synchronized demonstration. The states of the verification system cover almost all the data that can be controlled and verified by simulation on the STK side, and the system is highly capable.

Acknowledgements. This research is supported by the National Natural Science Foundation of China #61701072.

References
1. Winternitz LMB, Bamford WA, Heckler GW (2009) A GPS receiver for high-altitude satellite navigation. IEEE J Sel Topics Signal Process 3(4):541–556
2. Bağci M, Hacizade C (2015) Performance analysis of GPS based orbit determination via numerical methods for a LEO satellite. In: International conference on recent advances in space technologies. IEEE, Istanbul, Turkey, pp 731–735
3. Aoki M, Kogure S (2010) QZSS and MSAS status and development. Turin, Italy
4. Yang Y, Yue X, Dempster AG (2016) GPS-based onboard real-time orbit determination for LEO satellites using consider Kalman filter. IEEE Trans Aerosp Electron Syst 52(2):769–777
5. Balbach O, Eissfeller B, Hein GW et al (2002) Tracking GPS above GPS satellite altitude: first results of the GPS experiment on the HEO mission Equator-S. In: Position location and navigation symposium. IEEE, Palm Springs, CA, USA, pp 243–249
6. Lim WG, Jang HS, Yu JW (2010) New method for back lobe suppression of microstrip patch antenna for GPS. In: Microwave conference. IEEE, Paris, France, pp 679–682
7. Zhan PY, Fan SL, Xiong Z et al (2012) Orbit determination algorithm for GEO satellite with GNSS based on UKF. Modern Electronics Technique

SVR Based Nonlinear PA Equalization in MIMO System with Rayleigh Channel

Bowen Zhong, Wenbin Zhang, Shaochuan Wu, and Qiuyue Zhu

Communication Research Center, Harbin Institute of Technology, Harbin, China
[email protected]

Abstract. Power amplifier (PA) nonlinearity has been one of the crucial constraints on the performance of radio frequency (RF) communication systems. The distortion caused by amplitude-phase modulation (APM) results in severe performance degradation. In this paper, we study the effect of PA nonlinear distortion on bit error performance in Multiple-Input Multiple-Output (MIMO) wireless communication systems, and develop a new method based on Support Vector Regression (SVR) to compensate the nonlinear distortion. Under the condition that the receiver has no knowledge of the PA distortion parameters, we propose a receiver compensation technique that estimates the points of the distorted AM-AM curve through training with SVR. The proposed scheme realizes a model-free estimation. Simulation results show that, for 4 × 4 MIMO with 16-QAM, the proposed scheme effectively deals with the nonlinear distortion caused by PAs.

Keywords: Support vector regression · PA distortion · MIMO · Rapp model

1 Introduction

In wireless communication systems, the PA is an indispensable component at the end of the transmitter. RF power amplifiers are usually nonlinear in practice, and the output signal is distorted by this nonlinearity. Moreover, most modern wireless communication systems, including the fifth-generation (5G) cellular systems, use multi-carrier or orthogonal frequency division multiplexing (OFDM) modulations whose signals have extremely high peak-to-average power ratio (PAPR), so neglecting the PA nonlinearity would be reckless. To suppress the distortion, PAs in wireless transceivers usually have to operate with a high output backoff (OBO) in practice. However, the PA's optimal power efficiency lies only in the nonlinear saturated region. Driving the PA closer to its saturation point is appealing, since it would increase the energy efficiency and prolong the battery life of a mobile device, yet the nonlinear distortion then makes symbol detection at the receiver difficult. Therefore, it is

SVR Based Nonlinear PA Equalization

1901

attracting to carry out a method to enhance the power efficiency of PA and improve the accuracy of signal detection simultaneously. Various strategies have been investigated to solve this problem. First strategy is to linearize the PAs at transmitter. One of the popular methods today is to adopt the digital pre-distortion (DPD) before the PA. DPD techniques remove the nonlinear distortion by learning the nonlinearity parameters through a feedback path to characterize the inverse transfer function of PA for linearization. However, DPD is too complex and costly for small and low-cost 5G devices, especially for cost and battery limited device in massive machine-type communications or internet of things (IoT) in uplink transmissions. So another strategy is to mitigate nonlinear PA distortion at the receivers via post-distorter equalization. This method does not require a feedback RF chain at the transmitter to learn the PA model and implement the predistorter, which reduce the hardware complexity in transmitter. Artificial neural networks (ANN) have also be studied for both nonlinear modeling of PAs and nonlinear equalization [1–3]. But some shortages also exist, that neural networks usually require relatively long training sequences to model the PA nonlinearity accurately. To deal with this problem, we consider taking advantage of the SVR algorithm to save the training sequences resource. SVR is an popular kernel-based supervised learning algorithm [4]. In contrast to ANN, SVR requires less training time and memory for modeling, allowing it to search for solutions more efficiently in a very high-dimensional space. Although SVR has been successfully applied for PA and some other microwave device modeling [5], previous work has not focused on PA nonlinear distortion equalization and recovery based on SVR as far as we know. In this paper, we develop a nonlinear equalization scheme based on support vector regression algorithm to recover the PA distorted signals at the receiver. We take Rapp model [6],which is a widely accepted PA model of solid state power amplifiers (SSPA), as an example to construct the output signals distorted by nonlinear PA of a MIMO system. As the both the PA distortion model and parameters are unknown at the receiver in practice, we firstly propose a trainingbased nonlinearity estimation technique using SVR. The estimated nonlinearity model-free parameters are then used to eliminate the distortion at the receiver. It is noticeable that we just apply Rapp model as an example to validate the effectiveness of proposed model-free algorithm, which is demonstrated in this paper by simulations and experiments. The rest of this paper is organized as follows: first, the MIMO channel model with channel’s nonlinearity is introduced in Sect. 2. In Sect. 3, the SVR-based nonlinear estimator and equalizer is explained. Then, some simulation results are given in Sect. 4. Finally, conclusions are shown in Sect. 5.

2 MIMO System Model with Nonlinear Channel

Consider a MIMO system with n_t transmit antennas, n_t transmit RF chains, and n_r receive antennas. The MIMO transmitter with nonlinear PAs is shown in Fig. 1. In this paper, we assume that the channel is block fading with block length N. Let x ∈ C^{n_t × N} denote the modulated signal matrix, where x_{i,n}, i = 1, ..., n_t, n = 1, ..., N, denotes the complex signal drawn from an M-point signal constellation. We adopt the 16-QAM signal constellation here.

Fig. 1. MIMO system in Rayleigh channel with PAs.

An accurate nonlinear PA model is the Rapp model. The PA gain G(·) can be expressed as

$$
G\left(|x_{i,n}|\right) = \frac{A\left(|x_{i,n}|\right)}{|x_{i,n}|}\,\exp\!\big(j\,\phi\left(|x_{i,n}|\right)\big), \qquad (1)
$$

where |x_{i,n}| denotes the amplitude of x_{i,n}. Real amplifiers exhibit various magnitudes of nonlinearity, usually described by the amplitude transfer characteristic (also known as the amplitude modulation/amplitude modulation, AM/AM, conversion) and the phase transfer characteristic (the amplitude modulation/phase modulation, AM/PM, conversion) of the amplifier. The functions A(·) and φ(·) represent the AM/AM and AM/PM conversions, respectively. In the Rapp model for SSPAs, the phase distortion is assumed to be small enough that it can usually be neglected [7]. Therefore, the Rapp model can be characterized by the following AM/AM and AM/PM conversions:

$$
g(A) = \frac{\alpha A}{\left(1+\left(\frac{\alpha A}{\beta}\right)^{2r}\right)^{\frac{1}{2r}}}, \qquad \phi(A) = 0,
$$

where A = |x_{i,n}| is the amplitude of the PA input signal, α is a small-signal gain usually normalized to 1, β is the limiting output amplitude, and r controls the smoothness of the transition from linear operation to saturated operation. The AM/AM characteristic of this model with β = 1, r = 2 is shown in Fig. 2. Because the MIMO transmitter has n_t transmit RF chains, we extend the PA distortion function G(·) defined in (1) to the vector function

$$
\forall\, \mathbf{v} \in \mathbb{C}^{n_t}, \quad G(\mathbf{v}) \triangleq \{\, G(v_i),\ i = 1,\dots,n_t \,\}. \qquad (2)
$$

Note that the map x → G(x) is one-to-one; we write x̃ = G(x), x̃ ∈ C^{n_t × N}.

Fig. 2. AM/AM conversion characteristic of the SSPA model.

Let H ∈ C^{n_r × n_t} denote the MIMO channel gain matrix, where H_{ij} denotes the complex channel gain from the j-th transmit antenna to the i-th receive antenna. Both the real and imaginary parts of the complex channel gains are assumed to be independent Gaussian with zero mean and unit variance. The received signal can be presented as y = H x̃ + n, where x̃ ∈ C^{n_t × N} denotes the transmitted modulated signal distorted by the PA nonlinearity, and n ∈ C^{n_r × N} denotes the additive zero-mean complex Gaussian noise with covariance matrix σ² I_{n_r}. The average received signal-to-noise ratio (SNR) per receive antenna is given by P_{O,avg}/σ², where P_{O,avg} denotes the average power of the transmitted signal, i.e., the output of the PA.
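To make the system model concrete, the following sketch (an illustration only, not the authors' code) generates 16-QAM symbols for an assumed 4 × 4 configuration, applies the Rapp AM/AM characteristic of this section with α = 1, β = 1, r = 2, and passes the distorted signal through a Rayleigh channel with additive Gaussian noise. The noise-power normalization (using the PA output power as a simple proxy for the received power) is a simplifying assumption.

```python
import numpy as np

def rapp_am_am(a, alpha=1.0, beta=1.0, r=2):
    """Rapp AM/AM conversion g(A) = alpha*A / (1 + (alpha*A/beta)^(2r))^(1/(2r))."""
    return alpha * a / (1.0 + (alpha * a / beta) ** (2 * r)) ** (1.0 / (2 * r))

def rapp_pa(x, alpha=1.0, beta=1.0, r=2):
    """Memoryless SSPA model: distort the amplitude, keep the phase (phi = 0)."""
    return rapp_am_am(np.abs(x), alpha, beta, r) * np.exp(1j * np.angle(x))

def qam16_symbols(nt, n, rng):
    """Draw unit-average-power 16-QAM symbols of shape (nt, n)."""
    levels = np.array([-3, -1, 1, 3])
    re = rng.choice(levels, size=(nt, n))
    im = rng.choice(levels, size=(nt, n))
    return (re + 1j * im) / np.sqrt(10.0)   # E[|s|^2] = 1

rng = np.random.default_rng(0)
nt, nr, N = 4, 4, 1000
x = qam16_symbols(nt, N, rng)
x_tilde = rapp_pa(x)                        # PA-distorted transmit block

# Rayleigh block-fading channel: real and imaginary parts with unit variance.
H = rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))

snr_db = 25.0
p_out = np.mean(np.abs(x_tilde) ** 2)       # average PA output power (proxy)
sigma2 = p_out / 10 ** (snr_db / 10)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((nr, N))
                               + 1j * rng.standard_normal((nr, N)))
y = H @ x_tilde + noise                     # received block, y = H x~ + n
```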

3 SVR Based Nonlinear PA Distortion Equalizer

Linear approaches cannot achieve high estimation precision in the presence of a nonlinear power amplifier whose amplitude characteristic exhibits heavy nonlinearity. Therefore, we adopt SVR, which is derived from the SVM and is a powerful tool for nonlinear, small-sample, high-dimensional regression problems. We map the input vector into a higher-dimensional (possibly infinite-dimensional) feature space H by means of a nonlinear transformation ϕ(·). The regression function is y = wᵀϕ(x) + b + e, where w is the weight vector, b is the bias term, and the residuals {e_m} account for the effect of both approximation errors and noise. In the SVM framework, the optimality criterion is a regularized and constrained version of the regularized least-squares criterion. In general, SVM algorithms minimize a regularized cost function of the residuals, usually Vapnik's ε-insensitive cost function. To improve the performance of the estimation algorithm, a robust cost function, the ε-Huber cost function, is introduced:

$$
l_\varepsilon(e) =
\begin{cases}
0, & |e| \le \varepsilon \\
\dfrac{1}{2\gamma}\left(|e|-\varepsilon\right)^2, & \varepsilon < |e| \le e_C \\
C\left(|e|-\varepsilon\right) - \dfrac{1}{2}\gamma C^2, & e_C < |e|
\end{cases}
\qquad (3)
$$

where e_C = ε + γC, ε is a positive scalar that represents the insensitivity to a low noise level, and the parameters γ and C essentially control the trade-off between the regularization and the losses. For errors above e_C the cost function is linear, whereas it is quadratic for errors between ε and e_C. It should be noted that errors smaller than ε are disregarded in the ε-insensitive zone. Now, the SVR primal problem can be formulated as

$$
\min_{w,b}\ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{m} l_\varepsilon\big(f(x_i)-y_i\big). \qquad (4)
$$

By introducing relaxation variables ξ_i, ξ̂_i to allow errors within a certain range and improve the generalization ability of the learning method, the original problem can be rewritten as

$$
\begin{aligned}
\min_{w,b,\xi_i,\hat{\xi}_i}\ & \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{m}\left(\xi_i+\hat{\xi}_i\right) \\
\text{s.t.}\ & f(x_i)-y_i \le \varepsilon+\xi_i, \\
& y_i-f(x_i) \le \varepsilon+\hat{\xi}_i, \\
& \xi_i \ge 0,\ \hat{\xi}_i \ge 0,\ i=1,2,\dots,m.
\end{aligned}
\qquad (5)
$$

Introducing the Lagrange multipliers α, α̂, the above problem can be converted into its dual by convex optimization, which can be expressed as

$$
\begin{aligned}
\max_{\alpha,\hat{\alpha}}\ & \sum_{i=1}^{m}\left[\, y_i(\hat{\alpha}_i-\alpha_i)-\varepsilon(\hat{\alpha}_i+\alpha_i)\,\right]
-\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}(\hat{\alpha}_i-\alpha_i)(\hat{\alpha}_j-\alpha_j)\,x_i^{T}x_j \\
\text{s.t.}\ & \sum_{i=1}^{m}(\hat{\alpha}_i-\alpha_i)=0, \qquad 0 \le \alpha_i,\hat{\alpha}_i \le C.
\end{aligned}
\qquad (6)
$$

By solving this convex problem we obtain α_i, α̂_i and b; the solution of the SVR problem can then be presented as

$$
f(x) = \sum_{i=1}^{m}(\hat{\alpha}_i-\alpha_i)\,x_i^{T}x + b. \qquad (7)
$$

Let K(x_i, x_j) = ⟨ϕ(x_i), ϕ(x_j)⟩ be a Mercer kernel; here we choose the RBF kernel, which can be presented as

$$
K(x_i, x_j) = \exp\!\left(-\frac{\|x_i-x_j\|^2}{2\sigma^2}\right),
$$

where ‖·‖ denotes the Euclidean norm and σ is the scaling factor, which can be optimized by cross-validation or from particular prior knowledge. With this kernel, the final solution of the regression problem becomes

$$
f(x) = \sum_{i=1}^{m}(\hat{\alpha}_i-\alpha_i)\,K(x, x_i) + b, \qquad (8)
$$

where x_i are the training samples and f(x) gives the signal amplitude recovered from the distortion based on SVR. Since the PA nonlinearity is much more pronounced at large signal amplitudes, we can use small-amplitude training sequences x_i to estimate the channel matrix H and then recover the nonlinear distortion after channel equalization.
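As a rough illustration of the SVR-based AM-AM estimation described above (a sketch, not the authors' implementation), the following code trains an ε-SVR with an RBF kernel on noisy (distorted amplitude, reference amplitude) pairs, which in the receiver would be obtained from pilots after channel equalization, and then uses the fitted curve to map received amplitudes back toward the undistorted scale. The hyper-parameters (C, ε, the RBF width) are placeholder values, scikit-learn's SVR uses Vapnik's ε-insensitive loss rather than the ε-Huber cost of Eq. (3), and fitting the inverse mapping directly is only one of several ways to use the estimated curve for compensation.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

def rapp_am_am(a, alpha=1.0, beta=1.0, r=2):
    return alpha * a / (1.0 + (alpha * a / beta) ** (2 * r)) ** (1.0 / (2 * r))

# Training pairs: reference amplitudes and their noisy distorted observations.
a_ref = rng.uniform(0.05, 1.2, size=400)                         # transmitted amplitudes
a_obs = rapp_am_am(a_ref) + 0.02 * rng.standard_normal(a_ref.size)  # distorted + noise

# Learn the inverse mapping: distorted amplitude -> original amplitude.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma=20.0)
svr.fit(a_obs.reshape(-1, 1), a_ref)

def svr_equalize(symbols):
    """Correct symbol amplitudes with the fitted curve; phases are left untouched."""
    amp = np.abs(symbols)
    amp_hat = svr.predict(amp.reshape(-1, 1)).reshape(amp.shape)
    amp_hat = np.maximum(amp_hat, 0.0)
    return amp_hat * np.exp(1j * np.angle(symbols))

# Quick check: a distorted amplitude g(0.9) should map back to roughly 0.9.
distorted = rapp_am_am(np.array([0.9])) * np.exp(1j * np.pi / 7)
print(np.abs(svr_equalize(distorted))[0])
```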

4 Simulation Results

In this section, we present simulation results for an n_t = 4, n_r = 4 MIMO system with 16-QAM. The Rapp model of the SSPA is used with the nonlinear distortion parameters β = 1, r = 2. Figure 3 shows the SVR-predicted amplitude curve at the receiver when the MIMO transmitter suffers from nonlinear PA distortion. The amplitude of the transmitted signal is normalized to 1. The figure represents the training process of the SVR-based nonlinear distortion estimation. The blue points represent the noisy training data accumulated by the receiver, which are used to train the SVR-based nonlinearity estimator. These training data first go through the MIMO channel equalization process, where we assume that the channel matrix H is perfectly estimated from pilots whose amplitude is small enough to avoid PA nonlinear distortion. The red curve represents the predicted PA nonlinear AM-AM curve. As the SVR training size increases, the fitted curve becomes more accurate and shows a tendency to converge.

Fig. 3. Trained predicting curve based on SVR with noisy training data.

Figure 4a, b shows the points of the 16-QAM signal constellation at the input and output of the PA for the selected PA model and the parameters mentioned above. The AM/AM nonlinear distortion of the PAs causes the amplitudes of the in-phase and quadrature components of the complex signal to no longer maintain their original ratios across amplitude levels. The points with amplitude levels farther from zero are much more affected by the distortion, and the minimum Euclidean distance among the constellation points at the PA output is reduced, which results in the degradation of the BER performance highlighted in Fig. 5.

Figure 4c, d shows the received data before and after SVR-based nonlinear distortion recovery at E_b/N_0 = 25 dB. After the nonlinear equalization process of the proposed scheme, the constellation points are largely restored, which corroborates the effectiveness of the proposed method from the perspective of the constellation.

Fig. 4. Constellation diagrams before and after equalization: (a) input signal of the PA; (b) output signal of the PA; (c) received signal before recovery; (d) received signal after recovery.

Figure 5 shows the BER performance for varying E_b/N_0 in different scenarios. For the blue curve in Fig. 5, no compensation is applied to deal with the PA distortion; unsurprisingly, the performance degrades compared with the MIMO system with ideal PAs, shown by the yellow curve as a reference. The green curve represents the BER performance of the proposed scheme at the receiver. The SVR-based method performs within about 4 dB of the MIMO system with ideal PAs.

Fig. 5. BER performance for the MIMO system in different scenarios.

5 Conclusions

This paper develops a model-free nonlinear PA distortion equalization scheme for MIMO communication systems based on support vector regression. The simulation results show that this method effectively mitigates the nonlinear distortion at the receiver after training. As only one PA nonlinearity model and 16-QAM modulation are simulated in this paper, further research may consider other common PA models and modulation schemes. In addition, future work may focus on reducing the size of the training data and accelerating convergence.

Acknowledgements. The work presented in this paper was supported by the National Natural Science Foundation of China under Grant No. 61671173.

References

1. Ibnkahla M (2000) Applications of neural networks to digital communications—a survey. Sig Process 80(7):1185–1215
2. Park DC, Jeong TKJ (2002) Complex-bilinear recurrent neural network for equalization of a digital satellite channel. IEEE Trans Neural Netw 13(3):711–725
3. Uncini A, Vecci L, Campolucci P et al (1999) Complex-valued neural networks with adaptive spline activation function for digital-radio-links nonlinear equalization. IEEE Trans Sig Process 47(2):505–514
4. Chang CC, Lin CJ (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol 2(3)
5. Chen P, Merrick BM, Brazil TJ (2015) Support vector regression for harmonic optimization in continuous class-F power amplifier design. In: Integrated nonlinear microwave and millimetre-wave circuits workshop. IEEE, New York
6. Rapp C (1991) Effects of HPA-nonlinearity on a 4-DPSK/OFDM-signal for a digital sound broadcasting signal. In: Proceedings of the second European conference on satellite communications
7. Bhat S, Chockalingam A (2016) Compensation of power amplifier nonlinear distortion in spatial modulation systems. In: IEEE international workshop on signal processing advances in wireless communications. IEEE, New York

LoS-MIMO Channel Capacity of Distributed GEO Satellites Communication System Over the Sea

Chi Zhang1,2, Hui Li1,2(&), Xuan An Song1,2, Jie Cheng1,2, and Li Jie Wang1,2

1 School of Information and Communication Engineering, Hainan University, Haikou 570228, China
[email protected]
2 Engineering Research Center of Marine Communication and Networks in Hainan Province, Haikou 570228, China

Abstract. Based on the geographical position of the South China Sea, this paper puts forward a strategy of placing antennas on islands and constructing a 2 × 2 MIMO (Multiple Input and Multiple Output) system with two distributed GEO (Geostationary Earth Orbit) satellites. The fixed orientation of the antennas ensures stable communication; the signals transmitted from the islands cover the nearby sea area and provide communication services to passing ships. The geometric parameters of the antennas can be adjusted to achieve the maximum line-of-sight channel capacity.

Keywords: GEO satellite communication · Distributed MIMO satellites system · Line of sight channel · Marine · Capacity

1 Introduction

The application of MIMO systems is very extensive on the ground, and their high potential spectral efficiency also makes them applicable to satellite communication systems. MIMO satellite communication can be used to construct a stable, large-capacity communication system. In land communication, the main service object is the community, where houses and other buildings block the direct transmission of the signal; at the same time, such environments provide plenty of scatterers, which is helpful for multipath transmission. Therefore, there is a general consensus on the application of MIMO satellite communication in the community: the maximum channel capacity can only be achieved through multipath transmission, which plays a very important role because of the abundance of scatterers in the environment. Marine communication is different. Over the sea, multipath scattering is difficult to use as a stable propagation mode, even though multipath components can arise from sea fog or from propagation over the waves. MIMO channels are also influenced by the weather: when the weather conditions change, the channels change considerably. So when discussing the application of MIMO satellite


communication over the sea, multipath transmission is not the best option for the communication link. Another transmission mode of MIMO satellite communication is line-of-sight (LoS) transmission, which is very common. It is often wrongly believed that this mode leads to a small channel capacity, especially in the community scenario, because of the influence of obstructions; in marine conditions, however, there are generally no direct obstructions such as tall buildings, and possible cloud or fog does not cause great attenuation of the signal. We therefore choose LoS transmission as the main transmission mode. Basically, there are two existing ways of communication over the sea. One is to monitor weather changes through the BeiDou satellite in order to predict the weather situation over the South China Sea [1]. The other is a broadcast system using antennas at ports such as HaiKou, SanYa, GuangZhou and ZhanJiang [2]; it only covers the surrounding waters and has limited functionality, such as receiving short messages, predicting weather changes and issuing warnings, and it cannot provide normal communication. Besides, even with wide coverage from ports on land, the communication capability may decline with distance from the port. There are many islands in the South China Sea, which is helpful for building a stable, large-capacity MIMO satellite communication system, and the maximum channel capacity can be achieved by adjusting the antenna parameters. Many parameters can be chosen to improve the channel capacity: the relevant parameters were analyzed in [3], methods to optimize the up- and down-links were proposed in [4], and an analysis method that combines these parameters into a quantity called the satellite projection was introduced in [5]. In a MIMO system dominated by the LoS transmission mode, the arrangement of the antenna positions must be satisfied if a MIMO channel with maximum capacity is to be constructed. Based on the environment of the South China Sea [2], the antennas can be set on islands. A stable MIMO system can thus be built between the satellites and the island antennas, and it is easy to adjust the antenna parameters to achieve the maximum channel capacity. The islands will also host macro base stations, so passing ships can access them. The paper is organized as follows. After presenting the system model in Sect. 2, the maximum capacity is calculated in Sect. 3. In the next section, the influence of the antenna arrangement on the capacity is analyzed and simulation results are given. Section 5 concludes the paper.

2 System Model

In this paper, the uplink is not considered, because the position of the anchor station is determined by economic and many other factors, and the station has some degree of freedom as long as it is aimed at the satellite. In the downlink of maritime communication, however, the satellite can only supply limited power, so the maximum channel capacity needs to be achieved.

In this paper, the following closed-loop scenario is set: two GEO satellites in orbit act as the transmitters of the downlink and are connected to each other through high-speed space optical links [4]; the receive antennas are located on an island in the South China Sea (Fig. 1).

Fig. 1. Illustration of the MIMO scenario over the sea

The main reasons for this setting are as follows. First of all, the position of a GEO satellite relative to the Earth is fixed, and the receive antennas are set on the island, which keeps the positions of the transmitter and receiver relatively fixed and helps to build a stable MIMO channel. Second, with receiving antennas placed on different islands, the surrounding areas are covered and a communication network is established; when ships enter the coverage areas of different islands, they access the signals of those islands. Finally, in a MIMO satellite communication system with LoS transmission, the antenna spacing is a key factor that must be considered, since it greatly affects the channel capacity; the maximum channel capacity can be achieved only when the antenna spacing is greater than a certain minimum value. When the antennas are set on an island, the spacing can be easily changed.

3 Maximum MIMO Capacity

The following notation is used throughout the paper: f is the carrier frequency, Φ_R is the pitch angle of the receive antenna, and θ is the direction angle. H(f) is the channel transfer matrix of the frequency-selective MIMO satellite channel. It has two parts, H_LOS(f) and H_NLOS(f), related as follows [6]:

$$
\mathbf{H}(f) = \sqrt{\frac{K}{K+1}}\,\mathbf{H}_{LOS}(f) + \sqrt{\frac{1}{1+K}}\,\mathbf{H}_{NLOS}(f) \qquad (1)
$$

where K denotes the Rician K-factor, i.e., the power of the LoS signal part divided by the power of the multipath components. In this paper we only consider the LoS channel for simplicity, so we have

$$
\mathbf{H}(f) = \mathbf{H}_{LOS}(f) \qquad (2)
$$
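As a small numerical illustration of Eq. (1) (not taken from the paper), the snippet below mixes a fixed unit-modulus LoS matrix with a random scattered component according to the Rician K-factor; as K grows, H(f) approaches the pure LoS matrix used in Eq. (2). The example phases are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def rician_mix(H_los, K):
    """Eq. (1): H = sqrt(K/(K+1)) * H_LOS + sqrt(1/(K+1)) * H_NLOS."""
    H_nlos = (rng.standard_normal(H_los.shape)
              + 1j * rng.standard_normal(H_los.shape)) / np.sqrt(2)
    return np.sqrt(K / (K + 1)) * H_los + np.sqrt(1 / (K + 1)) * H_nlos

# Placeholder 2x2 unit-modulus LoS matrix (phases chosen arbitrarily).
H_los = np.exp(1j * np.array([[0.0, 1.2], [0.7, 2.9]]))

for K in (1, 10, 1000):
    H = rician_mix(H_los, K)
    # Relative deviation from the pure LoS matrix shrinks as K grows.
    print(K, np.linalg.norm(H - H_los) / np.linalg.norm(H_los))
```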

The element H_{m_R,m_T}(f) of the channel transfer matrix at position (m_R, m_T) is described by the mechanism of free-space propagation according to

$$
H_{m_R,m_T}(f) = a_{m_R,m_T}(f)\,\exp\!\left(-j\,\frac{2\pi f}{c_0}\,r_{m_R,m_T}\right) \qquad (3)
$$

where r_{m_R,m_T} is the distance between the m_R-th receiving antenna and the m_T-th transmitting antenna, and c_0 is the speed of light in free space. a_{m_R,m_T} is the complex envelope, calculated by

$$
a_{m_R,m_T}(f) = \frac{c_0}{4\pi f\, r_{m_R,m_T}}\, e^{j\varphi} \qquad (4)
$$

where φ is the carrier phase angle at the time of observation, set here to φ = 0. Because all the receiving and transmitting antennas are fixed, we have

$$
a_{m_R,m_T}(f) = |a| \qquad (5)
$$

$$
\mathbf{H}(f) = \mathbf{H} \qquad (6)
$$

The time-invariant MIMO spectral efficiency without channel knowledge at the transmitter is calculated according to [6]

$$
C = \log_2 \det\!\left(\mathbf{I}_{m_R} + \rho_T\,\mathbf{H}^{H}\mathbf{H}\right) \qquad (7)
$$

Here the transmit symbols are realizations of uncorrelated, independent, identically distributed (i.i.d.) Gaussian random variables, and {·}^H denotes the complex conjugate transpose. The ratio ρ_T incorporates all the gains of the link budget except the LoS path loss of the channel, which is included in the channel transfer matrix. In order to achieve the maximum multiplexing gain of the MIMO channel, the two eigenvalues γ_1, γ_2 of the matrix V = H^H H should be equal:

$$
\gamma_1 = \gamma_2 = |a|^2 \max\{M_R, M_T\} \qquad (8)
$$

$$
\mathbf{V} = \mathbf{H}^{H}\mathbf{H}, \quad M_R \ge M_T \qquad (9)
$$

Since m_R = m_T = 2, we obtain the optimal channel capacity as follows:

$$
\gamma_1 = \gamma_2 = 2|a|^2 \qquad (10)
$$

$$
C_{opt} = 2\log_2\!\left(1 + 2\,\rho_T\,|a|^2\right) \qquad (11)
$$
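As a numerical illustration of Eqs. (7)–(11) (a sketch, not the authors' code), the snippet below evaluates the spectral efficiency of a 2 × 2 LoS channel matrix and checks that the optimum of Eq. (11) is reached when the columns of H are orthogonal, i.e., when the two eigenvalues of V = H^H H are equal; the values of |a| and ρ_T are arbitrary placeholders.

```python
import numpy as np

def capacity(H, rho_T):
    """Eq. (7): C = log2 det(I + rho_T * H^H * H), in bit/s/Hz."""
    V = H.conj().T @ H
    return float(np.log2(np.linalg.det(np.eye(V.shape[0]) + rho_T * V).real))

a = 1.0          # placeholder for |a|, the common LoS envelope of Eqs. (4)-(5)
rho_T = 10.0     # placeholder link-budget ratio

# Optimal case: orthogonal columns -> both eigenvalues of V equal 2|a|^2 (Eq. (10)).
H_opt = a * np.array([[1, 1],
                      [1, -1]], dtype=complex)
# Degenerate case: identical columns -> rank-one V, no multiplexing gain.
H_bad = a * np.ones((2, 2), dtype=complex)

c_opt_formula = 2 * np.log2(1 + 2 * rho_T * a ** 2)   # Eq. (11)
print(capacity(H_opt, rho_T), c_opt_formula)           # the two values match
print(capacity(H_bad, rho_T))                          # noticeably smaller
```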

4 The Arrangement of the Antenna Position

The maximum channel capacity of the system in formula (7) can only be achieved under certain conditions. The matrix V can be written as

$$
\mathbf{V} =
\begin{bmatrix}
|a|^2 M_R & |a|^2 \sum\limits_{m_R=1}^{M_R} e^{-j\frac{2\pi f}{c_0}\left(r_{m_R,1}-r_{m_R,2}\right)} \\[2mm]
|a|^2 \sum\limits_{m_R=1}^{M_R} e^{\,j\frac{2\pi f}{c_0}\left(r_{m_R,1}-r_{m_R,2}\right)} & |a|^2 M_R
\end{bmatrix}
\qquad (12)
$$

The second-order characteristic equation is

$$
\det(\mathbf{V}-\gamma\mathbf{I}) = \gamma^2 - 2|a|^2 M_R\,\gamma + |a|^4 M_R^2
- |a|^4 \sum_{k=1}^{M_R}\sum_{l=1}^{M_R} e^{-j\frac{2\pi f}{c_0}\left(r_{k,1}-r_{k,2}+r_{l,2}-r_{l,1}\right)} = 0
\qquad (13)
$$

The eigenvalues are

$$
\gamma_{1/2} = |a|^2 M_R \pm |a|^2 \sqrt{\sum_{k=1}^{M_R}\sum_{l=1}^{M_R} e^{-j\frac{2\pi f}{c_0}\left(r_{k,1}-r_{k,2}+r_{l,2}-r_{l,1}\right)}}
\qquad (14)
$$

To achieve the maximum channel capacity, we need γ_{1/2} = |a|² M_R, that is,

$$
\sum_{k=1}^{M_R}\sum_{l=1}^{M_R} e^{-j\frac{2\pi f}{c_0}\left(r_{k,1}-r_{k,2}+r_{l,2}-r_{l,1}\right)} = 0.
$$

From reference [3], the distance r_{m_R,m_T} can be approximated as

$$
r_{m_R,m_T} \approx \sqrt{R_T^2+R_R^2-2R_RR_T\cos\Phi_R\cos\!\left(\theta_R-\theta_{T,m_T}\right)}
+\left(m_R-1-\frac{M_R-1}{2}\right) d_R\,
\frac{-2R_T\!\left[\cos\delta_R\sin\!\left(\theta_R-\theta_{T,m_T}\right)+\sin\delta_R\cos\!\left(\theta_R-\theta_{T,m_T}\right)\sin\Phi_R\right]}
{2\sqrt{R_T^2+R_R^2-2R_RR_T\cos\Phi_R\cos\!\left(\theta_R-\theta_{T,m_T}\right)}}
\qquad (15)
$$

We take the radius of the Earth as R_R = 6378.1 km, the radius of the GEO orbit as R_T = 42,164 km, and the carrier frequency as f = 2.4 GHz. According to formula (15), we can adjust the parameters d, Φ_R and θ to make the system reach the maximum channel capacity, which is easy to do for antennas fixed on an island.
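To illustrate how the capacity of Eq. (7) varies with the receive-antenna spacing d (in the spirit of Fig. 2, but with a deliberately simplified flat two-dimensional geometry rather than the exact Earth/GEO geometry of Eq. (15)), the sketch below places two far transmitters with a small angular separation, computes the exact path lengths, builds the unit-modulus LoS matrix of Eq. (3), and sweeps d. The range, angular separation and ρ_T are illustrative assumptions, so the resulting peak period will not match the 11 m of Fig. 2a; the point is only the periodic behavior.

```python
import numpy as np

c0 = 3e8
f = 2.4e9                     # carrier frequency used in the paper
rho_T = 10.0                  # placeholder link-budget ratio

# Two far transmitters with a small angular separation (simplified 2-D geometry).
R = 4e7                                        # ~GEO-scale range, metres
ang = np.deg2rad([0.0, 0.2])                   # assumed angular separation
tx = np.stack([R * np.cos(ang), R * np.sin(ang)], axis=1)   # (2, 2) positions

def los_capacity(d):
    """Capacity of Eq. (7) for two receive antennas separated by d metres."""
    rx = np.array([[0.0, 0.0], [0.0, d]])
    r = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=2)   # r[mR, mT]
    H = np.exp(-1j * 2 * np.pi * f / c0 * r)    # unit-modulus LoS entries, Eq. (3)
    V = H.conj().T @ H
    return float(np.log2(np.linalg.det(np.eye(2) + rho_T * V).real))

for d in np.arange(1.0, 31.0, 1.0):
    print(f"d = {d:4.1f} m   C = {los_capacity(d):5.2f} bit/s/Hz")
```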

Fig. 2. Channel capacity C as a function of d: (a) Φ_R = 60°, θ = 45°; (b) Φ_R = 90°, θ = 45°. The curves compare the 2 × 2 MIMO and 1 × 1 SISO systems over the antenna spacing d [m] and the capacity [bit/s/Hz], with capacity peaks of about 13 bit/s/Hz at d = 6, 17, 28 m in (a) and at d = 4, 12, 20 m in (b).

It can be seen that the channel capacity changes periodically with the receive-antenna spacing. As shown in Fig. 2a, the period of the peaks is d_p = d_12 = d_23 = 11 m, and the performance of the 2 × 2 MIMO system is better than that of the corresponding SISO (Single Input and Single Output) system. There is also a minimum antenna spacing, namely the spacing at which the channel capacity first reaches its peak; when setting the antenna spacing, it must be at least this value and is then adjusted according to the given parameters. In Fig. 2a the minimum antenna spacing is d_min = 6 m, but this changes with the other parameters. As Φ_R increases from 60° to 90°, the dependence of the channel capacity on the antenna spacing also changes: the peak period becomes d_p = 8 m, as shown in Fig. 2b, and the minimum value becomes d_min = 4 m. This shows that the parameters influence one another in their effect on the channel capacity.


After that, the pitch angle Φ_R is tested while the other parameters are fixed, to observe the change of the channel capacity with it, as shown in Fig. 3.


Fig. 3. Channel capacity C as a function of Φ_R when d = 6 m, θ = 45°

The variation of the channel capacity with the pitch angle of the antenna is still periodic, but the fluctuation is larger. In the simulation of the antenna direction angle θ, in order to see the behavior more clearly, we reduce the observation range to 45°, as shown in Fig. 4.


Fig. 4. Channel capacity C as a function of θ when d = 6 m, Φ_R = 12°

The channel capacity varies with the antenna direction angle θ in a way similar to the pitch angle Φ_R, fluctuating periodically and rapidly. The simulation results show that the overall channel capacity of the 2 × 2 MIMO system is better than that of the SISO system. Furthermore, the LoS-dominated MIMO system can be made to achieve the maximum channel capacity by adjusting these parameters.

5 Conclusions

Based on the geographical conditions of the South China Sea and its current communication modes, this paper proposes placing antennas on islands and using two orbital satellites to construct a 2 × 2 MIMO LoS channel model. Once the antennas are fixed, they remain aligned, and the surrounding sea area is covered by the signal through the island antennas, providing communication services for passing ships. This communication system can meet the requirements of stability, and in its construction the maximum channel capacity can be realized by adjusting the antenna parameters, such as the antenna spacing d, the pitch angle Φ_R and the antenna direction angle θ.

Acknowledgements. This work is supported by the High Tech. Key Research and Development Project of Hainan Province (ZDYF2018012) and by the National Natural Science Foundation of China (61661018). Hui Li is the corresponding author.

References

1. Gan ZQ, Lu TJ, Li DJ, Wu KC, Zhang DY (2015) Application of Beidou satellite communication in marine meteorological view of the South China Sea. Meteorol Hydrol Mar Instrum 32(1):70–74
2. Zhang DK (2016) Study on water safety communication guarantee system in South China Sea. Pearl River Water Transport 11:63–64
3. Schwarz RT, Knopp A, Ogermann D, Hofmann C, Lankl B (2008) Optimum-capacity MIMO satellite link for fixed and mobile services. In: ITG workshop on smart antennas, pp 209–216
4. Schwarz RT, Knopp A, Lankl B, Ogermann D, Hofmann C (2008) Optimum-capacity MIMO satellite broadcast system: conceptual design for LOS channels. In: 4th international conference on advanced satellite mobile systems, pp 66–71
5. Li X, Liu YJE (2010) A 3-D channel model for distributed MIMO satellite systems. In: IEEE global telecommunications conference
6. Telatar I et al (1999) Capacity of multi-antenna Gaussian channels. Euro Trans Telecommun 10(6):585–596

Design and Implementation of Flight Data Processing Software for Global Flight Tracking System Based on Stored Procedure

Peng Wang(&), Wanwei Wang, Zhe Zhang, Min Chen, and Jun Yang

Tianjin Key Lab for Advanced Signal Processing, Civil Aviation University of China, Tianjin, China
[email protected]

Abstract. The Civil Aviation Administration of China (CAAC) proposed to enhance the global tracking and monitoring capability of China's civil aircraft after the MH370 incident. In order to meet the urgent needs of global tracking for civil aircraft in China, the design scheme of the flight data processing software for a global flight tracking system is presented in this paper. Based on the features and operational process of global flight tracking, the overall framework and interface design of the software are given in detail. In order to meet the performance needs of real-time processing of emergency mass data, a database access technology based on stored procedures is adopted, and the realization method of the flight data processing software is given. The engineering application and test results show that the software meets the functionality and performance requirements for flight data processing in the global flight tracking system, which provides a technical reference for improving the global tracking and monitoring capability of China's civil aircraft.

Keywords: Global flight tracking · Flight data processing · Stored procedure

1 Introduction

On March 8, 2014, Malaysia Airlines flight MH370, carrying 227 passengers and 12 crew members, lost contact after taking off from Kuala Lumpur Airport [1–3]. ICAO then took global flight tracking as a priority task, issued relevant guidance documents and revised relevant standards and norms. In order to improve the tracking and monitoring capability of China's civil aircraft, a leading group on the system construction of civil aviation aircraft tracking and monitoring was set up. As the technical support unit of the leading group, Civil Aviation University of China (CAUC) carried out technical support and related scientific research on aircraft tracking and monitoring. In order to meet the urgent needs for global tracking of civil aircraft in China, CAUC has developed a global flight tracking system. At present, the system has been successfully deployed in the Operational Monitoring Center of CAAC and has preliminarily achieved real-time tracking of China's domestic and overseas civil aviation aircraft.



2 System Construction and Operational Procedure

The global flight tracking system includes data access and preprocessing software, flight data processing software, integrated data display software and integrated data playback software. The architecture of the global flight tracking system is shown in Fig. 1. The flight data processing software is one of the core software components of the global flight tracking system and plays an important role.

Fig. 1. Architecture of the global flight tracking system

Flight data processing software receives flight plan data and surveillance data from data access and preprocessing software, as well as operation commands from integrated data display software and integrated data playback software. The surveillance data is the fusion data of ACARS data (OOOI message and POS message), secondary surveillance radar data and ADS-B data. The flight data processing software processes the received data and commands, and outputs the processing results to the integrated data display software and the integrated data playback software for display. The operational procedure of flight data processing in the global flight tracking system is shown in Fig. 2.

Fig. 2. Operational procedure in the global flight tracking system


The specific flow of flight data processing is as follows:

1. After receiving the flight plan data, the global flight tracking system parses, stores and manages the flight plan.
2. The global flight tracking system is responsible for controlling the life cycle of the flight plan. The different flight plan states constitute the life cycle of the flight plan, and the change of the flight plan state is managed by the system according to the processing logic.
3. The global flight tracking system can filter and query flight plans according to the requirements.
4. The global flight tracking system is responsible for automatically associating flight plans with surveillance information, so that aircraft flight intentions can be acquired intuitively and the global tracking capability is enhanced.
5. The global flight tracking system is responsible for estimating the expected flight path based on the flight plan, flight route information and aircraft performance information, and for checking the consistency between the expected flight path and the actual monitoring information to determine whether the current aircraft is yawing (a simplified sketch of this check is given after this list).
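As a simplified sketch of step 5 above (not the project's actual algorithm), the following code compares a sequence of surveillance positions against the expected flight path by computing, for each report, the great-circle distance to the nearest expected-path waypoint and raising a yaw warning when the deviation exceeds a threshold. The waypoint spacing, threshold value and data layout are all assumptions made for illustration.

```python
import math

EARTH_R_KM = 6371.0

def great_circle_km(p, q):
    """Haversine distance between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_R_KM * math.asin(math.sqrt(a))

def check_yaw(expected_path, reports, threshold_km=20.0):
    """Return (report, deviation_km, is_yawing) for each surveillance report.

    expected_path: list of (lat, lon) waypoints of the expected flight path
    reports:       list of (lat, lon) surveillance positions (fused ACARS/radar/ADS-B)
    """
    results = []
    for rpt in reports:
        deviation = min(great_circle_km(rpt, wp) for wp in expected_path)
        results.append((rpt, deviation, deviation > threshold_km))
    return results

# Illustrative expected path (densely interpolated in a real system) and two reports.
path = [(39.1, 117.3), (38.0, 118.5), (36.8, 119.7), (35.5, 121.0)]
obs = [(38.05, 118.45), (36.4, 120.9)]          # the second report is off the route
for rpt, dev, yaw in check_yaw(path, obs):
    print(rpt, f"deviation = {dev:.1f} km", "YAW WARNING" if yaw else "on track")
```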

3 Software Architecture and Interface Design

The overall software framework consists of a database layer, a basic technical support layer, an operational logic layer and a front-end data display layer. The framework of the software based on stored procedures is shown in Fig. 3.

Fig. 3. Framework of flight data processing software based on stored procedure


1. Database Layer
The database layer is used for data storage and management, including the flight plan database, the expected flight path estimation database, the flight plan and surveillance information correlation database, the yawing information database, the geographic information database, etc.

2. Basic Technical Support Layer
The basic technical support layer provides the models and processing methods needed by the software, including the database operation model based on stored procedures, the parsing method for flight plan data, the change model of the flight plan state, the estimation model of expected path data, the update algorithm of expected path data, and the calculation method for the deviation between expected path data and monitoring data.

3. Operational Logic Layer
The operational logic layer provides the basic functional operations of the flight data processing software of the global flight tracking system. The software defines some basic operations, including parsing flight plans, storing flight plans, filtering and querying flight plans, changing flight plan states, correlating flight plans with surveillance information, and detecting yawing conditions.

4. Front-End Data Display Layer
The flight data processing software of the global flight tracking system outputs flight plan data, flight plan states, the deviation between expected path data and monitoring data, and yawing warning or warning release commands. The execution results can be displayed in the flight data processing software, and the output information can be released in the integrated data display software and the integrated data playback software over the network.

The software interface includes dependent software, software input, software data processing and software output. The software interface is shown in Fig. 4.

1. Dependent Software
The normal operation of the flight data processing software depends on the following software:
• Data access and preprocessing software, developed independently, provides flight plan data and surveillance data for the flight data processing software;
• Integrated data display software and integrated data playback software, developed independently, send operation commands to the flight data processing software and display its output data.

2. Software Input
The input interfaces of the flight data processing software include flight plan data, monitoring data and plan query commands.


Fig. 4. Interface design of flight data processing software

3. Software Data Processing
The application function module responds to the data received by the software to realize the software functions. It includes the data processing module, the data communication module and the data log module. The data processing module processes the data received by the flight data processing software and is divided into a plan data processing module and a monitoring data processing module; the data communication module is used for data interaction among the different modules; the data log module records the execution information of each module.

4. Software Output
The output interfaces of the flight data processing software include flight plan data, flight plan states, the deviation between expected path data and monitoring data, and yawing warning or warning release commands.


4 Software Implementation Based on Stored Procedure

The definition of a stored procedure can be found in [4]. In this paper, a stored procedure layer is added to the database server of the traditional database operation model. A stored procedure is a set of SQL programs that perform specific functions in the database; it is compiled only once and stored in the database on the server side [5]. The object-oriented Visual Studio C++ 2010 and SQL Server 2008 R2 development environment is used to develop the flight data processing software based on stored procedures. According to the requirement analysis, the software is coded and implemented on the basis of the software architecture and function design. After a large number of tests, a stable version that meets the requirements was gradually formed.
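The following sketch contrasts the two database access patterns discussed in this section. It is written in Python with pyodbc purely for illustration (the actual system was built with Visual C++ 2010 and SQL Server 2008 R2), and the server name, table, columns and procedure name (usp_StoreFlightPlan) are invented placeholders: the conventional path sends one parameterized INSERT per record, whereas the stored-procedure path calls a pre-compiled procedure on the server, so the SQL is compiled once and client-server round trips are reduced.

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=tracking-db;"
    "DATABASE=FlightTracking;Trusted_Connection=yes"
)
cur = conn.cursor()

plans = [("CCA1501", "ZBTJ", "ZSSS", "2019-06-01 08:30"),
         ("CSN3102", "ZGGG", "ZBAA", "2019-06-01 09:10")]

# Conventional method: each INSERT statement is parsed/compiled per call on the server.
for p in plans:
    cur.execute(
        "INSERT INTO FlightPlan (callsign, dep, arr, etd) VALUES (?, ?, ?, ?)", p
    )

# Stored-procedure method: the logic lives in a procedure created once on the server
# (hypothetical name usp_StoreFlightPlan); the ODBC CALL escape invokes it directly.
for p in plans:
    cur.execute("{CALL usp_StoreFlightPlan (?, ?, ?, ?)}", p)

conn.commit()
conn.close()
```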

Fig. 5. Software display and engineering application: (a) engineering application of the system; (b) software display.

As shown in Fig. 5a, in the engineering application the flight data processing software runs on a rack server; the software display is shown in Fig. 5b, and the execution results can be displayed by the software.

5 Performance Test Based on Stored Procedure

The test-dependent software includes the data access and preprocessing software, the integrated data display software and the integrated data playback software. Each application program runs on a terminal computer with the Windows 7 operating system. The database runs on a rack server with the Windows Server 2008 R2 operating system. The schematic diagram of the test environment is shown in Fig. 6.

Fig. 6. Software test environment


The same data were processed by the flight data processing software based on the conventional method and on the stored procedure method, respectively. The test data were divided into several groups, each consisting of a different number of data records. The total processing time and the single processing time were recorded for both methods. Each method starts timing when the program begins running after the database connection has been established; after a group of data has been processed, the timing is stopped and the connection with the database is closed. Each group of data was processed 50 times, and the average processing time was taken as the final processing time of that group. A large number of data were tested during the experiment; for convenience of discussion, the following groups of data are used for illustration.

The total time-consumption comparison of the software based on the two methods is shown in Fig. 7, where one curve represents the total time consumed by the conventional method and the other the total time consumed by the stored procedure method. As can be seen from the graph, as the amount of processed data increases, the total time consumed by the conventional method increases rapidly, whereas the growth of the total time consumed by the stored procedure method is relatively slow and its total time consumption is much less than that of the conventional method. When the amount of processed data increases to a certain extent, the growth of the total time consumption of both methods tends to flatten, but the total time consumption of the stored procedure method is only about one thousandth of that of the conventional method.
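The timing protocol described above can be sketched as follows (an illustration only; process_group stands in for either database access method, and the group sizes are placeholders): each group is processed 50 times, timing covers only the processing of the group, and the average over the 50 runs is reported as the final processing time for that group.

```python
import time

RUNS_PER_GROUP = 50

def average_processing_time(process_group, records):
    """Run process_group(records) RUNS_PER_GROUP times and return the mean seconds."""
    total = 0.0
    for _ in range(RUNS_PER_GROUP):
        start = time.perf_counter()        # database connection assumed already open
        process_group(records)
        total += time.perf_counter() - start
    return total / RUNS_PER_GROUP

# Placeholder workload standing in for the conventional or stored-procedure path.
def process_group(records):
    for rec in records:
        _ = hash(rec)                      # stand-in for one database operation

for size in (100, 1000, 10000):
    records = [("plan", i) for i in range(size)]
    avg = average_processing_time(process_group, records)
    print(f"{size:6d} records: total {avg:.6f} s, single {avg / size * 1e6:.2f} us")
```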

Fig. 7. Total time-consuming comparison of software based on two methods

The single time-consumption comparison of the software based on the two methods is shown in Fig. 8, where one curve represents the single processing time of the conventional method and the other the single processing time of the stored procedure method. As can be seen from the graph, as the amount of processed data increases, the single processing time of the conventional method is almost unchanged. In contrast, the single processing time of the stored procedure method decreases exponentially at first; when the amount of data increases to a certain extent, the decreasing trend gradually slows down, and the single processing time remains much smaller than that of the conventional method.


Fig. 8. Single time-consuming comparison of software based on two methods

In the conventional method, the number of SQL statement compilations and the number of communications between the application program and the database increase with the number of processed data records. Unlike the conventional method, the stored procedure is compiled only once and stored in the database on the server; it is a reusable component, and the communications between the application program and the database are confined within the stored procedure. The stored procedure method therefore has a clear advantage over the conventional method in processing speed, and it can process data stably and quickly to meet the performance requirements of the software.

6 Concluding Remarks

According to the characteristics of global aircraft tracking, this paper designs the flight data processing software of the global flight tracking system based on stored procedures, including the overall design and the interface design. In order to meet the performance requirements of the system, the stored procedure technology is adopted and the software is implemented in the Visual Studio C++ 2010 and SQL Server 2008 R2 development environment. The realization method of the software and a running example in engineering application are described, and the performance test of the software is completed. The software can provide a technical reference for improving the global tracking and monitoring capability of China's civil aircraft.

Acknowledgements. This work is supported in part by the National Key Research and Development Program of China (No. 2016YFB0502405), in part by the National Natural Science Foundation of China (No. U1633107), and in part by the National University's Basic Research Foundation of China (No. 3122018D010).


References

1. Gao J, Mu L, Wang GS et al (2016) Drift analysis and prediction of debris from Malaysia Airlines Flight MH370. Chin Sci Bull 61:2409–2418 (in Chinese)
2. Zhao Y, Lu Z, Zhang J et al (2018) Changes in the relationship between elements of geo-cyberspace: a case study of MH370 track tracking/area searching. World Reg Stud 27(5):126–135
3. Ye HJ, Jia SY et al (2019) System design and key technology verification of wide-area aviation safety surveillance system. Command Inf Syst Technol 10(1):19–25
4. Elmasri R, Navathe SB (2006) Fundamentals of database systems. Addison Wesley, Boston, pp 568–569
5. Wang WF, Huang HY et al (2008) Research on high performance database application model based on stored procedure. Comput Eng Des 29(10):2573–2575

Face Recognition Method Based on Convolutional Neural Network

Yunhao Liu and Jie Yang(&)

Electronic and Communication Engineering, Heilongjiang University, No. 74 Xuefu Road, Harbin, People's Republic of China
[email protected]

Abstract. In this paper, a face recognition method based on deep learning is studied and implemented. By adjusting the depth and structure of the typical convolutional neural network model ResNet, a new network model structure is designed, and the LFW face detection benchmark database is used for confirmatory experiments. The experimental results show that the overall accuracy and model size of the system achieve a good performance.

Keywords: Deep learning · Convolutional neural network · ResNet · LFW

1 Introduction

Face detection and recognition technology has always been an important research direction in the field of computer vision. With the rapid development of related technologies in this field, especially the extensive application of deep learning technology, face detection and recognition have gained more and more attention [1]. Neural networks are the predecessors of deep learning, using computers to simulate human neuron structures to accomplish a variety of nonlinear intelligent tasks. Deep learning uses deeper and more detailed neural networks, and it is the most promising branch of artificial intelligence. At present, the deep neural network is a feature-based method for face recognition. Its advantage is that feature extraction is performed by layer-by-layer convolutional dimension reduction and multi-layer nonlinear mapping, so that the network can work without special pre-processing: from the training samples, it automatically learns a feature extractor and classifier suited to the recognition task. This method reduces the requirements on the training samples, and the more network layers there are, the more comprehensive the learned features become [2].

2 Convolutional Neural Network

The convolutional neural network (CNN) is a classic deep neural network model widely used in computer vision, and it plays an important role in the history of deep learning. As early as 1998, Yann LeCun proposed LeNet-5 for the recognition of handwritten digits [3]. Figure 1 shows the basic modules of the convolutional neural network. As a basis for later deep network models, this method has been widely used in handwriting recognition problems. In recent years, although convolutional neural networks have made breakthroughs in many fields and researchers have proposed some new CNN structural models, today's deep learning frameworks are still largely simplified improvements based on LeNet.

Fig. 1. LeNet-5 network structure

2.1 Network Model Based on ResNet Design

The more complex the network structure of a deep neural network is, the more precise and abstract the extracted features are and the higher the accuracy is, but the higher the time and space cost of the corresponding computation is as well [4]. In practical applications, however, face detection requires high speed, and considering the limited hardware resources, a lightweight network model is needed that can achieve a faster detection speed without losing too much detection accuracy. In the experiments, three different ResNet network structures are used, with 20 layers, 34 layers and 64 layers. Considering the actual needs, an 18-layer ResNet network structure is redesigned; its structure is shown in Fig. 2.

Fig. 2. ResNet-18 overall structure (blocks: data → conv1_x → res1-3 → conv2_x → res2-3 → res2-5 → conv3_x → res3-3 → res3-5 → res3-7 → conv4_x → res4-3 → fc → prob)


As can be seen from the figure, there are 4 convolution blocks in the network structure, and the numbers of convolution layers in the convolution blocks are 3, 5, 7 and 3, respectively; the convolution kernels of every module are all 3 × 3 in size. Each convolution block contains a different number of residual modules: as shown in the figure, the numbers of residual modules in the convolution blocks are 1, 2, 3 and 1, and each residual module is connected after its corresponding convolution block. Together they form an 18-layer ResNet network structure. In the network design, the pooling layer in each convolution block is removed, and downsampling is instead performed by convolution layers with stride 2.
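A PyTorch sketch of a network matching this description is given below (the paper does not publish its code, so the channel widths, the use of batch normalization, the global average pooling before the classifier, the output dimension and the input resolution are all assumptions). Each of the four convolution blocks starts with a 3 × 3 stride-2 convolution that replaces pooling, followed by 1, 2, 3 and 1 residual modules of two 3 × 3 convolutions each, giving 3 + 5 + 7 + 3 = 18 convolution layers before the fully connected layer.

```python
import torch
import torch.nn as nn

class ResidualModule(nn.Module):
    """Two 3x3 conv layers with an identity shortcut (channel count unchanged)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))

class ResNet18Face(nn.Module):
    """4 conv blocks with 1, 2, 3, 1 residual modules; stride-2 convs replace pooling."""
    def __init__(self, embedding_dim=512, widths=(64, 128, 256, 512)):
        super().__init__()
        blocks, in_ch = [], 3
        for width, n_res in zip(widths, (1, 2, 3, 1)):
            layers = [nn.Conv2d(in_ch, width, 3, stride=2, padding=1, bias=False),
                      nn.BatchNorm2d(width),
                      nn.ReLU(inplace=True)]
            layers += [ResidualModule(width) for _ in range(n_res)]
            blocks.append(nn.Sequential(*layers))
            in_ch = width
        self.features = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool2d(1)       # assumed global average pooling
        self.fc = nn.Linear(widths[-1], embedding_dim)

    def forward(self, x):
        x = self.features(x)
        x = self.pool(x).flatten(1)
        return self.fc(x)

# Example: a batch of two 112x112 RGB face crops (the input size is an assumption).
net = ResNet18Face()
out = net(torch.randn(2, 3, 112, 112))
print(out.shape)   # torch.Size([2, 512])
```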

2.2 Experimental Results and Analysis

Experimental results of ResNet network models at different depths. Three different ResNet network structures were used in the experiment, with 20-layer, 34-layer and 64-layer structures. The experimental results are shown in Table 1.

Table 1. Different network structure performance

Basic network | Training time (h) | LFW highest accuracy | Model size (M)
ResNet-20     | 7                 | 0.984                | 113
ResNet-34     | 8.5               | 0.986                | 165
ResNet-64     | 14.2              | 0.988                | 235

From the comparison results shown in the table above, we can see the following. The deeper the ResNet network model, the higher the test accuracy. The network depth is positively correlated with the training cost: the training time of the 64-layer ResNet network is about twice that of the 20-layer network, and its model size and memory occupancy are also larger. The accuracy of the 34-layer ResNet network is slightly higher than that of the 20-layer ResNet network. Although they are not as accurate as the 64-layer ResNet network, the model size, memory occupancy and training time of the shallower 20-layer and 34-layer networks are all in an appropriate range, and the experience in actual use of the system is better.

Analysis of experimental results of the network model based on the ResNet design. This paper redesigned an 18-layer ResNet network structure. The model ResNet-18 obtained by training this network is compared with the models ResNet-20 and ResNet-34 obtained by training the 20-layer and 34-layer network structures. The specific accuracy changes are shown in Table 2.

Table 2. Different levels of ResNet accuracy change

ResNet    | Training steps 22,000 | 24,000 | 26,000 | 28,000 | 30,000
ResNet-18 | 0.978                 | 0.986  | 0.985  | 0.984  | 0.983
ResNet-20 | 0.981                 | 0.985  | 0.983  | 0.985  | 0.986
ResNet-34 | 0.983                 | 0.989  | 0.988  | 0.986  | 0.984

It can be seen from the above test accuracies that the 18-layer ResNet network obtained by adjusting the original ResNet does not lose much test accuracy; on the contrary, its highest test accuracy on LFW is slightly higher than that of the deeper 20-layer ResNet network. Table 3 compares the model parameters of the three different hierarchical structures.

Table 3. Model performance of different hierarchies

ResNet network level | Training time (h) | LFW highest accuracy | Model size (M)
ResNet-18            | 5.6               | 0.987                | 123
ResNet-20            | 6.3               | 0.986                | 136
ResNet-34            | 7.6               | 0.989                | 146

As can be seen from the above table, the highest accuracy of the 18-layer ResNet network structure on the LFW test set lies between those of ResNet-20 and ResNet-34, and the training time required for the 18-layer structure is the least among the three, taking only about 5.6 h. This model achieves the original intention of the network design in this paper: it reduces the parameter count and model size while maintaining the performance of the model, and it accelerates the loading time of the model.

3 Summary

This paper designs a face recognition network based on an 18-layer ResNet structure for the LFW face database. The redesigned network structure has the advantages of fewer parameters, a smaller model and a shorter model loading time; the experience in the actual system is relatively better, and it also satisfies the actual needs. The overall accuracy and model size of the 18-layer ResNet system achieve a good performance, providing a new way of thinking for face recognition.


References

1. Zhang K, Zhang Z, Li Z et al (2016) Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process Lett 23(10):1499–1503
2. Roli F, Marcialis GL (2006) Semi-supervised PCA-based face recognition using self-training. In: Structural, syntactic, and statistical pattern recognition. Springer, Berlin Heidelberg, pp 560–568
3. Lecun Y, Bottou L, Bengio Y et al (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324
4. Chen YD, Wang LM (2016) Face recognition method based on convolutional neural network. J Northeast Normal Univ (Nat Sci Ed) 48(02):70–76 (in Chinese)

An On-Line ASAP Scheduling Method for Time-Triggered Messages

Guevara Ania(&), Qiao Li, and Ruowen Yan

School of Electronic and Information Engineering, Beihang University, 37 Xueyuan Rd., Haidian District, Beijing 10191, China

Abstract. The Time-Triggered Ethernet (TTEthernet) has been under consideration for adoption in the aerospace and spacecraft avionics domain because of its time-deterministic allocation of Time-Triggered messages, belonging to strictly periodic virtual links (VLs), into scheduled time windows according to off-line generated timetables. An on-line time window allocation method for Time-Triggered messages using a heuristic called As Soon As Possible (ASAP) is presented in this paper. Under a topological configuration with multi-stage connected TTEthernet switches and full-duplex accessing end systems, an optimal path for each on-line scheduled VL is first selected by searching an ordered set of physical links, which makes the allocation easier. If the smallest sum of occupied time slots and the smallest number of hops along the candidate paths cannot be satisfied simultaneously, a criterion is given to make a reasonable choice. Using shifted and masked time scales of the selected physical links, some unoccupied time slots are reserved for the on-line scheduled VL, which obtains these slots unless other VLs make conflicting selections. Cases implemented in MATLAB were developed and studied to verify the method.

Keywords: Onboard networks · Time-Triggered Ethernet · On-line scheduling · As-soon-as-possible algorithm

1 Introduction

Nowadays, much work has been based on different protocols to improve the timing and the communication between the various subsystems of an aircraft. All of this requires a protocol that determines how the system works; among the wide spectrum of available protocols, one of them is Time-Triggered Ethernet [6, 8]. Performing a mixed-criticality extension with a time-triggered protocol transplanted into the Ethernet standard IEEE 802.3 makes it possible to transmit standard and deterministic data simultaneously within the same network. To support the latter, precise synchronization, detection, startup and restart clock algorithms were designed to improve scalable fault tolerance and to offer self-stabilization mechanisms, as indicated in [1]. Besides, some systems have featured time-triggered protocols successfully in practice, such as SAE AS6003 TTP-based buses [7]; the SAE AS6802 TTEthernet standard, also mentioned in [10], has been employed for the NASA Orion Multipurpose Crew Vehicle [5], which successfully completed the Exploration Flight


Test-1 (ETF-1). Research in [3] showed that mixed-criticality integration based on Ethernet and global synchronization can improve the scalability and cost-effectiveness of embedded networks. Time-triggered (TT) messages are transmitted periodically, providing deterministic communication channels. The sending times of TT frames are usually defined off-line in a conflict-free schedule, and to guarantee correct sending times a common time base must be agreed on so that all nodes involved in time-triggered communication are synchronized [2]. For the on-line case, on the other hand, a strategy must be designed that offers the best possible option in the shortest time with the best result. Scheduling a TT virtual link means reserving a time window, called a transmission window, at a scheduled point in time for each frame of that virtual link on every physical link along its path. The literature [4] pointed out the similarity between scheduling TT frames on a physical link and scheduling independent strictly periodic jobs in a non-preemptive system, both of which try to find a set of conflict-free offset values for the scheduled entities. In this paper, we propose a scheduling algorithm based on the shortest path. Section 2 describes the scheduling model and how the shortest path of a TT flow from a source end system (ES) to a destination ES is determined, taking both the virtual link and the physical links into account. Section 3 introduces the path selection for easier scheduling and the criterion we use, and discusses how TT flows are scheduled according to their own properties. Section 4 describes how time slots are assigned to a virtual link. Section 5 describes a series of protocol assumptions for handling request conflicts once the time-slot allocation has been given. Finally, we conclude in Sect. 6.

2 Scheduling Models Steiner's work [9] presented a method to transform the scheduling problem of TT virtual links into a set of SAT Modulo Theories (SMT) constraints that characterize the network and memory resources available at each switch node. The author schedules all TT virtual links present in the network, but the computational time for timetable synthesis grows exponentially [2]. In addition, Craciunas and Oliver [3] proposed an extension of Steiner's work in which TT tasks and TT virtual links are scheduled jointly; an MIP formulation allows the solver to find a combined schedule while optimizing multiple parameters such as synthesis time. The scheduling timetable of a TTEthernet consists of many time scales, one for each physical link. On each time scale, time windows are marked for TT or non-TT flows (see Fig. 1). If a class of on-line scheduled TT flows is to be considered, their available time resources must also be marked. For simplicity, a fixed time-slot length is adopted as the unit. As shown in Fig. 2, the spare time slots that can be used for on-line scheduled TT messages are marked with "0"; the others, marked with "1", include the occupation of Protocol Control Frames (PCFs), off-line scheduled TT flows and the "porosity" reserved for low-priority RC or BE traffic.
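As an illustration of this marking scheme, the sketch below (Python) represents each physical link's time scale as a 0/1 list over one hyper-period; the slot counts, link names and occupancy patterns are invented for illustration and are not taken from the paper.

```python
# Hypothetical illustration: a time scale is a list of slot flags over one
# hyper-period, 0 = free for on-line TT scheduling, 1 = already occupied
# (PCFs, off-line TT flows, or "porosity" reserved for RC/BE traffic).

HYPER_PERIOD_SLOTS = 16  # assumed number of 8-us slots per hyper-period

def make_time_scale(occupied_slots):
    """Build a 0/1 time scale with the given occupied slot indices."""
    scale = [0] * HYPER_PERIOD_SLOTS
    for s in occupied_slots:
        scale[s % HYPER_PERIOD_SLOTS] = 1
    return scale

# One time scale per physical link, keyed by an (arbitrary) link name.
time_scales = {
    "ES5-Sw1": make_time_scale([0, 4, 8, 12]),   # e.g. an off-line TT flow every 4 slots
    "Sw1-Sw3": make_time_scale([1, 2, 9, 10]),
    "Sw3-ES12": make_time_scale([3]),
}

# The link weight used later for path selection: the number of ones.
weights = {link: sum(scale) for link, scale in time_scales.items()}
print(weights)   # {'ES5-Sw1': 4, 'Sw1-Sw3': 4, 'Sw3-ES12': 1}
```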


Fig. 1. Time scales concerned to physical links

Fig. 2. Divide the time-scale by time-slot unit

It is notable that although the original TTEthernet does not need to use time slots, a parameter called "raster", which has the same purpose of controlling the timetable granularity, is used in the TTE-Plan software. The time-slot length should be chosen properly: too long a slot causes fragmented space, while too short a slot causes inconvenient assignment and extra storage for the timetable. We choose a length of eight microseconds in this paper. The scheduling model in this paper is based on the shortest-path principle: our aim is to route messages along the path with the least number of ones on its virtual link. Duplex physical communication links and network nodes, including end systems (ES) and switches (Sw), define the physical topology. Figure 3 shows an example of our TTEthernet topology with sixteen nodes (twelve end systems and four switches). The source is ES 5 and the destination is ES 12; as shown in Fig. 3, five possible transmission paths exist for a TT flow sent from ES 5 to ES 12:


Fig. 3. Example of physical topology connected in multi-hop

(1) ES5 - Sw1 - Sw3 - ES12
(2) ES5 - Sw1 - Sw2 - Sw3 - ES12
(3) ES5 - Sw1 - Sw4 - Sw3 - ES12
(4) ES5 - Sw1 - Sw4 - Sw2 - Sw3 - ES12
(5) ES5 - Sw1 - Sw2 - Sw4 - Sw3 - ES12
Path (1) has the least number of hops, only two; paths (2) and (3) have three hops; paths (4) and (5) need four hops to complete the transmission. In this case there are five candidate paths for a virtual link from ES 5 to ES 12.
Definition 1 (Hyper-period) The hyper-period is equal to the least common multiple of the period values of all VLs concerned by on-line scheduling.
Remark Because the scheduling timetable and each time scale are cyclic, a strictly periodic flow can be arranged cyclically within a hyper-period, with its period value as a multiple factor.
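Definition 1 amounts to a least-common-multiple computation over the VL periods expressed in time-slot units; a minimal sketch follows, with invented period values.

```python
from math import gcd
from functools import reduce

def hyper_period(periods):
    """Least common multiple of all VL periods (in time-slot units)."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

# Assumed example: three on-line VLs with periods of 4, 6 and 10 slots.
print(hyper_period([4, 6, 10]))  # 60 -> the VLs repeat 15, 10 and 6 times per hyper-period
```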

3 Path Selection for Easier Scheduling Under a topology with multi-stage connected TTEthernet switches and full-duplex end systems, the optimal path for each on-line scheduled virtual link is selected as an ordered set of physical links. The time-scale selection prefers physical links with fewer occupied time slots, and the end-to-end path choice prefers fewer hops across mid-way switches. If we assign the sum of ones within each physical link's time scale as its weight, a shortest-path algorithm can then be applied to find the path with the least total number of ones.


The hyper-period equals 60 ms and each unit contains eight time slots; there are sixteen nodes, including twelve ESes and four switches. For the virtual link we use a random matrix of zeros and ones that indicates how many time slots are available on each link; availability is represented by the number of ones a link shows, so, knowing the total number of ones on each link, the route with the most ones is taken as the best route. The proposed Algorithm 1 for the virtual link is implemented in MATLAB. To find the shortest path from the source to the destination we employ Dijkstra's algorithm, because it has lower complexity than Bellman-Ford and all weight values are positive; compared with breadth-first search (BFS), which effectively assumes unit edge weights, Dijkstra's algorithm handles the integer weights of the edges. The route that collects the maximum number of ones, i.e., the most available slots, is taken as the best route for sending the TT messages; the result is shown in Fig. 4.
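The path search itself can be sketched as a standard Dijkstra run over per-link weights. The snippet below follows the weight convention stated at the start of Sect. 3 (weight = number of occupied slots on a physical link, smaller total is better) rather than the availability matrix of the MATLAB experiment; the topology and weight values are invented for illustration.

```python
import heapq

def dijkstra(graph, source, dest):
    """Shortest path by total link weight; graph[u] = {v: weight_of_link_u_v}."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dest:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path by walking predecessors back from the destination.
    path, node = [dest], dest
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dest]

# Assumed weights: number of occupied slots ("ones") on each physical link.
graph = {
    "ES5": {"Sw1": 2},
    "Sw1": {"Sw2": 5, "Sw3": 9, "Sw4": 3},
    "Sw2": {"Sw3": 1, "Sw4": 2},
    "Sw4": {"Sw2": 2, "Sw3": 4},
    "Sw3": {"ES12": 1},
}
print(dijkstra(graph, "ES5", "ES12"))  # (['ES5', 'Sw1', 'Sw2', 'Sw3', 'ES12'], 9)
```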

Fig. 4. Short path result for virtual link

On the other hand, the physical-link result is obtained in the same way as in the first case; here, however, the premise is to know how many hops the message needs from the source to the destination. Once the number of hops of each candidate route is known, the best route is the one with the fewest hops, i.e., the shortest path. This is illustrated in Fig. 5, which indicates the shortest route in terms of the number of hops. The algorithm has the same structure as Algorithm 1 but uses breadth-first search as the shortest-path algorithm, since it assumes that all edge weights are equal.


Fig. 5. Short path result for physical link

Usually both outcomes coincide, in terms of both the number of ones on each virtual link and the number of hops the message has to make from the source to the destination, as can be seen in Figs. 4 and 5. Nevertheless, because the matrix used to count the ones on the virtual link is random, the two results sometimes differ, as shown in Figs. 6 and 7. In that case we consult the weight of each link, i.e., the total number of available time slots on it, and choose the route with the larger total, because it grants greater capacity to send a message even if the message needs more hops. This is why the path with the smallest number of occupied slots is chosen.

Fig. 6. Short path result for virtual link different from physical link


Fig. 7. Short path result for physical link different from virtual link

The case with two different sub-optimal merits demonstrates our strategy for assigning a virtual link to a set of physical links. How do we decide when the smallest sum of occupied slots and the smallest number of hops conflict? We observe that, in some sense, the easiness of scheduling actually depends on the time scales after the scheduled flow has joined the path, not before. This leads to the following criterion.
Definition 2 (The weight of a strictly periodic flow) For a flow with period p considered for on-line scheduling within a hyper-period h, its weight is defined by Eq. (1), where l is the length of its packet in time-slot units:

w = \frac{h}{p}\, l \qquad (1)

Remark The weight of a strictly periodic flow gives the number of time slots it is expected to occupy within one hyper-period.
Definition 3 (The number of hops) The number of hops is the number of intermediate nodes (generally switches) along the path, and is denoted H.
Definition 4 (The current path weight, CPW) The current path weight is the sum of the weights of the physical-link edges from the source to the destination along a given path P, denoted W_cur(P), as given by Eq. (2):

W_{\mathrm{cur}}(P) = \sum_{i=\mathrm{src}}^{\mathrm{dest}} w_i \qquad (2)

where i ranges over the index set {src_node, middle_way_node#1, middle_way_node#2, …, dest_node} and w_i is the weight of physical link i.


Definition 5 (The expected path weight, EPW) If the path with the smallest sum of occupied slots (denoted P_sum, with H_sum hops) and the path with the smallest number of hops (denoted P_hop, with H_hop hops) are not the same, the path with the smaller expected path weight W_exp is the optimal choice. The EPW of P_sum is W_cur(P_sum) + H_sum · w and the EPW of P_hop is W_cur(P_hop) + H_hop · w, where in both expressions w is the weight of the strictly periodic flow to be scheduled on-line.
Remark Since H_sum is no less than H_hop, if the EPW of P_sum is nevertheless smaller than that of P_hop, the former clearly offers greater easiness for scheduling.
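A minimal sketch of the tie-breaking rule in Definition 5; the hyper-period, period, frame length, path weights and hop counts below are invented for illustration, and flow_weight follows Eq. (1).

```python
def flow_weight(hyper_period, period, length_slots):
    """Eq. (1): expected number of slots the flow occupies per hyper-period."""
    return hyper_period // period * length_slots

def expected_path_weight(current_path_weight, hops, w):
    """Definition 5: CPW plus the extra load the new flow adds on each hop."""
    return current_path_weight + hops * w

# Assumed example: hyper-period of 60 slots, flow period 6 slots, frame length 1 slot.
w = flow_weight(60, 6, 1)                                             # 10 expected slots

# P_sum: least occupied slots but more hops; P_hop: fewest hops but more load.
epw_sum = expected_path_weight(current_path_weight=12, hops=3, w=w)   # 42
epw_hop = expected_path_weight(current_path_weight=20, hops=2, w=w)   # 40
print("P_hop" if epw_hop < epw_sum else "P_sum")                      # P_hop wins here
```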

4 Time Slots Allocation To allocate a feasible set of time slots to a VL for which a path has been chosen, the first step is to compensate for the technical latency and store-and-forward latency along the switches it crosses. Accounting for the relationship between transmission windows along the path, we follow [2] and "integrate" each time scale by shifting its phase ahead by the corresponding latency, expressed in time-slot units. For example, for physical links i, j and k and switches a and b with latencies d_a and d_b respectively, as shown in Fig. 8, the time scales of link j and link k are shifted ahead and wrapped by d_a and d_a + d_b respectively. In the second step, each shifted time scale is masked into an updated one. The logical "AND"-style operator is adopted so that the surviving "0" time slots in the updated scale are the final time resources available to the on-line scheduled virtual link (see Fig. 9).

Fig. 8. Original time scales belonging to physical links


Fig. 9. Shifted/wrapped time scales and the updated one

In the next step, we employ the ASAP principle to choose a set of time slots for the given on-line scheduled strictly periodic virtual link. Note that the instances of this virtual link form a comb-like occupation pattern starting from the current decision point in time; we only need to find the first-fit position for this comb by shifting its phase forward with wrapping (see Fig. 10).

Fig. 10. Find a feasible allocation for the comb-like instances
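The shift-and-mask step and the ASAP first-fit search can be sketched as simple list operations over the 0/1 time scales. Everything below (slot counts, latencies, occupancy patterns) is invented for illustration; the wrap-around shift plays the role of the "integration" described above, and the comb search assumes the flow's period divides the hyper-period, as in Definition 1.

```python
def shift_ahead(scale, latency_slots):
    """Wrap a link's time scale ahead by its accumulated latency (cf. Fig. 8)."""
    n = len(scale)
    k = latency_slots % n
    return scale[k:] + scale[:k]

def mask(scales):
    """Merge the shifted scales: a slot stays free (0) only if free on every link."""
    return [1 if any(col) else 0 for col in zip(*scales)]

def asap_first_fit(merged, period, length):
    """Find the earliest phase whose comb-like instances hit only free slots."""
    n = len(merged)
    for phase in range(period):
        slots = [(phase + k * period + j) % n
                 for k in range(n // period) for j in range(length)]
        if all(merged[s] == 0 for s in slots):
            return phase, slots
    return None  # no feasible allocation on this path

# Assumed 12-slot hyper-period, three links along the path, accumulated latencies.
scales = [
    [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
]
latencies = [0, 1, 2]   # slots of technical + store-and-forward delay so far
merged = mask([shift_ahead(s, d) for s, d in zip(scales, latencies)])
print(asap_first_fit(merged, period=4, length=1))   # (2, [2, 6, 10])
```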

5 Request and Acknowledge Protocols for Conflict Freedom In order to avoid collisions between messages belonging to two virtual links, a series of assumptions is needed that allows the topology to operate correctly, so that no data loss or frame-traversal delay occurs due to a collision. The assumptions are described below:


As stated in Sect. 3, when a collision occurs between two virtual links coming from different switches, the best way to resolve it is to choose the link with more ones; when they come from the same switch, the following assumptions are needed:
• A request signal is created to tell the downstream nodes that a virtual link wants to occupy a set of time slots from the current point in time.
• Refuse signals can be created by the switch agency.
– If the request is not refused, a reserve operation is made; when a switch holding a reservation receives an acknowledgment, it changes the state from reserved to occupied.
– Acknowledgment signals are created by the destination ES and propagated backward from the destination to the source (inverting the normal path). The switch then broadcasts the changed physical-link time-slot vectors by BE messages to all concerned ESes.

6 Conclusions TTEthernet is an extension of switched Ethernet (IEEE 802.3) that allows mixed-critical traffic to coexist in the same communication system. An on-line ASAP scheduling method for Time-Triggered messages was presented. The scheduling model builds a time scale for each physical link and divides it into time-slot units. The model is based on the shortest-path principle: the path of a message is chosen to minimize the number of ones on the virtual link. Path selection for easier scheduling assumes multi-stage connected TTEthernet switches and full-duplex end systems, and the time-scale selection prefers physical links with fewer occupied time slots; if the virtual-link and physical-link results differ, the weight of each link is consulted. Time-slot allocation compensates for the technical and store-and-forward latencies, masks each shifted time scale into an updated one using the logical "AND"-style operator, and then applies the ASAP principle. Finally, to avoid data loss or frame-traversal delay caused by collisions, a series of assumptions was described that allows the topology to operate correctly.

References
1. AS6802 S (2011) Time-triggered Ethernet. Retrieved from http://www.sae.org/technical/standards/AS6802
2. Coelho R (2017) Buffer analysis and message scheduling for real-time networks. PhD thesis, Technische Universität Kaiserslautern
3. Craciunas SRO (2016) Combined task- and network-level scheduling for distributed time-triggered systems. J Real-Time Syst 52(2):161–200
4. Goossens J (2003) Scheduling of offset free systems. J Time-Critical Comput Syst 54(24):239–258
5. Honeywell (2018) Orion exploration flight test-1. Retrieved from https://www.nasa.gov/pdf/663703main_flighttest1_fs_051812.pdf


6. Kopetz H (2008) The rationale for time-triggered Ethernet. In: 2008 Real-Time Systems Symposium. IEEE, Barcelona, Spain
7. Kopetz HGG (1993) A time-triggered protocol for fault-tolerant real-time systems. In: FTCS-23, the twenty-third international symposium on fault-tolerant computing. Toulouse, France
8. NASA (2018) Application specific integrated circuits based on TTEthernet ready for first Orion test flight. Retrieved from https://aerospace.honeywell.com/en/press-release-listing/2014/may/application-specific-integrated-circuits-based-on-ttethernet-ready-for-first-orion-test-flight
9. Steiner W (2010) An evaluation of SMT-based schedule synthesis for time-triggered multi-hop networks. In: 31st IEEE Real-Time Systems Symposium. IEEE, San Diego, CA, USA
10. Steiner WBG (2011) TTEthernet: time-triggered Ethernet. TTTech academic publications. CRC Press. Retrieved from https://www.tttech.com/company/academic-publications/
11. Zhong Zheng FH (2016) The research of scheduling algorithm for time-triggered Ethernet based on path-hop. In: 35th Digital Avionics Systems Conference (DASC). IEEE/AIAA

Soil pH Value Prediction Using UWB Radar Echoes Based on XGBoost
Tiantian Wang, Chenghao Yang, and Jing Liang
University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, Chengdu 611731, China
wtt [email protected]

Abstract. This paper proposes an algorithm to predict soil pH value from ultra-wideband (UWB) radar echoes. In contrast to existing work that classifies soil pH values from echoes, this paper predicts soil pH values with a machine learning (ML) method, extreme gradient boosting (XGBoost), for the first time as far as we know. In the experiment we collected UWB radar echoes for 7 types of soil with different pH values. The echoes were split into a training set and a test set, and the predictions were compared with the actual pH values via the mean squared error (MSE). The analysis shows that this method achieves a very low MSE of 3.6 × 10−7.
Keywords: Soil pH value prediction · UWB radar echoes · XGBoost

1 Introduction

The extraction of soil characteristics plays an important role in intelligent agriculture. Effective classification and prediction of soil characteristics can help farmers monitor soil conditions better, and automated, real-time soil monitoring is desired by farmers and researchers alike. Soil characteristics include the volumetric water content (VWC), pH value, dielectric constant, etc. Soil pH is an important factor limiting the yield and quality of crops, since most crops do not tolerate soil that is too acidic or too alkaline; it is therefore important to investigate soil pH. Because of the relationships among soil characteristics, some algorithms developed for VWC can be transferred to other parameters such as soil pH. In recent years research on soil VWC, such as soil moisture retrieval and soil VWC classification, has become increasingly mature [1–4]. Although migrating these algorithms serves different soil characteristics, the way they solve the problem is similar and belongs to the category of classification problems.


There are several examples of ML methods applied to the classification of soil characteristics. In [5, 6], the authors applied fuzzy logic to extract characteristics of soil echoes, used the root mean square error (RMSE) between the characteristics of the test data and those of the template data to evaluate the results, and obtained the final classification; the correct recognition rate (CRR) reached only 50–70%. In [7], features are extracted with fuzzy logic and then classified by machine learning; this improves the CRR to 80–95%, but a feature database is necessary. In [8], a time–frequency image is obtained by time–frequency analysis of the UWB radar echoes and processed by a deep convolutional neural network (CNN) to obtain the classification; this method extracts features with the CNN instead of manually, but it is only suitable for image data whose values lie in [0, 255]. All of the above methods need to extract characteristics of the soil echoes and build a feature database according to the classification categories. To avoid building a feature database and to convert classification into prediction, we use a machine learning method, XGBoost [9–12], to predict the pH value of soil echoes directly. The results of a classification problem are discrete (e.g., Type 1 … K), while those of a prediction problem are continuous (e.g., pH = 4.5). The main contributions are summarized as follows: (1) A sample database including 7 types of soil pH values is established, and machine learning is applied to the field of soil characteristic extraction. (2) A soil pH prediction system based on XGBoost is proposed and used on soil UWB radar echoes to predict soil pH values. The rest of this paper is arranged as follows. Section 2 presents data preprocessing. Section 3 discusses soil pH value prediction with the XGBoost algorithm, including details of XGBoost and analysis of the simulation results. Finally we conclude in Sect. 4.

2 Data Preprocessing

We collected UWB radar echoes for 7 soil pH values (pH = 3.50, 6.86, 7.67, 8.27, 8.99, 10.39, 12.26) using a P440 UWB radar and a pH3000 meter, with 800 groups of echoes for each pH value; the discrete time length of each echo is 480 ns. The efficient echo length is only 300 ns, from 99 to 398 ns, after splitting the original echo as Fig. 1a shows. We analysed the mean, standard deviation, maximum and minimum of each echo, shown in Fig. 1b, and found that the amplitude values of the echoes are large and their distribution intervals are inconsistent, so the echoes are normalized to improve the training speed and robustness of the algorithm. The data are normalized by Eqs. (1) and (2), where X is one group of soil echoes and x(t) is one point of an echo. The normalization keeps the echoes' axes consistent and moves the time series approximately into the interval [0, 1].

x(t) = \frac{x(t) - \mathrm{mean}(X)}{\mathrm{std}(X)} \qquad (1)

x(t) = \frac{x(t) - \min(X)}{\max(X) - \min(X)} \qquad (2)
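A minimal NumPy sketch of the two normalization steps in Eqs. (1) and (2), applied to one group of echoes; the array contents and shape below are random placeholders standing in for the measured data.

```python
import numpy as np

def normalize_group(X):
    """Eq. (1) then Eq. (2): z-score per group, then rescale into [0, 1]."""
    X = (X - X.mean()) / X.std()                 # Eq. (1)
    return (X - X.min()) / (X.max() - X.min())   # Eq. (2)

rng = np.random.default_rng(0)
echoes = rng.normal(scale=1e5, size=(800, 300))  # 800 echoes x 300 samples (assumed shape)
normalized = normalize_group(echoes)
print(normalized.min(), normalized.max())        # 0.0 1.0
```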

Fig. 1. a Original echo (pH = 3.50); b Analysis of the 7 types of soil echoes with different pH values

3 Soil pH Value Prediction Using XGBoost Algorithm
3.1 XGBoost in Soil pH Value Prediction

We take soil echoes as the input data of the XGBoost algorithm, which is the core of the entire system shown in Fig. 2. XGBoost adopts the boosting tree algorithm to obtain the final prediction function, from which we get the predicted soil pH values. It can be described by Eqs. (3) and (4), where N is the number of UWB radar echoes, M is the number of trees, i ∈ [1, 2, …, N], T is a tree, f_M(x_i) is the final prediction function, ŷ_i stands for the predicted pH value, and m and m − 1 index the m-th and (m−1)-th trees:

f_M(x_i) = \sum_{m=1}^{M} T(x_i; \theta_m) \qquad (3)

\hat{y}_i = f_m(x_i) = f_{m-1}(x_i) + T(x_i; \theta_m) \qquad (4)

The trees T_1, …, T_M are weak learners as well as prediction functions, but their individual learning ability is weak: the result of each T is a soil pH value, yet it is far from the true value. To enhance the prediction ability of the model, Eq. (5) is used to tune the model parameter θ_m during the training of the m-th tree:

\theta_m = \arg\min_{\theta_m} \sum_{i=1}^{N} L(y_i, \hat{y}_i) + \sum_{m=1}^{M} \Omega(f_m) \qquad (5)


\Omega(f_m) = \gamma T + \frac{1}{2}\lambda \lVert w \rVert^2 \qquad (6)

The objective function consists of the regularization term and the loss between the predicted and true pH values, where y_i is the true soil pH value, L(y_i, ŷ_i) is the loss function, T is the number of leaves of the tree, and w is the leaf score; γ and λ are the coefficients of Eq. (6), used to avoid over-fitting. XGBoost adds the regularization term to the objective function to control the complexity of the model. The regularization term contains the number of leaf nodes in the tree and the L2 norm of the output score, i.e., the leaf-node scores.

Fig. 2. The flow chart of the entire system

3.2 Algorithm Simulation and Results Analysis

The algorithm consists of three steps. First, the training data are predicted and the RMSE between the predictions and the true pH values is computed; then the model parameters are adjusted; finally, the adjusted model processes the test data to obtain the final prediction results. During training, the RMSE of Eq. (7) is used to evaluate the system, where N is the sample size, y_i is the actual pH value and f(x_i) is the predicted pH value; the evaluation index of the prediction result is the MSE of Eq. (8):

L = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[y_i - f(x_i)\right]^2} \qquad (7)

L = \frac{1}{N}\sum_{i=1}^{N}\left[y_i - f(x_i)\right]^2 \qquad (8)

We use a 4:1 train–test split, taking echoes 1 to 640 of each pH value as the training set and 641 to 800 as the test set, so the training set contains 4480 echoes and the test set 1120. The pH value of each type of soil UWB echo is used as the label. With this processing we treat the task as a prediction problem, whose evaluation index is the MSE rather than the accuracy rate; this is also the difference between classification and prediction. We applied five-fold cross-validation: the training set is randomly divided into five parts, four for training and the remaining one for validation, the five possible combinations are repeated, and the model with the lowest average validation error is selected as the final model. To analyse the robustness of the algorithm, white Gaussian noise at 0, 10, 20 and 30 dB SNR is added to the original echoes. Taking cross-validation folds 1 and 3 as examples, Fig. 3 shows the RMSE for training and validation.
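A sketch of this training and evaluation pipeline using the xgboost and scikit-learn Python packages (the paper does not state its implementation, so the libraries and all hyper-parameters here are assumptions); the placeholder arrays stand in for the 7 × 800 normalized echoes, and the split sizes mirror those in the text.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error

# Placeholder data standing in for the 7 x 800 normalized echoes (300 samples each).
rng = np.random.default_rng(0)
ph_values = [3.50, 6.86, 7.67, 8.27, 8.99, 10.39, 12.26]
X = rng.random((7 * 800, 300))
y = np.repeat(ph_values, 800)

# 4:1 split per pH value: echoes 1-640 for training, 641-800 for testing.
train_idx = np.concatenate([np.arange(i * 800, i * 800 + 640) for i in range(7)])
test_idx = np.concatenate([np.arange(i * 800 + 640, (i + 1) * 800) for i in range(7)])
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]

# Five-fold cross-validation on the training set; keep the lowest-error model.
best_model, best_rmse = None, np.inf
for tr, va in KFold(n_splits=5, shuffle=True, random_state=0).split(X_train):
    model = XGBRegressor(n_estimators=800, max_depth=6, learning_rate=0.1)
    model.fit(X_train[tr], y_train[tr])
    rmse = mean_squared_error(y_train[va], model.predict(X_train[va])) ** 0.5
    if rmse < best_rmse:
        best_model, best_rmse = model, rmse

print("test MSE:", mean_squared_error(y_test, best_model.predict(X_test)))
```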

Fig. 3. The RMSE for training and validation: a, c fold 1; b, d fold 3

We can find that the original echoes and those with SNR = 30 dB give better results in the testing stage. The starting point of the training and validation RMSE curves of each fold lies between 0.1 and 0.15, and the end point is around 0. This reflects that the XGBoost algorithm has good adaptability and robustness for our problem. The MSE is 5.621 × 10−5 for the training stage and 3.6 × 10−7 for the testing stage. Figure 4a shows the MSE for different SNRs; it also shows that when the SNR is equal to or larger than 30 dB the MSE is nearly the same and close to 0. The training time for the original echoes is 115 s, and Fig. 4b depicts the training time for different SNRs, from which we can see that at SNR = 20 dB the time of 123 s is similar to that of the original echoes. The testing time is so short that we do not depict it.

Fig. 4. a MSE for different SNRs; b Training time for different SNRs

4 Conclusion

In this paper, soil UWB radar echoes with 7 pH values (pH = 3.50, 6.86, 7.67, 8.27, 8.99, 10.39, 12.26) are investigated. The XGBoost algorithm is used for the prediction problem, and the results obtained after adding white Gaussian noise to the echoes show that this machine learning method is suitable for our study. In the simulation, the echoes are normalized because their amplitudes fluctuate; after normalization the system performance improves. The ratio of the training set to the test set is 4:1, and five-fold cross-validation is used to select the optimal XGBoost model. A very low MSE score is achieved: 5.621 × 10−5 for training and 3.6 × 10−7 for testing.

Acknowledgements. This work was supported by the National Natural Science Foundation of China (61671138, 61731006), and was partly supported by the 111 Project No. B17008.


References
1. Schaap MG, Leij FJ, Van Genuchten MT (2001) Rosetta: a computer program for estimating soil hydraulic parameters with hierarchical pedotransfer functions. J Hydrol 251(3–4):163–176
2. Verhoest NE, De Baets B, Vernieuwe H (2007) A Takagi-Sugeno fuzzy rule-based model for soil moisture retrieval from SAR under soil roughness uncertainty. IEEE Trans Geosci Remote Sens 45(5):1351–1360
3. Lunt I, Hubbard S, Rubin Y (2005) Soil moisture content estimation using ground-penetrating radar reflection data. J Hydrol 307(1–4):254–269
4. Barrett B, Dwyer E, Whelan P (2009) Soil moisture retrieval from active spaceborne microwave observations: an evaluation of current techniques. Remote Sens 1(3):210–242
5. Zhu F, Liu H, Liang J (2015) Soil moisture retrieval using fuzzy logic based on UWB signals. In: 2015 international conference on wireless communications & signal processing (WCSP). IEEE, New York, pp 1–5
6. Liang J, Zhu F (2018) Soil moisture retrieval from UWB sensor data by leveraging fuzzy logic. IEEE Access 6:29846–29857
7. Liang J, Liu X, Liao K (2018) Soil moisture retrieval using UWB echoes via fuzzy logic and machine learning. IEEE Internet Things J 5(5):3344–3352
8. Wang T, Liang J, Liu X (2019) Soil moisture retrieval algorithm based on TFA and CNN. IEEE Access 7:597–604
9. Chen T, Guestrin C (2016) XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. ACM, New York, pp 785–794
10. Chen T, He T, Benesty M, Khotilovich V, Tang Y (2015) Xgboost: extreme gradient boosting. R package version (4-2):1–4
11. Kohavi R et al (1995) A study of cross-validation and bootstrap for accuracy estimation and model selection. In: IJCAI, vol 14, pp 1137–1145. Montreal, Canada
12. Bishop CM (2006) Pattern recognition and machine learning. Springer, Berlin

A Novel Joint Resource Allocation Algorithm in 5G Heterogeneous Integrated Networks
Qingtian Zeng, Qiong Wu, and Geng Chen
College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao 266590, China
{qtzeng,gengchen}@sdust.edu.cn

Abstract. Heterogeneous integrated networks are an inevitable trend in the development of next-generation networks, in which a key issue that must be addressed is how to enable any user to obtain QoS-guaranteed services at any time and in any place. We consider a heterogeneous network scenario with overlapping coverage of IEEE 802.11b wireless access points, macrocell base stations and 5G, and we use a convex optimization method to maximize the total system capacity while effectively satisfying the minimum rate constraint of the delay-limited service type and the proportional fairness of the best-effort type. The numerical results show that a performance enhancement can be achieved in the 5G heterogeneous integrated network through joint bandwidth and power allocation.
Keywords: Heterogeneous integrated network · QoS · Power allocation · Bandwidth allocation

1 Introduction The future development of mobile communication is toward the convergence of heterogeneous wireless networks. In this environment, different access technologies and different underlying technologies are organically integrated in the same physical area; only by organically combining these different networks can users be provided with seamless switching and roaming services. This requires spectrum-resource integration across the radio resources of different radio access networks in a heterogeneous environment, joint resource scheduling to share radio resources among networks, and unified dynamic resource allocation to avoid excessive waste of wireless spectrum, thereby further improving the utilization of wireless resources and guaranteeing users' satisfaction with network services. Networks in which user equipment can simultaneously transmit its data over multiple RATs are named multi-radio access systems [1]. Heterogeneous multi-radio access refers to cooperation between heterogeneous wireless networks through cooperation among multiple radio access networks and control over multiple wireless connections; it can improve network capacity and resource utilization, reduce power consumption and enable a high degree of integration between heterogeneous networks. Each user terminal has a multi-mode interface and implements reconfigurable software-radio technology, being able to simultaneously


access some or all of the radio access networks and perform parallel service-data transmission on different frequencies. Gong et al. [2] optimize the capacity of all users, maximize the capacity of the worst-performing user and minimize the power loss of all users, solving the problem with convex optimization theory to obtain the best joint bandwidth and power allocation strategy. For heterogeneous wireless networks supporting terminal multi-master transmission, [3] proposed a joint bandwidth and power allocation algorithm based on system-capacity maximization, which uses a genetic algorithm to obtain the best joint resource allocation strategy. In [4], a joint power allocation and subchannel selection algorithm based on non-orthogonal multiple access is introduced; by dividing the existing bandwidth resources of the network into N subchannels, the network throughput is maximized and the network performance is improved. Choi et al. [5] propose a joint allocation of bandwidth and power resources to improve system capacity in a cognitive heterogeneous network environment, and their numerical analysis shows that the system capacity can be improved by this algorithm. However, only the uplink of data transmission is considered in that work, and the performance of end-to-end communication is not considered. The end-to-end utility embodies the quality of the entire link, and it is meaningless to optimize only the access bandwidth at the source node without considering the overall link quality; resource allocation and access-bandwidth optimization therefore need to be considered from the end-to-end perspective. In [6], for the issue of fairness, an energy-saving resource allocation algorithm for fairness-based OFDMA networks is studied. Wang et al. [7] proposed a multi-user collaborative energy-optimization-based resource allocation algorithm for OFDMA systems, which jointly performs relay selection, subcarrier allocation and power allocation under the premise of satisfying user QoS. In [8], an algorithm for allocating bandwidth and power to different services is proposed, and the distributed joint resource allocation algorithm obtained by the optimization method is used to maximize the throughput of the system. In [9], different types of services in heterogeneous networks are distinguished, and the power and bandwidth of the system are allocated through optimization to guarantee the QoS requirements of real-time users while maximizing system throughput; the simulation results also show that the algorithm guarantees the fairness of best-effort users. Ismail and Zhuang [10] proposed a distributed multi-service resource allocation algorithm based on maximizing the total utility of users, using a Lagrangian approach to obtain the optimal bandwidth allocation strategy for business needs. In [11], considering both user demand and network resources, a dynamic bandwidth allocation algorithm based on adaptive transmission rate is proposed, which adjusts the user transmission rate according to resource availability; under the constraints of the user's transmission-rate requirement and the network capacity, a utility function is optimized and solved iteratively to obtain the optimal bandwidth allocation scheme.
However, under the condition of satisfying the user's QoS requirements, not only the user-side performance but also the network-side performance needs to be optimized to obtain the optimal allocation scheme. Esmailpour and Nasser [12] proposed a throughput-maximizing WiMAX bandwidth allocation algorithm that supports different service QoS


requirements. According to the characteristics and requirements of ongoing and newly arriving services, the algorithm adopts a flexible architecture with packet scheduling and admission control, dynamically allocates the available bandwidth, and takes system-capacity maximization as the optimization objective to achieve fair and efficient resource allocation. In [13], a novel joint power control and resource allocation algorithm is proposed to minimize the FBS transmission power; the optimal resource allocation strategy is obtained under high-priority and best-effort QoS constraints. Estrada et al. [14] proposed a joint base-station association and power allocation algorithm for the downlink of a cellular heterogeneous network. Tai et al. [15] proposed an uplink power allocation algorithm for cellular heterogeneous networks that maximizes the total network throughput while meeting the cross-layer interference constraints of macro and home base stations and the QoS requirements of users, using a distributed algorithm to determine the optimal power allocation strategy. For macrocell and microcell heterogeneous fusion scenarios, [16] proposed a joint subchannel selection and power allocation algorithm, assuming that base stations in different layers can share subchannels, and optimizes the subchannel selection and power allocation strategies by maximizing network throughput. While ensuring the QoS of different users, whether the network coverage can be expanded, seamless connection achieved and communication capacity improved is the key to the success of heterogeneous network convergence; it is also the premise for different network operators to be willing to integrate their networks. System-capacity analysis is therefore another focus of heterogeneous network convergence research.

2 System Model The heterogeneous network system designed in this paper includes an IEEE 802.11b wireless access point, a macrocell base station and 5G, forming a heterogeneous network scenario with overlapping coverage, as shown in Fig. 1. Each MMT user terminal has multiple radio interfaces capable of accessing multiple RATs, and MMTs transmit their data in parallel across the different frequency bands of the heterogeneous wireless networks. As shown in Fig. 1, three user terminals in the overlapping coverage area send service requests, which can be classified into two types according to the requested service. One type is delay-limited and requires a minimum transmission rate to meet its delay requirements (e.g., audio and video); the other is best-effort, for which the proportional fairness of the user terminals must be ensured so that each MMT obtains its corresponding resources. Therefore, the following constraints must be met for these two types of user terminals:

r_i \ge R_i^{\min}, \quad i = 1, 2, \ldots, k \qquad (1)

Fig. 1. Multiple wireless access scenarios in a heterogeneous network

r_{k+1} : r_{k+2} : \cdots : r_M = c_{k+1} : c_{k+2} : \cdots : c_M \qquad (2)

Here R_i^{\min} is the minimum rate requirement of a delay-limited user terminal and c_i is the proportional fairness factor of a best-effort user terminal. To guarantee multi-radio access transmission for all MMTs, each MMT should obtain corresponding bandwidth and terminal transmit power from multiple RATs. After bandwidth allocation, each MMT experiences a different channel gain over the bandwidth in each RAT; the gain g is assumed fixed over a sufficiently small time interval. Therefore, according to the Shannon capacity formula for the Gaussian channel, the data rate r_i that an MMT can achieve is defined as

r_i = \sum_{j=1}^{N} b_j x_{ij} \log_2\!\left(1 + \frac{g_{ij} p_{ij}}{x_{ij}}\right) \qquad (3)

In order to maximize the system capacity of multiple wireless access in a heterogeneous converged network, the transmission rate of all access user terminals is taken as the objective function. Considering the system bandwidth, the power constraints and the service requirements of the heterogeneous services, maximizing the capacity of the multi-radio-access scenario translates into the following optimization problem:

\max R(x, p) = \max \sum_{i=1}^{M} \sum_{j=1}^{N} b_j x_{ij} \log_2\!\left(1 + \frac{g_{ij} p_{ij}}{x_{ij}}\right) \qquad (4)

subject to

\sum_{i=1}^{M} x_{ij} \le X_j, \quad \forall j \qquad (5)

\sum_{j=1}^{N} p_{ij} \le P_i, \quad \forall i \qquad (6)

\sum_{j=1}^{N} b_j x_{ij} \log_2\!\left(1 + \frac{g_{ij} p_{ij}}{x_{ij}}\right) \ge R_i^{\min}, \quad \forall i \in \{1, 2, \ldots, K\} \qquad (7)

\frac{r_i}{c_i} = \frac{r_{K+1}}{c_{K+1}}, \quad \forall i \in \{K+1, K+2, \ldots, M\} \qquad (8)

x_{ij}, p_{ij} \ge 0, \quad \forall i, j \qquad (9)

where X_j is the total frequency bandwidth allocated by RAT j to the MMTs, P_i is the total transmission power of MMT i, and b_j (0 ≤ b_j ≤ 1) represents the system efficiency of the radio access network. The objective function in (4) is concave with respect to {x, p}, which means an optimal solution can be derived in which a local maximum is also the global maximum [17].
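To make the notation concrete, the small sketch below evaluates the objective in Eq. (4) for a candidate allocation {x, p}; the channel gains, efficiencies and allocation values are placeholders, not results from the paper.

```python
import numpy as np

def total_capacity(x, p, g, b):
    """Eq. (4): sum over MMTs i and RATs j of b_j * x_ij * log2(1 + g_ij p_ij / x_ij)."""
    x = np.asarray(x, dtype=float)
    rate = b * x * np.log2(1.0 + g * p / np.where(x > 0, x, 1.0))
    return float(np.where(x > 0, rate, 0.0).sum())

# Assumed toy scenario: M = 3 MMTs, N = 3 RATs.
g = np.array([[0.9, 0.4, 0.7], [0.5, 0.8, 0.3], [0.6, 0.2, 0.9]])  # channel gains
b = np.array([1.0, 1.0, 1.0])             # RAT efficiencies b_j
x = np.full((3, 3), 5.0)                  # bandwidth per (MMT, RAT), MHz
p = np.full((3, 3), 10.0)                 # transmit power per (MMT, RAT), mW
print(total_capacity(x, p, g, b))         # total system rate for this allocation
```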

3 The Proposed Algorithm For the optimal solution of the objective function, the Lagrange multiplier method is used to transform it into the Lagrangian function

L(x, p, \lambda, \mu, \nu, \omega) = \sum_{i=1}^{M}\sum_{j=1}^{N} b_j x_{ij}\log_2\!\left(1+\frac{g_{ij}p_{ij}}{x_{ij}}\right) + \sum_{j=1}^{N}\lambda_j\left(X_j-\sum_{i=1}^{M}x_{ij}\right) + \sum_{i=1}^{M}\mu_i\left(P_i-\sum_{j=1}^{N}p_{ij}\right) + \sum_{i=1}^{K}\nu_i\left(r_i - R_i^{\min}\right) + \sum_{i=K+1}^{M}\omega_i\left(r_{K+1} - \frac{c_{K+1}}{c_i}\, r_i\right) \qquad (10)

where r_i is the rate defined in Eq. (3).

The variables λ, ω, μ and ν are non-negative Lagrange multipliers. Taking the derivatives of L with respect to x_{ij} and p_{ij} yields the Karush–Kuhn–Tucker (KKT) conditions:

\frac{\partial L}{\partial x_{ij}} = b_j \log_2\!\left(1 + \frac{g_{ij} p_{ij}}{x_{ij}}\right) - \frac{b_j g_{ij} p_{ij}}{(x_{ij} + g_{ij} p_{ij}) \ln 2} - \lambda_j \qquad (11)

\frac{\partial L}{\partial p_{ij}} = \frac{b_j g_{ij} x_{ij}}{(x_{ij} + g_{ij} p_{ij}) \ln 2} - \mu_i \qquad (12)

By setting Eqs. (11) and (12) to zero, the relationship between the bandwidth and power allocation is obtained:

p_{ij} = x_{ij}\left(\frac{b_j}{\mu_i \ln 2} - \frac{1}{g_{ij}}\right) \qquad (13)

\lambda_j\left(X_j - \sum_{i=1}^{M} x_{ij}\right) = 0 \qquad (14)

\mu_i\left(P_i - \sum_{j=1}^{N} p_{ij}\right) = 0 \qquad (15)

From (13), in order to obtain the optimal x and p values, we only need one of them. For the optimal x value, the Newton iteration method is used in this paper; by selecting an appropriate iteration step size, the iteration of bandwidth, power and system throughput is effective. The bandwidth allocation is updated as

x_{ij}^{K+1} = \left[x_{ij}^{K} + a\,\frac{\partial L}{\partial x_{ij}}\right]^{+}, \quad \forall i, j \qquad (16)

where a is the fixed step size of the iteration. From the obtained x_{ij}, Eq. (13) is used to obtain p_{ij}. To reach the optimal solution, the Lagrange multipliers are updated by the gradient projection method:

\lambda_j^{K+1} = \left[\lambda_j^{K} + \xi_1\left(\sum_{i=1}^{M} x_{ij}^{K} - X_j\right)\right]^{+} \qquad (17)

\mu_i^{K+1} = \left[\mu_i^{K} + \xi_2\left(\sum_{j=1}^{N} p_{ij}^{K} - P_i\right)\right]^{+} \qquad (18)

The superscript K denotes the K-th iteration and ξ is the fixed step size of the Lagrange-multiplier iteration. Through the above iterations, the terminal transmit power and access bandwidth of the multi-radio-access system in a heterogeneous network can be solved.
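A compact sketch of the update loop in Eqs. (16)–(18), using Eq. (13) to recover the power from the bandwidth at each step. The projection [·]+ is implemented with a clip, the gradient in Eq. (11) keeps only the λ term as written above, and the gains, limits, step sizes and iteration count are illustrative assumptions, so the convergence behaviour will differ from the paper's MATLAB setup.

```python
import numpy as np

def dL_dx(x, p, g, b, lam):
    """Eq. (11): partial derivative of the Lagrangian w.r.t. x_ij (lambda term only)."""
    snr = g * p / x
    return b * np.log2(1 + snr) - b * g * p / ((x + g * p) * np.log(2)) - lam

M, N = 3, 3                               # MMTs, RATs (toy sizes)
rng = np.random.default_rng(1)
g = rng.uniform(0.2, 1.0, (M, N))         # channel gains (placeholders)
b = np.ones(N)                            # RAT efficiencies b_j
X_total = np.array([20.0, 10.0, 50.0])    # bandwidth limit per RAT (MHz)
P_total = np.full(M, 50.0)                # power limit per MMT (mW)

x = np.full((M, N), 1.0)                  # initial bandwidth shares
lam = np.ones(N)                          # lambda_j
mu = np.ones(M)                           # mu_i
alpha, xi1, xi2 = 1e-3, 1e-3, 1e-3        # step sizes (assumed)

for _ in range(3000):
    # Eq. (13): power implied by the current bandwidth and multipliers.
    p = np.clip(x * (b / (mu[:, None] * np.log(2)) - 1.0 / g), 0.0, None)
    # Eq. (16): projected gradient step on the bandwidth.
    x = np.clip(x + alpha * dL_dx(x, p, g, b, lam), 1e-6, None)
    # Eqs. (17)-(18): gradient projection on the Lagrange multipliers.
    lam = np.clip(lam + xi1 * (x.sum(axis=0) - X_total), 0.0, None)
    mu = np.clip(mu + xi2 * (p.sum(axis=1) - P_total), 1e-6, None)

print(x.round(2), p.round(2), sep="\n")
```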

4 The Simulation Results The performance of the joint multi-radio-access bandwidth and power allocation optimization algorithm in heterogeneous networks is evaluated by numerical simulation. It is assumed that three networks can provide wireless access to all user terminals in the


area, the spectrum bandwidths of the three networks are 20 MHz, 10 MHz and 50 MHz, respectively, and the channels have the same efficiency b = 1. We consider a circular cell with a radius of 300 m. The simulation scenario selects three MMT users, whose MMTs simultaneously access the three RAT wireless access networks. Each MMT has a maximum power of 50 mW and is randomly distributed in the RAT region.

Fig. 2. Bandwidth allocation

Fig. 3. Power distribution


In the scenario designed in this paper, Figs. 2 and 3 show that when the number of iterations is about 950, the bandwidth and power converge to the optimal solution, and a user with a larger bandwidth allocation also receives a larger power allocation. The heterogeneous-network resource allocation algorithm can therefore allocate bandwidth and power resources effectively to maximize throughput.

Fig. 4. Proportional fairness comparison

The service-quality proportional fairness index (FI) is defined as

\mathrm{FI}(r_i) = \frac{\left(\sum_{i=k+1}^{M} r_i\right)^2}{(M - k)\sum_{i=k+1}^{M} r_i^2}
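A one-line version of the fairness index above (Jain's index computed over the best-effort users' rates); the rate values are placeholders.

```python
def fairness_index(rates):
    """FI = (sum r_i)^2 / (n * sum r_i^2) over the best-effort users' rates."""
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))

print(fairness_index([5.0, 5.0, 5.0]))   # 1.0 -> perfectly fair
print(fairness_index([9.0, 3.0, 3.0]))   # about 0.76 -> less fair
```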

As shown in Fig. 4, the quality-of-service guarantees of different user terminals are compared. The joint power and bandwidth allocation algorithm for multi-radio access in heterogeneous networks is superior to the algorithm proposed in [5] in terms of service-quality assurance, and it maintains the system throughput of the heterogeneous network while guaranteeing the service-quality requirements of the two service types.

5 Conclusion By analyzing the key technologies of resource allocation for multiple wireless access, we considered the convergence of an IEEE 802.11b wireless access point, a macrocell base station and 5G with overlapping coverage, and maximized the system capacity through joint allocation with an effective iterative method. The evaluation results based on the proposed scheme show that heterogeneous network convergence including 5G has significant advantages in bandwidth and power allocation.


Acknowledgements. This work is supported by the National Natural Science Foundation of China under Grant No. 61701284, 61472229, 31671588 and 61801270, the China Postdoctoral Science Foundation Funded Project under Grant No. 2017M622233, the Innovative Research Foundation of Qingdao, the Application Research Project for Postdoctoral Researchers of Qingdao, the Scientific Research Foundation of Shandong University of Science and Technology for Recruited Talents under Grant No. 2016RCJJ010, the Sci. & Tech. Development Fund of Shandong Province of China under Grant No. 2016ZDJS02A11, ZR2017BF015 and ZR2017MF027, the Taishan Scholar Climbing Program of Shandong Province, and SDUST Research Fund under Grant No. 2015TDJH102.

References 1. Furuskär A (September 2002) Allocation of multiple services in multi-access wireless systems. In: Proceedings of international workshop mobile wireless communication network, pp 261–265 2. Gong X, Vorobyov SA, Tellambura C (2011) Joint bandwidth and power allocation with admission control in wireless multi-user networks with and without relaying. IEEE Trans Signal Process 59(4):1801–1813 3. He L, Su X, Zeng J et al (2012) Joint resource allocation method in heterogeneous wireless networks based on genetic algorithm. In: International conference on wireless communications & signal processing (WCSP), pp 1–5 4. Lei L, Yuan D, Ho CK et al (2015) Joint optimization of power and channel allocation with non-orthogonal multiple access for 5G cellular systems. In: IEEE global communications conference (GLOBECOM). IEEE Press, San Diego, pp 1–6 5. Choi Y, Kim H et al (2010) Joint resource allocation for parallel multi-radio access in heterogeneous wireless networks. IEEE Trans Wireless Commun 9(11):3324–3329 6. Yang K, Martin S, Yahiya TA et al (2014) Energy-efficient resource allocation for downlink in LTE heterogeneous networks. In: IEEE conference on vehicular technology, pp 1–5 7. Wang Y, Wang X, Shi J (2014) A stackelberg game approach for energy efficient resource allocation and interference coordination in heterogeneous networks. In: IEEE international conference on computer and information technology, pp 694–699 8. Sundar RS, Kumar SN (2012) Performance improvement of heterogeneous wireless networks using modified newton method. Int J Software Eng Appl (IJSEA) 3(3):79–90 9. Mao J, Hu Z, Lian RR, Tian H et al (2012) Optimal resource allocation for multi-access in heterogeneous wireless networks. In: Proceedings of IEEE vehicular technology conference. Spring 10. Ismail M, Zhuang W (2012) A distributed multi-service resource allocation algorithm in heterogeneous wireless access medium. IEEE J Sel Areas Commun 30(2):425–432 11. Chen G, Hu J, Xia W et al (2012) A dynamic bandwidth allocation algorithm based on transmission rate adaptation in heterogeneous wireless networks. In: International conference on wireless communications & signal processing (WCSP), pp 1–6 12. Esmailpour A, Nasser N (2011) Dynamic QoS-based bandwidth allocation framework for broadband wireless networks. IEEE Trans Veh Technol 60(6):2690–2700 13. Hatoum A, Langar R, Aitsaadi N et al (2012) QoS-based power control and resource allocation in OFDMA femtocell networks. In: IEEE global communications conference (GLOBECOM), pp 5116–5122 14. Estrada R, Jarray A, Otrok H (2013) Energy-efficient resource allocation model for OFDMA macro-femtocell networks. IEEE Trans Veh Technol 62(7):3429–3437


15. Tai MH, Tran NH, Do CT (2015) Power control for interference management and QoS guarantee in heterogeneous networks. IEEE Commun Lett 19(8):1402–1405 16. Shen K, Wei Y (2014) Distributed pricing-based user association for downlink heterogeneous cellular networks. IEEE J Sel Areas Commun 32(6):1100–1113 17. Boyd S, Vandenberghe L (2004) Convex optimization. Cambridge University Press, Cambridge

A Vehicle Positioning Algorithm Based on Single Base Station in the Vehicle Ad Hoc Networks
Geng Chen, Xueying Liu, Qingtian Zeng, and Yan Wen
College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao 266590, China; College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
[email protected]

Abstract. This paper designs a vehicle positioning algorithm based on a single base station that considers both the angle and the distance between the vehicle and the base station. The vehicle positioning environment is assumed to be free of any occlusion. A propagation-loss prediction model is used to measure the distance between the base station and the vehicle from the known transmitting and receiving powers. A uniform antenna array is installed on the base station, and the signal angle is obtained with the ESPRIT algorithm. Finally, a coordinate system is constructed with a geometric model and the coordinates of the vehicle are calculated from trigonometric formulas. If the speed or acceleration of the vehicle is known, the coordinates of the vehicle in the next second can be obtained from basic kinematics, thus achieving vehicle location. Compared with the selected literature, the positioning scheme designed in this paper is more accurate.
Keywords: Single base station · Propagation loss prediction model · ESPRIT algorithm · Vehicle location

1 Introduction In order to solve the inconvenience of vehicle transportation, communication technology and transportation technology are gradually being integrated. Wireless positioning is the fastest-developing approach, and many researchers improve the accuracy of vehicle positioning systems through vehicle target positioning algorithms in VANETs, which is of great significance for travel and for maximizing the utilization of road resources. Literature [1] studied the application of AI to provide reliable positioning information for various land-vehicle navigation applications. In [2], data fusion technology is used to combine several known positioning techniques to provide robust positioning. Literature [3] proposes a low-complexity prediction model, and [4] studies positioning in GSM networks based on received-signal-strength measurements. Literature [5] adopts the concept of differential GPS. Reference [6] proposes a new


reliable broadcast routing model and the concept of multi-dimensional gain prediction. Reference [7] proposes a localization protocol for vehicular ad hoc networks. In this paper, a vehicle location algorithm based on a single base station is studied for the case when GPS is occluded. The COST-231-Hata model is used to determine the distance of the vehicle from the power loss, the ESPRIT algorithm is used to estimate the angle of the vehicle, and a geometric model is used to locate the position of the vehicle at the current time.

2 System Model In this paper, the vehicle positioning environment corresponds to regular driving without any shielding. The method combines the advantages of ranging and angle-measuring systems, using distance and angle positioning at the same time. First, the distance between the target vehicle and the base station is measured with the power-loss prediction model. The frequency-domain ESPRIT algorithm is used to estimate the multipath angle of arrival: a uniform antenna array installed on the single base station transmits a radio-frequency signal to the surrounding environment, and when the vehicle receives this signal it sends a signal with location information back to the base station, from which the relative angle between the vehicle and the base station is calculated. The estimation of the transmission path is mapped to a geometric problem to solve for the position of the vehicle, and the position of the vehicle in the next second can be obtained from its known speed or acceleration. As shown in Fig. 1, there are four main steps:
1. Calculate the angle of arrival of the signal.
2. Calculate the attenuation of the received power and derive the distance.
3. Reduce errors by repeated measurements.
4. Determine the vehicle coordinates from the geometric relations.

Fig. 1. Vehicle positioning model


3 The Proposed Vehicle Positioning Algorithm Based on Single Base Station
3.1 Measuring Distance Based on Power Loss Model

The COST-231-Hata model can be used to calculate the path loss from the known signal frequency, the transmission distance, and the effective heights of the transmitting and receiving antennas. The calculation formulas are as follows:

$L = 46.3 + 33.9\,\lg f_c - 13.82\,\lg h_{te} + (44.9 - 6.55\,\lg h_{te})\lg d - a(h_{re})$   (1)

$a(h_{re})\,(\mathrm{dB}) = (1.1\,\lg f_c - 0.7)\,h_{re} - (1.56\,\lg f_c - 0.8)$   (2)

Here $h_{re}$ is the receiving antenna height, $d$ is the transmission distance, $f_c$ is the carrier frequency, and $h_{te}$ is the transmitting antenna height. With a transmitting antenna height of 30 m, a receiving antenna height of 1 m, and a signal frequency of 2 GHz in a suburban environment, the relationship between distance and power attenuation becomes

$L\,(\mathrm{dB}) = 139.2 + 35.22\,\lg d$   (3)
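As an illustration, the sketch below (our own addition, not code from the paper) inverts the suburban COST-231-Hata loss of Eqs. (1)-(3) to recover the distance from a measured path loss; the function name, the use of Python/NumPy, and the frequency being given in MHz are our assumptions.

```python
import numpy as np

def cost231_hata_distance(path_loss_db, fc_mhz=2000.0, h_te=30.0, h_re=1.0):
    """Invert the COST-231-Hata model to get distance (km) from path loss (dB).

    Uses the small/medium-city mobile antenna correction a(h_re) of Eq. (2).
    """
    a_hre = (1.1 * np.log10(fc_mhz) - 0.7) * h_re - (1.56 * np.log10(fc_mhz) - 0.8)
    fixed = 46.3 + 33.9 * np.log10(fc_mhz) - 13.82 * np.log10(h_te) - a_hre
    slope = 44.9 - 6.55 * np.log10(h_te)
    return 10 ** ((path_loss_db - fixed) / slope)

# With f_c = 2 GHz, h_te = 30 m, h_re = 1 m the model reduces to L = 139.2 + 35.22*lg(d),
# so a measured loss of 139.2 dB corresponds to roughly 1 km.
print(cost231_hata_distance(139.2))   # ~1.0
```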

3.2 Estimating the Angle of Vehicle

To obtain the angle of the arriving signal, the main DOA estimation method for a uniform linear array used here is the ESPRIT algorithm. Its calculation steps are as follows. The whole antenna array is divided into two sub-arrays X and Y that are identical except for a known displacement offset. The element spacing is $d$, and $\theta$ is the angle between the direction of the arriving signal and the array normal, so the path difference between adjacent elements is $d\sin\theta$. The phase difference between the two sub-arrays for the $i$-th source is defined as

$\varphi_i = \dfrac{2\pi d \sin\theta_i}{\lambda}$   (4)

The signal is disturbed by noise during transmission; the noise is assumed to be Gaussian white noise with zero mean and variance $\sigma^2$. The received signals of the two sub-arrays are

$X(t) = A s(t) + v(t)$   (5)

$Y(t) = A \Phi s(t) + v(t+1)$   (6)

$\Phi = \mathrm{diag}\left[e^{j\varphi_1}, e^{j\varphi_2}, \ldots, e^{j\varphi_d}\right]$   (7)

Here $\theta_k$ is the angle of arrival of the $k$-th signal source, $\Phi$ is the diagonal matrix formed by the phase delays between the two sub-arrays (the rotation operator), $\lambda$ is the signal wavelength, $d$ is the element spacing, and $A$ is the signal direction matrix. The covariance matrices are

$R_{xx} = E\{X(t)X^{H}(t)\} = A P A^{H} + \sigma^{2} I$   (8)

$R_{xy} = E\{X(t)Y^{H}(t)\} = A P \Phi^{H} A^{H} + \sigma^{2} Z$   (9)

Performing eigenvalue decomposition of $R_{xx}$ and $R_{xy}$, the minimum eigenvalue is $\lambda_{\min} = \sigma^{2}$, so

$C_{xx} = R_{xx} - \lambda_{\min} I = R_{xx} - \sigma^{2} I = A P A^{H}$   (10)

$C_{xy} = R_{xy} - \lambda_{\min} Z = R_{xy} - \sigma^{2} Z = A P \Phi^{H} A^{H}$   (11)

Consider the matrix pencil $C_{xx} - \gamma C_{xy} = A P (I - \gamma \Phi^{H}) A^{H}$ and its generalized eigenvalue decomposition. The pencil becomes rank-deficient only when $\gamma = e^{j\varphi_i}$, so $\gamma = e^{j\varphi_i}$ are the generalized eigenvalues of the matrix pencil. According to

$\varphi_i = \dfrac{2\pi d \sin\theta_i}{\lambda}$   (12)

the angle of the incoming wave can then be obtained. The steps are as follows (a minimal code sketch is given after the list):
1. Take K snapshots of the two sub-arrays and compute $R_{xx}$ and $R_{xy}$.
2. Perform eigenvalue decomposition to obtain the minimum eigenvalue.
3. Construct the matrix pencil $\{C_{xx}, C_{xy}\}$ using the minimum eigenvalue.
4. Compute the generalized eigenvalues of the matrix pencil and obtain the angle of arrival from the phase formula.
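A minimal Python sketch of these steps is given below. For numerical robustness it uses the standard least-squares ESPRIT formulation built on the signal subspace, which realizes the same rotational-invariance idea as the covariance matrix-pencil steps above; the function names, the single-source test case and the parameter values are our assumptions.

```python
import numpy as np

def esprit_doa(snapshots, n_sources, d_over_lambda=0.5):
    """LS-ESPRIT DOA estimate (degrees) from array snapshots (elements x snapshots)."""
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)
    Es = eigvecs[:, np.argsort(eigvals)[::-1][:n_sources]]    # signal subspace
    Ex, Ey = Es[:-1, :], Es[1:, :]                            # two overlapping subarrays
    Psi = np.linalg.pinv(Ex) @ Ey                             # rotation operator estimate
    phases = np.angle(np.linalg.eigvals(Psi))                 # 2*pi*d*sin(theta)/lambda
    return np.degrees(np.arcsin(phases / (2 * np.pi * d_over_lambda)))

# Example: one source at 60 deg, 7 elements, lambda/2 spacing, 200 snapshots, -5 dB SNR
rng = np.random.default_rng(0)
m, K, theta = 7, 200, np.deg2rad(60.0)
a = np.exp(1j * np.pi * np.arange(m) * np.sin(theta))         # steering vector
s = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
snr = 10 ** (-5 / 10)
noise = (rng.standard_normal((m, K)) + 1j * rng.standard_normal((m, K))) / np.sqrt(2)
X = np.sqrt(snr) * np.outer(a, s) + noise
print(esprit_doa(X, n_sources=1))
```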

3.3 Vehicle Positioning Algorithm Based on Power Loss Prediction Model and ESPRIT

3.3.1 Location of Uniform Speed Vehicle
In Fig. 2, with the base station as the origin, the vehicle moves at a constant speed $v$. At position $(x_1, y_1)$, the fixed base station sends a call signal to the surrounding area. When the reader on the vehicle license plate receives the call signal, the RFID in the license plate sends a specific signal back to the base station. The base station then locates the vehicle from this signal; let the measured distance be $d_1$ and the measured angle be $\alpha$.


Fig. 2. Uniform speed location coordinates

$x_1 = d_1 \cos\alpha$   (13)

$y_1 = d_1 \sin\alpha$   (14)

In the next second, the position of the vehicle is denoted $(x_2, y_2)$; it can be predicted as $(d_1\cos\alpha,\; d_1\sin\alpha + v)$.

3.3.2 Location of Variable Speed Vehicle
In Fig. 3, the vehicle travels from position $(x_1, y_1)$ to position $(x_2, y_2)$ with acceleration $a$. Assuming the current speed is $v$, the speed in the next second is $v + a$, and the distance between $(x_1, y_1)$ and $(x_2, y_2)$ is

Fig. 3. Variable speed location coordinates

$s = v t + \dfrac{a t^{2}}{2} = v + \dfrac{a}{2}\quad(\text{with } t = 1\,\mathrm{s})$   (15)

From the triangular geometric relations it follows that

$x_2 = x_1$   (16)

$y_2 = y_1 + s = d_1\sin\alpha + v + \dfrac{a}{2}$   (17)

So, at this point, the position of the vehicle is $\left(d_1\cos\alpha,\; d_1\sin\alpha + v + \dfrac{a}{2}\right)$.
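For illustration, a small helper (our own, not from the paper) that evaluates Eqs. (13)-(17): it converts the measured distance and angle into coordinates and predicts the position one second ahead for either the constant-speed case (a = 0) or the accelerating case; the function name and values are assumptions.

```python
import numpy as np

def locate_and_predict(d1, alpha_rad, v, a=0.0):
    """Current position from range/bearing, plus the position predicted one second
    ahead under the paper's assumption that the motion is along the y-axis."""
    x1, y1 = d1 * np.cos(alpha_rad), d1 * np.sin(alpha_rad)
    s = v + a / 2.0                      # distance covered in the next second, eq. (15)
    return (x1, y1), (x1, y1 + s)

print(locate_and_predict(d1=1.0, alpha_rad=np.deg2rad(60), v=0.02, a=0.005))
```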

4 Simulation Results
4.1 Simulation Parameters

The radio-frequency signal frequency is 2 GHz, the transmitting antenna height is 30 m, and the receiving antenna height is 1 m. The SNR varies from −20 to 0 dB, and the element spacing of the antenna array is $\lambda/2$.

4.2 Simulation Results

4.2.1 Vehicle Distance Simulation Results
With the angle of the vehicle fixed and signal-to-noise ratios of −5 dB and −15 dB, the real position of the vehicle is compared with the estimated position. As shown in Figs. 4 and 5, with the angle fixed, the distance location becomes more and more accurate as the signal-to-noise ratio increases.

Fig. 4. Distance location under −15 dB

Fig. 5. Distance location under −5 dB

4.2.2 Vehicle Angle Simulation Results
Table 1 shows the angle errors measured with different numbers of antenna elements and signal-to-noise ratios when the actual signal arrival angle is $\pi/3$.

Table 1. Angle error (degrees) when the actual angle of arrival of the signal is 60°
Number of antennas | −20 dB | −15 dB | −10 dB | −5 dB  | −3 dB  | 0 dB
5                  | 7.5407 | 2.5898 | 1.2846 | 0.6537 | 1.3756 | 0.1479
7                  | 4.5278 | 4.2841 | 3.4453 | 0.0560 | 0.3372 | 0.0286
9                  | 3.4963 | 4.7201 | 1.8630 | 0.0778 | 0.1365 | 0.0551
11                 | 8.0897 | 6.0104 | 0.3483 | 0.0440 | 0.0171 | 0.0660
13                 | 7.2450 | 0.3412 | 0.0717 | 0.1280 | 0.0485 | 0.0015

After running N = 1000 trials, the average error of the system is obtained as

$M_{\mathrm{error}} = \dfrac{1}{N}\sum_{i=1}^{N} \mathrm{error}_i$   (18)

The influence of the snapshot number and the number of array elements on the angle error can be seen from Table 1 and is shown in Figs. 6 and 7.


Fig. 6. Effect of array number on location error

Fig. 7. Effect of snapshot number on location error

The simulation results for the angle at different SNRs are as follows. Figures 6 and 7 show that the positioning accuracy improves as the number of snapshots and the number of array elements increase. As shown in Figs. 8 and 9, the angle positioning of a fixed vehicle becomes more and more accurate as the signal-to-noise ratio increases.

Fig. 8. Angle location under −15 dB

Fig. 9. Angle location under −5 dB

4.2.3 Vehicle Angular-Distance Integrated Location
As shown in Figs. 10 and 11, when both the angle and the distance of the vehicle vary, the vehicle location becomes more and more accurate as the signal-to-noise ratio increases.

Fig. 10. Vehicle location under −15 dB

Fig. 11. Vehicle location under −5 dB

5 Conclusion
This paper focuses on a vehicle positioning algorithm based on a single base station. Two tasks are accomplished: (1) the distance between the vehicle and the base station is measured from the ratio of the receiving power to the transmitting power; (2) a uniform antenna array is installed on the base station, and the angle of the vehicle relative to the base station is measured with the ESPRIT algorithm from the direction of arrival. The performance of vehicle positioning under different signal-to-noise ratios is simulated in MATLAB, and the results show that the proposed algorithm locates vehicles in open space more accurately than the schemes in the selected literature.
Acknowledgements. This work is supported by the National Natural Science Foundation of China under Grant No. 61701284, 61472229, 31671588 and 61801270, the China Postdoctoral Science Foundation Funded Project under Grant No. 2017M622233, the Innovative Research Foundation of Qingdao, the Application Research Project for Postdoctoral Researchers of Qingdao, the Scientific Research Foundation of Shandong University of Science and Technology for Recruited Talents under Grant No. 2016RCJJ010, the Sci. & Tech. Development Fund of Shandong Province of China under Grant No. 2016ZDJS02A11, ZR2017BF015 and ZR2017MF027, the Taishan Scholar Climbing Program of Shandong Province, SDUST Research Fund under Grant No. 2015TDJH102, the Qingdao Philosophy and Social Sciences Planning Project under Grant No. QDSKL1801131, and the Ministry of Education Humanities and Social Sciences Research Youth Fund Project under Grant No. 17YJCZH187.

References
1. Noureldin A, El-Shafie A, Bayoumi M (2010) GPS/INS integration utilizing dynamic neural networks for vehicular navigation. Inf Fusion 12(1):48
2. Boukerche A, Oliveria HABF, Nakamura EF, Loureiro AA (2007) Vehicular ad hoc networks: a new challenge for localization-based systems. Comput Commun 31(12):2838
3. Boukerche A, Rezende C, Pazzi R (2009) Improving neighbor localization in vehicular ad hoc networks to avoid overhead from periodic messages. In: Proceedings of global telecommunications conference (GLOBECOM)
4. Brida P, Cepel P (2006) The accuracy of RSS based positioning in GSM networks. In: International conference on microwaves, radar and wireless communications, MIKON
5. Lee E-K, Oh SY, Gerla M (2011) RFID assisted vehicle positioning in VANETs. Perv Mobile Comput 8(2):167
6. Junting H (2016) A reliable broadcast routing model based on multi-dimensional gain prediction and the information fusion theory for VANETs
7. Lagraa N, Bachir M, Benkouider S (2010) Localization technique in VANETs using clustering (LVC). Int J Comput Sci Issues

Heterogeneous Wireless Network Resource Allocation Based on Stackelberg Game Shouming Wei, Shuai Wei(&), Bin Wang, and Sheng Yu Communication Research Center, Harbin Institute of Technology, Harbin, China [email protected], [email protected]

Abstract. With the development of mobile communication and information technology, various wireless access technologies have emerged in private networks, which are heterogeneous in both access and services. Aiming at the allocation of heterogeneous wireless private-network resources, this paper presents a resource allocation algorithm based on a multi-leader, multi-follower Stackelberg game, and designs a user utility function based mainly on bandwidth together with a private-network operator utility function based on network service capability. A distributed iterative algorithm that requires only local information is used to obtain the subgame perfect equilibrium of the game model, and finally the optimal bandwidth strategy of the users and the optimal network service capability strategy of the private-network operators are obtained.
Keywords: Heterogeneous network · Stackelberg game · Resource allocation

1 Introduction
In a heterogeneous private-network environment, network operators can ensure their own profits by formulating appropriate network service capabilities, because a user can abandon the operator currently in use and re-select a network of another private-network operator. This interaction between the private-network operators and the users is a typical Stackelberg game problem [1]. In the resource allocation problem of heterogeneous wireless private networks, the network operators, which decide their network service capability strategies first, are the leaders, and the users act as followers that determine their bandwidth demand strategies after observing the operators' strategies. This paper mainly considers users with elastic traffic, and two kinds of operators in the heterogeneous wireless private networks: a public-network LTE operator and a private-network LTE operator.

2 Stackelberg Game with Multiple Leaders and Multiple Followers
A Stackelberg game is composed of leaders and followers. Suppose there are $m$ leaders and $n$ followers in the game. Let $M = \{1, 2, \ldots, m\}$ denote the set of leaders and $N = \{1, 2, \ldots, n\}$ the set of followers; $x = \{x_1, x_2, \ldots, x_m\}$ is the leaders' strategy combination with policy set $X$, and $y = \{y_1, y_2, \ldots, y_n\}$ is the followers' strategy combination with policy set $Y$. $M_i(x, y)$ represents the profit function

1970

S. Wei et al.

of leader $i$, and $N_j(x, y)$ represents the profit function of follower $j$. This is a multi-leader, multi-follower Stackelberg game problem. When the leaders' chosen strategy is $x$, the set of Nash equilibrium points of the non-cooperative game among the followers is denoted $F(x)$:

$N_j(x, y_j^{*}, y_{-j}^{*}) \ge N_j(x, y_j, y_{-j}^{*})$   (2.1)

$F(x) = \{\, y^{*} = (y_1^{*}, \ldots, y_j^{*}, \ldots, y_n^{*}) \,\}$

A strategy combination $y^{*}$ satisfying (2.1) is a Nash equilibrium point of the followers' non-cooperative game. With the parameters $\Gamma = \{x;\, y;\, M_i(x, y);\, N_j(x, y)\},\ i \in M,\ j \in N$, the equilibrium problem of the two-stage leader-follower Stackelberg game can be stated as: there exists $(x^{*}, y^{*}) \in X \times Y$ such that

$M_i(x_i^{*}, x_{-i}^{*}, y^{*}) \ge M_i(x_i, x_{-i}^{*}, y^{*}),\quad i \in M,\ y^{*} \in F(x^{*})$   (2.2)

3 Resource Allocation Based on Stackelberg Game
In the Stackelberg game, the operators are the leaders and the users are the followers. A leader owns the network resources and makes the strategy: it provides bandwidth to meet the users' service needs and uses its network service capability strategy to influence the users' bandwidth demand strategies. The followers are the demanders of network resources: users adjust their bandwidth demand strategies according to the network service capabilities of the different networks and the gains they receive. Let the operator set be $M = \{1, 2, \ldots, m\}$, $j \in M$, and the user set be $N = \{1, 2, \ldots, n\}$, $i \in N$. Let $p_j$ denote the network service capability strategy of private operator $j$, and define the operators' strategy vector as $p = (p_1, p_2, \ldots, p_m)$. The network service capability mainly consists of data service capability and voice service capability, and it is the basis on which a user selects a network to complete its service. Let $q_{ij}$ denote the bandwidth that user $i$ requests from access network $j$; $q_i = (q_{ij}, q_{i,-j})$ is the bandwidth requirement vector of user $i$, where $q_{i,-j}$ denotes the bandwidth policy of user $i$ in the networks other than $j$, and the users' bandwidth requirement policy vector is $q = (q_1, q_2, \ldots, q_n)$. If the capacity of network $j$ is $C_j$ and $q_{ij}$ is the bandwidth of user $i$ accessing network $j$, then $\sum_{i=1}^{n} q_{ij} \le C_j$.

User utility function: for the networks a user accesses, the bandwidth-based revenue and the cost of acquiring network services form the user's utility. The utility function of user $i \in N$ is

$F_i(p, q_i, q_{-i}) = U_i\!\left(\sum_{j=1}^{m} q_{ij}\right) - \sum_{j=1}^{m} P_j(p_j, q_{ij}) - \sum_{j=1}^{m} D_j(q_{ij})$   (3.1)


Here $U_i$ is the revenue function of the user for bandwidth demand vector $q_i$, $P_j(p_j, q_{ij})$ is the cost incurred by user $i$ when accessing network $j$, and $D_j(q_{ij})$ is the delay cost in network $j$ caused by network congestion for user $i$. Following [2], we use $U(Q) = a\ln(1 + Q)$ as the bandwidth-based user utility function for elastic traffic flows. A bandwidth-related linear network-service-capability pricing scheme is used to measure the cost of acquiring network bandwidth:

$P_i(q_i) = \sum_{j=1}^{m} p_j q_{ij},\quad q_{ij} > 0$   (3.2)

Reference [3] sets the delay cost as a polynomial function of the network load; this ensures that each user's bandwidth demand strategy and the network service capability strategy can be predicted and supports the non-cooperative game between users. The transmission of user $i$ is assumed to be a traffic flow following a Poisson distribution. If the total load of all users on network $j$ reaches $Q_j \ge C_j$, the network is congested. With a constant $b_j$, the delay cost function is

$D_j(q) = \begin{cases} \dfrac{b_j}{C_j - Q_j}, & C_j > Q_j \\ \infty, & C_j \le Q_j \end{cases}$   (3.3)

Private operator utility function: the revenue obtained by an operator is defined as its utility, $G_j(p, q) = p_j Q_j$, where $Q_j = \sum_{i=1}^{n} q_{ij}$ is the current load of network $j$ and $p_j$ is the current network service capability (price) of network $j$.
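The three functions of this section can be written down directly; the snippet below is an illustrative Python sketch (the names and the option of passing the total load separately are ours), not code from the paper.

```python
import numpy as np

def user_utility(q_i, p, a, b, C, Q_total=None):
    """F_i from (3.1)-(3.3); q_i, p, b, C are length-m arrays over the networks.
    Q_total is the per-network load of all users; it defaults to this user's own demand."""
    q_i, p = np.asarray(q_i, float), np.asarray(p, float)
    b, C = np.asarray(b, float), np.asarray(C, float)
    Q = q_i if Q_total is None else np.asarray(Q_total, float)
    revenue = a * np.log(1.0 + q_i.sum())                 # U(Q) = a*ln(1+Q)
    bandwidth_cost = np.sum(p * q_i)                      # eq. (3.2)
    delay_cost = np.sum(np.where(C > Q, b / (C - Q), np.inf))   # eq. (3.3)
    return revenue - bandwidth_cost - delay_cost

def operator_utility(p_j, Q_j):
    """G_j = p_j * Q_j: service-capability price times the network load."""
    return p_j * Q_j

print(user_utility([1.0, 0.5], p=[0.1, 0.1], a=2.0, b=[1.0, 1.0], C=[30.0, 15.0]))
```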

4 Nash Equilibrium Point of User Non-cooperative Game
At a Nash equilibrium point of a non-cooperative game, every participant attains its maximal utility, but not every non-cooperative game admits a Nash equilibrium. Given the operators' network service capability strategy $p = (p_1, p_2, \ldots, p_m)$, the user utility function is

$F_i(p, q_i, q_{-i}) = a\ln\!\left(1 + \sum_{j=1}^{m} q_{ij}\right) - \sum_{j=1}^{m} q_{ij} p_j - \sum_{j=1}^{m} \dfrac{b_j}{C_j - Q_j}$   (4.1)

First, take the first-order partial derivative with respect to $q_{ij}$:

$\dfrac{\partial F_i(p, q_i, q_{-i})}{\partial q_{ij}} = \dfrac{a}{1 + \sum_{j=1}^{m} q_{ij}} - p_j - \dfrac{b_j}{(C_j - Q_j)^{2}},\quad b_j > 0$   (4.2)

Then take the second-order partial derivative with respect to $q_{ij}$:


$\dfrac{\partial^{2} F_i(p, q_i, q_{-i})}{\partial q_{ij}^{2}} = -\dfrac{a}{\left(1 + \sum_{j=1}^{m} q_{ij}\right)^{2}} - \dfrac{2 b_j}{(C_j - Q_j)^{3}} < 0,\quad q_{ij} > 0$   (4.3)

Since the second-order partial derivative is negative, the user utility function is strictly concave, and a Nash equilibrium point exists. Backward induction [4] is usually used to solve the Stackelberg game; this paper instead uses a distributed iterative algorithm based on incomplete information to obtain a subgame perfect Nash equilibrium solution. At time $t$, $p(t)$ denotes the network service capability policy that the networks announce to the users. We set up a dynamic model in which the rate of change of a user's bandwidth is proportional to the gradient of its utility function; according to the theorem in [4], the iterative algorithm reaches the Nash equilibrium point when it converges.

$\dot{q}_{ij} = \dfrac{\mathrm{d} q_{ij}}{\mathrm{d} s} = \dfrac{\partial F_i(p, q_i, q_{-i})}{\partial q_{ij}}$   (4.4)

Here $s$ is the time variable. During the interval between the current time $t$ and the next time $t+1$, the bandwidth policy iteration of user $i$ is

$q_{ij}(s + 1) = q_{ij}(s) + v_i \dot{q}_{ij}$   (4.5)

where $v_i$ is the step size of the user bandwidth policy adjustment. After the users reach the Nash equilibrium, each operator adjusts its network service capability strategy according to its marginal utility. The iteration of the network service capability strategy is

$p_j(t + 1) = p_j(t) + w_j\,\dfrac{\partial G_j(p(t), q(t))}{\partial p_j(t)}$   (4.6)

The marginal utility of the network can be calculated numerically with a small variation $\varepsilon = 10^{-5}$:

$\dfrac{\partial G_j(p(t), q(t))}{\partial p_j(t)} \approx \dfrac{G_j(\ldots, p_j(t) + \varepsilon, \ldots) - G_j(\ldots, p_j(t) - \varepsilon, \ldots)}{2\varepsilon}$   (4.7)

Iterative process (a compact sketch follows the list):
(a) At time $t$, each operator formulates its network service capability strategy based on the marginal utility in Eqs. (4.7) and (4.6).
(b) After receiving the new network service capability policy, each user adjusts its bandwidth demand policy according to Eqs. (4.4) and (4.5) at every interval $\Delta s$ until its utility reaches a maximum and all users reach the Nash equilibrium.
(c) If all private operators also obtain their maximal utility at this time, the iteration stops; otherwise, the private operators return to step (a) and the iterative process continues.
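A compact sketch of this iteration, using the utility gradients of Sect. 4 and the parameter values of Sect. 5, is given below. It is our own illustrative implementation: the fixed numbers of outer and inner iterations replace the explicit utility-maximum stopping test, and the variable names are assumptions.

```python
import numpy as np

n_users, m_nets = 16, 2
a = np.array([2.0] * 8 + [1.5] * 8)           # two user classes
C = np.array([30.0, 15.0])                    # public / private LTE capacities (Mbit/s)
b = np.array([1.0, 1.0])
v, w, eps = 0.1, 0.01, 1e-5
q = np.zeros((n_users, m_nets))               # bandwidth demands
p = np.array([0.1, 0.1])                      # service-capability prices

def grad_user(i, q, p):
    Q = q.sum(axis=0)                         # per-network load
    return a[i] / (1.0 + q[i].sum()) - p - b / np.maximum(C - Q, 1e-9) ** 2   # eq. (4.2)

def operator_revenue(p_vec, q):
    return p_vec * q.sum(axis=0)              # G_j = p_j * Q_j

for outer in range(100):
    # (a) operators move along the numerical marginal utility, eqs. (4.6)-(4.7)
    for j in range(m_nets):
        p_hi, p_lo = p.copy(), p.copy()
        p_hi[j] += eps
        p_lo[j] -= eps
        marginal = (operator_revenue(p_hi, q)[j] - operator_revenue(p_lo, q)[j]) / (2 * eps)
        p[j] = max(p[j] + w * marginal, 0.0)
    # (b) users follow the gradient of their utility, eqs. (4.4)-(4.5)
    for inner in range(200):
        for i in range(n_users):
            q[i] = np.clip(q[i] + v * grad_user(i, q, p), 0.0, None)

print("prices:", p)
print("loads:", q.sum(axis=0))
```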

5 Simulation Results
In the simulation model, one public-network LTE base station and one private-network LTE base station form a heterogeneous wireless network environment. The public-network LTE transmission bandwidth is set to 30 Mbit/s and the private-network LTE transmission bandwidth to 15 Mbit/s. Within the coverage of the two networks there are 8 users with $a = 2$ and 8 users with $a = 1.5$. The initial bandwidth policies of both user types on both networks are 0, the initial service capability policies of the two operators are both 0.1, and $b_1 = b_2 = 1$, $v_1 = v_2 = 0.1$, $w_1 = w_2 = 0.01$. Figure 1 shows the bandwidth policy curves during the iteration for the users with $a = 2$: the bandwidth acquired from the private-network LTE gradually increases, reaches a maximum of 1.30 Mbit/s, and then decreases slowly until it stabilizes at 1.25 Mbit/s, while the bandwidth in the public-network LTE gradually increases and stabilizes at 3.49 Mbit/s.

Fig. 1. Changes in the bandwidth policy during the iteration for users with a = 2

Figure 2 shows the bandwidth policy curves for the iteration of the users with $a = 1.5$: the bandwidth acquired from the private-network LTE reaches a maximum of 1.26 Mbit/s and then decreases slowly until it stabilizes at 1.19 Mbit/s, while the bandwidth in the public-network LTE increases to 2.59 Mbit/s and then remains stable.

Fig. 2. Changes in the bandwidth policy during the iteration for users with a = 1.5

Figure 3 shows the changes of the two networks' throughputs (loads) during the user iteration. As the users request bandwidth from the public-network LTE, its load rises to 27.99 Mbit/s and then remains stable. The load of the private-network LTE increases to 10.4 Mbit/s and then settles at 9.99 Mbit/s.

Fig. 3. Network throughput versus the number of user iterations

Figure 4 shows the impact of three factors, namely bandwidth revenue, the cost of acquiring bandwidth, and the delay cost, on the utility function of the users with $a = 2$. As the bandwidth requested from the public-network LTE and the private-network LTE gradually increases, both the user's revenue and the cost of obtaining bandwidth increase and then level off slightly before stabilizing. The delay-based cost fluctuates as the network load increases and finally reaches stability at the Nash equilibrium point.

Fig. 4. The influence of the three factors on the utility function of users with a = 2

Figure 5 shows the subgame perfect Nash equilibrium of the Stackelberg game of the two heterogeneous wireless networks. The intersection of the two curves is the Nash equilibrium point $p^{*}$, at which the network service capability strategies simultaneously maximize the utilities of both operators. The coordinates of the intersection correspond to the optimal network service capability strategies of the public-network LTE and the private-network LTE, respectively.

Fig. 5. Nash equilibrium point of the public-network LTE and private-network LTE game

6 Conclusions
In this paper, after the operators determine their network service capability strategies, the users form a non-cooperative game. Under the assumption that the user utility function is concave, the existence of a Nash equilibrium of the users' non-cooperative game is guaranteed. A distributed iterative algorithm based on incomplete information is proposed, which obtains the optimal bandwidth strategy of the users and the best network service capability strategy of the operators.
Acknowledgements. This paper is supported by National Key R&D Program of China (No. 2018YFC0807101).

References
1. Fudenberg D, Tirole J (2002) Game theory. China Renmin University Press, Beijing
2. Jiang Z, Ge Y, Li Y (2005) Max-utility wireless resource management for best-effort traffic. IEEE Trans Wirel Commun 4(1):100–111
3. Altman E, Basar T, Jimenez T et al (2002) Competitive routing in networks with polynomial costs. IEEE Trans Autom Contr 47(1):92–96
4. Rosen JB (1965) Existence and uniqueness of equilibrium points for concave n-person games. Econometrica 33(3):520–534
5. Alpcan T, Basar T (2002) A game-theoretic framework for congestion control in general topology networks. In: Proceedings of the 41st IEEE conference on decision and control. Las Vegas, Nevada, USA, pp 1218–1224
6. Li (2014) Research on power control technology in broadband wireless networks based on game theory. Beijing University of Posts and Telecommunications
7. Wang BB, Liu KJR, Clancy TC (2010) Evolutionary cooperative spectrum sensing game: how to collaborate? IEEE Trans Commun 58(3):890–900
8. Elias J, Martignon F (2010) Joint spectrum access and pricing in cognitive radio networks with elastic traffic. In: IEEE international conference on communications (ICC 2010), pp 1–5
9. MacKenzie AB, DaSilva LA (2006) Game theory for wireless engineers. Synth Lect Commun 1(1):1–86
10. Badia L, Lindstrom M, Zander J, Zorzi M (2003) Demand and pricing effects on the radio resource allocation of multimedia communication systems. Globecom 7:4116–4121
11. Niyato D, Hossain E (2006) Cooperative game framework for bandwidth allocation in 4G heterogeneous wireless networks. In: IEEE conference on communications society, pp 4357–4362

On the Performance of Multiuser Dual-Hop Satellite Relaying Huaicong Kong1, Min Lin2(&), Xiaoyu Liu1, Jian Ouyang1, and Xin Liu3 1

College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China 2 The Key Laboratory of Broadband Wireless Communication and Sensor Network Technology, Ministry of Education, Nanjing University of Posts and Telecommunications, Nanjing 210003, China [email protected] 3 School of Information and Communication Engineering, Dalian University of Technology, Dalian 116024, China

Abstract. This paper studies the system performance of a multiuser dual-hop satellite relaying network in which threshold-based decode-and-forward (DF) is adopted at the relay. We first obtain the maximum end-to-end signal-to-noise ratio (SNR) of the satellite relaying network. Then, assuming that both the uplink and the downlink channels are subject to Shadowed-Rician (SR) fading, we derive the exact expression for the system outage probability (SOP) of the satellite relaying network. Moreover, we obtain the asymptotic SOP at high SNR to provide more insight into the performance in terms of coding gain and diversity order. Finally, numerical examples demonstrate the correctness of the performance analysis and show the impact of the system parameters on the SOP of the satellite relaying network.
Keywords: Multiuser dual-hop satellite relaying · Threshold-based decode-and-forward · System outage performance

1 Introduction
Recently, satellite communication (SatCom) has been considered an indispensable method for future mobile communication (B5G/6G), owing to its potential ability to provide mobile and fixed high-data-rate services over a wide coverage [1, 2]. In this context, many existing works have conducted optimization design or performance analysis for SatCom [3–8]. However, they only take the downlink into account. In practice, the satellite is commonly used as a dual-hop relay, which first receives signals from mobile terminals (MTs) via uplink channels and then forwards them to the earth stations (ESs) through downlink channels. Considering this fact, several works have investigated the performance of satellite relaying systems in the single-user scenario [9–11]. In particular, the exact expressions of the outage probability (OP), ergodic capacity (EC) and average symbol error probability (ASEP) of a satellite relaying system with the amplify-and-forward (AF) protocol were derived in [9], whereas a decode-and-forward (DF) satellite relaying system was analyzed in [10]. Besides, the authors in [11] investigated the influence of hardware impairments on the performance of a satellite relaying network.
In SatCom, the satellite often serves many users within its coverage area. Motivated by this observation, we study a dual-hop multiuser threshold-based DF satellite relaying network, where all the links experience Shadowed-Rician (SR) fading. This is a more general and practical scenario compared with the existing single-user case [9–11]. We first derive a novel exact expression of the system outage probability (SOP) for the proposed satellite communication system. Moreover, we obtain asymptotic SOP expressions at high signal-to-noise ratio (SNR) to show the diversity performance and illustrate that the achievable diversity order is limited by the smaller of the numbers of MTs and ESs.
This work is supported by the Funds for International Cooperation and Exchange of the National Natural Science Foundation of China (Grant No. 61720106003), the National Natural Science Foundation of China (Grant No. 61801234), the Natural Science Foundation of Jiangsu Province (Grant No. BK20160911), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (Grant No. KYCX19_0950).

2 System Model
We consider a dual-hop satellite relaying network with multiple users, in which N mobile terminals (MTs) each act as a source node (S), K earth stations (ESs) each act as a destination node (D), and the satellite acts as the relay (R), as shown in Fig. 1. Due to the long distance, the direct link between any S and D is unavailable. In addition, a Frequency Division Multiple Access (FDMA) scheme is assumed, in which the total carrier bandwidth is split into many sub-bands, each allocated to one user, so that no interference exists among the users.

Fig. 1. System model of a multiuser satellite relaying system (uplink: S-R, downlink: R-D)

The overall transmission is carried out in two phases. In the first phase (S → R), the $i$-th S sends its signal $x_{s,i}(t)$ with normalized power to the satellite via the uplink channel $h_{s,i}$, and the received signal at R can be described as

$y_{s,i}(t) = \sqrt{P_{s,i}}\, h_{s,i}\, x_{s,i}(t) + n_i(t),\quad i \in \{1, 2, \ldots, N\}$   (1)

where $P_{s,i}$ denotes the transmit power of the $i$-th S, and $n_i(t)$ is zero-mean additive white Gaussian noise (AWGN) with variance $\sigma_i^{2}$. The instantaneous signal-to-noise ratio (SNR) at R is therefore

$\gamma_{s,i} = \bar{\gamma}_{s,i}\,|h_{s,i}|^{2}$   (2)

where $\bar{\gamma}_{s,i} = P_{s,i}/\sigma_i^{2}$ is the average transmit SNR of the $i$-th S. In the second phase (R → D), the satellite adopts the threshold-based DF relay protocol. In other words, if $\gamma_{s,i} \ge \gamma_{th}$, with $\gamma_{th}$ a predetermined threshold, R decodes $x_{s,i}(t)$ successfully and then forwards the signal to the destinations; otherwise, R cannot decode correctly. Letting $D$ denote the decoding set, there exist $2^{N}$ possible decoding subsets for the N sources, so the total set $S$ can be expressed as

$S = \{\varnothing, D_1, D_2, \ldots, D_n, \ldots, D_{2^{N}-1}\}$   (3)

where $D_n$ denotes the $n$-th decoding subset of the N sources and the null set $\varnothing$ contains no available source. Therefore, we define

if $\gamma_{s,i} < \gamma_{th},\ \forall i \in \{1, 2, \ldots, N\}$, then $D = \varnothing$;
if $\gamma_{s,i} \ge \gamma_{th},\ i \in D_n$ and $\gamma_{s,j} < \gamma_{th},\ j \in \bar{D}_n$, then $D = D_n$   (4)

where $\bar{D}_n$ is the complementary set of $D_n$. In this case, the signal received at the $k$-th D can be written as

$y_{k,i}(t) = \sqrt{P_i}\, h_{k,i}\, x_{k,i}(t) + n_k(t),\quad i \in D_n$   (5)

where $P_i$ is the power transmitted by R for the $k$-th D, $n_k(t)$ is zero-mean AWGN with variance $\sigma_k^{2}$, and $x_{k,i}(t)$ is the re-encoded signal with normalized power. The output SNR at the $k$-th D is then

$\gamma_{k,i} = \bar{\gamma}_{k,i}\,|h_{k,i}|^{2},\quad i \in D_n$   (6)

where $\bar{\gamma}_{k,i} = P_i/\sigma_k^{2}$ is the average receive SNR of the $k$-th D. To achieve optimal system performance without collaboration between users, the maximum end-to-end output SNR is

$\gamma_{d,i} = \max_{1 \le k \le K} \gamma_{k,i}$   (7)

3 Outage Performance Analysis
In this section, we first derive the analytical SOP expression and then analyze the asymptotic outage behavior of the considered network.
3.1 Exact Outage Probability

The SOP is an important performance indicator of a communication system, defined as the probability that the instantaneous output SNR falls below a certain threshold $\gamma_{th}$. In the considered system, two cases arise: either all transmitted signals $x_{s,i}(t)$ are decoded incorrectly, or the output SNR of an MT belonging to the subset $D_n$ falls below $\gamma_{th}$, where every MT has the same probability of being served. Therefore, the SOP can be obtained as

$P_{out}(\gamma_{th}) = \Pr(D = \varnothing) + \sum_{n=1}^{2^{N}-1} \Pr(D = D_n)\,\dfrac{1}{|D_n|} \sum_{i \in D_n} \Pr(\gamma_{d,i} \le \gamma_{th})$   (8)

According to (4), $\Pr(D = \varnothing)$ and $\Pr(D = D_n)$ are, respectively, given by

$\Pr(D = \varnothing) = \prod_{i=1}^{N} \Pr(\gamma_{s,i} < \gamma_{th}) = \prod_{i=1}^{N} F_{\gamma_{s,i}}(\gamma_{th}),\qquad \Pr(D = D_n) = \prod_{i \in D_n}\left[1 - F_{\gamma_{s,i}}(\gamma_{th})\right] \prod_{j \in \bar{D}_n} F_{\gamma_{s,j}}(\gamma_{th})$   (9)

where $F_{\gamma_{s,i}}(\gamma_{th})$ is the cumulative distribution function (CDF) of $\gamma_{s,i}$. Similar to most previous works [9–11], assuming that both the S → R and R → D links are subject to the SR fading distribution, the probability density function (PDF) of the channel gain $|h_v|^{2}$, with $v \in \{s, d\}$ denoting the uplink and downlink, respectively, is given by [12]

$f_{|h_v|^{2}}(x) = \alpha_v \exp(-\beta_v x)\, {}_1F_1(m_v; 1; \delta_v x),\quad x \ge 0$   (10)

where $\alpha_v = \dfrac{1}{2 b_v}\left(\dfrac{2 b_v m_v}{2 b_v m_v + \Omega_v}\right)^{m_v}$, $\beta_v = \dfrac{1}{2 b_v}$, $\delta_v = \dfrac{\Omega_v}{2 b_v (2 b_v m_v + \Omega_v)}$. Here $\Omega_v$ and $2 b_v$ are the average powers of the line-of-sight (LOS) and multipath components, respectively, and $m_v$ is the Nakagami parameter. ${}_1F_1(a; b; z)$ is the confluent hypergeometric function [13, Eq. (9.14.1)]. According to (10), the PDF of $\gamma_v = \bar{\gamma}_v |h_v|^{2}$ can be expressed as

$f_{\gamma_v}(x) = \dfrac{\alpha_v}{\bar{\gamma}_v} \exp\!\left(-\dfrac{\beta_v x}{\bar{\gamma}_v}\right) {}_1F_1\!\left(m_v; 1; \dfrac{\delta_v x}{\bar{\gamma}_v}\right),\quad x \ge 0$   (11)

ð11Þ

On the Performance of Multiuser Dual-Hop Satellite Relaying

1981

With the help of Maclaurin series expansion for the exponential function ex and the table of integrals in [13], the CDF of cv ¼ cv jhv j2 can be derived as

X

1 av dv x dv x þ N x1 F1 mv ; 2; F l þ 1; m ; l þ 2; 1; v 2 2 cv cv cv l¼1

Fcv ðxÞ ¼ where N ¼ ð1Þl

av blv xðl þ 1Þ

ð l þ 1Þ

ðl þ 1Þ!cv

ð12Þ

and 2 F2 ða1 ; a2 ; b1 ; b2 ; zÞ is the bivariate confluent hyper-

geometric function [13, Eq. (9.14.1)]. On the basis of (7) and (12), Prðcd  cth Þ in (8) is given by K   Y Pr cd;i  cth ¼ Fck;i ðcth Þ k¼1

( ! ! ) K 1 X Y ak;i cth dk;i cth dk;i cth þ N ¼ 1 F1 mk;i ; 2; 2 F2 l þ 1; mk;i ; l þ 2; 1; ck;i ck;i ck;i k¼1 l¼1

ð13Þ Meanwhile, according to (12), (9) can be rewritten as ( ! N Y as;i cth ds;i cth PrðD ¼ ;Þ¼ 1 F1 ms;i ; 2; cs;i cs;i i¼1 þ

1 X

2 F2

l¼1

PrðD ¼ Dn Þ ¼

! ) ds;i cth l þ 1; ms;i ; l þ 2; 1; N cs;i

" ! ! #) 1 X as;i cth ds;i cth ds;i cth þ N 2 F2 l þ 1; ms;i ; l þ 2; 1; 1 F1 ms;i ; 2; cs;i cs;i cs;i i2Dn l¼1 ( ! ! ) 1 X Y as;j c ds;j cth ds;j cth th þ N  2 F2 l þ 1; ms;j ; l þ 2; 1; 1 F1 ms;j ; 2;   cs;j c c s;j s;j  l¼1 j2D Y

ð14aÞ

(

1

ð14bÞ

n

Finally, by substituting (13), (14a) and (14b) into (8), the exact SOP expression of the considered network is obtained directly.
3.2 Asymptotic Outage Probability

To gain further insight into the system performance, we derive the asymptotic SOP at high SNR, $\bar{\gamma}_v \to \infty\ (v = s, d)$, to indicate the diversity order and coding gain. The asymptotic SOP can be written as

$P_{out}^{\infty}(\gamma_{th}) = \prod_{i=1}^{N} F_{\gamma_{s,i}}^{\infty}(\gamma_{th}) + \sum_{n=1}^{2^{N}-1} \prod_{i \in D_n}\left[1 - F_{\gamma_{s,i}}^{\infty}(\gamma_{th})\right] \prod_{j \in \bar{D}_n} F_{\gamma_{s,j}}^{\infty}(\gamma_{th})\,\dfrac{1}{|D_n|}\sum_{i \in D_n}\prod_{k=1}^{K} F_{\gamma_{k,i}}^{\infty}(\gamma_{th})$   (15)


To obtain an analytical expression for (15), we need the asymptotic behavior of the CDF of $\gamma_v$. Since the asymptotic analysis of $F_{\gamma_v}^{\infty}(x)$ keeps only the lowest-order term in $\bar{\gamma}_v$ at high SNR, and since ${}_pF_q(a_1, \ldots, a_p; b_1, \ldots, b_q; x) \to 1$ as $x \to 0$, the asymptotic CDF of $\gamma_v$ from (12) is

$F_{\gamma_v}^{\infty}(x) = \dfrac{\alpha_v}{\bar{\gamma}_v}\, x + O\!\left(\dfrac{1}{\bar{\gamma}_v}\right)$   (16)

where $O(\cdot)$ stands for higher-order terms. Substituting (16) into (15), an asymptotic high-SNR expression of $P_{out}^{\infty}(\gamma_{th})$ in terms of the diversity order and coding gain is given by [14]

$P_{out}^{\infty}(\gamma_{th}) = \left(G_a\,\bar{\gamma}\right)^{-G_d} + O\!\left(\bar{\gamma}^{-G_d}\right)$   (17)

and

Ga ¼

8 > > > > > > > > < 1 > cth

> > > > > > > :

1 cth

N Q

i¼1 1 cth

N Q i¼1

as;i

as;i þ

K Q

k¼1

N1

N Q k¼1

ad;k

ð18Þ ;

ad;k

K1

N\K

N1

;

;

N¼K

ð19Þ

N [K

4 Numerical Results Several numerical examples are provided in this section to confirm the correctness of the performance analysis and reveal the influence of various system parameters on the SOP of the considered network. Here, we set cth ¼ 3 dB, and consider three SR fading scenarios are shown in Table 1. Besides, taking after the previous works, we assume cs;i ¼ cr;k ¼ c. In all plots, the user combination, namely, (N, K) means that N resource nodes and K destination nodes. Figure 2 shows the SOP of the considered system for different numbers of S and D nodes. We can look out that the analytical performances are consistent with the Monte Carlo simulations, verifying the correctness of the theoretical formula. Further, the SOP of the considered network is advanced as the increase of S and D nodes. Besides, it can also be observed that the exact results tend to the asymptotic SOP in terms of high SNR, and the diversity order equals minfN; K g, both of which agree well with the results of (18) and (19). Meanwhile, in Fig. 3, we show the SOP curves for the

On the Performance of Multiuser Dual-Hop Satellite Relaying

1983

Table 1. Channel parameters of the considered network Parameters Frequent heavy shadowing (FHS) Average shadowing (AS) Infrequent light shadowing (ILS)

b 0.063 0.126 0.158

m 0.739 10.1 19.4

X 8.97  10−4 0.835 1.29

Fig. 2. SOP versus c for different numbers of S and D nodes

Fig. 3. SOP versus c for different shadowing scenarios

considered system with different shadowing scenarios. It can be found that when the channel is under FHS fading, the system has the worst performance.

1984

H. Kong et al.

5 Conclusion We have conducted performance analysis of a multiuser satellite relaying network with threshold-based DF relay protocol. In particular, we have obtained the exact SOP formula of the satellite communication network. Furthermore, the asymptotic SOP formula in terms of high SNR has been developed to show the coding gain and diversity order of the considered network. Several numerical examples have been presented to confirm the correctness of the performance analysis. Our results highlighted the impact on the number of S and D nodes to the performance and provide useful guidance for design of satellite communication.

References 1. Jia M, Gu X, Guo Q, Xiang W, Zhang N (2016) Broadband hybrid satellite-terrestrial communication systems based on cognitive radio toward 5G. IEEE Wirel Commun 23 (6):96–106 2. Jia M, Liu X, Gu X, Guo Q (2017) Joint cooperative spectrum sensing and channel selection optimization for satellite communication systems based on cognitive radio. Int J Satell Commun Network 35(2):139–150 3. An K, Lin M, Liang T, Wang J, Wang J, Huang Y, Swindlehurst AL (2015) Performance analysis of multi-antenna hybrid satellite-terrestrial relay networks in the presence of interference. IEEE Trans Commun 63(11):4390–4404 4. An K, Lin M, Ouyang J, Zhu W (2016) Secure transmission in cognitive satellite terrestrial networks. IEEE J Sel Areas Commun 34(11):3025–3037 5. Lin Z, Lin M, Wang J, Huang Y, Zhu W (2018) Robust secure beamforming for 5G cellular networks coexisting with satellite networks. IEEE J Sel Areas Commun 36(4):932–945 6. Lin Z, Lin M, Ouyang J, Zhu W, Chatzinotas S (2018) Beamforming for secure wireless information and power transfer in terrestrial networks coexisting with satellite networks. IEEE Signal Process Lett 25(8):1166–1170 7. Lin Z, Lin M, Wang J, De Cola T, Wang J (2019) Joint beamforming and power allocation for satellite-terrestrial integrated networks with non-orthogonal multiple access. IEEE J Sel Top Signal Process 13(3):657–670 8. Lin Z, Lin M, Ouyang J, Zhu W, Panagopoulos AD, Alouini M (2019) Robust secure beamforming for multibeam satellite communication systems. IEEE Trans Veh Technol https://doi.org/10.1109/tvt.2019.2913793 9. Miridakis NI, Vergados DD, Michalas A (2015) Dual-hop communication over a satellite relay and shadowed Rician channels. IEEE Trans Veh Technol 64(9):4031–4040 10. Bhatnagar MR (2015) Performance evaluation of decode-and-forward satellite relaying. IEEE Trans Veh Technol 64(10):4827–4833 11. Guo K, Lin M, Zhang B, Zhu W, Wang J, Tsiftsis TA (2019) On the performance of LMS communication with hardware impairments and interference. IEEE Trans Commun 67 (2):1490–1505 12. Abdi A, Lau WC, Alouini MS, Kaveh M (2003) A new simple model for land mobile satellite channels: first- and second-order statistics. IEEE Trans Wireless Commun 2(3):519– 528

On the Performance of Multiuser Dual-Hop Satellite Relaying

1985

13. Gradshteyn IS, Ryzhik IM (2007) Table of integrals series, and products. Academic Press, New York 14. Wang Z, Giannakis GB (2003) A simple and general parameterization quantifying performance in fading channels. IEEE Trans Commun 51(8):1389–1398

Architectures and Key Technical Challenges for Space-Terrestrial Heterogeneous Networks Yang Zhang1(&), Chao Mu1, Zhou Lu1, Fangmin Xu2, and Ye Xiao2 1

2

China Academic Electronic and Information Technology, Beijing 100041, People’s Republic of China [email protected] School of Information and Communication Engineering, Beijing University of Post and Telecommunications, Beijing 100876, People’s Republic of China

Abstract. Possessing the capability of breaking through the limitation of geographic conditions, Space-Terrestrial Heterogeneous networks play a significant role for global coverage capacity, which however pose new challenges to the network architecture design. In this paper, we first propose novel architectures for the considered space-terrestrial heterogeneous networks. On this basis, some key technical issues in the developed architectures are comprehensively discussed, including protocol design, Integrated Routing and so on, which pave the way for extensive applications of the Space-Terrestrial Heterogeneous networks. Keywords: Space-Terrestrial heterogeneous networks  Network architectures  Protocols architectures  Integrated routing

1 Introduction Recently, with the gradual improvement of the ground mobile communication system, the transmission rate and real-time performance of LTE, 5G and other ground mobile communication systems are becoming more and more perfect. However, recent studies show that more than 4 billion people of the world’s population is still blocked from network services. In urban, remote and rural areas, the terrestrial coverage is inefficient with the dramatically increasing capacity demands. Therefore, satellite networks will play a significant role for its global coverage capacity by excluding the limitation of geographic conditions, which have attracted much attention from both the research community and industry. As the unique communication systems, satellite networks can not only provide network services in rural areas, but also support Internet of things, paving the way to new applications, including smart agriculture, environmental monitoring, and smart transportation and so on. Some satellite systems have been proposed and built, such as Inmarsat, Iridium [1], WGS [2], TSAT [3] and so on. The International Maritime Satellite (Inmarsat) system realizes air interface signal processing, service switching, interconnection with the ground network, system operation management and operation support through the ground gateway station. The Iridium ground network consists of system control part © Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 1986–1992, 2020 https://doi.org/10.1007/978-981-13-9409-6_240

Architectures and Key Technical Challenges for Space-Terrestrial

1987

and the gateway station to interconnect with other ground network. Recently, some software-defined architecture for satellite networks is proposed in [4–6]. Through the investigation and analysis of typical space-based information systems at home and abroad, the networking architecture can be classified into three categories: Satellite-Ground Network, Space-Based Network and Space-Ground Network. Satellite-Ground Network is widely used network architecture at present, including Inmarsat, Intelsat, WGS and other systems. It is characterized by the fact that there is no networking between the satellites in the sky, but the global service capability of the whole system is implemented through globally distributed ground stations. Spacebased network is another networking architecture, and the typical system is Iridium and so on. Its characteristic is that it adopts inter-satellite networking to form an independent space-based network. The whole system can run independently of the ground network. Space-ground network is between the above two networking architectures. A typical example is the TSAT project which is characterized by the cooperation of space-based network and ground networks to form a space-ground integrated information network. Under this networking framework, space-based networks make use of their high position and broad coverage to achieve global coverage. This paper proposes architectures of space-terrestrial heterogeneous networks, and discusses some key technologies. The remainder of this paper is organized as follows. In Sect. 2 we propose architectures of Space-Terrestrial Heterogeneous networks. In Sects. 3 and 4, we present the protocol framework and the routing protocol of SpaceTerrestrial heterogeneous networks, respectively. Finally, we conclude our paper in Sect. 5.

2 Architecture of Space-Terrestrial Heterogeneous Networks Considering the situation of our country, we believe that space-ground Network is a suitable network structure for the space-ground integrated information network. The Space-Terrestrial Heterogeneous network is formed by the interconnection of GEO space-based network, LEO space-based network, ground-based node network. The heterogeneous network systems, including special satellite communication systems, the ground Internet, the mobile communication network and other ground networks, can provide safe and reliable network and information services for users of land, sea, air and space (Fig. 1). The GEO space-based network is composed of backbone nodes in geosynchronous orbit through laser and microwave links. It realizes backbone interconnection and backbone access functions, and provides global relay and broadband access services for space-based and surface users. The LEO space-based network consists of low-orbit access nodes forming an integrated low-orbit constellation, which provides global mobile communication, broadband access and regional enhanced communication services, as well as information services including aerial/navigational target surveillance, data acquisition and return. Ground-based node network is constructed by interconnecting several backbone nodes. It has the functions of interconnecting space-based backbone/access nodes, interconnecting ground network, and supporting the spaceground integrated operation and the maintenance control and application services.

1988

Y. Zhang et al.

GEO space-based network

LEO spacebased network special satellite communication systems

ground-based node network

other ground networks

mobile communication network

ground Internet

Fig. 1. The composition of the space-ground integrated information network

According to the physical structure of the network, the main functions of the integrated information network are transmission, management and security. The functional configuration of each node is shown in Fig. 2. Users Space-based Relay users

LEO Space-based nodes

GEO Space-based nodes

Space -based Broadband access users

Ground Internet

Ground-based backbone nodes

Fig. 2. The functional configuration of network node

LEO Space-based nodes: These nodes mainly provide casual access services for mass users. According to the differences of system design and user services, there are many technical systems, including on-board routing and on-board transparent


forwarding. The former mode supports on-board user access and routing capabilities, while in the latter mode, there is no on-board processing capability, and user access and routing functions are implemented on the ground. In terms of network transmission, the routing and switching functions within the access network are implemented. For security management and control, some modules are deployed, including a small number of security management and control modules within the access network, access control module and user-oriented service gateway management module. GEO Space-based nodes: These nodes mainly provide relay/broadband access services for a small number of key users and also have a variety of technical systems access functions compared with space-based access nodes, including on-board routing and on-board transparent forwarding mode. The former mode can realize on-board user access and routing capabilities. The latter mode does not have on-board processing capabilities so that user access and routing functions are complemented on the ground. In terms of network transmission, the routing and switching functions are realized within the space-based backbone network and between the space-based backbone network and the space-based access network respectively. Besides, lightweight spacebased network security management and control module, access control module and border gateway management module are deployed in order to realize partial security management of the space-based backbone network and the space-based access network in terms of security management and control. Ground-based backbone nodes: These nodes undertake user access functions of space-based access nodes and space-based backbone nodes under on-board transparent forwarding mode. In the section of network transmission, ground-based backbone nodes provide interconnection functions with space-based backbone nodes, spacebased access nodes and the ground Internet and mobile communication networks. What’s more, in term of security management and control, there has deployed the access control module, the border gateway management module and the security control center to realize the security control function of the space-ground integrated information network.

3 Protocols Architectures At present, the network protocol system related to the space-ground integrated information network can be divided into three main categories: TCP/IP, CCSDS and DTN. The main characteristics and applicable scenarios of different network protocol systems are shown in the following table: CCSDS protocol is a “tailor-made” protocol for space missions. With good protocol performance and perfect protocol system, this protocol is widely used by the international space industry, yet it is hard to realize directly interconnection with the ground Internet so that the protocol conversion is necessary. According to the development experience of the ground network, the protocol system for the space-ground integrated network is designed based on the following principles: adhere to the layering thought and simplify the protocol design; simplify the routing strategy and improve the efficiency of management and control according to the hierarchical and domain-divided system; apply the new network technology to realize the continuous evolution of the network based on the open protocol architecture.


As shown in Fig. 3, the framework of the protocol system consists of three parts: network transmission, integrated application and security management. The main functions of each part are introduced as follows:

Fig. 3. The framework of the space-ground integrated network protocol architecture

Network transmission: This function mainly implements the bearing functions of service applications, including physical transmission, network routing, end-to-end transmission and so on. Based on TCP/IP protocol of the terrestrial Internet, this part combines with the advantages of space network CCSDS and DTN protocol and introduces the software definition network (SDN) and information center network (ICN) according to characteristics of the space-ground integrated information network, which can realize the on-demand transmission of information in the network. Integrated applications: This function mainly realizes the service application functions of user side, including time statistic, remote sensing, measurement and control, command and control, conventional services and so on. According to the applicable scenarios of the space-ground integrated information network, the integrated application usually combs and analyses the information transmission service requirements of various services on the network, and adopts the concept of service-oriented architecture (SOA) to support the development of different tasks in the space-ground integrated information network. Security management: This part mainly realizes the controllable and manageable functions of the network, including security protection and operation management and control. To solve the problem of the open electromagnetic environment of space-based network and the limited resources of nodes, the space-ground integrated network adopts the idea of space-terrestrial coordination and hierarchical management to realize the effective protection and efficient management and control of the space-ground integrated information network through the lightweight on-board deployment and the centralized and unified security management and control on the ground.


4 Integrated Routing of Space-Terrestrial Heterogeneous Networks There exists a demand to achieve network interconnection and integration of spaceterrestrial heterogeneous networks. Therefore, based on Ipv6 address expansion space, we need to study the unified addressing scheme of network layer combined with the technical system characteristics of each network system. This study aims at benefiting the design of the space network routing protocol, reducing the complexity of spatial network routing, addressing technology, and making the integration of space networks and ground networks routing mechanism more convenient. Furthermore, we also research and formulate multi-network integration protocol, transition protocol and mechanism, and some key technologies, including multi-dimensional network protocol, high dynamic scalable routing protocol, pre-planning adaptive dynamic routing protocol in medium and low orbit, coarse/fine-grained multi-protocol interaction and network end-to-end quality of service (QoS) guarantee, which achieves the convergence and interconnection of space-terrestrial heterogeneous networks. In order to achieve integration, the space-terrestrial heterogeneous network will provide support for high-speed mobile and highly dynamic nodes, as well as support for the access of massive heterogeneous network users under the overall framework of the dual backbone of space and ground. The core protocol of network layer adopts IPv6 protocol, and the space network address adopts IPv6 address format. In order to adapt to space-constrained computing, storage and bandwidth resources, we can address addresses of space network so that the aggregation of addresses and the extensibility of routing can be improved and the needs of highly dynamic network nodes and access nodes can be satisfied. Here, we mainly consider the global uncast address, link-local address, site local address and other addresses which do not need to be advertised. These addresses can directly follow the existing IPv6 address mode and the multicast address can also follow the existing scheme. Outside the space network, the integrated network interconnection center plays an important role in connecting the space network with the Internet and the mobile communication network, while the effect mainly embodied in the notification of the route prefix and accessibility calculation in the network layer. Because the border gateway protocol BGP on the Internet is policy-based, the valley-free characteristic of inter-domain routes is determined by the commercial relationship between autonomous systems. Thus, it is necessary to establish a certain BGP relationship between space networks and external autonomous systems. Within the space network, the main components include the ground-based backbone network, the space-based backbone network, the space access network and various users. The whole space network is not a single autonomous system, but it is also divided into several autonomous systems and a hierarchical and multi-priority routing mechanism is adopted. The division of autonomous systems within the space network should follow: (1) Satellites within an autonomous system belong to a single management organization; (2) an autonomous system should not be divided into multiple constellations as far as possible. If the autonomous system has to be split into different constellations, the Multihoming technology can be used for multi-path addressing.


For routing inside the space network, space network devices are extremely limited in computing and storage capability. Static/OpenFlow routing greatly reduces the functionality required of ordinary network devices and instead places stronger demands on centralized controllers, which fits the characteristics of space network devices to a certain extent. At the same time, the space network needs distributed strategies to ensure a self-healing mechanism when a satellite-ground link fails. Therefore, the integrated network adopts the extended inter-domain routing protocol BGP+, the extended intra-domain routing protocol OSPF+, and static/OpenFlow flow-table configuration. The topology control strategy of the satellite network directly affects the design of the routing technology. Because of the strict orbital motion of the satellites, the dynamic topology of the satellite network is periodic and predictable. Based on this characteristic, satellite-network routing generally adopts a topology control strategy to shield the dynamics of the topology, including the virtual topology strategy, the virtual node strategy and the coverage-area division method. Routing optimization is then carried out for each static topology in the sequence, as sketched below.
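As an illustration of the virtual topology strategy mentioned above, the following minimal Python sketch (not part of the original design) divides the orbital period into time slices, each with its own static inter-satellite link topology, and precomputes shortest-path routes for every slice; the node names, link costs and slice boundaries are hypothetical.

```python
# Minimal sketch of the virtual-topology strategy: the orbital period is divided
# into time slices, each with a static inter-satellite link topology, and routes
# are precomputed per slice.
import heapq

def dijkstra(links, src):
    """Shortest-path tree for one static topology snapshot.
    links: {node: [(neighbor, cost), ...]} adjacency for this time slice."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, c in links.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return prev

def precompute_routes(snapshots, src):
    """snapshots: list of (start_time, end_time, links) covering one period.
    Returns one shortest-path tree per slice, indexed by the slice interval."""
    return {(t0, t1): dijkstra(links, src) for t0, t1, links in snapshots}

# Hypothetical 3-satellite example with two topology snapshots.
snapshots = [
    (0, 300, {"S1": [("S2", 1)], "S2": [("S1", 1), ("S3", 1)], "S3": [("S2", 1)]}),
    (300, 600, {"S1": [("S3", 1)], "S3": [("S1", 1), ("S2", 1)], "S2": [("S3", 1)]}),
]
routes = precompute_routes(snapshots, "S1")
```

At run time a node only needs to look up the tree for the slice containing the current time, which shields the routing layer from the topology dynamics.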

5 Conclusions

The space-terrestrial heterogeneous network is an important way to solve the problem that the coverage of terrestrial mobile communication systems is seriously inadequate and cannot meet the communication needs of aviation, ocean and remote areas. Based on the current design of the space-ground integrated information network, this paper proposes a space-terrestrial heterogeneous network architecture and studies the protocol framework, network-wide integrated routing and related issues, providing a line of thought for constructing the space-ground integrated network.

Acknowledgements. This work is supported in part by Beijing Municipal Science and Technology Commission Research under Project Z17110005217001, and the National Key Research and Development Project under grant 2016YFB0800305.

References
1. Iridium Satellite Communications. http://www.iridium.com. Cited 15 Jan 2019
2. Wideband Global SATCOM (WGS) Satellite-Aerospace Technology. http://aerospacetechnology.com/projects/wgs-satellite/. Cited 15 Jan 2019
3. Transformational Communications Satellite (TSAT). http://www.globalsecurity.org/space/systems/tsat.htm. Cited 15 Jan 2019
4. Xu S, Wang X-W, Huang M (2018) Software-defined next-generation satellite networks: architecture, challenges, and solutions. IEEE Access 6:4027
5. Liu Z, Zhu J, Pan C, Song G. Satellite network architecture design based on SDN and ICN technology
6. Bertaux L, Medjiah S, Berthou P, Abdellatlf S, Haklrl A, Gelard P et al (2015) Software defined networking and virtualization for broadband satellite networks. IEEE Commun Mag 5:54–60

Design and Implementation of the Coarse and Fine Data Fusion Based on Round Inductosyn

Li Jing(&), Cui Chenpeng, and Zhao Xin

Beijing Institute of Space Mechanics & Electricity, Beijing 100094, China
[email protected]

Abstract. In order to obtain the absolute angle in the measuring system, the coarse-channel and fine-channel data of the absolute round inductosyn need to be fused. This paper first proposes two coarse and fine data fusion methods based on the round inductosyn and analyses the advantages and disadvantages of the two methods. Secondly, an improved look-up-table method is applied in the double-channel angle measurement system and validated in high and low temperature environment experiments. The experimental results show that the data fusion algorithm is correct and reliable and can still obtain an accurate absolute angle under high and low temperature conditions.

Keywords: Inductosyn · Coarse and fine · Data fusion

1 Introduction

In sampling servo systems that require high measurement precision, the inductosyn, as a feedback element for angular position, can measure with high precision [1]. Because of the low accuracy of single-channel angle measurement systems, many scholars have studied double-channel angle measurement methods [2], such as two-channel resolvers and round inductosyns. The advantage of a two-channel angle measuring system is that it achieves both a wide measuring range and a high measuring accuracy. However, the two channels are independent of each other and have independent zero values, so to obtain the correct angle a reasonable angle data fusion method must be applied. Some data fusion methods have been implemented in hardware circuits; their weakness is that they need complex circuits and have poor expansibility [3–6]. The correct-table method and the glide-period method require the zero error to be less than 0.5°, which is very hard to achieve in a practical system. To solve this problem, this paper proposes two coarse and fine data fusion methods based on the round inductosyn and analyses the advantages and disadvantages of the two methods. In addition, an improved look-up-table method is implemented on an FPGA, and upper-computer software based on LabVIEW displays the angle in real time in the practical double-channel angle measurement system.

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 1993–2001, 2020 https://doi.org/10.1007/978-981-13-9409-6_241


The high and low temperature environment experiment results show that the data fusion algorithm is correct and reliable.

2 Principle of Inductosyn

The inductosyn is a multiple electromagnetic sensor device used as a position measurement system [7]. The inductosyn sensor belongs to the group of encoder-type positioning sensors [8]. The inductosyn used here is made by the Jiujiang Precision Testing Technology Research Institute, and its speed ratio is 1:180. The inductosyn contains a coarse channel and a fine channel: when the shaft rotates through 360°, the coarse channel outputs one complete sine curve, while the fine channel outputs a complete sine curve every 2° [9]. The rotor excitation signal u_s is given in (1):

u_s = Um sin ωt    (1)

The two coarse-channel outputs of the stator, u_A and u_B, are given in (2):

u_A = Um sin ωt · sin θ,  u_B = Um sin ωt · cos θ    (2)

The two fine-channel outputs of the stator, u_A1 and u_B1, are given in (3), where N is the number of pole pairs (here N = 180):

u_A1 = Um sin ωt · sin(Nθ),  u_B1 = Um sin ωt · cos(Nθ)    (3)

The schematics are shown in Figs. 1 and 2.

Fig. 1. Coarse-channel signals of the inductosyn (excitation, coarse sin, coarse cos)

Fig. 2. Fine-channel signals of the inductosyn (excitation, fine sin, fine cos)
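For illustration only, the following short sketch evaluates the signal model of Eqs. (1)–(3); the excitation frequency and the rotor angle are arbitrary example values, not parameters of the actual system.

```python
# Illustrative evaluation of Eqs. (1)-(3): excitation, coarse- and fine-channel
# outputs of the round inductosyn with N = 180 pole pairs.
import numpy as np

Um, f_exc, N = 1.0, 10e3, 180            # amplitude, assumed excitation frequency, pole pairs
t = np.linspace(0.0, 1e-3, 10000)        # time axis
theta = np.deg2rad(1.0)                  # example mechanical angle of the rotor

u_s  = Um * np.sin(2 * np.pi * f_exc * t)                         # Eq. (1)
u_A  = u_s * np.sin(theta);     u_B  = u_s * np.cos(theta)        # Eq. (2), coarse channel
u_A1 = u_s * np.sin(N * theta); u_B1 = u_s * np.cos(N * theta)    # Eq. (3), fine channel
```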

The coarse and fine dual-channel signals pass through pre-amplification, band-pass filtering and amplitude/phase modulation units into the closed-loop angle-resolving (loop-tracking RDC) chips, which output a 12-bit coarse digital signal and a 16-bit fine digital signal; the coarse and fine digital signals are then fused in the FPGA into 24-bit fusion data. There is a deviation between the fused data and the actual angle, and this error is also corrected in the FPGA. The whole block diagram is shown in Fig. 3.

Fig. 3. Block diagram of the resolving of the inductosyn

3 Analysis of Data Fusion Algorithms

3.1 Simple Table Look-up Algorithm

Ideally, the zero positions of the coarse and fine channels are aligned. For the 180-pole-pair inductosyn, the coarse and fine signals should change strictly according to the rule that when the coarse signal passes through 1°, the fine signal passes through 180°. In the actual system, however, if the zero deviation is too large, there will be a jump after data fusion, and the magnitude of the jump is one measurement cycle. The purpose of the data fusion is to remove this jump; the principle is to correct the coarse data with the fine data and to combine the high bits of the corrected coarse data with the fine data into the final angle measurement result.


The specific data fusion steps are as follows. Firstly, bring the coarse and fine data to the same bit width: if the coarse data has fewer bits than the fine data, append zeros to the coarse data. Secondly, multiply the coarse data by 180 to obtain the 24-bit coarse data. Thirdly, compare the high bits A of the fine data (here bits 1 and 2) with the corresponding bits B of the coarse data (here bits 9 and 10), as shown in Table 1: if A is 00 and B is 11, add 1 to the corresponding high bits of the coarse data (here bits 1–8); if A is 11 and B is 00, subtract 1 from these bits. Finally, combine the corrected coarse high-order data (bits 1–8) with all the fine data (bits 1–16) into the new fused 24-bit data, and divide the 24-bit value by 180 to obtain the fused digital angle. A minimal sketch of this procedure is given after Table 1.

Table 1. Correspondence of coarse and fine data bits for the fusion

Coarse data bit: 1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Fine data bit:   -  -  -  -  -  -  -  -  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
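The following sketch illustrates the simple table look-up fusion described above, assuming a 12-bit coarse code and a 16-bit fine code (as in Fig. 3) and counting bits from the most significant bit (bit 1); it is an interpretation of the steps, not the authors' FPGA implementation.

```python
def simple_fusion(coarse12, fine16, n_coarse_bits=12):
    """Simple table look-up fusion (bit 1 = MSB).
    coarse12: raw coarse-channel code, fine16: raw 16-bit fine-channel code."""
    coarse16 = coarse12 << (16 - n_coarse_bits)      # pad coarse to the fine bit width
    coarse24 = coarse16 * 180                        # scale by the nominal speed ratio

    fine_hi   = (fine16   >> 14) & 0b11              # fine bits 1-2 (two MSBs)
    coarse_b9 = (coarse24 >> 14) & 0b11              # coarse bits 9-10 of the 24-bit value
    top8 = coarse24 >> 16                            # coarse bits 1-8
    if fine_hi == 0b00 and coarse_b9 == 0b11:
        top8 += 1
    elif fine_hi == 0b11 and coarse_b9 == 0b00:
        top8 -= 1

    fused24 = ((top8 & 0xFF) << 16) | fine16         # bits 1-8 from coarse, 9-24 from fine
    angle_deg = (fused24 / 180.0) * 360.0 / 65536.0  # back to a mechanical angle
    return fused24, angle_deg

# Example call with arbitrary codes.
fused, angle = simple_fusion(0x123, 0x4567)
```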

The simple table look-up method is valid only when the zero deviation between the coarse and fine data is less than 0.5° and the speed ratio is exactly 180; in a real system this is mostly not the case. Figure 4 shows the data fusion result of the actual system using the simple table look-up method.

Fig. 4. Data fusion result of the actual system using the simple table look-up method (angle (°) versus sample number; curves: fused data, coarse data, fine data)


The figure shows that there are still jumps after data fusion with the simple table look-up method. To analyse the reason, the original coarse code was recorded whenever the fine data crossed zero. Between 0° and 12° the recorded coarse codes are 288, 608, 928, 1248, 1568 and 1888, so the coarse code period is 320; that is, when the coarse angle passes through 320 × 360/65,536 = 1.7578°, the fine angle passes through 360°, so the actual speed ratio is 360/1.7578 ≈ 204.8 rather than 180. In order to solve this problem, the following method is designed.

3.2 Improved Table Look-Up Algorithm

In order to overcome the limitation of the simple table look-up algorithm, an improved table look-up algorithm is designed. Its steps are as follows (a sketch in code is given after Fig. 5).
1. For a system with a small rotation range (e.g. 0–12° in this paper), the zeros of the coarse and fine data are pre-aligned. Record the original coarse codes at the moments when the fine data crosses zero, e.g. data1, data2, data3, …, where data1 is the code closest to the limit position; then period = data2 − data1 and zero deviation = data1 − period.
2. Subtract the zero deviation from all coarse data to obtain the new coarse data, so that the zeros of the two channels coincide.
3. Calculate the actual speed ratio K of the system: K = 65,536/period.
4. Multiply the coarse data by K to obtain the 24-bit coarse data.
5. Based on Table 1, correct the high bits of the coarse data (bits 1–8) according to the comparison between its bits 9–10 and the high bits of the fine data.
6. Combine coarse bits 1–8 with the 16 fine bits (occupying bits 9–24) to form the new fused 24-bit data.
The flow chart is shown in Fig. 5.

Fig. 5. Flow chart of the improved data fusion algorithm: input the coarse and fine data; pre-align the zeros (period = data2 − data1, deviation = data1 − period, coarse = coarse − deviation); calculate the actual speed ratio K = 65,536/period; scale the coarse data (coarse = coarse × K); correct the high bits of the coarse data (bits 1–8) based on Table 1; combine coarse bits 1–8 with fine bits 9–24
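A corresponding sketch of the improved algorithm is given below. The calibration follows steps 1–3; the final conversion from the fused 24-bit value back to a mechanical angle (dividing by K instead of 180) is our inference from the simple method, not a step stated explicitly in the paper.

```python
def calibrate(zero_cross_codes):
    """zero_cross_codes: coarse codes (on the 16-bit, 0..65535 scale) recorded
    when the fine data crosses zero, ordered from the limit position."""
    data1, data2 = zero_cross_codes[0], zero_cross_codes[1]
    period = data2 - data1                 # coarse counts per fine cycle
    deviation = data1 - period             # zero deviation between the channels
    K = 65536.0 / period                   # actual speed ratio of this system
    return deviation, K

def improved_fusion(coarse16, fine16, deviation, K):
    coarse24 = int(round((coarse16 - deviation) * K))   # zero-aligned, scaled coarse data
    fine_hi   = (fine16   >> 14) & 0b11
    coarse_b9 = (coarse24 >> 14) & 0b11
    top8 = coarse24 >> 16
    if fine_hi == 0b00 and coarse_b9 == 0b11:
        top8 += 1
    elif fine_hi == 0b11 and coarse_b9 == 0b00:
        top8 -= 1
    fused24 = ((top8 & 0xFF) << 16) | fine16
    return (fused24 / K) * 360.0 / 65536.0               # fused mechanical angle in degrees

# Calibration with the codes reported in the paper: period = 320, K = 204.8.
deviation, K = calibrate([288, 608, 928, 1248, 1568, 1888])
```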


The parts highlighted in red in the flow chart mark the improvements. With this method, the data of the actual system were fused again and the result in Fig. 6 was obtained.

Fig. 6. Data fusion result of the actual system using the improved table look-up method (angle (°) versus sample number; curves: fused data, coarse data, fine data)

The result shows that the improved method solves the problem of the simple table look-up algorithm, and the angle is continuous after fusion. The improvement has two essential points: one is correcting the zero deviation, and the other is using the actual speed ratio.

4 Implementation of Data Fusion and Thermal Test

The improved table look-up algorithm is implemented on an FPGA. The FPGA receives the coarse data and the fine data and computes the fusion data; at the same time, it sends the fusion data in real time to a PC via a serial port. The PC runs a LabVIEW program to receive and store the coarse data, the fine data and the fused angle. The system structure is shown in Fig. 7; the green part is the implementation of the data fusion.

Fig. 7. Actual system structure block diagram: the hardware circuit module contains the coarse- and fine-channel signal processing circuits and RDC circuits; the FPGA module performs zero pre-alignment, calculation of the actual speed ratio, correction of the high bits of the coarse data, and combination of the coarse and fine data into the 24-bit fusion data; the LabVIEW module samples, displays and saves the fusion data and the coarse/fine data; a video module with its processing circuit is connected through interface circuits

Figure 8 shows the actual operating system using the inductosyn; it consists of a voice coil motor, the inductosyn and a large-size scanning mirror. Its task is small-angle oscillating scanning imaging.


Fig. 8. Actual scanning system using inductosyn

Place the inductosyn in a high and low temperature chamber and control the ambient temperature to 14 °C (low temperature), 20 °C (common temperature) and 26 °C (high temperature); drive the absolute round inductosyn over a small range of motion; then acquire the absolute fusion data and the original data and obtain the zero deviation under the different temperature conditions. Figure 9 shows the data fusion results at the different temperatures. The fused data are continuous, which indicates that the designed data fusion algorithm is suitable and can withstand a certain harsh environment when working over a small range of temperature changes. Under these circumstances, it has higher reliability and the angle measurement system is more stable.

Fig. 9. Fusion results at different temperatures (angle (°) versus sample number; curves: high temperature, low temperature, common temperature)

5 Conclusions

In order to solve the jump problem in the data fusion of the inductosyn, this paper designs two different fusion algorithms and analyses the disadvantage of the simple table look-up method and the advantage of the improved table look-up method. The improved look-up-table method was then applied in the double-channel angle measurement system based on FPGA and used in an actual scanning angle measurement system. Finally, the validity and reliability of the data fusion algorithm are verified. The results show that the two channels' independent angle values can be converted into an absolute angle value, the margin against fusion failure caused by external factors is increased, and the reliability of the angle measurement system is improved.


References
1. Hou X-Y, Jin L-X (2012) Implementation of an angle measuring system of inductive inductosyn. Opt Precis Eng 12(22):92–95
2. Wang XM, Guo SJ (2011) The multistage revolver transformer angle principle and implementation method. Shanxi Electr Technol 06:24–25
3. Dongchun Z, Yahong J (2006) Coarse-fine data coupling in double-channel and measuring system. Technol Measur Contr 25(9):29–30
4. Wengui P, Jing F, Yu Z (2012) Circuit design and software compensation in an inductosyn angle measuring system. Sci Technol Eng 12(22):5484–5488
5. Xie R, Wang Y, Jiang Z (2009) Angle data coupling method for a coarse-fine double channel angular transducer, China, 200910050315.6
6. Zhan XH, Chen BG (2012) Study on angle combination algorithms of two-channel multi-pole resolver. Micromotors 12:16–17
7. Ni GF (2013) Research and application in high precision dynamic angle measurement system. Aviat Precis Manuf Technol 49(02):18–19
8. Yang Y (2014) Research on rotary transformer. Hunan Agric Machin 41(11):46–47
9. Li J (2013) The operating principle and application of rotary transformer. Equipm Manuf Technol 10:149–150

Lossless Flow Control for Space Networks Zhigang Yu(B) , Xu Feng, Yang Zhang, and Zhou Lu China Academic Electronic and Information Technology, Beijing 100041, People’s Republic of China [email protected]

Abstract. With the advance of space technology, space networks attract more and more attention in both academia and industry. In a network, flow control is the process of managing the rate of data transmission between two nodes to prevent a fast sender from overwhelming a slow receiver. Traditionally, the flow control mechanism used in the Internet allows a node to drop incoming packets when it does not have enough buffer to store them. Unfortunately, this wastes a lot of power, and power is the most precious resource in the space environment. In this paper, we propose a lossless flow control (LFC) mechanism to save power. Instead of dropping packets, LFC sends the congestion information back to the source node by backpressure. Finally, analysis results are presented to demonstrate the efficiency of the proposed mechanism.

Keywords: Space network · Flow control · Lossless · Power efficiency

© Springer Nature Singapore Pte Ltd. 2020
Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 2002–2010, 2020. https://doi.org/10.1007/978-981-13-9409-6_242

1 Introduction

With the rapid development of space technology, human activities have expanded from land to deep sea and deep space. In order to effectively guarantee the needs of communication tasks in fields such as ocean exploration and space exploration, a space-sky-ground-sea integrated network based on the mobile communication network, the Internet and the space network has been constructed to realize the deep integration of space and ground networks [1]. As an important part of the space-sky-ground-sea integrated network, the space network has the advantages of wide-area coverage, flexible networking and freedom from geographical constraints. It can provide diversified network communication services for resource survey, disaster monitoring, meteorological observation, emergency communications, military applications and so on, and it will certainly have a profound and tremendous impact on economic and social development and national defense construction of the mainland [2].

The space network is a communication network composed of various satellites, planes, space stations and other spacecraft deployed in different orbits and carrying out different missions, together with ground stations, interconnected by space microwave/laser links; it extends the network territory from near-earth space to far-field new space. In recent years, especially with the rapid development of inter-satellite networking technology, high-speed inter-satellite links can be used to achieve interconnection between satellites, eliminating the dependence of traditional satellite communications on ground stations. New space network projects emerge one after another, such as Iridium [3], OneWeb [4], O3b [5], Starlink [6] and Telesat [7]. Figure 1 shows the Iridium satellite constellation and its coverage map. Human beings have made up their minds to build a space network available to anyone, anywhere [8].

Fig. 1. This figure shows an example of space network: the left part is the Iridium satellite constellation with 66 active satellites in orbit, the right part is the full global coverage map of Iridium.

As an important supporting technology of the space network, the protocol stack has attracted wide attention in academia, and a lot of research results have been achieved. On the ground, the Internet and mobile communications trend towards an all-IP network architecture, which means that they deliver all user data in IP packets and provide users with "always-on IP" connectivity (IP applications and IP forwarding for short) [9]. Furthermore, the most promising technologies, such as 5G, IoT and AI, are also based on IP. Therefore, many researchers have suggested that the IP protocol may be used in space networks. Accordingly, a series of experiments, such as IRIS (Internet Routing In Space) [10], OMNI (Operating Missions as Nodes on the Internet) [11] and NGSI (Next Generation Space Internet) [12], have been carried out to prove the possibility of using IP in space. However, in the aspect of flow control, the best-effort feature of IP (allowing packets to be dropped when the buffer storage is used up) is not appropriate [13]. In fact, the space network operates in an open electromagnetic environment, and satellite resources such as power, storage and computing are extremely limited [1]. In order to effectively guarantee the availability and reliability of space services and the efficiency of microwave/laser transmission, it is urgent to propose a more power-saving and efficient flow control mechanism.

At present, most researchers at home and abroad have focused on routing algorithms or route planning in space networks [14, 15]. To the best of our knowledge, few studies have been done on the mechanism


of flow control. In this paper, we propose lossless flow control (LFC) for space networks. Instead of dropping packets, LFC sends the congestion information to the source node by backpressure. This paper will try to provide valuable suggestions and reference for the follow-up research of space network flow control (Fig. 2).

Fig. 2. This figure demonstrates the typical structure of a space network: GEO satellites at about 36,000 km, MEO satellites at 10,000–20,000 km, LEO satellites at 700–2,000 km, high-altitude platforms and aircraft at around 10 km, and ground nodes.

The remainder of the paper is organized as follows. Section 2 gives the basic definitions and presents the preliminaries used in the rest of the paper. The motivation of this paper is presented in Sect. 3. Section 4 presents the proposed lossless flow control. The analysis results for different flow control mechanisms are given in Sect. 5. Finally, Sect. 6 concludes this paper.

2 Related Works

2.1 Space Network

The space network mainly includes high-orbit (GEO) satellites, medium-orbit (MEO) satellites, low-orbit (LEO) satellites, high-altitude platforms (HAP), space stations, aircraft and other spacecraft network nodes, as well as a variety of inter-satellite and satellite-to-ground communication links; these links interconnect the network nodes to form a multi-level, three-dimensional space network architecture. As the extension and expansion of the ground network, the space network realizes interconnection and interoperability between satellites in different orbits, space


stations, aircraft and other aerospace vehicles, as well as between ground stations. It extends the network territory from near-earth space to far-field new space, and from two-dimensional coverage of the ground to a three-dimensional, ubiquitous space. Recently, in May 2019, 60 Starlink Internet satellites were successfully launched into orbit, making it possible to realize the dynamic migration of networks and services, which heralds a significant step towards space networks.

Fig. 3. This figure shows the lossy flow control mechanism used in traditional network.


Fig. 4. This figure shows the time line of packet transmission from the source to destination node.

2.2 Flow Control

Generally, in a data communication network, flow control is the mechanism that manages the rate of data transmission between two nodes to prevent a fast sender from overwhelming a slow receiver. On the ground, the IP protocol stack is the most widely used protocol suite for interconnecting networks. By the nature of IP, the default flow control is lossy: a router is permitted to drop incoming packets in case of congestion or when its buffer storage is used up. Figure 3 shows the lossy flow control mechanism used in traditional networks. As shown in the figure, packets are sent from the source node (left) to the destination (right), and each router has five input buffers to store incoming packets. At the present time, the buffer storage in router C is used up, router B has one free buffer and router A has four free buffers. At the next moment, router C has to drop the incoming packet transmitted from router B, router B will receive the packet transmitted from router A, and router A will receive the packet from the source. The dropped packets will be retransmitted after a certain time interval. The details will be discussed in Sect. 3.

3 Motivation

Even though the lossy flow control mechanism allows a router to drop packets and results in unnecessary retransmissions, it performs well as the default option for the ground Internet. There are at least two reasons:
– The transmission time delay is negligible on the ground. Generally, a router on the ground is not too far from its neighbor router; even if the distance between two routers is about 1200 km, the propagation delay is only about 4 ms. However, in the space network, the typical distance between a GEO satellite and the ground station is 36,000 km, i.e. the propagation delay is more than 120 ms (30 times the ground case) [9]. This huge delay makes the cost of packet dropping and retransmission much more expensive.
– The transmission power consumption is acceptable on the ground. On the ground, fiber and copper wire are the most commonly used transmission media, in which the attenuation of signal intensity is relatively small; moreover, power is easily accessible on the ground. These two reasons make the transmission power consumption on the ground acceptable. In contrast, power in space is quite limited, and the long distances make the attenuation of microwave/laser signal intensity severe, so the transmission power consumption cannot simply be ignored.

To sum up, the lossy flow control mechanism cannot be directly introduced into the space network. Different from the ground network, power is quite restricted and the transmission time delay is large in the space network. If packets are allowed to be dropped, the power used to transmit them is wasted and the retransmission delay is large, which will undoubtedly limit the performance of the whole system.


Figure 4 shows the time line of packet transmission from the source to the destination node. At the present time, the buffer storage in router C is full, so the packet from router B to router C has to be dropped. After a certain time (i.e. when the timer expires), the source senses the drop and the packet is retransmitted. Obviously, the drop action introduces a series of costs for the network: one is the power wasted on transmitting the dropped packet, the other is the extra time delay due to retransmission. Therefore, a more efficient and power-saving flow control mechanism must be defined for the space network.

4 Lossless Flow Control

To get rid of the disadvantages of lossy flow control, we propose lossless flow control (LFC) for the space network. LFC's main idea is that a packet is transmitted from an upstream router to a downstream router only if the downstream router has enough free buffer to store the packet. But how can this goal be realized?


Fig. 5. This figure shows the lossless flow control mechanism proposed for space network.

In addition to traditional flow control, LFC introduces one parameter, space, which keeps track of the number of free buffers in the downstream router. No packet is allowed to leave the current router unless it has enough space. Space is decremented when a packet leaves the current router and incremented when a buffer in the downstream router is released. The space parameter can be propagated to the upstream router either over additional links or embedded inside the packets travelling in the opposite direction. Figure 5 shows the principle of the proposed lossless flow control in detail. Packets are sent from the source node to the destination node through routers A, B and C. space = 1 in router A tracks the free buffer in router B, which means the packet in router A can be delivered to router B successfully; space = 0 in router B records the free buffers in router C, which means the packet in router B will be stored and will wait until a buffer in router C is released and the space in router B increases.
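The following minimal sketch (not the authors' implementation) illustrates this credit-based behaviour with space counted in free buffers; in a real system the returned credit would travel over a reverse link or be piggybacked on packets going the other way, and space could equally be counted in bytes.

```python
from collections import deque

class SpaceRouter:
    """Sketch of lossless flow control: each router keeps a 'space' counter with
    the number of free buffers in its downstream neighbour and forwards a packet
    only when space > 0, so nothing is ever dropped."""
    def __init__(self, name, buffers):
        self.name, self.capacity = name, buffers
        self.queue = deque()
        self.upstream = None          # who to credit when we free a buffer
        self.downstream = None
        self.space = 0

    def connect(self, downstream):
        self.downstream, downstream.upstream = downstream, self
        self.space = downstream.capacity - len(downstream.queue)

    def accept(self, pkt):
        self.queue.append(pkt)        # guaranteed to fit: the sender held a credit

    def free_buffer(self):
        """A local buffer was released; return one credit to the upstream router."""
        if self.upstream:
            self.upstream.space += 1

    def forward(self):
        """Send one packet downstream if a credit is available, else store and wait."""
        if self.queue and self.downstream and self.space > 0:
            self.space -= 1
            self.downstream.accept(self.queue.popleft())
            self.free_buffer()        # our own buffer is now free again
            return True
        return False

# Source -> A -> B -> C -> destination, five buffers each as in Fig. 3/Fig. 5.
a, b, c = SpaceRouter("A", 5), SpaceRouter("B", 5), SpaceRouter("C", 5)
a.connect(b); b.connect(c)
for i in range(5):
    a.accept(f"pkt{i}")               # offered load from the source
for step in range(10):
    if c.queue:                       # C delivers one packet to the destination
        c.queue.popleft()
        c.free_buffer()               # credit flows back to B
    b.forward()
    a.forward()
```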


5 Discussion

In this section, we discuss the properties of LFC in detail. First we discuss the implementation details of a space router with the proposed LFC, and then we walk through the time line of the proposed LFC.

5.1 Implementation Details

A traditional router mainly consists of an input buffer unit (IB), a routing computation unit (RC), a switch allocation unit (SA) and an output buffer unit (OB), each component representing one step of the router pipeline. The input unit receives a packet and stores it in the input buffer; the routing computation unit then extracts the source/destination address from the packet header and computes the output port of the next hop; the switch allocation unit subsequently switches the packet to the corresponding output port; and the output buffer unit is mainly responsible for tracking the information of the downstream router to support the decisions of routing computation and switch allocation [16].

[Figure: time line of packet transmission under LFC from the source, through space routers A, B and C, to the destination. When Length(pktA) > space there is not enough buffer in Router C, so Router B stores the packet and waits; when pktB departs Router C, the occupied buffer is freed and space increases by Length(pktB), after which pktA can be forwarded.]

Since the samples are generated in chronological order, the correlation between the input data is very high, which will affect the performance of the neural network. In this case, we can use experience replay to break the correlation between the data. The network selection process is divided into experience tuples, as shown in Fig. 1, and the experience tuples are stored in a playback memory denoted by D. The training data of the neural network are then obtained from D by uniform random sampling. In order to further reduce the correlation among the input data, a target network is built to deal with the TD error. The network parameters used to compute the target r + γ max_a′ Q(s′, a′; θ) are the same as those of the action-value function Q(s, a; θ). An update that increases Q(s, a; θ) would also increase Q(s′, a′; θ), thereby introducing correlation and possibly leading to oscillation or divergence of the policy [10]. To further reduce the correlation, DQN uses a separate network to generate the target, whose parameters are denoted by θ⁻. More precisely, the network Q is cloned every Nu steps to obtain a target network Q̂. Therefore, the network parameters are updated as

θ ← θ + α [r + γ max_a′ Q̂(s′, a′; θ⁻) − Q(s, a; θ)] ∇Q(s, a; θ)    (8)

Fig. 1. NSDQN algorithm

Algorithm input: number of iteration rounds T, state feature dimension n, action set A, step size α, attenuation factor γ, exploration rate ε, current Q network Q, target Q network Q̂, batch size m for gradient descent, target Q network parameter update frequency C.
Algorithm output: Q network parameters.
1. Randomly initialize the value Q for all states and actions; randomly initialize all parameters θ of the current Q network and initialize the parameters θ⁻ = θ of the target Q network Q̂; empty the experience replay set D.
2. Iterate:
(1) Obtain the current network state s, including the number of users of the different services in the different networks, the service for which the user needs to access the network, and the distance between the user and the different network base stations.
(2) Using s as the input of the Q network, obtain the Q value output for all actions; select the corresponding action a from the current Q values by the ε-greedy method.
(3) Execute the current action a in state s to obtain the feature vector s′ of the new state and the reward r.
(4) Store the tuple {s, a, r, s′} in the experience replay set D.
(5) Set s = s′.
(6) Sample m tuples {sj, aj, rj, s′j}, j = 1, 2, …, m, from the experience replay set D and calculate the current target Q value yj = rj + γ max_a′ Q(φ(s′j), a′j; θ).
(7) Using the mean square error loss function (1/m) Σ_{j=1}^{m} (yj − Q(sj, aj; θ))², update all parameters of the Q network by gradient back-propagation of the neural network.
(8) If T mod Nu = 1, update the target Q network parameters: θ⁻ = θ.
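The sketch below illustrates the training loop of the algorithm (experience replay, ε-greedy selection and a target network cloned every Nu steps). It is not the authors' code: a linear function approximator stands in for the neural network, the environment step is a random stub standing in for the heterogeneous-network simulator, and all dimensions and hyper-parameters are example values.

```python
# Minimal NSDQN-style training loop: replay memory D, epsilon-greedy actions,
# periodic cloning of the Q parameters into a target network.
import numpy as np
from collections import deque
import random

n_state, n_action = 6, 2              # example: per-network loads, service type, distances; PDT/B-TrunC
alpha, gamma, eps, Nu, m = 0.01, 0.9, 0.1, 50, 32
W = np.zeros((n_state, n_action))     # current Q-network parameters theta (linear stand-in)
W_target = W.copy()                   # target network parameters theta^-
D = deque(maxlen=10000)               # experience replay memory

def step(state, action):
    """Hypothetical environment: returns (reward, next_state)."""
    return np.random.rand(), np.random.rand(n_state)

state = np.random.rand(n_state)
for t in range(1, 2001):
    q = state @ W
    action = np.random.randint(n_action) if random.random() < eps else int(q.argmax())
    reward, next_state = step(state, action)
    D.append((state, action, reward, next_state))
    state = next_state

    if len(D) >= m:
        batch = random.sample(D, m)                    # uniform random sampling from D
        grad = np.zeros_like(W)
        for s, a, r, s2 in batch:
            y = r + gamma * (s2 @ W_target).max()      # target computed with theta^-
            td = y - (s @ W)[a]
            grad[:, a] += td * s                       # gradient of 0.5*(y - Q)^2 w.r.t. W[:, a]
        W += alpha * grad / m                          # Eq. (8)-style update
    if t % Nu == 0:
        W_target = W.copy()                            # clone Q into the target network
```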

5 Simulation Results

It is assumed that the session arrival rate in the overlapping cell obeys a Poisson distribution with parameter λ0 = 800 h⁻¹, with voice and data services evenly distributed. The session duration of the voice service obeys an exponential distribution with parameter 1/μ1 = 120 s, and the size of the data downloaded by the data service obeys an exponential distribution with parameter 1/μ2 = 8 Mb. The minimum data traffic rate is set to 150 kbps; if this rate cannot be reached, the session is blocked.

Figure 2 shows the distribution of the two services over the two networks before and after learning. It can be clearly seen that in the initial stage of the simulation the two services are almost equally distributed between the two networks, which is obviously not the result we want; in the end, however, most voice services choose to access PDT while most data services choose to access B-TrunC, which matches the attributes of the networks: B-TrunC is suitable for data services and PDT for voice services. Figure 3 shows the distribution of the two types of mobile users over the two networks before and after learning. Among the users of B-TrunC there are more static terminal users than mobile terminal users, while among the users of PDT there are more mobile users than static users. However, owing to the bandwidth advantage of B-TrunC, more users tend to choose B-TrunC, so the number of accesses to B-TrunC is larger than that to PDT in both the mobile and the static case.

Figure 4 shows the change of the cell's session blocking rate depending on whether the terminal's location in the network is distinguished. In the early iterations, the blocking rate converges faster when the terminal location is not distinguished, because without considering the location there are two fewer parameters in the input layer of the neural network, which makes the network converge faster than the one that considers the terminal location. As the number of iterations increases, the blocking rate converges to a smaller value when the terminal location is distinguished. Although the blocking rate also converges when the terminal location is not distinguished, it is always larger than that obtained when the location is distinguished.

Fig. 2. Distribution of two services in two networks

Fig. 3. Distribution of two mobile users in two networks


Fig. 4. Performance comparison of blocking rate

6 Conclusion

In this paper, DQN is applied to the selection of heterogeneous wireless networks. Considering the network load, the service attributes of the initiated sessions, the mobility of the terminals and the difference in maximum bandwidth caused by the terminal's position in the cell, the JRRM controller can reasonably allocate each session to the most suitable network according to the network characteristics. This not only guarantees the service quality of the sessions, but also ensures full use of the network resources.

Acknowledgements. This paper is supported by the National Key R&D Program of China (No. 2018YFC0807101).

References
1. Wen-feng MA (2018) User association for load-balance in heterogeneous M2M networks. In: Proceedings of 2018 2nd international conference on modeling, simulation and optimization technologies and applications (MSOTA 2018). Advanced Science and Industry Research Center
2. Wang XG (2018) Heterogeneous network selection algorithm based on principal component analysis. In: Proceedings of 2018 international conference on computer, communications and mechatronics engineering (CCME 2018). Advanced Science and Industry Research Center
3. Kaelbling LP, Littman ML, Moore AW (2005) Reinforcement learning: an introduction. IEEE Trans Neural Netw 16(1):285–286
4. Kaelbling LP, Littman ML, Moore AW (1996) Reinforcement learning: a survey. J Artif Intell Res
5. Doppler K, Rinne M, Wijting C, Ribeiro C, Hugl K (2009) Device-to-device communication as an underlay to LTE-advanced networks. IEEE Commun Mag
6. Nie J, Haykin S (1999) Q-learning-based dynamic channel assignment technique for mobile communication systems. IEEE Trans Veh Technol
7. Haddad M, Altman Z, Elayoubi SE et al (2010) A Nash-Stackelberg fuzzy Q-learning decision approach in heterogeneous cognitive networks. In: IEEE global telecommunications conference (GLOBECOM 2010)


8. Simsek M, Czylwik A (2012) Decentralized Q-learning of LTE-femtocells for interference reduction in heterogeneous networks using cooperation. In: Proceedings of 2012 international ITG workshop on smart antennas (WSA)
9. Tabrizi H, Farhadi G, Cioffi J (2012) Dynamic handoff decision in heterogeneous wireless systems: Q-learning approach. In: Proceedings of 2012 IEEE international conference on communications (ICC)
10. Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A, Riedmiller M, Fidjeland AK, Ostrovski G, Petersen S (2015) Human-level control through deep reinforcement learning. Nature 518(7540):529–533

Vertical Handover Algorithm Based on KL-TOPSIS in Heterogeneous Private Networks Chen-Guang He1,2(B) , Qiang Yang1 , Shou-Ming Wei1,2 , and Jing-Qi Yang1 1

Communication Research Center, Harbin Institute of Technology, Harbin, China {hechenguang,weishouming}@hit.edu.cn, {yangqiangcr7,yangjq95}@163.com 2 Key Laboratory of Police Wireless Digital Communication, Ministry of Public Security, Harbin, People’s Republic of China

Abstract. At present, China's police mobile communication network mainly includes Police Digital Trunking (PDT) and Broadband Trunking Communication (B-TrunC) networks. It is an urgent task to build a heterogeneous network of broadband and narrowband networks under the background of the mobile Internet. Vertical handover technology is an indispensable key technology in heterogeneous networks. According to the characteristics of private network communication, this paper divides the services into four types and then calculates the subjective weight and objective weight of each network attribute. A Technique for Order Preference by Similarity to an Ideal Solution algorithm based on Kullback-Leibler divergence (KL-TOPSIS) is proposed to sort the candidate networks. Simulation and numerical results show that this algorithm has better vertical handover performance under different services.

Keywords: Private network · PDT · B-TrunC · Vertical handover

1 Introduction

Private mobile communication network is a wireless communication network that provides command and dispatch services for specific users to meet the communication requirements under special circumstances. At present, China’s police mobile communication network mainly includes PDT and B-TrunC. Among them, PDT is a narrowband digital communication system with Chinese independent intellectual property rights, and B-TrunC is a standard for private broadband trunking system based on TD-LTE developed by China. At present, there are some problems in the construction of police private network, which are caused by the physical isolation of broadband and narrowband private networks. It is urgent to achieve breakthroughs and innovations in key technologies of network convergence. c Springer Nature Singapore Pte Ltd. 2020  Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 2020–2028, 2020 https://doi.org/10.1007/978-981-13-9409-6_244


Vertical handover refers to handover between access points of different network technologies, so vertical handover is an indispensable technology in heterogeneous networks. Pahlavan et al. [1] introduce the concept of a dwelling timer: the timer is started when the RSS of the new network is larger than that of the current network, and if the condition is still satisfied when the timer expires, the handover begins. Nasser et al. [2] introduce the concept of a hysteresis level H: when the RSS of an alternative network is larger than that of the current network and the difference between them is greater than H, handover occurs. Alkhawlani et al. [3] introduce AHP as a multi-attribute subjective weight decision algorithm. In [4], Simple Additive Weighting (SAW) is used to determine the target network by combining the differently weighted results. Verma et al. [5] use Gray Relation Analysis (GRA) as the handover decision algorithm. References [6, 7] combine Artificial Neural Networks (ANN) with fuzzy logic theory for the vertical handover process, which can handle fuzzy, uncertain information and obtain the optimal target network from the system output.

This paper presents an improved vertical handover algorithm based on Multi-Attribute Decision Making (MADM). Firstly, according to the characteristics of private network communication, the services are divided into four types. Secondly, the subjective and objective weights of the network attributes are calculated by the Analytic Hierarchy Process (AHP) and the entropy method, respectively. Finally, the KL-TOPSIS algorithm is used to sort the candidate networks and select the network with the highest score for vertical handover.

The remainder of the paper is organized as follows. Section 2 derives the system model and gives the specific parameters of each candidate network. The specific algorithm is given in Sect. 3. Simulated and numerical results are provided in Sect. 4. Section 5 presents the conclusion.

2 Heterogeneous Network System Model

In this paper, we consider a heterogeneous network composed of PDT, B-TrunC and WLAN, in which user terminals have the opportunity to communicate with the access points of all three networks. We choose seven network attributes for the study, namely transmit power, bandwidth, receiving sensitivity, rate, network load, coverage radius and power consumption rate, denoted by Pt, B, Sr, R, L, rc and Cp, respectively. The specific attribute parameters of each candidate network are shown in Table 1.

Table 1. Attribute parameters of each network

          Pt (dBm)  B (kHz)   Sr (dBm)  R (kb/s)  L (%)  rc (km)  Cp
PDT       37        12.5      −116      9.5       25     22.5     2100/12
B-TrunC   23        1 × 10⁴   −95       5 × 10⁴   30     1.5      2240/12
WLAN      10        2 × 10⁴   −100      5 × 10⁴   50     0.3      2150/12


According to the characteristics of private network communication, the services are divided into four types: voice scheduling service, video scheduling service, trunking data service and dispatch center (DC) instruction service. The voice scheduling service is the most basic service type in the heterogeneous network: the dispatch center schedules mobile terminals by voice calls, so it requires very low latency and places high requirements on receiving sensitivity, transmission rate and power consumption rate. The video scheduling service is also a real-time service: the dispatch center uses video calls to obtain real-time video information from the scene, which requires a network with sufficient bandwidth resources and a fast transmission rate, while the network load must not be too large, to avoid congestion. The trunking data service means that the dispatch center sends data packets, such as pictures or video streams, to a group of mobile terminals in the form of group communication; mobile terminals can also browse web pages, play video on demand and so on. Such services require high bandwidth but are not very sensitive to delay. The DC instruction service is a long-term continuous service: the dispatch center periodically sends control signaling, such as registration and authentication instructions, to the mobile terminals. The data volume of this kind of service is usually small and the delay requirement is not strict, but because of the long duration, the requirement on the terminal's power consumption rate is high.

3 Algorithm Description

3.1 Subjective Weight Calculation Based on AHP

AHP uses a layered approach to decompose a complex problem into multiple relatively simple subproblems [8]. The following is the specific process for calculating the subjective weights using AHP.

(a) Constructing the comparison matrix: The comparison matrix is constructed by comparing attribute factors pairwise. The relative importance of one attribute over another is expressed with the 1–9 scale method, as shown in Table 2. Define the comparison matrix C = [cij]n×n, where n is the number of attribute factors and cij is the relative importance of attribute i with respect to attribute j; cij satisfies cij > 0, cij = 1/cji and cii = 1.

Table 2. 1–9 importance scale

cij          Relative importance level
1            i and j are equally important
3            i is slightly more important than j
5            i is more important than j
7            i is much more important than j
9            i is extremely more important than j
2, 4, 6, 8   the median between adjacent levels


(b) Weight calculation: Calculate the maximum eigenvalue λmax of the comparison matrix C and the corresponding eigenvector V = (v1, v2, …, vn), and normalize V according to Eq. (1). The normalized eigenvector α = (α1, α2, …, αn) is the subjective weight of each attribute factor obtained by AHP.

αj = vj / Σ_{k=1}^{n} vk    (1)

However, a consistency check is required; the consistency ratio CR is calculated by (2). When CR < 0.1, the comparison matrix passes the check and α is accepted; otherwise the comparison matrix is re-modified. RI depends on the matrix order n; when n = 7, RI = 1.32.

CR = (λmax − n) / ((n − 1) RI)    (2)
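A small sketch of this subjective-weight computation is given below; the 3 × 3 comparison matrix is a consistent placeholder, not one of the matrices C1–C4 used later in the paper, and RI is passed in because it depends on the matrix order.

```python
# Sketch of the AHP subjective-weight computation (Eqs. (1)-(2)): principal
# eigenvector of the pairwise comparison matrix, normalized, plus the CR check.
import numpy as np

def ahp_weights(C, RI=1.32):                 # RI = 1.32 corresponds to n = 7
    n = C.shape[0]
    eigvals, eigvecs = np.linalg.eig(C)
    k = np.argmax(eigvals.real)
    lam_max = eigvals.real[k]
    v = np.abs(eigvecs[:, k].real)
    alpha = v / v.sum()                      # Eq. (1)
    CR = (lam_max - n) / ((n - 1) * RI)      # Eq. (2)
    if CR >= 0.1:
        raise ValueError("comparison matrix fails the consistency check")
    return alpha, CR

C = np.array([[1,   2,   4],
              [1/2, 1,   2],
              [1/4, 1/2, 1]])                # consistent 3x3 placeholder matrix
alpha, CR = ahp_weights(C, RI=0.58)          # RI = 0.58 for n = 3
```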

3.2 Objective Weight Calculation Based on Entropy Method

The comparison matrix constructed by AHP is based on human experience, so we introduce the entropy method to calculate objective weights and thus modify the subjective weights. The process of the entropy method is as follows.

(a) Constructing the parameter matrix: Select m candidate networks and n attribute factors, and let xij be the parameter value of attribute j of network i. We obtain the candidate network parameter matrix X = [xij]m×n.

(b) Standardization of the attribute parameters: The attributes are divided into positive indicators and negative indicators, which are standardized in different ways. For positive indicators, such as Pt, B, R and rc:

xij = (xij − min{x1j, …, xmj}) / (max{x1j, …, xmj} − min{x1j, …, xmj})    (3)

For negative indicators, such as Sr, L and Cp:

xij = (max{x1j, …, xmj} − xij) / (max{x1j, …, xmj} − min{x1j, …, xmj})    (4)

For convenience, the normalized data are still denoted by xij.

(c) Calculate the proportion of network i under attribute j:

pij = xij / Σ_{i=1}^{m} xij    (5)

(d) Calculate the entropy of attribute j:

hj = −k Σ_{i=1}^{m} pij ln(pij),  with k = 1/ln(n)    (6)

(e) Calculate the objective weight of each attribute:

βj = (1 − hj) / Σ_{j=1}^{n} (1 − hj)    (7)

This yields the objective weight vector β = (β1, β2, …, βn) of the n network attributes.
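The entropy-weight computation of Eqs. (3)–(7) can be sketched as follows; the example matrix reuses the values of Table 1, and k = 1/ln(n) follows the paper's Eq. (6).

```python
# Sketch of the entropy objective-weight computation on the candidate-network
# parameter matrix X (rows: networks, columns: attributes).
import numpy as np

def entropy_weights(X, positive):
    """positive: boolean mask, True for positive indicators (Pt, B, R, rc)."""
    X = X.astype(float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    Z = np.where(positive, (X - lo) / (hi - lo), (hi - X) / (hi - lo))  # Eqs. (3)-(4)
    P = Z / Z.sum(axis=0)                                               # Eq. (5)
    k = 1.0 / np.log(X.shape[1])                                        # k = 1/ln(n), as in Eq. (6)
    logP = np.log(np.where(P > 0, P, 1.0))                              # define 0*ln(0) = 0
    H = -k * (P * logP).sum(axis=0)                                     # Eq. (6)
    return (1 - H) / (1 - H).sum()                                      # Eq. (7)

# Values from Table 1 (rows: PDT, B-TrunC, WLAN).
X = np.array([[37, 12.5, -116, 9.5, 25, 22.5, 2100/12],
              [23, 1e4,  -95,  5e4, 30, 1.5,  2240/12],
              [10, 2e4,  -100, 5e4, 50, 0.3,  2150/12]])
positive = np.array([True, True, False, True, False, True, False])
beta = entropy_weights(X, positive)
```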

3.3 Candidate Network Sorting Based on KL-TOPSIS

We combine the subjective weights and the objective weights into comprehensive weights W = (w1, w2, …, wn) using Eq. (8):

wj = (αj × βj) / Σ_{j=1}^{n} (αj × βj)    (8)

Then the improved TOPSIS algorithm is used to sort the candidate networks. The traditional TOPSIS algorithm selects the best solution by calculating the Euclidean distance between the attribute vector of each candidate and the positive and negative ideal solutions [9]. However, when a candidate scheme is close to both the positive ideal solution and the negative ideal solution, the traditional TOPSIS algorithm may produce a biased ranking. Therefore, this paper uses the KL divergence to calculate the distance; the KL-TOPSIS method can effectively avoid the errors caused by this situation. The process is as follows.

(a) Standardize the parameter matrix X from Sect. 3.2(a):

xij = xij / √(Σ_{i=1}^{m} xij²)    (9)

For convenience, the normalized matrix is still denoted by X.

(b) Calculate the weighted matrix: the weighted matrix is obtained by combining the weights with the normalized matrix:

xij = xij · wj    (10)

(c) Determine the positive ideal solution V⁺ = (v1⁺, v2⁺, …, vn⁺) and the negative ideal solution V⁻ = (v1⁻, v2⁻, …, vn⁻):

vj⁺ = max_i xij if j is a positive indicator, min_i xij if j is a negative indicator    (11)

vj⁻ = min_i xij if j is a positive indicator, max_i xij if j is a negative indicator    (12)

(d) Calculate the relative entropy distance between each candidate network and the positive and negative ideal solutions using the KL divergence, denoted by D⁺ = (d1⁺, d2⁺, …, dm⁺) and D⁻ = (d1⁻, d2⁻, …, dm⁻):

di⁺ = Σ_{j=1}^{n} [ vj⁺ lg(vj⁺ / xij) + (1 − vj⁺) lg((1 − vj⁺) / (1 − xij)) ]    (13)

di⁻ = Σ_{j=1}^{n} [ vj⁻ lg(vj⁻ / xij) + (1 − vj⁻) lg((1 − vj⁻) / (1 − xij)) ]    (14)

(e) Calculate the comprehensive evaluation value of each candidate network:

Ti = di⁻ / (di⁻ + di⁺),  i = 1, 2, …, m    (15)

The descending order of Ti is the sorting result of each candidate network. The higher the ranking, the more suitable this network is for handover.
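The ranking step of Eqs. (9)–(15) can be sketched as below. The sketch assumes all attribute values are positive so that the logarithms in Eqs. (13)–(14) are defined (the receiving sensitivities are therefore entered as magnitudes, which turns Sr into a positive indicator), takes lg as log10, and uses an equal-weight vector as a stand-in for the service-specific weights W1–W4.

```python
# Sketch of KL-TOPSIS ranking: vector-normalize and weight the parameter matrix,
# pick positive/negative ideal solutions, measure the relative entropy to each,
# and rank the candidates by T_i (larger is better).
import numpy as np

def kl_topsis(X, w, positive):
    X = X / np.sqrt((X ** 2).sum(axis=0))                       # Eq. (9)
    X = X * w                                                   # Eq. (10)
    v_pos = np.where(positive, X.max(axis=0), X.min(axis=0))    # Eq. (11)
    v_neg = np.where(positive, X.min(axis=0), X.max(axis=0))    # Eq. (12)

    def rel_entropy(v, x):
        return (v * np.log10(v / x) + (1 - v) * np.log10((1 - v) / (1 - x))).sum()

    d_pos = np.array([rel_entropy(v_pos, X[i]) for i in range(X.shape[0])])  # Eq. (13)
    d_neg = np.array([rel_entropy(v_neg, X[i]) for i in range(X.shape[0])])  # Eq. (14)
    return d_neg / (d_neg + d_pos)                              # Eq. (15)

# Table 1 values with |Sr| instead of Sr, so sensitivity becomes a positive indicator.
X = np.array([[37, 12.5, 116, 9.5, 25, 22.5, 2100/12],
              [23, 1e4,  95,  5e4, 30, 1.5,  2240/12],
              [10, 2e4,  100, 5e4, 50, 0.3,  2150/12]], dtype=float)
positive = np.array([True, True, True, True, False, True, False])
w = np.full(7, 1/7)                                             # equal-weight stand-in
ranking = np.argsort(-kl_topsis(X, w, positive))                # network indices, best first
```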

4 Simulation Results

The simulation scenario in this paper uses the heterogeneous network described in Sect. 2. It is assumed that the RSS of PDT, B-TrunC and WLAN are all larger than their respective thresholds, that is, all three networks can be accessed. The specific attribute parameters of each candidate network use the data in Table 1. First, a comparison matrix is constructed according to the requirements of each service type on each attribute; the 7 × 7 comparison matrices of the voice scheduling service, the video scheduling service, the trunking data service and the DC instruction service are denoted C1, C2, C3 and C4, respectively.


According to the above comparison matrices and the algorithm in Sect. 3, the combined weights of the voice scheduling service, video scheduling service, trunking data service and DC instruction service are calculated as W1, W2, W3 and W4:

W1 = (0.0624, 0.0429, 0.3082, 0.2012, 0.1107, 0.2114, 0.0631)
W2 = (0.0614, 0.1653, 0.0395, 0.2604, 0.2515, 0.1262, 0.0956)
W3 = (0.0616, 0.3454, 0.0862, 0.2065, 0.1397, 0.0844, 0.0762)
W4 = (0.0726, 0.0749, 0.0794, 0.1520, 0.1888, 0.1059, 0.3264)

According to the network attribute parameters in Table 1 and the weight sets of the different service types, the KL-TOPSIS algorithm is used to obtain the comprehensive evaluation values of the three candidate networks, which are then normalized. The networks are then sorted and compared with the SAW method. The results are shown in Fig. 1.

Fig. 1. Network sorting results under the four service types: (a) voice scheduling service, (b) video scheduling service, (c) trunking data service, (d) DC instruction service


As shown in Fig. 1a, the network sorting result of KL-TOPSIS under the voice scheduling service is PDT > WLAN > B-TrunC, which is basically consistent with SAW. It can be seen that PDT has a greater advantage than WLAN and B-TrunC in this case: the PDT network can indeed handle the voice service well, with clear voice quality and low cost. As shown in Fig. 1b, the result of KL-TOPSIS under the video scheduling service is B-TrunC > WLAN > PDT, while the three network scores obtained by SAW are very close and cannot give a clear ranking. Video scheduling services need larger bandwidth and faster response, and WLAN usually carries a larger network load, so B-TrunC ranks higher. As shown in Fig. 1c, the result of KL-TOPSIS under the trunking data service is WLAN > B-TrunC > PDT, and the score of WLAN is close to that of B-TrunC; this is because the trunking data service has no strict delay requirement, so there is little difference between using WLAN or B-TrunC. In this case, the PDT score obtained by SAW is close to that of B-TrunC, which fails to reflect the advantage of B-TrunC in data transmission. As shown in Fig. 1d, the scores of the three networks obtained by KL-TOPSIS and SAW under the DC instruction service are similar, because the packets transmitted by the DC instruction service are relatively small and have no strict delay requirement; therefore, all three networks can be used to transmit DC instruction services.

5 Conclusion

In this paper, we introduce an improved MADM-based vertical handover algorithm for heterogeneous networks, which combines subjective and objective weights when calculating the attribute weights. We then use KL-TOPSIS for network sorting to make the results more scientific and reasonable. The simulation and numerical results show that the proposed algorithm can effectively select the most suitable network for vertical handover according to the type of service, and has better performance than the traditional SAW.

Acknowledgements. This paper is supported by the National Key R&D Program of China (No. 2018YFC0807101).

References
1. Pahlavan K, Krishnamurthy P, Hatami A et al (2000) Handoff in hybrid mobile data networks. IEEE Pers Commun 7(2):0–47
2. Nasser N, Hasswa A, Hassanein H (2006) Handoffs in fourth generation heterogeneous networks. IEEE Commun Mag 44(10):96–103
3. Alkhawlani MM, Alsalem KA, Hussein AA (2011) Multi-criteria vertical handover by TOPSIS and fuzzy logic. In: 2011 international conference on communications and information technology (ICCIT). IEEE


4. Savitha K, Chandrasekar C (2011) Vertical handover decision schemes using SAW and WPM for network selection in heterogeneous wireless networks. Int J Comput Sci Issues 8(3)
5. Verma R, Singh NP (2013) GRA based network selection in heterogeneous wireless networks. Wirel Pers Commun 72(2)
6. Ghahfarokhi BS, Movahhedinia N (2013) Context-aware handover decision in an enhanced media independent handover framework. Wirel Pers Commun 68(4):1633–1671
7. Shanmugavel S, Sivakami T (2013) Performance analysis of fuzzy logic based vertical handoff decision algorithm for heterogeneous networks. Asian J Sci Res 6(4):763–771
8. Saaty TL (2008) Decision making with the analytic hierarchy process. Int J Serv Sci 1:83–98
9. Li Q, Zhao X, Lin R et al (2014) Relative entropy method for fuzzy multiple attribute decision making and its application to software quality evaluation. J Intell Fuzzy Syst Appl Eng Technol 26(4):1687–1693

A Deep Deformable Convolutional Method for Age-Invariant Face Recognition Hui Zhan(B) , Shenghong Li, and Haonan Guo School of Cyber Space Security, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240, China [email protected], [email protected], [email protected]

Abstract. With the rapid development of deep learning, face recognition has also improved dramatically. However, facial change still strongly affects recognition accuracy, because complex factors such as aging, health state and emotion are hard to model. Unlike previous methods that decompose facial features into age-related and identity-related parts, we propose an end-to-end method that introduces deformable convolution into a deep discriminative model and automatically learns how facial characteristics change over time, and we test its effectiveness on multiple datasets.

Keywords: Face recognition · Age-invariant · Deformable convolutional networks

1 Introduction

Face recognition is one of the most popular tasks in computer vision, and it has been significantly improved by the development of the Deep Convolutional Neural Network (DCNN). Research on face recognition concentrates on general face recognition (GFR) and usually consists of three parts: face detection and alignment, feature extraction, and face verification or identification. However, the accuracy of GFR is limited by illumination, pose, expression and age. Age-invariant face recognition (AIFR) is a sub-task of GFR and has attracted widespread attention in security monitoring for decades, for example in searching for missing children and pursuing fugitives. Facial features changing with age is a complex process that increases the difficulty of AIFR. From birth to the teenage years, craniofacial growth is an important factor in face changes. After reaching adulthood, changes in skin and texture become the most important factor. Figure 1 clearly illustrates the process of a person's facial features changing with age. Meanwhile, different individuals have specific age-independent characteristics; for instance, some people look young even if


they are in middle age, while others look older than their actual age. That makes it impossible for AIFR to recognize faces through a simple model.

Fig. 1. The process of a person’s facial change with age from FG-NET [1]

Research on AIFR focuses on two categories of approaches: generative approaches and discriminative approaches. The traditional generative approaches [2, 3] address AIFR by synthesizing fixed-age samples. These approaches require a large number of assumptions and parameters, which makes the entire task computationally expensive yet ineffective. This situation has improved with the development of Generative Adversarial Networks (GAN). However, the generative schemes [4, 5] still have several significant drawbacks, such as instability and difficulty in optimizing end-to-end models, and they can hardly model important factors like social environment, diet and accidents. The discriminative approaches [6–10] concentrate on constructing a discriminative model for identification or recognition by extracting identity-specific features that are unaffected by age. The discriminative models require less computation and achieve better performance. However, how to design a discriminative model that extracts identity-related features is still a challenge. Since a person can easily recognize another person even if they have not seen each other for years, the process of facial features changing with age is a continuous and distinguishable process. Motivated by previous research, we regard the facial feature changing process as a micro-deformation and propose a novel method to solve the AIFR problem based on the Deformable Convolutional Network (DCN) [11], which automatically learns facial deformation in an end-to-end way, extracts distinguishable features, and finally achieves AIFR. The main contributions of this paper are summarized as follows:

1. We model facial features changing with age as a process of facial deformation, and build a model to extract identity-specific features, which does not need any restrictive assumption.
2. Extensive experiments prove the proposed model's effectiveness and simplicity.

2 Related Work

The difficulty of GFR is how to maximize the inter-personal distance while minimizing the intra-personal distance. A series of approaches achieve good results by optimizing the loss function. Reference [12] adds a feature center to the softmax loss function to optimize the intra-class distance. Reference [13] regards GFR as a metric learning task by combining the contrastive loss [14] and the softmax loss, and brings significant improvements in GFR. Reference [15] introduces the triplet loss to directly minimize the intra-class distance and maximize the inter-class distance. References [16, 17] improve the softmax loss by using weight normalization, setting the bias to 0, and learning an angular margin between different individuals. However, general face recognition methods cannot be transplanted directly to age-invariant face recognition. In AIFR, a typical method is hidden factor analysis (HFA) [6], which considers the face feature as a linear combination of an age-specific factor and an identity-specific factor. Based on this assumption, HFA employs LBP to extract face texture features and utilizes the Expectation-Maximization (EM) algorithm to iteratively estimate the identity-specific factor from the face texture features. Reference [7] modifies HFA and introduces another factor to simulate appearance changes of different individuals. Reference [8] assumes that people who look similar when they are young will still look similar when they grow old, and thus proposes Cross-Age Reference Coding (CARC) to extract identity features from deep face features. Reference [9] uses a deep convolutional neural network to extract facial features and develops a Latent Factor guided CNN (LF-CNN) strategy that combines the CNN and the EM algorithm to capture identity-specific features from the observed facial features. Reference [10] decomposes the facial age-specific factor and the facial identity-specific factor into orthogonal components; the OE-CNN loss combines the A-softmax loss and an age regression loss, and achieved state-of-the-art recognition accuracy in AIFR. In AIFR, the generative approaches require extensive hyperparameters and complex CNN models to learn the regularity of face changes with age, which increases the difficulty of training and convergence. The discriminative approaches extract face features and utilize the EM algorithm, which splits the training into separate steps and cannot fully utilize the rules of facial feature change with age. To solve this problem, we propose a DCN-based end-to-end method to automatically learn the rules of facial feature change with age, reduce the number of hyperparameters, enhance the feature extraction ability and improve the robustness of the model. Most previous age-invariant face recognition methods utilize the EM algorithm to estimate hidden variables in order to extract identity-related features by iteration. There are several limitations in these methods. First, the training processes of the EM algorithm and the CNN are different, which requires a two-step strategy to train the convolution layers and the EM layers separately; this training strategy is not end-to-end, resulting in a complex training process and a low convergence speed. Second, extensive and accurate age labels are required when utilizing the EM algorithm to estimate the hidden


identity-specific factor. However, high-quality cross-age face datasets are limited, which reduces the generalization ability of the model. Third, the EM algorithm estimates the hidden identity-specific features by iteration rather than by learning the intrinsic rules of facial feature change, so hidden information in the images, such as craniofacial growth and skin texture changes with age, is not fully utilized.

3 Proposed Method

DCN was first introduced in [11] for object detection models and achieves significant progress in modeling geometric transformations. It introduces a new convolutional operation based on the idea of adding learnable position offsets to the regular grid sampling locations in the feature maps, as shown in Fig. 2. DCN extends Spatial Transformer Networks and improves the spatial modeling capability and generalization ability of current CNN networks.

Fig. 2. Deformable convolution adds learnable position offsets to the regular grid sampling locations in the feature maps

In general, 2D convolution consists of two steps: sampling an area over the feature map x with a regular grid R, and applying a linear function weighted by w to the sampled values. For each center location p_b on the face feature map x, the regular grid can be defined as

$$R = \{(-1,-1), (0,-1), \ldots, (0,0), \ldots, (0,1), (1,1)\} \tag{1}$$

A regular output feature is defined as

$$y(p_b) = \sum_{p_n \in R} w(p_n) \cdot x(p_b + p_n) \tag{2}$$

where p_n enumerates the grid R. Augmented with offsets $\{\Delta p_n \mid n = 1, 2, \ldots, N\}$ in DCN, the sampling location p_n becomes p_n + Δp_n, and the output feature y(p_b) for each location p_b is defined as

$$y(p_b) = \sum_{p_n \in R} w(p_n) \cdot x(p_b + p_n + \Delta p_n) \tag{3}$$
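As a concrete illustration of Eqs. (2) and (3), the following Python sketch evaluates the output at a single location with and without offsets, using bilinear interpolation for fractional offsets. It is only a minimal sketch under these assumptions (a 3 × 3 grid, zero padding, a single channel), not the network layer used in our experiments.

```python
import numpy as np

def bilinear_sample(x, py, px):
    """Bilinearly sample feature map x (H x W) at a fractional location (py, px)."""
    H, W = x.shape
    y0, x0 = int(np.floor(py)), int(np.floor(px))
    wy, wx = py - y0, px - x0
    val = 0.0
    for yi, wyi in ((y0, 1 - wy), (y0 + 1, wy)):
        for xi, wxi in ((x0, 1 - wx), (x0 + 1, wx)):
            if 0 <= yi < H and 0 <= xi < W:   # zero padding outside the map
                val += wyi * wxi * x[yi, xi]
    return val

def deformable_conv_at(x, w, pb, offsets):
    """Eq. (3): weighted sum over the 3x3 grid R, each sample shifted by its offset."""
    R = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    out = 0.0
    for n, (dy, dx) in enumerate(R):
        off_y, off_x = offsets[n]
        out += w[dy + 1, dx + 1] * bilinear_sample(x, pb[0] + dy + off_y,
                                                   pb[1] + dx + off_x)
    return out

# Toy usage: a 3x3 averaging kernel on an 8x8 feature map, centre location (4, 4).
x = np.arange(64, dtype=float).reshape(8, 8)
w = np.full((3, 3), 1.0 / 9.0)
print(deformable_conv_at(x, w, (4, 4), np.zeros((9, 2))))        # Eq. (2): zero offsets
print(deformable_conv_at(x, w, (4, 4), 0.5 * np.ones((9, 2))))   # Eq. (3): shifted sampling
```

With zero offsets the computation reduces to the regular convolution of Eq. (2); learned non-zero offsets move the nine sampling points off the integer grid, which is what allows the kernel to follow facial deformation.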

Figure 3a, b shows that deformable convolution generalizes various transformations, including scale, (anisotropic) aspect ratio and rotation.

Fig. 3. a the regular 2D Conv, b deformable Conv, c a regular 2D Conv with dilation, d dilated Conv combined with deformable Conv to extend the receptive field

To avoid introducing too many extra learnable parameters, we only replace one layer of the model with deformable convolution. In a deep face recognition CNN, the low-level feature layers extract spatial texture information such as corners and edges, while the high-level feature layers contain semantic information representing age and identity. Reference [11] adds deformable convolution layers in stage 5 of ResNet, but we present extensive experiments to determine where to place them; the details are given in the Experiments section. If the receptive field is limited to a small grid, the advantage of deformable convolution cannot be effectively exploited in face recognition. To extend the feature extraction ability of the model, we introduce dilated convolution [18], which injects holes into standard 2D convolution as shown in Fig. 3c, d. Since downsampling operations such as pooling or strided convolution make the high-level feature resolution too small, dilated convolution is used to maintain the receptive field and keep as much feature information as possible.
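To make the effect of dilation concrete, the short sketch below lists the sampling offsets of a 3 × 3 kernel at a given dilation rate and the receptive field obtained by stacking such layers; the dilation rates used here are illustrative assumptions, not the exact configuration of our network.

```python
def dilated_grid(dilation):
    """Offsets sampled by a 3x3 kernel with the given dilation rate."""
    return [(dy * dilation, dx * dilation)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def receptive_field(kernel=3, dilations=(1, 2, 4)):
    """Receptive field of stacked stride-1 dilated convolutions."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d   # each layer adds (k - 1) * d pixels of context
    return rf

print(dilated_grid(2))     # spans a 5x5 window while still using only 9 weights
print(receptive_field())   # 15: three stacked 3x3 layers with dilations 1, 2, 4
```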

4 Experiments

In this section, we introduce the implementation details of our network and conduct extensive experiments to evaluate it on several cross-age datasets, including the Cross-Age Celebrity Dataset (CACD) [8] and the FG-NET dataset [1], as well as the general face recognition dataset LFW [19].


We use a general face dataset, CASIA [20], which contains 494,414 images of 10,575 individuals, and a face ageing dataset, MORPH [21], which consists of about 52,000 images of about 13,000 individuals with ages ranging from 16 to 77, as training data. We utilize CASIA to train a basic model and then fine-tune on MORPH to learn the changes between ages. For each image, we first detect and align the face using MTCNN [22]. Data augmentation (brightness, sharpening, flipping and cropping) is then applied to alleviate data imbalance. The model implements ResNet-50 with Batch Normalization after each convolution layer, a weight decay rate of 0.005 and 50% dropout after every block. The initial learning rate is 1e−3 and drops to 1e−4 when the loss stops decreasing. The batch size is 192.
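A hedged PyTorch-style sketch of this training setup is given below. The data loaders, the deformable layer and the per-block dropout are omitted, and the plateau patience is an assumption; the sketch is meant only to make the listed hyperparameters concrete.

```python
import torch
import torchvision

model = torchvision.models.resnet50(num_classes=10575)   # CASIA has 10,575 identities

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, weight_decay=0.005)
# Drop the learning rate to 1e-4 once the training loss stops decreasing.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1, patience=3)
criterion = torch.nn.CrossEntropyLoss()

def train_epoch(loader):
    """One pass over batches of 192 aligned face crops with identity labels."""
    model.train()
    total = 0.0
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        total += loss.item()
    scheduler.step(total / max(len(loader), 1))
    return total
```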

4.1 Experiment on CACD Datasets

The CACD dataset contains 163,446 images of 2000 individuals with ages ranging from 10 to 62 years. The images vary in illumination, pose and makeup, which increases the difficulty of AIFR. CACD-VS is a subset of CACD that contains 2000 pairs of images from the same person and 2000 pairs from different persons. We implement deformable convolution in each of the four blocks of ResNet-50 separately, and the experimental results are shown in Table 1. The results show that the model achieves the best performance when deformable convolution is added to the last block, because the semantic information about identity and age is hidden in the high-level features. Table 2 shows the performance of different methods on CACD-VS.

Table 1. Performance of deformable convolution in different blocks of resnet50 on CACD-VS

Method   Acc (%)
Non-DC   97.5
Block1   98.3
Block2   97.6
Block3   94.7
Block4   99.2

4.2 Experiment on FGNET Datasets

The FGNET dataset contains 1002 images of 82 individuals with ages ranging from 0 to 69 years. Following the testing scheme in [23], we evaluate the rank-1 identification rate on FGNET and compare with state-of-the-art methods. FGNET is more challenging, as its age range is wider than that of CACD. Table 3 compares our method with other traditional and DNN-based methods and demonstrates our method's effectiveness in AIFR.

Table 2. Performance of different methods on CACD-VS

Method       Acc (%)
HFA [6]      84.4
CACD [8]     87.6
LF-CNN [9]   98.5
Ours         99.2

Table 3. Performance of different methods on FGNET

Method       Rank-1 identification rate (%)
HFA [6]      69.0
MEFA [24]    76.2
LF-CNN [9]   84.4
Ours         87.3

4.3 Experiment on LFW Datasets

To evaluate the generalization ability of our model, this subsection reports an experiment on the well-known GFR dataset LFW [19]. The LFW dataset contains 13,233 images of 5749 individuals, from which we construct 6000 pairs of images. The results in Table 4 show that deformable convolution achieves fairly good performance, almost as good as the state-of-the-art methods in GFR. Because our model is pretrained on AIFR datasets rather than on wider GFR datasets, the comparison is not a completely fair one.

Table 4. Performance of different methods on LFW

Method            Acc (%)
FaceNet [15]      99.65
DeepID2+ [13]     99.47
Center Loss [12]  99.28
SphereFace [16]   99.42
LF-CNN [9]        99.47
OE-CNN [10]       99.50
Ours              99.27

5 Conclusion

In this paper, we propose a novel discriminative approach to solve the problem of AIFR by introducing deformable convolution and dilated convolution.


Deformable convolution learns facial feature changes with age, and dilated convolution expands the model's receptive field. Our method significantly improves the feature extraction ability and the robustness of the model. Acknowledgements. This work was supported by the National Key Research and Development Project of China under Grant 2016YFB0801003.

References 1. The FG-NET aging database, http://www.fgnet.rsunit.com/ 2. Geng X, Zhou ZH, Smith-Miles K (2007) Automatic age estimation based on facial aging patterns. IEEE Trans Pattern Anal Mach Intell 29(12):2234–2240 3. Park U, Tong Y, Jain AK (2010) Age-invariant face recognition. IEEE Trans Pattern Anal Mach Intell 32(5):947–954 4. Antipov G, Baccouche M, Dugelay JL (2017) Face aging with conditional generative adversarial networks. In: IEEE international conference on image processing (ICIP). IEEE, New York, pp 2089–2093 5. Yang H, Huang D, Wang Y et al (2018) Learning face age progression: a pyramid architecture of GANS. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 31–39 6. Gong D, Li Z, Lin D et al (2013) Hidden factor analysis for age invariant face recognition. In: Proceedings of the IEEE international conference on computer vision, pp 2872–2879 7. Li H, Zou H, Hu H (2017) Modified hidden factor analysis for cross-age face recognition. IEEE Signal Process Lett 24(4):465–469 8. Chen BC, Chen CS, Hsu WH (2014) Cross-age reference coding for age-invariant face recognition and retrieval. In: European conference on computer vision. Springer, Cham, pp 768–783 9. Wen Y, Li Z, Qiao Y (2016) Latent factor guided convolutional neural networks for age-invariant face recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4893–4901 10. Wang Y, Gong D, Zhou Z et al (2018) Orthogonal deep features decomposition for age-invariant face recognition. In: Proceedings of the European conference on computer vision (ECCV), pp 738–753 11. Dai J, Qi H, Xiong Y et al (2017) Deformable convolutional networks. In: Proceedings of the IEEE international conference on computer vision, pp 764–773 12. Wen Y, Zhang K, Li Z et al (2016) A discriminative feature learning approach for deep face recognition. In: European conference on computer vision. Springer, Cham, pp 499–515 13. Sun Y, Chen Y, Wang X et al (2014) Deep learning face representation by joint identification-verification. Adv Neural Inf Process Syst, 1988–1996 14. Chopra S, Hadsell R, LeCun Y (2005) Learning a similarity metric discriminatively, with application to face verification. CVPR 1:539–546 15. Schroff F, Kalenichenko D, Philbin J (2015) Facenet: a unified embedding for face recognition and clustering. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 815–823 16. Liu W, Wen Y, Yu Z et al (2017) Sphereface: deep hypersphere embedding for face recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 212–220


17. Wang F, Xiang X, Cheng J et al (2017) Normface: l2 hypersphere embedding for face verification. In: Proceedings of the 25th ACM international conference on multimedia. ACM, New York, pp 1041–1049 18. Yu F, Koltun V (2015) Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122 19. Huang GB, Ramesh M, Berg T, Learned-Miller E (2007) Labeled faces in the wild: a database for studying face. In: Recognition in unconstrained environments 20. Yi D, Lei Z, Liao S, Li SZ (2014) Learning face representation from scratch. arXiv:1411.7923 21. Ricanek K, Tesafaye T (2006) MORPH: a longitudinal image database of normal adult age-progression. In: 7th international conference on automatic face and gesture recognition, 2006. FGR 2006. IEEE Computer Society 22. Zhang K, Zhang Z, Li Z et al (2016) Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process Lett 23(10):1499– 1503 23. Li Z, Park U, Jain AK (2011) A discriminative model for age invariant face recognition. IEEE Trans Inf Forensics Secur 6(3):1028–1037 24. Gong D, Li Z, Tao D et al (2015) A maximum entropy feature descriptor for age invariant face recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 5289–5297

Weight Determination Method Based on TFN and RST in Vertical Handover of Heterogeneous Networks Chen-Guang He1,2(B) , Jing-Qi Yang1 , Shou-Ming Wei1,2 , and Qiang Yang1 1

Communication Research Center, Harbin Institute of Technology, Harbin, China [email protected],{yangjq95,yangqiangcr7}@163.com 2 Key Laboratory of Police Wireless Digital Communication, Ministry of Public Security, People’s Republic of China, Harbin, China [email protected]

Abstract. With the development of public security private network communication, the traditional narrowband Police Digital Trunking (PDT) network can hardly meet the requirements of image and video transmission. As a broadband trunking network, Broadband Trunking Communication (B-TrunC) fills the gap of broadband wireless networks in private network communication. How to realize vertical handover between the two heterogeneous networks at the terminal has therefore become a research focus. In this paper, a subjective weight method based on the Triangular Fuzzy Number (TFN) and an objective weight method based on Rough Set Theory (RST) are proposed. Among the attribute parameters affecting network performance, the weight of each attribute is determined, which provides a theoretical basis for network vertical handover.

Keywords: Heterogeneous networks · TFN · RST · Vertical handover

1 Introduction

In recent years, driven by both technology and services, mobile informationization of the public safety industry has achieved remarkable results in terms of policy support, technology evolution and practical application. The PDT standard, a trunking communication standard with China's independent intellectual property rights, plays an important role in public security, fire protection and other private networks. It ensures that terminals can quickly access the public security dispatch platform during emergencies such as earthquakes, fires and social security incidents, enabling flexible networking, high-efficiency command and dispatch, and high-quality voice and data transmission. However, with increasing user demand, traditional narrowband services can hardly meet the requirements of visual application scenarios in complex situations. Therefore, B-TrunC, based on LTE technology, has emerged. B-TrunC can


transmit not only voice services, but also image and video services, so as to ensure that the public security organs can grasp the comprehensive information of the scene in real time on the dispatching platform and deploy the police force more reasonably. When switching between two heterogeneous networks, PDT and B-TrunC, it is necessary to consider the influence of multiple attributes. It is also important to determine the weight of each attribute in the service. The rest of this paper is organized as follows. Section 2 reviews the related work. We introduce the subjective weight model based on TFN and the objective weight model based on RST in Sect. 3 and Sect. 4, respectively. In Sect. 5, the comprehensive weight is introduced. Simulation and numerical analysis are provided in Sect. 6. Section 7 proposes the conclusion.

2 Related Work

Many approaches have been proposed for vertical handover in heterogeneous networks. Xu et al. in [1] use received signal strength (RSS) and wireless transmission loss to reduce redundant handovers. Torfi et al. in [2] use the Fuzzy Analytic Hierarchy Process (FAHP) to evaluate relative weights, and both [2, 3] use the TOPSIS algorithm to rank networks and determine the priority of candidate networks. Manisha and Singh [4] use game theory to model session and background services, and apply evolutionary and bankruptcy games for network selection. Both the Fuzzy Logic Processing (FLP) adopted by Ji et al. in [5] and the entropy weight adopted by Sheng et al. in [6] are methods for obtaining objective weights. Zineb et al. in [7] and Xing et al. in [8] model the network characteristics based on network and user sample data and use neural networks to perform network selection in heterogeneous networks. This paper mainly uses a subjective weight based on TFN and an objective weight based on RST. The subjective weight is obtained by evaluating the attribute parameters based on expert experience and therefore carries some subjectivity. The objective weight summarizes the attribute parameters from historical data, aiming to find their relationship with the decision attribute, and then derives the weight. By combining the subjective and objective weights, we can determine the weights of the attributes that drive the vertical handover decision, which provides a reference for vertical handover between PDT and B-TrunC in the police private network.

3 Subjective Weight Based on TFN

Since there are many parameters in a heterogeneous network, multiple attributes need to be considered in vertical handover. Therefore, a multi-attribute decision-making method is needed to create a decision matrix that supports the decision. The TFN method is a subjective judgment algorithm: the attribute parameters in the network are quantized according to their importance, and a fuzzy decision matrix is obtained. The definition of the importance levels is shown in Table 1, where aij represents the relative importance of attribute i with respect to attribute j.

Table 1. The importance level of TFN

aij         Relative importance level
1           i and j are equally important
3           i is slightly more important than j
5           i is more important than j
7           i is much more important than j
9           i is extremely more important than j
2, 4, 6, 8  The median values between adjacent levels

Generally, the attribute parameters of networks include service bandwidth, received signal strength, delay, jitter, packet loss rate, network load, security level, quality of service, etc. The method to determine the weights is as follows:

(1) For the selected attribute parameters {a_1, a_2, ..., a_n}, we obtain the decision matrix of each expert from pairwise comparisons of the attributes:

$$A^p = [a^p_{ij}]_{n \times n} = \begin{bmatrix} a^p_{11} & a^p_{12} & \cdots & a^p_{1n} \\ a^p_{21} & a^p_{22} & \cdots & a^p_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a^p_{n1} & a^p_{n2} & \cdots & a^p_{nn} \end{bmatrix} \tag{1}$$

where p = 1, 2, ..., t indexes the experts, and a_{ij} represents the relative importance level and satisfies a_{ii} = 1, a_{ij} × a_{ji} = 1 (i, j = 1, 2, ..., n).

(2) Integrate the experts' decision matrices into one decision matrix and determine the triangular fuzzy numbers, i.e.,

$$l_{ij} = \min_p \big(a^p_{ij}\big), \quad m_{ij} = \Big(\prod_{p=1}^{t} a^p_{ij}\Big)^{1/t}, \quad u_{ij} = \max_p \big(a^p_{ij}\big) \tag{2}$$

(3) Using the triangular fuzzy numbers, construct the fuzzy evaluation factor matrix E:

$$E = (e_{ij})_{n \times n} = \begin{bmatrix} 1 & 1 - \frac{u_{12} - l_{12}}{2m_{12}} & \cdots & 1 - \frac{u_{1n} - l_{1n}}{2m_{1n}} \\ 1 - \frac{u_{21} - l_{21}}{2m_{21}} & 1 & \cdots & 1 - \frac{u_{2n} - l_{2n}}{2m_{2n}} \\ \vdots & \vdots & \ddots & \vdots \\ 1 - \frac{u_{n1} - l_{n1}}{2m_{n1}} & 1 - \frac{u_{n2} - l_{n2}}{2m_{n2}} & \cdots & 1 \end{bmatrix} \tag{3}$$

where $s_{ij} = \frac{u_{ij} - l_{ij}}{2m_{ij}}$ is the standardized spread, whose value reflects the degree of ambiguity of the expert evaluations: the larger s_{ij} is, the greater the ambiguity.


(4) Calculate the adjusted decision matrix Q:

$$Q = M \times E = \begin{bmatrix} 1 & m_{12} & \cdots & m_{1n} \\ m_{21} & 1 & \cdots & m_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ m_{n1} & m_{n2} & \cdots & 1 \end{bmatrix} \times \begin{bmatrix} 1 & 1 - \frac{u_{12} - l_{12}}{2m_{12}} & \cdots & 1 - \frac{u_{1n} - l_{1n}}{2m_{1n}} \\ 1 - \frac{u_{21} - l_{21}}{2m_{21}} & 1 & \cdots & 1 - \frac{u_{2n} - l_{2n}}{2m_{2n}} \\ \vdots & \vdots & \ddots & \vdots \\ 1 - \frac{u_{n1} - l_{n1}}{2m_{n1}} & 1 - \frac{u_{n2} - l_{n2}}{2m_{n2}} & \cdots & 1 \end{bmatrix} \tag{4}$$

where the matrix M is a square matrix composed of the medians m_{ij} of all triangular fuzzy numbers in the decision matrix.

(5) Convert the adjusted decision matrix Q into a decision matrix P with a diagonal of 1 by column, i.e. P = (P_{ij})_{n×n} with P_{ij} = 1/P_{ji}.

(6) Apply the compatibility matrix analysis method to the transformed matrix P and obtain the compatibility matrix R = (r_{ij})_{n×n}, as

$$r_{ij} = \sqrt[n]{\prod_{k=1}^{n} P_{ik} \cdot P_{kj}} \tag{5}$$

which satisfies r_{ii} = 1, r_{ij} = 1/r_{ji}.

(7) Calculate the weight of each attribute parameter ω_i:

$$\omega_i = c_i \Big/ \sum_{k=1}^{n} c_k, \quad i = 1, 2, \ldots, n \tag{6}$$

$$c_i = \sqrt[n]{\prod_{k=1}^{n} r_{ik}}, \quad i = 1, 2, \ldots, n \tag{7}$$

Through the above method based on TFN, the subjective weight of each attribute parameter can be expressed as Wsub = [ω1 , ω2 , . . . , ωn ].
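The subjective weighting above can be sketched in a few lines of Python. The sketch assumes the reconstructed formulas: geometric-mean aggregation in Eq. (2), element-wise adjustment Q = M × E in Eq. (4), and a reciprocal matrix P with unit diagonal; it is an illustration of the procedure, not a reference implementation.

```python
import numpy as np

def tfn_subjective_weights(expert_matrices):
    """expert_matrices: list of t pairwise-comparison matrices, each n x n."""
    A = np.stack(expert_matrices)                    # shape (t, n, n)
    t, n, _ = A.shape
    l, u = A.min(axis=0), A.max(axis=0)              # Eq. (2): lower / upper bounds
    m = np.prod(A, axis=0) ** (1.0 / t)              # Eq. (2): middle value (geometric mean)
    E = 1.0 - (u - l) / (2.0 * m)                    # Eq. (3): fuzzy evaluation factors
    Q = m * E                                        # Eq. (4): element-wise adjustment (assumed)
    P = np.sqrt(Q / Q.T)                             # reciprocal matrix, unit diagonal (assumed)
    R = np.ones((n, n))
    for i in range(n):
        for j in range(n):                           # Eq. (5): compatibility matrix
            R[i, j] = np.prod(P[i, :] * P[:, j]) ** (1.0 / n)
    c = np.prod(R, axis=1) ** (1.0 / n)              # Eq. (7)
    return c / c.sum()                               # Eq. (6): normalized subjective weights

# Toy usage: two experts comparing three attributes.
A1 = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
A2 = np.array([[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]])
print(tfn_subjective_weights([A1, A2]))              # the weights sum to 1
```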

4 Objective Weight Based on RST

RST builds a knowledge set from the various attribute parameters of the wireless networks and integrates it into a larger knowledge base. RST relies on two important attribute sets: the conditional attribute set and the decision attribute set. The specific steps are as follows:

(1) Firstly, all attribute parameters are divided into the conditional attribute set and the decision attribute set. The selected conditional attributes are service bandwidth, received signal strength, delay, jitter and packet loss rate. The decision attribute is the network performance level. Then establish a multi-attribute decision matrix A:

$$A = (a_{ij})_{m \times n} = \begin{array}{c|cccc} & p_1 & p_2 & \cdots & p_n \\ \hline n_1 & a_{11} & a_{12} & \cdots & a_{1n} \\ n_2 & a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ n_m & a_{m1} & a_{m2} & \cdots & a_{mn} \end{array} \tag{8}$$

where the rows represent the heterogeneous wireless network set N = {n_1, n_2, ..., n_m}, the columns represent the conditional attribute set P = {p_1, p_2, ..., p_n}, and the decision attribute is expressed as P' = {p_{n+1}}.

(2) Since there is partially redundant data in the decision matrix and different attributes have different units, it is necessary to normalize these data. For heterogeneous networks, service bandwidth and received signal strength are beneficial attributes: the larger, the better. The normalization is

$$b_{ij} = \frac{a_{ij} - (a_{ij})_{\min}}{(a_{ij})_{\max} - (a_{ij})_{\min}}, \quad i = 1, 2, \ldots, m \tag{9}$$

Delay, jitter and packet loss rate are cost attributes: the smaller, the better. The normalization is

$$b_{ij} = \frac{(a_{ij})_{\max} - a_{ij}}{(a_{ij})_{\max} - (a_{ij})_{\min}}, \quad i = 1, 2, \ldots, m \tag{10}$$

Finally, the normalized decision matrix B can be expressed as

$$B = (b_{ij})_{m \times n} = \begin{array}{c|cccc} & p_1 & p_2 & \cdots & p_n \\ \hline n_1 & b_{11} & b_{12} & \cdots & b_{1n} \\ n_2 & b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ n_m & b_{m1} & b_{m2} & \cdots & b_{mn} \end{array} \tag{11}$$

(3) The dependence of the decision attribute P' on the condition attribute set P is γ_P(P'):

$$\gamma_P(P') = \frac{\mathrm{Card}(POS_P(P'))}{\mathrm{Card}(U)}, \quad 0 \le \gamma_P(P') \le 1 \tag{12}$$

where POS_P(P') represents the lower approximation, i.e. the positive region, and Card(·) represents the cardinality of a set. The dependence of the decision attribute P' on the condition attribute set P − p_i is γ_{P−p_i}(P'):

$$\gamma_{P - p_i}(P') = \frac{\mathrm{Card}(POS_{P - p_i}(P'))}{\mathrm{Card}(U)}, \quad 0 \le \gamma_{P - p_i}(P') \le 1 \tag{13}$$

(4) The importance degree of condition attribute p_i among all evaluation parameters is σ_{P'}(p_i):

$$\sigma_{P'}(p_i) = 1 - \frac{\gamma_{P - p_i}(P')}{\gamma_P(P')} \tag{14}$$


(5) Finally, we normalize the importance degrees σ_{P'}(p_i), and the objective weights are

$$\phi_i = \frac{\sigma_{P'}(p_i)}{\sum_{i} \sigma_{P'}(p_i)}, \quad i = 1, 2, \ldots, n \tag{15}$$

Then, we can get the objective weight Wobj = [φ1 , φ2 , . . . , φn ] based on RST.
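A minimal Python sketch of the RST weighting in Eqs. (12)–(15) is shown below. It assumes that the normalized condition attributes are first discretized into a few levels so that indiscernibility classes can be formed; the bin count and the toy data are assumptions for illustration only.

```python
import numpy as np
from itertools import groupby

def partition(table, cols):
    """Group sample indices by their values on the given columns."""
    keys = [tuple(table[i, cols]) for i in range(len(table))]
    order = sorted(range(len(table)), key=lambda i: keys[i])
    return [set(g) for _, g in groupby(order, key=lambda i: keys[i])]

def positive_region(cond_part, dec_part):
    """Union of condition classes fully contained in one decision class."""
    pos = set()
    for block in cond_part:
        if any(block <= d for d in dec_part):
            pos |= block
    return pos

def rst_weights(cond, dec, bins=3):
    """cond: m x n matrix of normalized condition attributes; dec: m decision labels."""
    disc = np.digitize(cond, np.linspace(0, 1, bins + 1)[1:-1])   # discretize into levels
    table = np.column_stack([disc, dec])
    n = cond.shape[1]
    dec_part = partition(table, [n])
    gamma_all = len(positive_region(partition(table, list(range(n))), dec_part)) / len(dec)
    sigma = np.zeros(n)
    for i in range(n):
        reduced = [c for c in range(n) if c != i]                 # Eq. (13): drop attribute i
        gamma_i = len(positive_region(partition(table, reduced), dec_part)) / len(dec)
        sigma[i] = 1.0 - gamma_i / gamma_all if gamma_all > 0 else 0.0   # Eq. (14)
    return sigma / sigma.sum() if sigma.sum() > 0 else np.full(n, 1.0 / n)   # Eq. (15)

# Toy usage: attribute 0 alone determines the service-quality label, attribute 1 does not.
cond = np.array([[0.9, 0.1], [0.9, 0.9], [0.1, 0.1], [0.1, 0.9], [0.5, 0.9]])
dec = np.array([1, 1, 0, 0, 1])
print(rst_weights(cond, dec))    # -> [1. 0.]
```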

5 Comprehensive Weight

When calculating the comprehensive weight W, we regard the subjective weight and the objective weight as equally important, each accounting for 50%, i.e.,

$$W = 0.5\, W_{sub} + 0.5\, W_{obj} \tag{16}$$

In this way, we can get the comprehensive weight values corresponding to each attribute parameter in heterogeneous networks, which can help us to choose a better network to access.

6 Simulation and Numerical Analysis

In this paper, Delay (DL), Received Signal Strength (RSS), Jitter (JT), Packet Loss Rate (PLR) and Service Bandwidth (BW) are used as the attributes of the PDT and B-TrunC heterogeneous networks. We calculate the weights of the five attribute parameters with the subjective method based on TFN and the objective method based on RST, respectively, and finally compute their comprehensive weights. The five attribute parameters were simulated in MATLAB. The voice service and the video service are each constructed into a triangular fuzzy matrix and processed by the TFN method. The subjective weight values for each attribute are shown in Table 2.

Table 2. The subjective weights of various attribute parameters based on TFN

Attribute      DL      RSS     JT      PLR     BW
Voice Service  0.4227  0.2704  0.1607  0.0950  0.0512
Video Service  0.0938  0.3017  0.0552  0.2152  0.3341

For the objective weights, DL, RSS, JT, PLR and BW are taken as conditional attributes and quantified, and the quality of the voice or video service is taken as the decision attribute. Using multiple sets of data, the degree of influence of each conditional attribute on the decision attribute is calculated to determine its objective weight. In this paper, 50 sets of data are used, the parameters in each set are quantified, and the resulting weights of each conditional attribute are shown in Table 3.


Table 3. The objective weights of various attribute parameters based on RST

Attribute      DL    RSS   JT    PLR   BW
Voice Service  0.36  0.30  0.24  0.08  0.02
Video Service  0.16  0.26  0.12  0.22  0.24

Fig. 1. Comprehensive weight of voice service and video service

It can be seen that the objective weights obtained in Table 3 are basically consistent with the subjective weights obtained in Table 2. The two results are combined by Eq. (16) to obtain the comprehensive weights in Fig. 1. In Fig. 1, we can see that different services emphasize different attributes. For voice services, delay is an important attribute with a large weight; received signal strength and jitter also affect voice quality. Compared with these three, the weights of packet loss rate and service bandwidth are smaller. This is due to the limits of human perception of sound and image: a small packet loss rate does not cause a noticeable decline in voice quality, but voice is very sensitive to delay and jitter, and even a small delay or jitter can distort the voice. Therefore, for voice services, delay and jitter are more important than packet loss rate. For video services, however, video files are large and require a large bandwidth to transmit, so bandwidth has a large weight. At the same time, received signal strength is important for any service. Packet loss causes the video to stall, and compared with delay and jitter, video services pay more attention to fluency, so the weight of packet loss rate is relatively larger.
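As a quick check, the comprehensive weights plotted in Fig. 1 can be reproduced from Tables 2 and 3 with Eq. (16); a minimal sketch:

```python
import numpy as np

attributes = ["DL", "RSS", "JT", "PLR", "BW"]
w_sub = {"voice": np.array([0.4227, 0.2704, 0.1607, 0.0950, 0.0512]),   # Table 2
         "video": np.array([0.0938, 0.3017, 0.0552, 0.2152, 0.3341])}
w_obj = {"voice": np.array([0.36, 0.30, 0.24, 0.08, 0.02]),             # Table 3
         "video": np.array([0.16, 0.26, 0.12, 0.22, 0.24])}

for service in ("voice", "video"):
    w = 0.5 * w_sub[service] + 0.5 * w_obj[service]                     # Eq. (16)
    print(service, dict(zip(attributes, w)))
```

For the voice service this gives a delay weight of about 0.39 and a bandwidth weight of about 0.04, while for the video service the bandwidth weight rises to about 0.29, consistent with the ordering discussed above.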

7 Conclusion

In this paper, a subjective weight method based on TFN is introduced first, which expresses the experts' judgment of each attribute as a range and reduces the influence of subjective factors on the results. Then, an objective weight method based on RST is used to obtain the weights of the attribute parameters directly from data. The simulation results show that the objective weights are close to the subjective weights, which supports the validity of the scheme design. Finally, the comprehensive weights are obtained, and the weights of delay, received signal strength, jitter, packet loss rate and service bandwidth in the vertical handover of heterogeneous networks are determined. This provides a theoretical basis for the terminal to perform vertical handover between the PDT and B-TrunC networks. Acknowledgements. This paper is supported by National Key R&D Program of China (No. 2018YFC0807101).

References 1. Xu P, Fang X, He R et al (2013) An efficient handoff algorithm based on received signal strength and wireless transmission loss in hierarchical cell networks. Telecommun Syst 52(1):317–325 2. Torfi F, Farahani RZ, Rezapour S (2010) Fuzzy AHP to determine the relative weights of evaluation criteria and Fuzzy TOPSIS to rank the alternatives. Appl Soft Comput 10(2):520–528 3. Lotfollahzadeh T, Shayesteh MG, Kalbkhani H et al (2016) Technique for order of preference by similarity to ideal solution based predictive handoff for heterogeneous networks. IET Commun 10(13):1682–1690 4. Manisha, Singh NP (2015) Efficient network selection using game theory in a heterogeneous wireless network. In: IEEE international conference on computational intelligence and computing research, Madurai, pp 1–4 5. Ji X, Zhang J, Zhu S (2015) A novel vertical handoff algorithm for UMTS and WiMAX heterogeneous overlay networks. In: International conference on information science and control engineering, Shanghai, pp 578–581 6. Sheng J, Qi B, Dong X, Tang L (2012) Entropy weight and grey relation analysis based load balancing algorithm in heterogeneous wireless networks. In: 2012 8th international conference on wireless communications, networking and mobile computing, Shanghai, pp 1–4 7. Zineb AB, Ayadi M, Tabbane S (2017) QoE-based vertical handover decision management for cognitive networks using ANN. In: 2017 sixth international conference on communications and networking (ComNet), Hammamet, pp 1–7 8. Xing H, Mu D, Ge X, Chai R (2013) An NN-based access network selection algorithm for heterogeneous networks. In: 22nd wireless and optical communication conference, Chongqing, pp 378–383

Deep Learning-Based Device-Free Localization Using ZigBee Yongliang Sun(&), Xiaocheng Wang, and Xuzhao Zhang School of Computer Science & Technology, Nanjing Tech University, Nanjing, China [email protected]

Abstract. With the rapid development of the Internet of Things (IoT), the demands for location-based services (LBS) are increasing day by day. At present, most of the indoor localization technologies require targets to carry terminal devices, which limits the practical application of indoor localization. In this paper, we propose a deep learning-based device-free localization system using ZigBee. The system employs ZigBee nodes as sensor nodes, which can locate the targets through measuring received signal strength (RSS) among these sensor nodes. In the off-line phase, we collect the RSS data of some specific locations and construct a localization model through training a deep learning convolutional neural network (CNN) model. In the on-line phase, we are able to calculate target location coordinates with the trained CNN model. The experimental result shows that the mean error of the proposed deep learning-based device-free localization system is 0.53 m, which could be a technical solution for human target localization in indoor environments. Keywords: Deep learning

· CNN · Device-free · Localization · ZigBee

1 Introduction

The rise of the Internet of Things (IoT) provides plenty of resources for location-based services (LBS) [1]. At the same time, high-accuracy localization technology also promotes the development of the IoT, as it can provide LBS in indoor environments such as hospitals, shopping malls and airports [2]. In recent years, device-free indoor localization technology has emerged. Indoor device-free localization, also called indoor passive localization, does not need targets to carry terminal devices or tags [3, 4], and it also protects the target's privacy better. Compared with Wi-Fi and Bluetooth, the ZigBee protocol is more suitable for wireless networking, so we use ZigBee nodes as the sensor nodes for device-free localization in this paper. When a human target is in the localization area, some wireless links between the ZigBee sensor nodes are shadowed, which influences the received signal strength (RSS) measured by the sensor nodes. The RSS data of the whole network can be compiled into an RSS matrix. When the target is at a different location in the localization area, the RSS matrix is different. So we make use of the measured RSS matrices and estimate the location coordinates of the target using a localization model.


2 Proposed Localization Model

Because the RSS data are in an RSS matrix format with dimensions 16 × 16 that is similar to image data, and a deep learning convolutional neural network (CNN) [5, 6] is usually used for processing image data, we construct the localization model with a CNN. The input RSS matrix rss for training the CNN can be denoted by

$$rss = \begin{bmatrix} rss_{1,1} & rss_{1,2} & \cdots & rss_{1,16} \\ rss_{2,1} & rss_{2,2} & \cdots & rss_{2,16} \\ \vdots & \vdots & \ddots & \vdots \\ rss_{16,1} & rss_{16,2} & \cdots & rss_{16,16} \end{bmatrix} \tag{1}$$

where element rss_{i,j} in the matrix is the RSS value received by sensor node i from sensor node j. The output of the CNN is a location coordinate vector L = (x, y), which is the location of the target when the RSS matrix rss is collected. So the localization model can be trained and denoted by a nonlinear function F as follows:

$$L = F(rss) \tag{2}$$

After training the CNN localization model, when a human target is in the localization area and an RSS matrix rss' is obtained, the location coordinates $\hat{L} = (\hat{x}, \hat{y})$ of the target can be calculated by

$$\hat{L} = F(rss') \tag{3}$$
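A hedged sketch of such a CNN localization model is given below: a 16 × 16 RSS matrix goes in and an (x, y) coordinate pair comes out, trained with a mean-squared-error loss. The layer sizes and optimizer settings are assumptions for illustration, not the exact network used in the experiments.

```python
import torch
import torch.nn as nn

class RSSLocalizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 2),                    # output: (x, y) coordinates
        )

    def forward(self, rss):                       # rss: (batch, 1, 16, 16)
        return self.regressor(self.features(rss))

model = RSSLocalizer()
loss_fn = nn.MSELoss()                            # regress the location coordinates
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

rss_batch = torch.randn(4, 1, 16, 16)             # placeholder RSS matrices
coords = torch.rand(4, 2) * 7.2                   # placeholder locations in a 7.2 m room
loss = loss_fn(model(rss_batch), coords)
loss.backward()
optimizer.step()
```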

3 Experimental Setup, Results and Analyses

We used 16 CC2530 ZigBee nodes from Texas Instruments as the sensor nodes, represented by the black dots in Fig. 1. These sensor nodes were deployed with a gap of 1.8 m in an office room with dimensions of 7.2 m × 7.2 m. We also used another ZigBee node as a coordinator, which was connected to a laptop to collect RSS data. We selected 52 specific locations in the room with a gap of 0.6 m, also marked in Fig. 1. A person stood at each location for 5 min so that enough RSS data could be collected. In total, 5543 RSS matrices were collected. Because we deployed 16 sensor nodes, the RSS matrices have dimensions of 16 × 16. We used 4500 of these matrices and their location coordinates as the training samples; the other 1043 RSS matrices and their location coordinates were the testing samples. The number of iterations was set to 600. The model was run on a computer with an Intel Core i5 4200H 2.8 GHz CPU, 8 GB RAM, a GTX 850M 4G GPU, and the Windows 10 64-bit operating system with Python 3.7 and Spyder 3. We use the 1043 testing samples to test the CNN localization model. The localization errors of all the testing samples are shown in Fig. 2. The maximum error is 3.32 m and the mean error is 0.53 m. As shown in Fig. 2, although some localization errors are


Fig. 1. The plan of the office room

large, most of the errors are less than 1 m. This demonstrates the generalization ability of the model on the testing samples, and the model is able to locate the human target indoors within a reasonable error range.

Fig. 2. The localization errors of different testing samples


4 Conclusions

In this paper, we propose a deep learning-based device-free localization system using ZigBee. The system does not need the target to carry a terminal device, which extends the application range of indoor localization. We employ 16 ZigBee nodes as sensor nodes in an office room and 1 ZigBee node as a coordinator to collect the RSS data. We construct a localization model using a deep learning CNN model: the RSS matrices with dimensions of 16 × 16 are the inputs of the CNN model and the location coordinates are the outputs. We use 1043 testing samples to test the CNN localization model and the mean error is 0.53 m, which means the proposed deep learning-based device-free localization system can achieve a satisfactory performance. Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grant No. 61701223 and the Natural Science Foundation of Jiangsu Province under Grant No. BK20171023.

References 1. Zanella A, Bui N, Castellani A, Vangelista L, Zorzi M (2014) Internet of things for smart cities. IEEE Internet Things J. 1(1):22–32 2. Sun YL, Meng WX, Li C, Zhao N, Zhao KL, Zhang NT (2017) Human localization using multi-source heterogeneous data in indoor environments. IEEE Access 5:812–822 3. Sun YL, Wang XC, Zhang XZ, Zhang XG (2018) ZigBee-based device-free wireless localization in internet of things. 2018 In: 3nd EAI international conference on machine learning and intelligent communications, pp 1–8 4. Sun YL, Zhang XZ, Wang XC, Zhang XG (2018) Device-free wireless localization using artificial neural networks in wireless sensor networks. Wireless Commun Mobile Comput 2018:1–8 5. Lecun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-Based learning applied to document recognition. Proc IEEE 86(11):2278–2324 6. Krizhevsky A, Sutskever I, Hinton GE (2017) ImageNet classification with deep convolutional neural networks. Commun ACM 60(6):84–90

A Modified Genetic Fuzzy Tree Based Moving Strategy for Nodes with Different Sensing Range in Heterogeneous WSN Xiaofeng Yu1(B) , Bingjie Zhang1 , Hanqin Qin1 , Tian Le1 , Hao Yang1 , and Jing Liang2 1

2

China Ship Development and Design Center, Wuhan, Hubei, China yuxiaofeng [email protected] Department of Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China

Abstract. This paper introduces a modified genetic fuzzy tree (GFT) based node moving strategy in heterogeneous WSN. Different from the former work [1], there are two kinds of nodes with different sensing ranges. In order to verify the tracking performance of the GFT moving strategy in this kind of WSN, we put forward a weighted Centroid localization algorithm. Simulation results show that the modified GFT moving strategy is still suitable for this kind of WSN, and the tracking performance of the heterogeneous WSN improves considerably when the moving strategy is used.

Keywords: Modified GFT · Heterogeneous WSNs · Tracking performance · Weighted centroid

1 Introduction

Early studies on WSNs largely considered homogeneous networks, in which all nodes have the same characteristics such as energy supply, communication style and memory storage. However, later research showed that such WSNs have bottlenecks in performance and scalability [2, 3]. In order to fulfill the needs of some complex applications in the real world, heterogeneous designs have become more and more popular [4, 5]. Heterogeneous designs incorporate many nodes with varying capabilities. Among the applications of heterogeneous WSNs, target tracking is a very active topic. The existing research on target tracking in heterogeneous WSNs falls mainly into three categories: cluster based [6], prediction based [7] and tree based [8]. In the first category, targets are tracked by dynamically building clusters according to their movements. The prediction based algorithms consist of three parts: a prediction model, a wake-up mechanism and a recovery mechanism, aiming at a balance between energy consumption and missing rate. In the tree based methods, the network is represented as a graph, and all the nodes and their links constitute a tree. There is a common


trait in the three approaches: the nodes deployed in the networks are all static. This wastes energy and is not good for the lifetime of the networks. As a consequence, more and more research focuses on mobile WSNs. In mobile WSNs, the key issue is the mobility model of the nodes. Generally, the mobility models of the nodes can be categorized as individual models and group models. There are many studies on the individual models, such as Random Waypoint [9], Gauss-Markov [10], environment-aware [11], etc. As for group models, the relevant research is scarce because of the complexity [12]. To the best of our knowledge, most group models aim to balance the load of the networks, not to track targets accurately. In our former work [1], we proposed a group mobility model of the nodes based on the genetic fuzzy tree (GFT). This model utilizes a two-layer fuzzy tree system to determine which nodes move and how they move. The fuzzy tree system consists of two fuzzy inference systems (FISs): the first one computes a score for each node and the moving nodes are chosen according to their scores; the second one decides their moving distances. This moving strategy shows good performance in target tracking. However, in the heterogeneous WSN proposed in [1], the nodes only vary in battery energy, mobility and target recognition ability; their sensing range is assumed to be the same. So, in order to study the influence of variation in the nodes' sensing range on the moving strategy, we change the heterogeneous WSN model in this paper. The contributions of this paper are as follows:

1. We introduce a different heterogeneous WSN model in which two kinds of nodes are deployed in the network: Type A and Type B. The sensing range of Type A is r1 while Type B's is r2. Besides this, the nodes of each type also vary in battery energy, mobility and target recognition ability.

2. For the proposed heterogeneous WSN, we put forward a modified Centroid localization algorithm called Weighted Centroid. For each type of nodes, we use the Centroid localization algorithm proposed in [1] to calculate the target's position. Say the estimated position of the target for Type A is (x_{ea}, y_{ea}) and Type B's is (x_{eb}, y_{eb}). Then, the final estimated position of the target can be represented as

$$x_e = a \cdot x_{ea} + b \cdot x_{eb}, \quad y_e = a \cdot y_{ea} + b \cdot y_{eb} \tag{1}$$

where a + b = 1.

3. According to the Weighted Centroid localization algorithm, we put forward a modified GFT based node moving strategy. In the simulation part, we investigate the influences of some network-wide parameters on the moving strategy's performance. Apart from the numbers of deployed nodes and moving nodes, the tracking error varies with the sensing ranges of the two kinds of nodes and with the weight values.

2 The GFT Moving Strategy and Moving Model

The modified GFT based moving strategy is illustrated by Fig. 1 (similar to [1]). Two FISs determine the moving nodes and their moving plans: the first one chooses the group of moving nodes according to their metrics (energy, mobility and target recognition ability), while the other one plans the moving group's trajectories using distance information such as the moving distance of the target in the time unit and the distance between each moving node and the target, followed by GA optimization to obtain the final node moving strategy.

Fig. 1. The GFT based moving strategy

2.1 Moving Model of the Heterogeneous WSN

In this part, we describe the moving model of the heterogeneous WSN in the application of target tracking. As shown in Fig. 2, Type A nodes and Type B nodes are represented by black dots and black triangles, respectively. The target resides in the black square at time t, and the hollow square stands for its position at time t+1. The red square is the position of the target at time t+1 estimated by the Weighted Centroid algorithm. To track the target more accurately, we choose some Type A or Type B nodes which cannot sense the target (beyond the circles) to move towards the hollow square; the hollow dot and hollow square are their positions after moving. How to choose the moving nodes and how they move are decided by the GFT algorithm. Furthermore, we can divide the heterogeneous WSN into two parts to track the target, as shown in Fig. 3. The hollow square is the target's position at time t+1, and its real position is (x, y). For each kind of node, we utilize the GFT algorithm to locate the target's position. For Type A nodes, we assume that N_A nodes can sense the target after moving.

Fig. 2. Moving model of the heterogeneous WSN

The positions of the N_A nodes are (x_i, y_i), i = 1, 2, ..., N_A; then the estimated position of the target can be calculated as

$$x_{ea} = \frac{\sum_{i=1}^{N_A} x_i}{N_A}, \quad y_{ea} = \frac{\sum_{i=1}^{N_A} y_i}{N_A} \tag{2}$$

This is the same for Type B nodes; we assume that the target's estimated position of the Type B nodes is (x_{eb}, y_{eb}). As a consequence, the target's final estimated position is

$$x_e = a \cdot x_{ea} + b \cdot x_{eb}, \quad y_e = a \cdot y_{ea} + b \cdot y_{eb} \tag{3}$$

Then, the localization error of the Weighted Centroid algorithm can be computed as

$$error = \sqrt{(x - x_e)^2 + (y - y_e)^2} \tag{4}$$
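A minimal sketch of the Weighted Centroid estimate of Eqs. (2)–(4) is given below; the node positions in the toy usage are assumed values for illustration.

```python
import numpy as np

def weighted_centroid(type_a_nodes, type_b_nodes, a=0.5, b=0.5):
    """Return the fused target estimate from Type A and Type B sensing nodes."""
    xea, yea = np.mean(type_a_nodes, axis=0)      # Eq. (2): centroid of Type A nodes
    xeb, yeb = np.mean(type_b_nodes, axis=0)      # centroid of Type B nodes
    return a * xea + b * xeb, a * yea + b * yeb   # Eq. (3), with a + b = 1

def tracking_error(true_pos, est_pos):
    """Eq. (4): Euclidean distance between the true and estimated positions."""
    return float(np.hypot(true_pos[0] - est_pos[0], true_pos[1] - est_pos[1]))

# Toy usage: target at (7, 8) m, as in the simulation section.
type_a = np.array([[6.5, 7.5], [7.5, 8.5], [6.8, 8.6]])
type_b = np.array([[6.0, 7.0], [8.0, 9.0]])
estimate = weighted_centroid(type_a, type_b, a=2/3, b=1/3)
print(estimate, tracking_error((7.0, 8.0), estimate))
```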

3 Simulations and Analysis

In this part, we present simulation results to investigate the tracking performance of the moving strategy in this kind of heterogeneous WSN. The heterogeneous network is deployed in a 10 m × 10 m area. For a better understanding, we first give some definitions.


Fig. 3. Illustration of the weighted centroid localization algorithm

– N: the number of nodes deployed in the WSN.
– Nm: the number of moving nodes.
– N1, r1: N1 is the number of Type A nodes deployed and r1 represents their sensing range.
– N2, r2: N2 is the number of Type B nodes deployed and r2 represents their sensing range.

In the simulations, we set the target at (5 m, 5 m) and its next position at (7 m, 8 m). We take the localization error of the next position as the tracking error and study the influence of the parameters above on the tracking performance. Figure 4 shows how the tracking error varies with the number of deployed nodes N. We set the number of moving nodes Nm = 2, one of Type A and one of Type B, and the weight values in (1) are a = b = 0.5. In Fig. 4, the solid lines stand for the tracking error of the target in the static WSN while the dashed lines represent that in the movable WSN. Together they show that the GFT based node moving strategy also performs well in the heterogeneous WSN proposed in this paper. Moreover, for smaller sensing ranges r1 and r2, the tracking performance is better. Figure 5 compares the tracking performance with different weight values a, b in (1). In this part, we set the sensing ranges r1 = 1 m and r2 = 2 m; again Nm = 2, with one Type A node and one Type B node. In Fig. 5, the three lines represent the tracking error of the target in the heterogeneous WSN when the weight values are (a = 2/3, b = 1/3), (a = 0.5, b = 0.5) and (a = 1/3, b = 2/3), respectively. The three lines together show that a smaller a gives a smaller tracking error. That is to say, in the weighted Centroid localization algorithm, when the nodes with the smaller sensing range weigh more, the GFT based node moving strategy performs better.


Fig. 4. Tracking error with different N

Fig. 5. Tracking error with different weight values

4 Conclusion

Based on our former work, this paper proposes a novel heterogeneous WSN in which there are two types of nodes with different sensing ranges. In order to make the GFT based node moving strategy suitable for this network, we modify the strategy and put forward the Weighted Centroid localization algorithm. Simulation results show that the modified GFT based node moving strategy performs well for target tracking in the proposed network. A smaller sensing range for both node types yields better tracking performance. Moreover, for the weighted Centroid localization algorithm, when the nodes with the smaller sensing range weigh more, the GFT based node moving strategy also performs better.


Acknowledgements. This work was supported by the National Natural Science Foundation of China (61671138, 61731006), and was partly supported by the 111 Project No. B17008.

References 1. Yu X, Jing L (2018) Genetic fuzzy tree based node moving strategy of target tracking in multimodal wireless sensor network. IEEE Access 99:1–1 2. Gupta P, Kumar PR (2000) The capacity of wireless networks 3. Xu K, Hong X, Gerla M (2002) An ad hoc network with mobile backbones. In: Proceedings of the ICC, New York, vol 5, pp 3138–3143 4. Du X, Xiao Y, Guizani M, Chen HH (2007) An effective key management scheme for heterogeneous sensor networks. Ad Hoc Netw 5(1):24–34 5. Elhoseny M, Yuan X, Yu Z, Mao C, Riad AM (2015) Balancing energy consumption in heterogeneous wireless sensor networks using genetic algorithm. IEEE Commun Lett 19(12):2194–2197 6. W¨ alchli M, Skoczylas P, Meer M, Braun T (2007) Distributed event localization and tracking with wireless sensors. In: Wired/wireless internet communications. Springer, pp 247–258 7. Xu Y, Winter J, Lee W-C (2004) Prediction-based strategies for energy saving in object tracking sensor networks. In: Proceedings of the 2004 IEEE international conference on mobile data management. IEEE, pp 346–357 8. Tsai H-W, Chu C-P, Chen T-S (2007) Mobile object tracking in wireless sensor networks. Comput Commun 30(8):1811–1825 9. Camp T, Boleng J, Davies V (2010) A survey of mobility models for ad hoc network research. Wirel Commun Mobile Comput 2(5):483–502 10. Liang B, Haas ZJ (2002) Predictive distance-based mobility management for pcs networks. In: Eighteenth joint conference of the IEEE computer and communications societies, Infocom 99. IEEE 11. Ahmed S, Karmakar GC, Kamruzzaman J (2010) An environment-aware mobility model for wireless ad hoc network. Comput Netw 54(9):1470–1489 12. Gunasekaran S, Nagarajan N (2008) A new group mobility model for mobile adhoc network based on unified relationship matrix. WTOC 7(2):58–67

Wireless Indoor Positioning Algorithm Based on RSS and CSI Feature Fusion

Shi-Xue Zhang1,2(B), Xin-Yue Fan1,2, and Xiao-Yong Luo1,2

1 School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
[email protected], {XiaoYong cqupt,zhangshixue1204}@163.com
2 Key Laboratory of Optical Communication and Networks in Chongqing, Chongqing 400065, China

Abstract. In complex indoor environments, non-line-of-sight propagation, multipath fading and shadowing effects can have a significant impact on indoor positioning, resulting in large positioning errors. Aiming at the problem of low positioning accuracy in wireless indoor positioning algorithms, this paper combines the advantages of RSS and CSI features and proposes a wireless indoor positioning algorithm that fuses RSS and CSI features. Firstly, the CSI data is filtered in the time domain to diminish the impact of the complex indoor environment on positioning accuracy. Secondly, the principle of coherence bandwidth is used to reduce the CSI data dimension. Finally, the RSS and CSI results are fused by a confidence degree to determine the final position estimate. The experimental results show that the time-domain filtering can reduce environmental interference effectively. Compared with algorithms using RSS or CSI alone, the fusion algorithm has higher positioning accuracy, and the coherence bandwidth principle lowers the dimension, which reduces the complexity of the fusion algorithm.

Keywords: Indoor positioning · Feature fusion · Channel state information (CSI) · Received signal strength (RSS)

1 Introduction

The WiFi-based position fingerprint positioning system has the advantages of simple deployment and low cost, and has become the major choice for indoor positioning technology [1]. At present, location fingerprinting based on the Received Signal Strength (RSS) is simple to implement and technologically mature, but its positioning error is large. RADAR [2] and Horus [3] select the signal mean as the signal characteristic and use the shortest-distance or maximum-probability positioning criterion.


These methods achieve a relatively low positioning accuracy of about 2–5 m, because the signal strength average only provides rough information about the received signal and no further physical-layer information can be obtained. Channel State Information (CSI) describes the state of the channel during the propagation of the WiFi signal and can characterize the WiFi signal better than the received signal strength. In [4], wireless packet transmission prediction is attempted and accurate prediction is achieved based on the CSI acquired from an 802.11n wireless network interface card. In [5], one subcarrier is selected from the continuously sampled CSI and its amplitude distribution is taken as the fingerprint; however, this method does not exploit the full information in the CSI data, and the actual positioning performance is low. In [6], the aggregated CSI amplitude values over all subcarriers are used, and spatial diversity is utilized to improve the performance of the RSS-based method; however, the ability to distinguish between different locations is not strong. In [7, 8], CSI is combined with machine learning, which gives a better positioning effect and stronger robustness but higher algorithm complexity. In [9], a location distribution fingerprint generation method is designed which includes the CSI frequency diversity and spatial diversity features and improves the positioning accuracy; however, the positioning accuracy in the single-AP case still needs to be improved. In [10], a hybrid positioning algorithm (HL) combining RSS and CSI is designed: in the positioning stage, the algorithm selects the candidate reference positions by simultaneously considering the similarity of the RSS and CSI fingerprints of the matched reference positions, and obtains higher positioning accuracy than using only one type of feature. In this paper, in order to deal with the problem of large positioning errors in wireless indoor positioning, we design an RSS-CSI Fusion Location algorithm (RCF). Candidate positions are generated by the RSS-based and CSI-based methods respectively, and the weighted average of the candidate positions is used as the position estimate. When constructing the fingerprint database from CSI, an effective position feature is generated from the amplitude distributions of the different subcarriers of the different transmit-receive antenna pairs, which provides an accurate distinction between different positions and thereby effectively improves the positioning accuracy.

2 Related Work

2.1 Positioning Algorithm Based on RSS

Fingerprint localization using RSS can be implemented by the KNN algorithm. Consider an experiment with N reference points; the similarity between the RSS collected at the test point and each reference point i can be expressed by the Euclidean distance:

d_i = \sqrt{\sum_{i=1}^{N} (rss_t - rss_i)^2} \qquad (1)


where rss_i and rss_t represent the mean RSS at reference point i and at the test point, respectively. Although the KNN algorithm can match the fingerprint database rapidly, its performance is not optimal when building and matching the fingerprint database in a complex indoor environment, which results in insufficient positioning accuracy.
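To make the KNN step of Eq. (1) concrete, the following C++ sketch computes the Euclidean distance between the test-point RSS vector and every reference fingerprint and returns the indices of the k nearest reference points. It is a minimal illustration, assuming one mean RSS value per AP in each fingerprint; it is not the paper's implementation.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Indices of the k reference fingerprints closest (Euclidean distance, Eq. (1))
// to the RSS vector measured at the test point.
std::vector<std::size_t> knnCandidates(const std::vector<std::vector<double>>& refRss,
                                       const std::vector<double>& testRss,
                                       std::size_t k) {
    std::vector<std::pair<double, std::size_t>> dist;
    for (std::size_t i = 0; i < refRss.size(); ++i) {
        double d2 = 0.0;
        for (std::size_t j = 0; j < testRss.size(); ++j) {
            double diff = testRss[j] - refRss[i][j];
            d2 += diff * diff;
        }
        dist.emplace_back(std::sqrt(d2), i);
    }
    std::sort(dist.begin(), dist.end());              // ascending distance
    std::vector<std::size_t> idx;
    for (std::size_t i = 0; i < k && i < dist.size(); ++i)
        idx.push_back(dist[i].second);
    return idx;
}

With k = 4, as in Sect. 3.2, the returned indices would be the RSS candidate positions.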

2.2 Positioning Algorithm Based on CSI

Usually, the classical CSI algorithm adopts the aggregated CSI amplitude as the feature and uses a matching procedure similar to the RSS algorithm in the online phase to obtain the estimated position. Compared with the single fingerprint feature of RSS, CSI can sense subtler environment information in the time and frequency domains, which enhances the perception of the environment by the Wi-Fi signal. Figure 1 shows the distance error when using the RSS or CSI algorithm alone. It is obvious that the distance error of the CSI algorithm is much smaller for most points; however, there are some points where the distance error is smaller when the RSS algorithm is used alone. In this paper, the two algorithms are combined to improve the positioning accuracy.

Fig. 1. Distance error of RSS or CSI method (error distance in m vs. test position index)


Fig. 2. Overall diagram

3 Positioning Algorithm Based on RSS and CSI Feature Fusion

This paper proposes an indoor positioning method based on the fusion of RSS and CSI features, estimating the unknown location of the mobile device using a single WLAN AP. In the offline phase, the RSS and CSI data are collected at each reference point, and the raw data are preprocessed separately: the raw RSS data is averaged, while the raw CSI data is filtered in the time domain. In order to reduce the computational complexity, the dimensionality of the preprocessed CSI data is reduced. The fingerprint database consists of the preprocessed data and the position coordinates of the reference points. In the online phase, the signal features of the test point are collected and preprocessed in the same way as in the offline phase. Based on the RSS values, candidate positions are screened by the KNN algorithm, and the same number of candidate positions is matched based on the CSI values at the test point. According to the degree of confidence, the three most likely candidate positions are selected from the two methods, and their weighted average is taken as the position estimate. The overall diagram is shown in Fig. 2.

3.1 Off-Line Phase

Due to channel interference, noise and multipath effects, the raw CSI data fluctuate over a wide dynamic range. Not all features contribute equally to the system performance, and the computational cost can be high because of the high dimensionality of the fingerprint vectors.

Wireless Indoor Positioning Algorithm

2061

Therefore, it is necessary to extract robust localization features by time-domain filtering and to reduce the dimension of the CSI feature. In order to characterize each path fully, the wireless propagation channel is modeled as a temporal linear filter called the channel impulse response (CIR). Under the time-invariant assumption, the CIR is described as:

h(\tau) = \sum_{i=1}^{n} a_i e^{-j\theta_i} \delta(\tau - \tau_i) \qquad (2)

where a_i, θ_i and τ_i are the amplitude, phase and time delay of the ith path respectively, n is the total number of multipath components, and δ(τ) is the Dirac delta function. In this study, the raw CSI data represent the channel response in the frequency domain. We transform the frequency-domain CSI into the time-domain CIR with the inverse fast Fourier transform (IFFT). An example CIR is shown in Fig. 3, where we can observe that the different paths arrive with different time delays. After the time-domain filtering, the frequency-domain CSI is reacquired using the fast Fourier transform (FFT); Fig. 4 shows the CSI after time-domain filtering. The wideband channel of 802.11n can provide abundant diversity in the frequency domain due to the multipath effect. The metric used to evaluate this frequency diversity is the coherence bandwidth, a characteristic of the propagation channel that can be considered the minimum bandwidth within which different frequencies of a signal are likely to experience comparable or correlated amplitude fading. In this study, we take the coherence bandwidth to be 5 MHz and divide the whole 20 MHz channel into 4 sub-channels. As a result, the channel responses of subcarriers in different sub-channels can be viewed as fading independently. Finally, the CSI dimension is reduced to 13.3% of its original size after averaging over each sub-channel. A 2 × 3 link is used in this paper, with two transmit antennas and three receive antennas; each antenna receives CSI values of 30 subcarriers and 1 RSS value. In the offline phase, the signal data are collected at each reference point. The RSS average value and the CSI amplitude value of the ith reference position are recorded as rss_i and csi_i respectively, and the fingerprint feature of the ith position is expressed as ϕ_i = {csi_i, rss_i}. Therefore, the fingerprint of the ith position is expressed as f_i = {(x_i, y_i), ϕ_i}, where (x_i, y_i) is the coordinate of the ith position, and the fingerprint database is created as F = {f_1, f_2, ..., f_n}. In this paper, 1 AP and 1 receiving device are used, and each received packet yields 2 × 3 × 30 original CSI values. 50 data packets are selected at each reference position, so each reference point has a total of 50 × 30 original CSI values per antenna pair. The channel response vector set of the 50 received packets is H_amp = [H_1, ..., H_m, ..., H_50], where the mth data packet is represented as H_m = [H_1, H_2, ..., H_n, ..., H_30]^T and H_n is the channel response of the nth subcarrier.

Fig. 3. CIR of CSI (amplitude vs. delay index)

Fig. 4. CSI amplitude per subcarrier before and after time-domain filtering

Each subcarrier in each packet can be represented by a matrix as:

H_{amp} = \begin{bmatrix} |H_{1,1}| & |H_{2,1}| & \cdots & |H_{50,1}| \\ |H_{1,2}| & |H_{2,2}| & \cdots & |H_{50,2}| \\ \vdots & \vdots & & \vdots \\ |H_{1,30}| & |H_{2,30}| & \cdots & |H_{50,30}| \end{bmatrix} \qquad (3)

where |H_{m,n}| represents the amplitude of the channel response of the nth subcarrier in the mth data packet.


After the dimensionality reduction using the principle of coherence bandwidth, each data packet can be represented by a matrix as:

\bar{H}_{amp} = \begin{bmatrix} \bar{H}_{1,1} & \bar{H}_{2,1} & \cdots & \bar{H}_{50,1} \\ \bar{H}_{1,2} & \bar{H}_{2,2} & \cdots & \bar{H}_{50,2} \\ \bar{H}_{1,3} & \bar{H}_{2,3} & \cdots & \bar{H}_{50,3} \\ \bar{H}_{1,4} & \bar{H}_{2,4} & \cdots & \bar{H}_{50,4} \end{bmatrix} \qquad (4)

It is necessary to find the maximum value H_{max}^n and minimum value H_{min}^n of the amplitude values of each reference position, and to express the maximum and minimum over all reference point amplitude values as follows:

H_{max} = \max\{H_{max}^n\}, \quad H_{min} = \min\{H_{min}^n\}, \quad 1 \le n \le 30 \qquad (5)

For the nth subcarrier, the system parameter Δ is defined as the interval length, and the amplitude values of all reference points are divided into U segments, where U = (H_{max} − H_{min})/Δ; the value of Δ is obtained by experiment and is set to 0.5 in this paper. The number of amplitude values falling in the uth interval is denoted N_u^n, and the CSI amplitude distribution of the nth subcarrier is p^n = [N_1^n/50, ..., N_U^n/50]. The amplitude distribution of all subcarriers on one antenna pair is hp_{t,r} = [p^1, ..., p^4], and the amplitude distribution at each reference point is Hp = \begin{bmatrix} hp_{1,1} & hp_{1,2} & hp_{1,3} \\ hp_{2,1} & hp_{2,2} & hp_{2,3} \end{bmatrix}.
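The following C++ sketch illustrates the two fingerprint-construction steps above: averaging the 30 subcarrier amplitudes of a packet into 4 coherence-bandwidth sub-channels, and counting how many of the 50 packet amplitudes of a sub-channel fall into each interval of width Δ. The exact subcarrier-to-sub-channel grouping and the clamping of boundary values are assumptions of this sketch.

#include <array>
#include <cmath>
#include <vector>

// Average the 30 subcarrier amplitudes of one packet into 4 sub-channels,
// following the 5 MHz coherence-bandwidth split of the 20 MHz channel.
std::array<double, 4> toSubChannels(const std::array<double, 30>& amp) {
    std::array<double, 4> sum{};
    std::array<int, 4> cnt{};
    for (int n = 0; n < 30; ++n) {
        int c = n * 4 / 30;     // grouping of subcarriers into sub-channels (assumed)
        sum[c] += amp[n];
        ++cnt[c];
    }
    for (int c = 0; c < 4; ++c) sum[c] /= cnt[c];
    return sum;
}

// Amplitude distribution p^n of one sub-channel over the received packets:
// fraction of amplitude values falling in each interval of width delta.
std::vector<double> amplitudeDistribution(const std::vector<double>& values,  // e.g. 50 packets
                                          double hMin, double hMax, double delta) {
    int U = static_cast<int>(std::ceil((hMax - hMin) / delta));
    if (U < 1) U = 1;
    std::vector<double> p(U, 0.0);
    if (values.empty()) return p;
    for (double v : values) {
        int u = static_cast<int>((v - hMin) / delta);
        if (u < 0) u = 0;
        if (u >= U) u = U - 1;               // clamp boundary values (assumption)
        p[u] += 1.0 / values.size();          // N_u / 50
    }
    return p;
}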

3.2 Online Phase

In the online phase, we collect the RSS and CSI data at each test point and use the same fingerprint generation method as in the off-line phase to obtain the RSS fingerprints and the CSI distribution fingerprints F' = {f'_1, f'_2, ..., f'_m} of the m test points. For the positioning based on RSS, the KNN algorithm described in Sect. 2.1 is used: the calculated Euclidean distances are arranged in ascending order, and the k reference points with the smallest distances are selected as position candidates; this paper takes k = 4. After receiving the CSI data at the test point, the same preprocessing and fingerprint generation methods as in the off-line phase are adopted. It should be noted that the parameters used for fingerprint generation are consistent with the off-line phase, that is, the same H_max, H_min and interval length Δ are used. The symmetric KL distance is used to measure the similarity between the test point and the reference point [11]:

D(Hp_t, Hp_i) = \sum_{j=1}^{6} D\big(p(Hp_j \mid \{x_t, y_t\}),\; q(Hp_j \mid \{x_i, y_i\})\big) \qquad (6)

where Hp_t is the CSI distribution of the mobile device at the test point (x_t, y_t), and Hp_i represents the CSI distribution of the mobile device at the reference location (x_i, y_i) in the fingerprint database.
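A small C++ sketch of the symmetric KL distance of Eq. (6), summed over the six antenna-pair distributions of the 2 × 3 link. The distributions are assumed to have equal length, and the small epsilon that guards empty histogram bins is an implementation assumption.

#include <cmath>
#include <cstddef>
#include <vector>

// Kullback-Leibler divergence D(p || q) of two discrete distributions of equal length.
static double kl(const std::vector<double>& p, const std::vector<double>& q) {
    const double eps = 1e-12;                 // guard against empty bins (assumption)
    double d = 0.0;
    for (std::size_t i = 0; i < p.size(); ++i)
        if (p[i] > 0.0) d += p[i] * std::log((p[i] + eps) / (q[i] + eps));
    return d;
}

// Symmetric KL distance of Eq. (6), summed over the antenna-pair distributions Hp_j.
double symmetricKLDistance(const std::vector<std::vector<double>>& hpTest,
                           const std::vector<std::vector<double>>& hpRef) {
    double d = 0.0;
    for (std::size_t j = 0; j < hpTest.size(); ++j)   // six antenna pairs in the 2x3 link
        d += kl(hpTest[j], hpRef[j]) + kl(hpRef[j], hpTest[j]);
    return d;
}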


After calculating the distance between each test point and all fingerprints in the database, the four positions with the smallest KL divergence are selected as candidate positions, giving eight candidate positions from the two methods in total. For each candidate position, a circle with radius R is drawn, and the number of candidate positions falling inside the circle is counted; this count is called the confidence of the candidate position. The reference points are evenly distributed in a grid whose interval is a. If a candidate position lies on the boundary of the circle, the confidence is increased by 0.5. Finally, the eight points are sorted and the three candidate positions with the highest confidence are selected and used to calculate the target position.
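As an illustration of this fusion step, the sketch below counts, for each of the eight candidates, how many candidates lie within radius R (adding 0.5 for points on the boundary), keeps the three with the highest confidence and averages them. The equal-weight average and the boundary tolerance are assumptions of this sketch rather than details taken from the paper.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Point { double x, y; };

// Average of the three candidates with the highest confidence; the confidence of a
// candidate is the number of candidates inside a circle of radius R around it
// (+0.5 for points exactly on the boundary).
Point fuseCandidates(const std::vector<Point>& cand, double R) {
    std::vector<std::pair<double, std::size_t>> conf;
    for (std::size_t i = 0; i < cand.size(); ++i) {
        double c = 0.0;
        for (std::size_t j = 0; j < cand.size(); ++j) {
            double d = std::hypot(cand[i].x - cand[j].x, cand[i].y - cand[j].y);
            if (d < R) c += 1.0;
            else if (std::fabs(d - R) < 1e-9) c += 0.5;   // boundary case (tolerance assumed)
        }
        conf.emplace_back(c, i);
    }
    std::sort(conf.begin(), conf.end(),
              [](const std::pair<double, std::size_t>& a,
                 const std::pair<double, std::size_t>& b) { return a.first > b.first; });
    Point est{0.0, 0.0};
    std::size_t k = std::min<std::size_t>(3, conf.size());
    if (k == 0) return est;
    for (std::size_t i = 0; i < k; ++i) {
        est.x += cand[conf[i].second].x;
        est.y += cand[conf[i].second].y;
    }
    est.x /= k;
    est.y /= k;
    return est;
}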

4 Experiment Analysis

In order to verify the performance of the method in an actual indoor environment, the experiment was carried out in an 8 m × 9 m conference room. The experimental equipment consists of a router and a computer acting as the transmitter and the receiver respectively. The router model is a D-link DAP2310, and the computer is equipped with the 64-bit Ubuntu 12.04 LTS operating system and an Intel 5300 NIC. The lower left corner of the room is used as the origin of the map coordinates and is set to (0, 0). The interval between adjacent reference points is 0.8 m. The layout of the conference room is shown in Fig. 5: 66 reference points are placed at equal intervals in the open space not occupied by the conference room facilities and the experimental equipment. During the test, 38 test points are placed randomly at non-reference positions. According to the description in Sect. 3.2, Fig. 6 compares the positioning error of RCF with different R values, and the R value with the best experimental result is selected.

Fig. 5. Layout of reference positions


Fig. 6. CDF of distance error for RCF with different R values

Fig. 7. CDF of distance error using five different localization systems (FIFS, D-CSI, RCF, RSS, HL)

In the positioning error analysis, the average positioning error and the CDF are used to analyze the performance of the different algorithms. Figure 7 is the experimental CDF diagram. It can be seen from the figure that the probability of a positioning error below 1.5 m is 63%, and the probability of an error below 2 m is 80%, which indicates a good positioning effect.


The average positioning errors of the proposed algorithm are compared with those of the other algorithms, and the results are shown in Table 1.

Table 1. Comparison of mean distance errors of the five methods

Method                    RCF      HL       D-CSI    FIFS     RSS
Mean distance error (m)   1.3426   1.6487   1.8419   1.8808   2.8807

5 Conclusion

This paper studies wireless indoor positioning based on the fusion of RSS and CSI features, using only a single AP to estimate the location of the mobile device. Firstly, the RSS data is averaged and the candidate points are filtered by the classical KNN algorithm. Then, the CSI data is filtered in the time domain and reduced in dimension; the spatial diversity and frequency diversity features of the CSI data are used to construct the distribution fingerprint database, and the symmetric KL divergence is used to determine the candidate points. Finally, feature fusion is performed by the confidence degree and the final position is determined. The proposed fusion algorithm can effectively eliminate the interference caused by the multipath effect and improve the positioning accuracy, while ensuring that the complexity of the algorithm is lower than that of the existing fusion algorithm.

References
1. Sadhukhan P (2018) Performance analysis of clustering-based fingerprinting localization systems. Wirel Netw 1–14
2. Bahl P, Padmanabhan V (2000) RADAR: an in-building RF-based user location and tracking system. In: Infocom, nineteenth joint conference of the IEEE computer and communications societies. IEEE
3. Hatou S, Yamada M (2008) The Horus location determination system. Wirel Netw 14(3):357–374
4. Halperin D, Hu W, Sheth A, Wetherall D (2010) Predictable 802.11 packet delivery from wireless channel measurements. In: ACM Sigcomm conference
5. Yan W, Jian L, Chen Y, Gruteser M, Jie Y, Liu H (2014) E-eyes: device-free location-oriented activity identification using fine-grained WiFi signatures
6. Xiao J, Wu K, Yi Y, Ni L (2012) FIFS: fine-grained indoor fingerprinting system. In: International conference on computer communications and networks
7. Wang X, Gao L, Mao S, Pandey S (2017) CSI-based fingerprinting for indoor localization: a deep learning approach. IEEE Trans Veh Technol 66(1):763–776
8. Hao C, Zhang Y, Wei L, Tao X, Ping Z (2017) ConFi: convolutional neural networks based indoor Wi-Fi localization using channel state information
9. Xiao Y, Zhang S, Cao J, Wang H, Wang J (2017) Exploiting distribution of channel state information for accurate wireless indoor localization. Comput Commun 73–83


10. Zhao L, Wang H, Li P, Liu J (2017) An improved WiFi indoor localization method combining channel state information and received signal strength. In: Chinese control conference
11. Chen Y, Guo Q, Sun H, Li Z, Wu W, Li Z (2018) A distributionally robust optimization model for unit commitment based on Kullback-Leibler divergence

Design and Verification of On-Board Computer Based on S698PM and Time-Triggered Ethernet

Cuitao Zhang(&), Xiongwen He, Panpan Zhan, Zheng Qi, Ming Gu, and Dong Yan

Beijing Institute of Spacecraft System Engineering, Beijing, China
[email protected]

Abstract. Aiming at the problems that the time available for current on-board computers to process complex computing tasks is tight and that the data transmission rate between on-board computers is low, a high-performance on-board computer based on the S698PM processor and time-triggered Ethernet is designed. The on-board computer uses a CPCI internal bus with a data transmission rate of 1 Gbps, which matches the 1 Gbps data transmission rate of time-triggered Ethernet, so that the computer has high data throughput. In this paper, the principles of the computer design and the block diagrams of the key modules are introduced in detail, and the actual test results are given. The results show that the computing performance and the bus communication speed of the on-board computer are improved by 1–2 orders of magnitude compared with currently existing computers, and the design goal is successfully achieved.

Keywords: Time-Triggered Ethernet · On-board computer · CPCI bus · S698PM

1 Introduction

With the increasing requirements for the functions and performance of spacecraft electronic equipment, more and more information needs to be processed and transmitted on board. Traditional processors and the 1553B bus will not be able to handle the increasing data processing and transmission requirements. Currently, the SPARC-architecture processor BM3803, with a maximum frequency of 100 MHz, is widely used in on-board computers. In some complex applications, the processing time of the BM3803 is already very tight and the processor needs to be upgraded. Similarly, the data transmission rate of the 1553B bus, which is used for communication between on-board computers, is currently only 1 Mbps. In spacecraft intelligence and networking applications with large amounts of data, this 1 Mbps rate becomes the bottleneck for system performance improvement.


In order to meet the requirements on on-board computer processing ability and bus communication speed in spacecraft intelligence and networking applications, a high-performance on-board computer based on the S698PM processor and time-triggered Ethernet (TTE) is designed. The S698PM is a quad-core processor, also of SPARC architecture, with a maximum frequency of up to 600 MHz; its performance is nearly 20 times higher than that of the BM3803, which is widely used among the several aerospace processors available in China. As the data processing performance and throughput of the processor increase, the bus networks of the on-board computer also need to be improved. Without reducing the reliability and safety of the bus network, a spaceborne high-speed bus network is designed based on time-triggered Ethernet with a data transmission rate of up to 1 Gbps, providing reference and guidance for the development of China's on-board high-performance computers and high-speed bus networks. It is also a useful exploration toward spacecraft intelligence and networking.

2 Introduction of the S698PM Processor and Time-Triggered Ethernet

The S698PM processor is a radiation-tolerant, high-performance, high-reliability, high-integration, low-power multi-core parallel processor SoC chip [1]. It implements a symmetric multiprocessing (SMP) architecture and follows the SPARC V8 standard. It is designed for high-end embedded real-time control and complex computing applications, and can work stably in harsh environments such as space irradiation. The internal structure of the S698PM processor is shown in Fig. 1.

Fig. 1. Block diagram of the S698PM processor


The S698PM processor integrates four identical high-performance LEON4 cores, each consisting of a 32-bit RISC integer processing unit (IU), a double-precision floating-point processing unit (FPU), a high-speed L1 cache, a memory management unit (MMU), etc. The S698PM processor also integrates a rich set of on-chip peripherals, including GPIO, UART, timers, an interrupt controller, a debug support unit, a memory controller, an SPI master, an I2C master and other functional modules. The integrated MMU supports temporally and spatially partitioned operating systems with high reliability and security, so the user can easily realize a high-performance multi-core parallel design for embedded real-time control systems. Time-triggered Ethernet [2–5] is attracting more and more attention as the most advanced high-speed bus technology in the world. The failure rate of time-triggered Ethernet is less than 1 × 10⁻⁹ h⁻¹, which corresponds to the highest level of DO-254 safety certification in the aerospace industry; only the TTE and 1553B buses can achieve this level of safety, which is why time-triggered Ethernet is chosen to replace the 1553B bus in this paper. After investigating multiple bus technologies, NASA also selected time-triggered Ethernet as its next-generation bus and is conducting flight verification on the newly developed Orion manned spacecraft. A TTE bus node has a reliable fault isolation boundary: the failure of a node can hardly affect the normal operation of other nodes in the system in time or space, which effectively avoids fault propagation. The TTE bus meets the demands of spacecraft for high speed, high reliability and high security, and it is likely to become a widely used bus technology in next-generation spacecraft.

3 Overall Design of the On-Board Computer

3.1 On-Board Computer Internal Bus Selection

The common internal buses for computers are mainly the CPU external bus, the ARINC659 bus, the CPCI bus [6, 7] and the RapidIO bus. The CPU external bus has a simple structure and a convenient interface, and does not require a dedicated protocol chip; its disadvantages are that it is tied to the CPU model, has poor versatility, and does not support multiple processors. The ARINC659 bus requires a dedicated bus protocol chip and its data transfer rate is only 60 Mbps, which does not meet the high-speed data transfer requirements. The RapidIO bus is a new generation of high-speed serial bus, but at present there are problems in supplying space-qualified components: no space-qualified bus switch chips are available in China, it is difficult to form an industrial chain in a short time, and it is therefore difficult to establish a serial bus ecosystem. The CPCI bus can achieve a data transmission rate of more than 1 Gbps, has good versatility, and supports multiple processors. It also has relatively high reliability and security, and has been widely used in safety-critical applications such as industrial control, real-time data acquisition, military systems, and intelligent transportation. The CPCI bus can therefore meet the spaceborne requirements for high-speed data transmission, high reliability and high security. Consequently, for high-speed data transmission the CPCI bus is used as the internal bus.


In order to be compatible with conventional low-speed serial bus modules, an FPGA bridge is used to communicate with the low-speed serial modules.

3.2 Composition of the On-Board Computer

According to the previous design principles and the internal bus selection scheme, the block diagram of the on-board computer is shown in Fig. 2. The power module provides the secondary power for the whole machine. Other modules are divided into two categories according to the data transmission rate. They are called high-speed modules and low-speed modules. The high-speed modules connect to the CPCI bus with a data transmission rate of 1 Gbps or more, including the CPU module, the routing module, the time-triggered Ethernet TTE module, and other expansion modules such as a multiplex module and a mass storage module. The low-speed modules mainly include a telemetry module, an instruction module, a thermal management module, and a channel gateway module. The low-speed modules communicate with each other primarily through a simple serial communication interface such as SPI and a low-speed serial communication FPGA bridge located inside the processor module. In addition to the processor module and the TTE module, other modules can be freely combined and expanded. Each module is independent of each other and does not depend on other modules.


Fig. 2. Block diagram of the on-board computer

This article focuses on the design of the processor module and the TTE module; the other modules are traditional modules and will not be described in detail. As the main control module of the CPCI bus, the processor module adopts an S698PM + FPGA architecture; its block diagram is shown in Fig. 3. In addition to the traditional Flash and SRAM memory, the CPU module is configured with 512 MB of high-speed DDRII memory for complex computing applications. The FPGA integrates the CPCI host bridge function, the interface logic with the CPU, and the SPI interfaces that bridge to the low-speed modules. The CPU module is the core of the on-board computer system: the software runs on this module to control and operate the other functional modules.



Fig. 3. Block diagram of the CPU module

The design of the TTE module is mainly based on the time-triggered Ethernet end-system chip TT6802-1-SE, whose block diagram is shown in Fig. 4. The TT6802-1-SE chip provides three 10/100/1000 Mbps configurable Ethernet interfaces through the RMII interface to form a triple-redundant time-triggered network. A management CPU is integrated inside the chip to download and update the configuration data. The chip provides three host interfaces, namely a CPCI interface, an SPI interface and a SpaceWire interface; only the CPCI interface can achieve a data transmission rate of more than 1 Gbps. The host interface can be selected by means of the two external pins IFCSEL[1:0]. This module is configured to use the CPCI interface to meet the high-speed data transmission requirements. The block diagram of the TTE module based on this chip is shown in Fig. 5. The time-triggered network can be realized by designing only three PHY chips and a few other components on the periphery of the chip.

4 Design Verification

In order to verify the actual performance of the on-board computer, test software was specially designed for computer performance testing. The test software is developed based on the real-time multi-task embedded operating system VxWorks.


Fig. 4. Block diagram of TT6802-1-SE

Fig. 5. Block diagram of the TTE module based on TT6802-1-SE

It mainly includes three levels: the multi-core partitioning operating system, the board-level support package (BSP) and the application software. The BSP sits between the underlying hardware and the operating system; its main functions include initializing the internal registers of the CPU, driving the various interfaces, and so on. The application software mainly includes functions such as telemetry, telecontrol, housekeeping management, and time management.


Actual tests show that the on-board computer runs stably at a frequency of 400 MHz and achieves nearly 20 times the performance of the BM3803, effectively solving the problem of tight processing time. The 512 MB of memory can support complex computing applications such as autonomous task planning. In order to test the performance of the time-triggered Ethernet, the CPU module and the routing module send data to the TTE module simultaneously. The TTE module can handle several Mbps of telemetry data from the CPU module and nearly 100 Mbps of image data from the routing module; in this way, the mixed transmission of safety-critical and non-safety-critical data is realized. Compared with the traditional 1553B bus, the performance of the TTE bus increases by nearly a hundred times, achieving the design goal.

5 Conclusion

In order to solve the problems that the time available for on-board computers to process complex computing tasks is tight and that the data transmission rate between on-board computers is low, a high-performance on-board computer based on the S698PM processor and time-triggered Ethernet is designed. Without reducing the reliability and security of the computer, the computing performance and the bus communication rate increase by 1–2 orders of magnitude compared with currently existing on-board computers, which meets the requirements of intelligent and networked spacecraft applications. The on-board computer has universality and expansibility; it will meet the needs of on-board computing for a long time to come and has a good application prospect.

References
1. Zhuhai Orbita Aerospace Science & Technology Co., Ltd. (2018) High performance 32-bit multi-core processor SOC chip S698PM user manual v4.4 [EB/OL]. https://www.myorbita.net, Nov 2018
2. SAE AS6802 (2011) Time-triggered Ethernet. SAE Aerospace Standard, Nov 2011
3. Chen W, Zhou Y, Jiang W (2009) Research on performance analysis and scheduling algorithm of TTE protocol. Chin J Electron 37(5):1000–1005
4. Liu W, Li Q, Xiong H (2011) Research on time-triggered Ethernet synchronization and scheduling mechanism. Aeronaut Comput Technol 41:122
5. TTTech (2008) D-INT-S-10-002. TTEthernet specification. TTTech Computertechnik AG
6. PCI-SIG (1998) PCI local bus specification
7. PICMG (1999) CPCI specification R2.0 D3.0

An Optimal Deployment Strategy for Radars and Infrared Sensors in Target Tracking

Lanjun Li(B) and Jing Liang

School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
[email protected]

Abstract. In target tracking, it is difficult to ensure that a target can be detected by both radars and infrared sensors simultaneously. In this paper, the dimensional-reduced particle swarm optimization (DRPSO) algorithm is proposed to find the optimal deployment for radars and infrared sensors when tracking multiple targets in a 3D area. Since DRPSO prevents premature convergence during optimization, an optimal deployment with higher tracking ability is obtained within fewer iterations compared with classical PSO.

Keywords: Sensor deployment · Target tracking · PSO · DRPSO

1 Introduction

Radars and infrared sensors are important for target tracking. When tracking a target, an infrared sensor can obtain an accurate angle measurement but no range measurement; moreover, its detection range is small and can be affected by the weather. A radar can obtain both angle and range measurements, but its angle measurement accuracy is poor compared with an infrared sensor. Additionally, a working radar radiates high-power electromagnetic waves, which may expose its current position. Hence, it is difficult to meet comprehensive and precise tracking requirements using radars or infrared sensors alone. In recent years, with the development of multi-sensor data fusion technology, radars and infrared sensors have been used together in most cases to form a radar-infrared detection system that fully utilizes both kinds of sensors. At present, there are two main data fusion methods for radar and infrared sensors: parallel filtering [1] and sequential filtering [2]. After the measurement information about the target is obtained, the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) can be applied to track the targets.



The results above are discussed under the assumption that all targets can be detected. However, if the sensors are not properly deployed, this may result in insufficient data or a waste of resources. The particle swarm optimization (PSO) algorithm is widely used in sensor deployment because of its simple iterative steps and relatively fast calculation speed, and many PSO-based algorithms have proved effective in sensor deployment. VFCPSO [3] uses virtual forces to direct the particles towards better positions. PSGO [4] introduces selection and mutation operators into PSO to overcome premature convergence. However, these methods only focus on maximizing the coverage of the detection region. In this paper, we establish a model to evaluate the ability of radars and infrared sensors to track multiple targets in a 3D area; when all sensors are optimally deployed, the tracking ability reaches its highest value. Since this is a multi-dimensional, multi-peak optimization problem, PSO suffers from premature convergence. To solve this problem, we propose a dimensional-reduced PSO (DRPSO) algorithm. By reducing the dimensions of each particle, DRPSO can reach the global optimal solution with fewer iterations than classical PSO. The organization of the paper is as follows. Section 2 introduces the sensor deployment model for target tracking. Section 3 introduces the details of DRPSO, and the simulation results are described in Sect. 4. Finally, Sect. 5 concludes this paper and outlines our future work.

2 Sensor Deployment Model in Target Tracking

2.1 Basic Description

Suppose N_t targets are in the detection area and target i has a threat level w_i. The position of target i can be described as:

t_i = \left[x_t^i, y_t^i, z_t^i\right] \in \mathbb{R}^3 \qquad (1)

Then we have the target matrix T and the threat level vector W:

T = [t_1, t_2, \ldots, t_{N_t}] \qquad (2)

W = [w_1, w_2, \ldots, w_{N_t}] \qquad (3)

N_{Ra} radars and N_{In} infrared sensors are used for target tracking. Their detection ranges are R_{Ra} and R_{In}, and according to Sect. 1, R_{Ra} > R_{In}. The optimal deployment strategy is to find a reasonable location for every radar and infrared sensor that meets the following requirements:

– All targets should be detected by at least one sensor.
– It is better if a target can be detected by both radars and infrared sensors.
– It is better if a target can be detected by more than one sensor.
– Targets with higher threat levels should be prioritized.

2.2 Data Fusion

The measurement model for sensor X_i can be described as follows:

X_i = [d_{X_i}, \theta_{X_i}, \phi_{X_i}]^T \qquad (4)

where d_{X_i}, \theta_{X_i} and \phi_{X_i} denote the distance, azimuth and elevation angle measurements of the target observed by sensor X_i. If sensor X_i is a radar, then X_i = R_i; if X_i is an infrared sensor, then X_i = I_i. The measurement matrix X is:

X = [X_1, X_2, \ldots, X_N] = [D_X, \Theta_X, \Phi_X]^T \qquad (5)

where N denotes the number of radars or infrared sensors. Suppose each measurement is affected by zero-mean Gaussian white noise with a different variance. The variance vector for sensor X_i and the variance matrix are given by:

S_{X_i} = \left[\frac{1}{\sigma_{d_{X_i}}^2}, \frac{1}{\sigma_{\theta_{X_i}}^2}, \frac{1}{\sigma_{\phi_{X_i}}^2}\right]^T \qquad (6)

S_X = [S_{X_1}, S_{X_2}, \ldots, S_{X_N}] = [S_{D_X}, S_{\Theta_X}, S_{\Phi_X}]^T \qquad (7)

where \sigma_{d_{X_i}}^2, \sigma_{\theta_{X_i}}^2 and \sigma_{\phi_{X_i}}^2 denote the variances of the noise affecting each measurement. Since infrared sensors cannot obtain distance measurements, D_I = 0 and S_{D_I} = 0. For simplicity, a centralized data fusion algorithm is applied. The measurement after fusion is described as follows:

[d, \theta, \phi] = \left[\frac{D_R^T \cdot S_{D_R}}{\|S_{D_R}\|_1},\; \frac{\Theta_R^T \cdot S_{\Theta_R} + \Theta_I^T \cdot S_{\Theta_I}}{\|S_{\Theta_R}\|_1 + \|S_{\Theta_I}\|_1},\; \frac{\Phi_R^T \cdot S_{\Phi_R} + \Phi_I^T \cdot S_{\Phi_I}}{\|S_{\Phi_R}\|_1 + \|S_{\Phi_I}\|_1}\right] \qquad (8)

The variances of d, \theta and \phi after fusion are:

\left[\sigma_d^2, \sigma_\theta^2, \sigma_\phi^2\right] = \left[\frac{1}{\|S_{D_R}\|_1},\; \frac{1}{\|S_{\Theta_R}\|_1 + \|S_{\Theta_I}\|_1},\; \frac{1}{\|S_{\Phi_R}\|_1 + \|S_{\Phi_I}\|_1}\right] \qquad (9)
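A worked C++ sketch of the per-component fusion in Eqs. (8)–(9), treating ‖·‖₁ as the sum of the per-sensor inverse variances; the sensor list and noise variances in the example are purely illustrative.

#include <iostream>
#include <vector>

struct Meas { double value; double variance; };   // one measurement and its noise variance

// Inverse-variance weighted fusion of several measurements of the same
// quantity (the per-component form of Eqs. (8) and (9)).
static void fuse(const std::vector<Meas>& m, double& fused, double& fusedVar) {
    double num = 0.0, wSum = 0.0;
    for (const Meas& s : m) {
        double w = 1.0 / s.variance;   // entry of S_X
        num += s.value * w;            // contribution to X^T * S_X
        wSum += w;                     // contribution to ||S_X||_1
    }
    fused = num / wSum;
    fusedVar = 1.0 / wSum;
}

int main() {
    // Azimuth of one target seen by two radars and one infrared sensor (radians).
    std::vector<Meas> azimuth = {{0.52, 4e-4}, {0.50, 4e-4}, {0.51, 1e-5}};
    double theta = 0.0, varTheta = 0.0;
    fuse(azimuth, theta, varTheta);
    std::cout << "fused azimuth " << theta << ", variance " << varTheta << "\n";
    // Distance would be fused from radar measurements only, since S_{D_I} = 0.
    return 0;
}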

From Eq. (9), a more stable measurement for target tracking can be obtained by using multiple different sensors. Due to the limited number of sensors, it is necessary to find a deployment strategy that allows the sensors to obtain more stable measurements.

2.3 Scoring System

According to specific scenarios and task requirements, a scoring system is constructed to quantify the influence of each sensor combination on target tracking. Considering four types of possible combinations, we design the following scoring rules:


– If a target is not detected by any radar, it cannot be tracked due to the lack of distance measurement. The score is negative.
– If the numbers of sensors are the same, the combination with different sensor types has a higher score.
– If a target is detected by more than N_th sensors, although the tracking ability is high, it leads to a waste of resources. The score is lower than that of a target detected by exactly N_th sensors.
– If the types of sensors are the same and the number of sensors is fewer than N_th, the combination with more sensors has a higher score.

It can be concluded that the combination of exactly N_th sensors including both radars and infrared sensors reaches the highest score. Suppose the score for target i is p_i; the ability of the sensors to track multiple targets can be quantified as:

S = \sum_{i=1}^{N_t} w_i p_i \qquad (10)

3 Dimensional-Reduced PSO Algorithm

PSO is a swarm-intelligence-based evolutionary algorithm. Each particle represents a potential solution to the optimization task, and the particles fly through the search space to find the optimal solution. Let M denote the swarm size. For each particle i (1 ≤ i ≤ M), let pbest_i denote its own local best position and x_i its current position. The global best position found by any particle during all previous steps is denoted gbest. f(·) is the fitness function of the optimization problem. According to Sect. 2, the fitness function can be described as:

f(x_i) = S = \sum_{j=1}^{N_t} w_j p_j(x_i) \qquad (11)

where x_i is a 3(m + n)-dimensional vector representing possible positions for all sensors:

x_i = \left[x_{Ra}^1, y_{Ra}^1, z_{Ra}^1, \ldots, x_{Ra}^m, y_{Ra}^m, z_{Ra}^m, x_{In}^1, y_{In}^1, z_{In}^1, \ldots, x_{In}^n, y_{In}^n, z_{In}^n\right] \qquad (12)

where m = N_{Ra} and n = N_{In}. During optimization, each particle updates its current velocity v_i toward pbest_i and gbest with a bounded random acceleration. pbest_i and gbest are updated as follows:

pbest_i(t+1) = \begin{cases} pbest_i(t) & \text{if } f(pbest_i(t)) > f(x_i(t)) \\ x_i(t) & \text{if } f(pbest_i(t)) \le f(x_i(t)) \end{cases} \qquad (13)

gbest(t+1) = \arg\max_{pbest_i} f(pbest_i(t+1)) \qquad (14)


Then the velocity and position of each particle are updated as follows:

v_i(t+1) = w_{iner}(t) v_i(t) + c_1 r_1(t)[pbest_i(t) - x_i(t)] + c_2 r_2(t)[gbest(t) - x_i(t)] \qquad (15)

x_i(t+1) = x_i(t) + v_i(t+1) \qquad (16)

where c_1 and c_2 are acceleration constants, r_1(t) and r_2(t) are two separate random functions in the range [0, 1], and x_i(t) and v_i(t) represent the position and velocity of the ith particle at time t. The variable w_{iner}(t) is the inertia weight used to balance global and local search, which is usually set as

w_{iner}(t) = w_{max} - \frac{w_{max} - w_{min}}{Max} \times t \qquad (17)

where Max is the maximum number of iterations, and w_{min} and w_{max} are the minimum and maximum values of the weight [5].
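A compact C++ sketch of one PSO update step following Eqs. (15)–(17); the fitness evaluation, velocity/position bounds and random seeding are omitted or simplified and are assumptions of this sketch.

#include <cstddef>
#include <random>
#include <vector>

struct Particle {
    std::vector<double> x, v, pbest;
    double pbestScore = -1e300;
};

// One velocity/position update (Eqs. (15)-(16)) using the linearly
// decreasing inertia weight of Eq. (17).
void psoStep(std::vector<Particle>& swarm, const std::vector<double>& gbest,
             int t, int maxIter, double wMax, double wMin,
             double c1, double c2, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double w = wMax - (wMax - wMin) * static_cast<double>(t) / maxIter;  // Eq. (17)
    for (Particle& p : swarm) {
        for (std::size_t d = 0; d < p.x.size(); ++d) {
            double r1 = u(rng), r2 = u(rng);
            p.v[d] = w * p.v[d]
                   + c1 * r1 * (p.pbest[d] - p.x[d])
                   + c2 * r2 * (gbest[d] - p.x[d]);   // Eq. (15)
            p.x[d] += p.v[d];                          // Eq. (16)
        }
    }
}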

3.1 Drawbacks of Classical PSO

In the early stage of the search, w_{iner}(t) is large and the global searching ability is strong, but particles may still fall into a local optimal solution. Once this happens, according to Eq. (15), the particle's flight velocity is largely determined by w_{iner}(t) and v(t). Since w_{iner}(t) keeps decreasing, the global searching ability becomes weak while the local searching ability becomes strong, which makes it difficult for particles to jump out of the local optimal solution. This commonly happens when solving multi-dimensional and multi-peak problems. In the following sections, we propose two methods to prevent it.

3.2 Nonlinear Inertia Weight Model

The linear inertia weight model of Eq. (17) is widely used in PSO algorithms and can speed up the search process, but for multi-dimensional and multi-peak problems it is less effective. Thus, a nonlinear model is proposed:

w_{iner}(t) = \frac{w_{max} - w_{min}}{2} \cos\!\left(\frac{t\pi}{Max}\right) + \frac{w_{min} + w_{max}}{2} \qquad (18)

By smoothing w_{iner}(t), the global and local search phases are both extended, so particles can search a wider space and locate the optimal solution more accurately within the approximate area.
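For comparison, minimal C++ helpers for the two inertia-weight schedules, the linear form of Eq. (17) and the cosine form of Eq. (18); w_max = 0.9 and w_min = 0.3, as in Sect. 4, would be typical arguments.

#include <cmath>

const double kPi = 3.14159265358979323846;

// Linearly decreasing inertia weight, Eq. (17).
double linearWeight(int t, int maxIter, double wMax, double wMin) {
    return wMax - (wMax - wMin) * static_cast<double>(t) / maxIter;
}

// Nonlinear (cosine) inertia weight, Eq. (18): changes slowly at both ends,
// extending both the global and the local search phases.
double nonlinearWeight(int t, int maxIter, double wMax, double wMin) {
    return 0.5 * (wMax - wMin) * std::cos(kPi * t / maxIter) + 0.5 * (wMin + wMax);
}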

3.3 Dimensional Reduction for Particles

In the sensor deployment model, we can determine whether PSO has found the best solution by checking whether each sensor has detected a target and whether each target has been covered by N_th different sensors.


If a sensor does not detect any targets, it is called an imperfect sensor. Take a radar for example: radar S_r with coordinates (x_r, y_r, z_r) is an imperfect sensor if:

\sqrt{(x_r - x_i)^2 + (y_r - y_i)^2 + (z_r - z_i)^2} > R_{Ra}, \quad i = 1, 2, \ldots, N_t \qquad (19)

4

Simulation Results

We consider a 100 × 100 × 100 space for simulation. Assume 14 targets are randomly deployed in the region and their threat levels are randomly generated in descending sort. 10 radars with detection range of 30 and 10 infrared sensors with detection range of 20 are deployed to track 14 targets. Parameters in PSO are set as: wmax = 0.9, wmin = 0.3, c1 = 2, c2 = 2, Max = 10, 000, tth = 2000 and M = 60. Suppose Nth = 3, the score for each sensor combination is designed in Table 1 based on the rules in Sect. 2.3.

An Optimal Deployment Strategy for Radars and Infrared Sensors

2081

Table 1. Scoring system Sensor combinations Score Sensor Combinations Score Radar Infrared sensor Radar Infrared sensor 2

1

100

2

0

20

1

2

100

1

0

10

3

0

80

0

x(x > 0)

−10, 000

1

1

40

y

z(y + z > 3)

50

The simulation results includes one figure and two tables. Figure 1 illustrates the simulation results. Table 2 shows the usage of each sensor when using the optimal deployment provided by three algorithms. Table 3 shows the sensor detection of each target before and after using PSO with linear inertia weight, PSO with nonlinear inertia weight and DRPSO. Traditional PSO finds its optimal solution after 1121 iterations. However, according to Tables 2 and 3, several sensors were not used and several targets didn’t get their highest scores, which means the result can still be improved. By using nonlinear inertia weight, the global search time is extended, which helps PSO find an optimal solution obtaining higher score after 2642 iterations, but it also suffers from sensors’ not being used effectively.

8000 DRPSO linear weight PSO nonlinear weight PSO

6000

score

4000 2000 0 -2000 -4000 1000 2000 3000 4000 5000 6000 7000 8000 9000

iteration times

Fig. 1. Algorithm performances

DRPSO first finds an optimal solution after 1762 iterations and jumps out of it when the result does not change within 2000 iterations. PSO is then performed again with fewer dimensions, and a better solution is obtained after 6474 iterations in total. Although two sensors are not used in DRPSO, all targets obtain high scores.

Table 2. Sensor usage (the targets detected by each radar #1–#10 and each infrared sensor #1–#10 under the optimal deployments found by PSO with linear weight, PSO with nonlinear weight, and DRPSO)


5 Conclusions and Future Work

In this paper, we analyzed how sensor deployment affects target tracking and designed a scoring system to evaluate it. DRPSO was then proposed to find the optimal deployment of radars and infrared sensors. Compared with traditional PSO, DRPSO obtains better results with fewer iterations. In DRPSO, when the particles are stuck in a local optimal solution, certain dimensions are selected specifically and PSO is performed again until all sensors are used. Future work may include using a fuzzy logic system to improve the scoring system and taking the targets' movement into consideration.

Table 3. Detection results for all targets

Target  Initialization    PSO (linear)      PSO (nonlinear)   DRPSO
        Radar  Infrared   Radar  Infrared   Radar  Infrared   Radar  Infrared
#1      1      0          2      1          2      1          2      1
#2      1      0          3      0          2      1          2      1
#3      2      0          2      1          2      1          2      1
#4      1      1          2      1          1      2          3      0
#5      2      0          1      2          2      1          2      1
#6      1      0          3      0          2      1          2      1
#7      1      0          3      0          2      1          2      1
#8      2      0          1      2          2      1          2      1
#9      1      0          2      1          2      1          2      1
#10     1      0          3      0          2      1          2      1
#11     1      0          1      0          3      0          2      1
#12     0      0          1      0          1      0          2      1
#13     3      0          2      1          2      0          2      1
#14     1      1          2      1          2      1          1      2
Score   −28410            7320              7830              8220

Acknowledgements. This work was supported by the National Natural Science Foundation of China (61731006, 61671138), and was partly supported by the 111 Project No. B17008.


References
1. Zhu A, Jing Z, Chen W, Wang L, Li Y, Cao Z (2008) Data fusion of infrared and radar for target tracking. In: International symposium on systems & control in aerospace & astronautics
2. Scheunert U, Cramer H, Aris P, Angelos A, Gerd W, Luisa A (2002) Multi sensor data fusion for object detection: challenges and benefit. ATA-TORINO 55(9/10):301–309
3. Wang X, Wang S, Ma JJ (2007) An improved co-evolutionary particle swarm optimization for wireless sensor networks with dynamic deployment. Sensors 7(3):354–370
4. Li J, Li K, Zhu W (2007) Improving sensing coverage of wireless sensor networks by employing mobile robots. In: 2007 IEEE international conference on robotics and biomimetics (ROBIO). IEEE, pp 899–903
5. Shi Y, Eberhart RC (1999) Empirical study of particle swarm optimization. In: Proceedings of the 1999 congress on evolutionary computation (CEC99), vol 3. IEEE, pp 1945–1950

Integrity Design of Spaceborne TTEthernet with Cut-Through Switching Network

Ji Li1(&), Huagang Xiong1, Dong Yan2, and Qiao Li1

1 School of Electronic and Information Engineering, Beihang University, No. 37 Xueyuan Road, 100083 Haidian District, Beijing, China
[email protected]
2 Beijing Institute of Spacecraft System Engineering, Beijing, China

Abstract. Because the current information transmission and interaction capabilities of the satellite main bus can hardly keep up with the trend toward intelligent spacecraft, an integrated design based on the time-triggered Ethernet protocol is proposed. A method for fast generation and verification of the CRC in the case of cut-through transmission is designed to further ensure communication quality. The network scheduling difference between variable-length-frame and fixed-length-frame forwarding is compared by SMT modeling and calculation, which shows that this design method improves the network scheduling capability. In addition, this paper proposes further ideas on the issue of health monitoring under the Time-Triggered Ethernet protocol and cut-through switching.

Keywords: Time-Triggered Ethernet · Cut-Through · CRC · Satisfiability modulo theories (SMT)

1 Introduction

With the continuous development of space technology, the amount of communication data exchanged on satellites will increase greatly. In the future, intelligent processing will require high-speed information exchange between computing nodes, at rates of 1 Gbps or higher. Inter-satellite communication currently uses DTN, and there have been many studies analyzing its performance [1]. The network buses already applied within satellites, such as SpaceWire [2] whose maximum rate is only 200 Mbps, can hardly meet these requirements. This paper uses TTE (Time-triggered Ethernet) as the on-board backbone network, which can be backward compatible with various buses such as 1553B and SpaceWire at the same time [3]. As a backbone network, it has been successfully applied on the Orion spacecraft in the United States. Its speed can reach 1 Gbps or higher, and because of its global clock synchronization and predefined scheduling, it avoids the network congestion that SpaceWire is prone to; it is therefore a suitable choice for the current satellite bus. In order to maintain the high integrity and accuracy of the received data, the TTE network often uses the store-and-forward method for data transmission. This method can detect errors in the data packets entering the switch, but the data processing delay is large, mainly because the input and output terminals must undergo serial-to-parallel conversion; this cumbersome process affects the response speed and causes high delay.


In this paper, the cut-through method is used for network data exchange [4]; it has a fast forwarding rate, low delay and high throughput. Its disadvantage is that when the bit error rate in the network is high, the switch forwards both complete data packets and erroneous data packets, which brings many wrong communication packets into the entire switching network. Therefore, Sect. 2 gives a brief introduction to the TTE network mechanism and proposes a method for fast CRC detection [5, 6], which can detect whether the part of the data frame received so far is in error during transmission: if an error occurs, forwarding is stopped; if it is correct, the frame is cut-through forwarded. Under the premise of ensuring the reliability of the network data, the forwarding rate of communication tasks is improved. In addition, Sect. 3 uses Satisfiability Modulo Theories (SMT) [7, 8] modeling to generate a time-triggered (TT) schedule that satisfies constraints such as the network topology and traffic relationships, thereby verifying that a cut-through network with variable-length frame communication is more favorable to the network-wide scheduling plan and improves the overall performance of network communication. Finally, Sect. 4 draws a conclusion, discusses PHM (Prognostic and Health Management) for the integrated processor [9], and considers how to perform network reconfiguration when commonly used on-board equipment is damaged.

2 Safety Design of Cut-Through Switching

2.1 A Brief Introduction to TTEthernet

The TTE network complies with the SAE AS6802 protocol and uses a time-triggered communication mechanism. In TTEthernet, each terminal and switch starts to send and receive data at the times specified by the communication configuration table. The sending and receiving times of messages in the entire network are generated by offline scheduling and strictly followed during communication, so that no shared resources conflict with each other during execution. In the network, each terminal has at most one message to send at any given time, and there is no case where multiple messages compete for the same output link. Since the local clock of each node drifts, the key feature of the protocol is its network-wide clock synchronization function, which synchronizes the local clocks by transmitting PCF frames and using a compression controller and a synchronization controller so as to satisfy the above communication mechanism.

CRC Fast Verification Method in Cut-Through Forwarding Mode

When the TTE backbone network uses the cut-through method to forward exchanged information, the CRC check must be performed in real time. The CRC check is based on a polynomial calculation. In hardware, the data is shifted into a register bit by bit or byte by byte and the CRC is computed serially, while in software the calculation can be sped up using precomputed lookup tables. References [10, 11] proposed table lookup methods for CRC, and [12] proposed a fast CRC computation using a software-based parallelization scheme. In this


paper, parallel computation is introduced on top of the look-up table method. The steps are as follows:

1. Decompose the input frame into multiple fixed-length segments of n bits.
2. Perform the CRC calculation on each segment in parallel (the calculation for a segment starts as soon as it has been received).
3. Combine the per-segment CRC values by XOR, in frame order, to obtain the CRC of the portion of the frame received so far.
4. Every n register cycles, compare the value calculated in step 3 with the value in the check register.
5. If they are the same, cut-through forward this segment; otherwise forwarding is stopped.
6. Repeat steps 3, 4 and 5 until the end of the frame.

The C++ implementation of steps 2 and 3 is as follows:

ALT_U32 Reverse_table_CRC32_follow_a_crc(ALT_U8 *ptr, int len,
                                         ALT_U32 *table, ALT_U32 crc)
{
    ALT_U8 *p = ptr;
    /* Table-driven CRC update, continuing from the CRC state passed in. */
    for (int i = 0; i < len; i++)
        crc = table[(crc ^ *(p + i)) & 0xff] ^ (crc >> 8);
    return crc;
}
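The paper does not show how the lookup table passed to this function is built. The following minimal sketch is one way to generate it, assuming the standard reflected CRC-32 polynomial 0xEDB88320, which is consistent with the byte-wise update used in Reverse_table_CRC32_follow_a_crc above; the typedefs and the function name are illustrative, not taken from the original implementation.

typedef unsigned char ALT_U8;   /* stand-ins for the Altera HAL types used in the paper */
typedef unsigned int  ALT_U32;

void Build_reflected_CRC32_table(ALT_U32 *table)
{
    /* Reflected CRC-32 table (polynomial 0xEDB88320), matching the update
       crc = table[(crc ^ byte) & 0xff] ^ (crc >> 8). */
    for (ALT_U32 i = 0; i < 256; i++) {
        ALT_U32 c = i;
        for (int k = 0; k < 8; k++)
            c = (c & 1) ? (0xEDB88320u ^ (c >> 1)) : (c >> 1);
        table[i] = c;
    }
}

With such a table, calling Reverse_table_CRC32_follow_a_crc with an initial state of 0xFFFFFFFF and complementing the final result yields the conventional CRC-32 of a buffer.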

Since the full frame can be divided into multiple segments in the software calculation, and the data is in a known state when passing through the switch, each segment can be CRC-checked separately. Because the exclusive-OR operation is commutative and associative, once the CRCs of the individual segments have been computed, the CRC check of the full frame can be completed by combining them.

Fig. 1. Parallel computing structure


It can be seen from Fig. 1 that, at any intermediate time, Crc_cur is known a priori from the previous calculation (the CRC of each n-bit segment is computed from the initial value 0xFFFFFFFF, and the first segment is no exception), so Crc_t01, Crc_t02 and Crc_t03 can each be computed with the function above:

Crc_t01 = Reverse_table_CRC32_follow_a_crc(pointer to T01 segment,
                                           sizeof(T01 segment), table, 0xFFFFFFFF);
Crc_t02 = Reverse_table_CRC32_follow_a_crc(pointer to T02 segment,
                                           sizeof(T02 segment), table, 0xFFFFFFFF);
Crc_t03 = Reverse_table_CRC32_follow_a_crc(pointer to T03 segment,
                                           sizeof(T03 segment), table, 0xFFFFFFFF);

Before the T01 segment is received, the CRC continuation of an all-zero sequence of the same length as T01, starting from the state Crc_cur, can already be calculated; the result is Crc_zeros_follow_cur:

Crc_zeros_follow_cur = Reverse_table_CRC32_follow_a_crc(all-zero buffer,
                                                        sizeof(T01 segment),
                                                        table, Crc_cur);
Crc_t01_follow_cur = Crc_t01 ^ Crc_zeros_follow_cur;

When the parallel calculation reaches the T01 segment, Crc_t01_follow_cur can therefore be obtained in a single step from Crc_cur, Crc_t01 and Crc_zeros_follow_cur. This value corresponds to the value accumulated in the current register and can be compared in real time during the CRC check to verify each segment. By using multiple CRC check registers in the switch, the transmitted frame can be verified piece by piece. When this calculation is faster than the serial calculation, cut-through switching brings an additional benefit: in the TTE backbone network, spacecraft messages are often transmitted with triple redundancy, and since the messages in a time window are verified in a first-in-first-out manner, once a CRC check fails, one of the redundant copies can be used directly. The subsequent CRC checks proceed exactly as for the original frame, so fast forwarding continues while data integrity is guaranteed. The parallel CRC-32 algorithm is compared with the bitwise calculation method and the table lookup method. The tests were written in C++ and executed on a computer with a 3.2 GHz CPU. Each test measures the time taken by the selected algorithm to compute the CRC of 10,000 random 256-byte messages. Each algorithm was run in 20 independent repetitions and the smallest sample value was taken, because the operating system may interrupt the process during a run and inflate the measured time. Times are measured with the C++ clock function. The run functions for the bitwise and forward table lookup methods are as follows:


ALT_U32 CRC32_bit_follow_a_crc(ALT_U8 *ptr, ALT_U32 len,
                               ALT_U32 gx, ALT_U32 crc)  /* bitwise operations */
{
    ALT_U8 i;
    while (len--) {
        for (i = 1; i != 0; i = i << 1) {
            if ((crc & 0x80000000) != 0) {
                crc = crc << 1;
                crc ^= gx;
            } else {
                crc = crc << 1;
            }
            if ((*ptr & i) != 0)
                crc ^= gx;
        }
        ptr++;
    }
    /* Reflect() is the bit-reversal helper used by the implementation. */
    return ((ALT_U32)Reflect(crc, 32)) ^ 0xffffffff;
}

ALT_U32 Direct_table_CRC32(ALT_U8 *ptr, int len, ALT_U32 *table)  /* forward lookup operations */
{
    ALT_U32 crc = 0xffffffff;
    ALT_U8 *p = ptr;
    int i;
    for (i = 0; i < len; i++)
        crc = (crc << 8) ^ table[(crc >> 24) ^ (ALT_U8)Reflect(*(p + i), 8)];
    return (ALT_U32)Reflect(crc, 32);
}
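The paper describes the benchmark procedure (10,000 random 256-byte messages, 20 repetitions, minimum time, measured with the clock function) but does not list the harness itself. The following is a minimal sketch of such a harness for the table-lookup variant; the function name run_benchmark and the constants are illustrative assumptions, not part of the original test code.

#include <cstdlib>
#include <ctime>
#include <vector>

typedef unsigned char ALT_U8;
typedef unsigned int  ALT_U32;

double run_benchmark(ALT_U32 (*crc_func)(ALT_U8*, int, ALT_U32*), ALT_U32 *table)
{
    const int kMessages = 10000, kMsgLen = 256, kRepeats = 20;
    std::vector<ALT_U8> data(kMessages * kMsgLen);
    for (size_t i = 0; i < data.size(); i++)
        data[i] = (ALT_U8)(std::rand() & 0xff);          /* random message bytes */

    double best_ms = 1e30;
    for (int r = 0; r < kRepeats; r++) {                 /* 20 repetitions, keep the minimum */
        std::clock_t t0 = std::clock();
        for (int m = 0; m < kMessages; m++)
            (void)crc_func(&data[m * kMsgLen], kMsgLen, table);
        double ms = 1000.0 * (double)(std::clock() - t0) / CLOCKS_PER_SEC;
        if (ms < best_ms) best_ms = ms;
    }
    return best_ms;
}

/* Example: run_benchmark(Direct_table_CRC32, table); the bitwise variant has a
   different signature and would need its own wrapper. */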

Through testing, it can be found that the parallel algorithm improves the CRC check speed compared with the other methods, and because the check can be performed segment by segment, the integrity and reliability of the transmitted data are also improved (Table 1).

Table 1. CRC-32 check results of the benchmark (in ms)

Method                      Time (ms)
Bitwise operations          28.16
Forward lookup operations   7.03
Parallel                    5.27

3 Network Planning Verification by SMT The TT network adopts a time-triggered mechanism and needs to establish a time schedule that guarantees the forwarding of messages once the task set is known. Since the real-time CRC check makes cut-through forwarding safe, the influence of variable-length-frame cut-through on the network planning of a general TTE backbone network needs to be discussed. In this paper, the scheduling scheme of the whole network is obtained with Satisfiability Modulo Theories (SMT). The constraints used are as follows:


1. No-conflict constraint: On any physical link, the transmissions of any two TT messages must not conflict; that is, one message may only start sending after the other has finished (a minimal SMT encoding of this constraint is sketched after Table 2).
2. Path dependency constraint: The single-hop delay of a TT message lies within a specific range; the lower limit is given by the fixed transmission and forwarding delay of the physical link, and the upper limit is determined by the switch memory.
3. Synchronous transmission constraint: When a multicast message is sent to different ports, the transmission times must be identical.
4. End-to-end transmission constraint: An upper bound on the delay of a message from the source node to the destination node, to ensure the timeliness of the communication.
5. Application-layer constraints: Different TT messages come from different task requirements, and a minimum transmission interval between them must be set.

The real-time CRC check is related to the second constraint. Let the single-hop delay of a TT message be dhop. Then min(dhop) is the lower limit of the single-hop delay, determined by the fixed transmission delay of the physical link and the length of the store-and-forward TT frame; here the reservation is generally made for the maximum frame length in the network (fixed-length frame). Max(dhop) is the upper bound of the single-hop delay. The memory size of the switch determines the forwarding storable

Fig. 2. TTE network topology example


time. The narrow time window corresponding to max(dhop) - min(dhop) is one cause of failure of whole-network TT message scheduling. With cut-through, the switch knows the destination address as soon as it has received the first 6 bytes of the packet, so it can already decide to which port the packet should be forwarded. With the fast CRC check technique, the lower bound min(dhop) of the single-hop delay can be reduced to close to the fixed transmission delay of the physical link, thereby improving the feasibility of network scheduling (Fig. 2). A greedy algorithm that prioritizes single-hop delay is used to solve the specific SMT instances. The network parameters are as follows: 10 end systems, 4 switches (every pair of switches interconnected), 1 Gbps bandwidth, 500 random TT messages and a random topology (each node randomly selects an access switch). The schedule was verified in 100 independent experiments; the topology and results are as follows (Table 2):

Table 2. Comparison of scheduling results before and after modifying constraints

Scheme                                   Unable to schedule (ratio) (%)   Average scheduling time (s)
Cut-through (variable-length frame)      7                                47.1
Store and forward (fixed-length frame)   9                                50.2

It can be seen from the above that, while safety is ensured, the cut-through method has a measurable positive impact on network scheduling.
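To make the no-conflict constraint concrete, the following minimal sketch encodes it for two TT messages sharing one physical link using the C++ API of the Z3 SMT solver. The frame lengths, period and variable names are illustrative assumptions, and this is only one possible encoding, not necessarily the solver or model used to produce Table 2.

#include <iostream>
#include "z3++.h"

int main() {
    z3::context c;
    z3::solver s(c);

    // Transmission offsets (in time slots) of two TT messages on the same link.
    z3::expr o1 = c.int_const("offset_msg1");
    z3::expr o2 = c.int_const("offset_msg2");
    int len1 = 12, len2 = 20, period = 100;      // illustrative frame lengths and period

    s.add(o1 >= 0 && o1 + len1 <= period);       // each message fits inside the period
    s.add(o2 >= 0 && o2 + len2 <= period);
    // No-conflict constraint: the two transmissions must not overlap on the link.
    s.add(o1 + len1 <= o2 || o2 + len2 <= o1);

    if (s.check() == z3::sat)
        std::cout << s.get_model() << std::endl; // a feasible schedule
    else
        std::cout << "unschedulable" << std::endl;
    return 0;
}

Extending the model with the path-dependency, synchronization, end-to-end and application-layer constraints yields a full schedule-synthesis instance of the kind described above.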

4 Conclusion and Future Work This paper discusses the use of the TTE network protocol as the backbone of integrated electronic systems on spacecraft. In order to reduce end-to-end delay and switch memory load, the cut-through mode is used for communication. In this mode, data integrity and correctness are guaranteed by an improved parallel CRC check method that can be extended to any available register width, so the approach is readily implementable. In addition, the integrated electronic architecture is modeled and analyzed with SMT under the cut-through mode, and it is shown that global scheduling of communication messages is easier to guarantee when building the network this way than with store-and-forward. One of the major problems of an on-board TTE backbone network is that equipment on a satellite can rarely be replaced; once a fault occurs, it must be handled by redundancy and fault-tolerance techniques. The most commonly used solutions are hot redundant backup and COM/MON fault-tolerance mechanisms, whose health is monitored through PHM. When several integrated processing nodes in the network fail, the whole network faces a reconfiguration problem, which must be solved without affecting the TTE network communication tasks that are already running. This involves the reconfiguration of the network


scheduling and of the communication nodes across the network. With the switches fixed, how to find a feasible reconfiguration, and the optimal one, are problems that still need to be solved after the cut-through switching method is applied to the on-board TTE network.

References
1. Yang G, Wang R, Zhao K et al (2018) Queueing analysis of DTN protocols in deep-space communications. IEEE Aerosp Electron Syst Mag 33(12):40-48
2. Liu W, Niu Y, Cheng B et al (2016) Deterministic communication and distributed control of avionics based on SpaceWire-D: SpaceWire missions and applications, short paper. In: SpaceWire conference
3. SAE International Group (2011) Time-triggered ethernet: AS6802. SAE International, Washington, D.C.
4. Zhanikeev M (2017) The switchboard optimization problem and heuristics for cut-through networking. In: 2017 IEEE international symposium on local and metropolitan area networks (LANMAN), Osaka, Japan, pp 1-3
5. Brown DT (2007) Cyclic codes for error detection. Proc IRE 49(1):228-235
6. El-Khamy M, Lee J, Kang I (2015) Detection analysis of CRC-assisted decoding. IEEE Commun Lett 19(3):483-486
7. Steiner W (2010) An evaluation of SMT-based schedule synthesis for time-triggered multi-hop networks. In: 2010 31st IEEE real-time systems symposium. IEEE Computer Society
8. Craciunas SS, Oliver RS (2014) SMT-based task- and network-level static schedule generation for time-triggered networked systems. In: International conference on real-time networks and systems
9. Hou W, Yan J, Yao G (2014) A comprehensive validation approach of PHM system's diagnosis and verification. In: Prognostics and system health management conference. IEEE
10. Sarwate DV (1988) Computation of cyclic redundancy checks via table look-up. Commun ACM 31(8):1008-1013
11. Kounavis ME, Berry FL (2008) Novel table lookup-based algorithms for high-performance CRC generation. IEEE Trans Comput 57(11):1550-1560
12. Engdahl JR, Chung D (2014) Fast parallel CRC implementation in software. In: International conference on control. IEEE

Image Mosaic Algorithm Based on SURF Qingfeng Sun(&), Hao Yang, Liang Wang, and Qingqing Zhang Department of Electronic Engineering, Anhui Technical College of Mechanical and Electrical Engineering, Wuhu, Anhui, China [email protected]

Abstract. Based on the SURF feature detection operator, image feature points are extracted, matched and mosaicked. The wavelet transform is used to fuse the registered images and eliminate the seams between stitched images. Overall, the stitching effect is good.

Keywords: SURF · Mosaic · Registration · Fusion

Image Mosaic is a technique that combines multiple images with overlapping regions into a larger seamless image. Specifically, image mosaic is a panoramic image technology. For a group of image sequences with overlapping parts, a panoramic image containing all image sequences is formed through a series of operations such as spatial registration, image transformation, resampling and image fusion.

1 Image Mosaic Technology Image mosaic technology is widely used. In the field of remote sensing image processing, image mosaic technology is used to combine remote sensing images of multiple local areas into an image containing complete scenes; when constructing virtual scenes in virtual reality, image mosaic technology can be used to construct panoramic images of various types; in daily life, we can mosaic several photographs taken by digital cameras into a large picture; In addition, image mosaic technology has important application value in video compression, video retrieval, medical image analysis, military and other fields. There are many methods of image mosaic. Different algorithms and mosaic steps will have some differences. However, the approximate processes of different stitching algorithms are similar. Generally speaking, image mosaic includes the following three steps: (1) Image preprocessing Image preprocessing mainly prepares for image registration, so that the quality of the image can meet the requirements of image registration. When the image quality is not ideal for image mosaic, it is easy to cause some mismatches if the image is not preprocessed. Image preprocessing generally includes image denoising, image projection, image correction and so on.



(2) Image registration
The quality of image mosaic mainly depends on the accuracy of image registration. Image registration uses a matching algorithm to find, in the image to be registered, the positions corresponding to feature points or templates in the reference image; its purpose is to find a spatial transformation that aligns the coordinates of the overlapping parts of the image sequence. An image registration algorithm not only needs little computation, but must also ensure the accuracy of registration. Image registration is the key step of image mosaic technology and the focus of this paper.
(3) Image fusion
Images can be stitched after registration. However, due to gray-level differences between images and other causes, the stitched images are prone to brightness differences and visible seams, so image fusion should be carried out after stitching. The image fusion algorithm should satisfy the following requirements: the boundary transition should be natural and the stitching seam should be eliminated; the loss of original image information during mosaicking should be minimized; and the fusion algorithm should be applicable to the nature of the mosaic image.

2 SURF Feature Detection Operator Detection algorithms based on image feature points are a focus and hotspot in the field of image processing, especially for image registration and image mosaic. From the classical Harris corner detector [1] to the SIFT operator [2, 3] based on scale-space theory, there are many algorithms based on image point features. In 2006, Herbert Bay made several improvements after an in-depth study of SIFT and proposed the SURF (Speeded Up Robust Features) algorithm [4]. Generally speaking, SURF improves on SIFT in three aspects: feature point detection, main direction determination and the feature descriptor. SURF uses box filters, the integral image and Haar wavelet responses, which make it better than SIFT in speed, accuracy and stability under different geometric transformations. The SURF operator has three main steps: 1. computing the integral image; 2. scale-space construction and feature detection; 3. determining the main direction and computing the descriptor.

2.1 Computing the Integral Image

Let I_R(x) denote the sum of the pixel values in the rectangular region spanned by the image origin and the point x, as follows:

I_R(x) = \sum_{i=0}^{i_x} \sum_{j=0}^{j_y} I(i, j)    (1)

Here x denotes the point with coordinates (x, y). As can be seen from Fig. 1, it takes only four operations to calculate the value of a rectangular sum S, namely S = A - B - C + D, independent of the size of the rectangle.
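As a small illustration of Eq. (1) and the four-access rectangle sum, the following sketch builds an integral image from a grayscale image stored row-major and evaluates one rectangular sum; the buffer layout and function names are illustrative assumptions rather than the paper's own implementation.

#include <vector>

// Build the integral image: I(r, c) = sum of img over rows 0..r and columns 0..c.
std::vector<long long> build_integral(const std::vector<unsigned char> &img,
                                      int rows, int cols)
{
    std::vector<long long> I(rows * cols, 0);
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++) {
            long long up   = (r > 0) ? I[(r - 1) * cols + c] : 0;
            long long left = (c > 0) ? I[r * cols + (c - 1)] : 0;
            long long diag = (r > 0 && c > 0) ? I[(r - 1) * cols + (c - 1)] : 0;
            I[r * cols + c] = img[r * cols + c] + up + left - diag;
        }
    return I;
}

// Sum over the rectangle with corners (r0, c0)-(r1, c1) using S = A - B - C + D.
long long rect_sum(const std::vector<long long> &I, int cols,
                   int r0, int c0, int r1, int c1)
{
    long long A = I[r1 * cols + c1];
    long long B = (r0 > 0) ? I[(r0 - 1) * cols + c1] : 0;
    long long C = (c0 > 0) ? I[r1 * cols + (c0 - 1)] : 0;
    long long D = (r0 > 0 && c0 > 0) ? I[(r0 - 1) * cols + (c0 - 1)] : 0;
    return A - B - C + D;
}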

Fig. 1. Integral image schematic

2.2 Scale Space Construction and Feature Detection

To avoid the aliasing introduced when the second-order Gaussian derivative filters are discretized and cropped, Bay et al. proposed replacing them with box filters. The first row, from left to right, shows the second-order partial derivatives of the Gaussian in the x, y and xy directions, denoted L_{xx}, L_{yy} and L_{xy}; the next row shows the corresponding 9 x 9 box filter templates, whose convolution responses are denoted D_{xx}, D_{xy} and D_{yy}. The approximate determinant of the Hessian matrix is then

D(H) = D_{xx} D_{yy} - (0.9\, D_{xy})^2    (2)

2.3 Determination of the Main Direction and Descriptor Calculation

The descriptor is constructed from gradient-histogram statistics after determining the main direction of the neighborhood. To ensure rotation invariance, the Haar wavelet responses in the x and y directions are first calculated for the points in a neighborhood of radius 6s (s being the scale of the feature point), with a Haar wavelet side length of 4s. The whole circular region is traversed, and the direction with the longest response vector is chosen as the main direction of the feature point (Fig. 2).


Fig. 2. Descriptor schematic diagram

3 Image Mosaic Algorithms Based on SURF

(1) Feature point extraction and matching. Feature points are extracted from the reference image and the image to be stitched with the SURF operator and matched with the nearest-neighbor (NN) algorithm: for a feature point in one image, its nearest and second-nearest feature points are searched in the other image; if the ratio of the nearest Euclidean distance to the second-nearest Euclidean distance is below a given threshold, the pair is accepted as a match, otherwise the two feature points are considered not to match. After the initial matching pairs are confirmed, the RANSAC method is used to remove mismatched pairs and estimate the transformation parameters, giving the final registration result.
(2) According to the feature point matches, the image deflection matrix is calculated, including the rotation angle and the offset.
(3) A new image matrix is created, one image is copied to the left, and the other image is stitched onto it according to the deflection matrix.
(4) The wavelet transform [5] is used for fusing the mosaicked images, removing the edge gap and smoothing pixel differences (Figs. 3, 4, 5 and 6). A sketch of steps (1)-(3) using OpenCV is given below.
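The following minimal C++ sketch illustrates steps (1)-(3) with OpenCV. It assumes the xfeatures2d contrib module (which provides SURF) is available; the Hessian threshold, ratio threshold, image file names and output canvas size are illustrative choices, not values taken from the paper.

#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>

int main() {
    cv::Mat img1 = cv::imread("left.jpg",  cv::IMREAD_GRAYSCALE);   // reference image
    cv::Mat img2 = cv::imread("right.jpg", cv::IMREAD_GRAYSCALE);   // image to be stitched

    // (1) SURF feature extraction
    cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(400.0);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    surf->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    surf->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // Nearest-neighbor matching with the ratio test
    cv::FlannBasedMatcher matcher;
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc2, desc1, knn, 2);
    std::vector<cv::Point2f> pts1, pts2;
    for (const auto &m : knn)
        if (m.size() == 2 && m[0].distance < 0.7f * m[1].distance) {
            pts2.push_back(kp2[m[0].queryIdx].pt);
            pts1.push_back(kp1[m[0].trainIdx].pt);
        }

    // RANSAC removes mismatches and estimates the transformation, step (2)
    cv::Mat H = cv::findHomography(pts2, pts1, cv::RANSAC, 3.0);

    // (3) Warp the second image onto a larger canvas and copy the first image in
    cv::Mat pano;
    cv::warpPerspective(img2, pano, H, cv::Size(img1.cols * 2, img1.rows));
    img1.copyTo(pano(cv::Rect(0, 0, img1.cols, img1.rows)));
    cv::imwrite("mosaic.jpg", pano);
    return 0;
}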

Fig. 3. Two original images


Fig. 4. The match result based on SURF operator

Fig. 5. Image deflection

Fig. 6. Image mosaic result


4 Conclusion Combining SURF-based image feature extraction with wavelet multi-scale technology, an image mosaic algorithm is proposed. The experimental results show that the proposed method is fast and produces good results. Acknowledgements. This work is supported by the key research project of natural science in colleges and universities in Anhui province (KJ2019A1153) and the 2016 quality engineering project in Anhui province: virtual simulation experiment teaching center for industrial robot (2016xnzx007).

References
1. Harris C, Stephens MJ (1988) A combined corner and edge detector. In: 4th Alvey vision conference, pp 147-152
2. Lowe DG (1999) Object recognition from local scale-invariant features. Int Conf Comput Vis 2:1150-1157
3. Lowe DG (2004) Distinctive image features from scale-invariant key points. Int J Comput Vis 60(2):91-110
4. Bay H, Tuytelaars T, Van Gool L (2006) SURF: speeded up robust features. In: European conference on computer vision, pp 404-417
5. Mallat SA (1987) Compact multi-resolution representation: the wavelet model, pp 2-7

Research on Dynamic Performance of DVR Based on Dual Loop Vector Decoupling Control Strategy Hao Yang(&) and Liang Wang Anhui Technical College of Mechanical and Electrical Engineering, Wuhu, Anhui, China [email protected]

Abstract. In the distribution system, voltage sags and transient swells cause unexpected outages of power loads. A Dynamic Voltage Restorer (DVR) is usually used to compensate the grid voltage. The DVR detection unit should have good dynamic performance in detecting voltage amplitude and phase changes, so that the DVR can be triggered to rapidly compensate distribution network voltage fluctuations. In order to improve the power quality of the power system, a DVR control strategy based on double-loop vector decoupling control is proposed to improve the dynamic performance of the DVR during compensation. The experimental results show that the proposed control strategy corrects voltage sags more quickly than the traditional method, gives the DVR better dynamic performance, and ultimately improves the power quality of the grid.

Keywords: Micro grid · Voltage unbalance · Positive and negative sequence separation

With the improvement of production automation and rising product quality requirements, industrial and commercial power users are becoming more and more sensitive to power quality issues. Power quality incidents cause equipment shutdowns, product defects and damage to power equipment, bringing huge economic losses to consumers and enterprises. Therefore, the study of power quality has become a hot research topic [1]. For the power quality problems caused by voltage sags and swells, several solutions exist at present [2], such as installing a UPS or a voltage correction device to improve the ability of electrical equipment to ride through voltage fluctuations. The DVR is mainly used to protect sensitive loads that may be affected by fluctuations of the distribution voltage. This paper presents a dual-loop vector decoupling control strategy that improves the dynamic performance of the DVR. The characteristic of this strategy is that linear regulation is used as the control basis, with the inductor current and the compensation voltage as control variables. First, the filter inductor current and capacitor voltage are vector transformed into the synchronous rotating coordinate system; a decoupling operation is then carried out, which turns all controlled quantities into DC quantities, so that the traditional PI control method can be used [3].


1 DVR Control Strategy and Mathematical Model Generally, a DVR has three compensation modes: minimum active power compensation [11], voltage-invariant compensation [4], and minimum amplitude compensation [5]. In this paper, voltage-invariant compensation is chosen to achieve optimal control. That is, after a sag or swell of the grid voltage, the load-side voltage is kept consistent with the voltage before the fluctuation, avoiding a large phase jump at the load and reducing the impact on it. Voltage-invariant compensation first establishes an accurate grid voltage reference; the nominal voltage is compared with the actual voltage and the output voltage is adjusted accordingly, so the magnitude and phase angle of the voltage must be accurately restored. The control flow of this strategy is to obtain the actual grid frequency and angle through a three-phase phase-locked loop and generate the rated grid voltage reference at the same time. When the grid voltage deviates from the reference, the voltage reference, frequency and phase remain unchanged, and the difference between the reference voltage and the grid voltage is calculated; this difference is compensated by the DVR. The equivalent circuit of the DVR is shown in Fig. 1. The three-phase grid voltage is denoted vsx; the inverter (VSC) output voltage and filter inductor current are denoted vinvx and iLx; the filter capacitor voltage and current are denoted vcx and ic; the compensation voltage and current provided by the DVR are denoted vinjx and iinjx; the DC energy storage unit is denoted vdc; and the load voltage is denoted vloadx (Fig. 1).

Fig. 1. Single-phase equivalent circuit of three-phase DVR system

The control system of the DVR consists of two cascaded closed-loop controllers: the outer loop controls the voltage vCf(t) across the filter capacitor, and the inner loop controls the


filter inductor current iLf(t). It is assumed that the compensation voltage of the DVR is the voltage across the capacitor of the VSC output filter, and the compensation transformer is considered ideal with a 1:n turns ratio. The following relations hold:

\begin{cases} v_{inj}(t) = n \, v_{Cf}(t) \\ i_{inj}(t) = n \, i_{S}(t) \end{cases}    (1)

Using x as a placeholder for the phase subscripts a, b and c, the loop differential equations follow from Kirchhoff's current law (KCL) and voltage law (KVL):

i_{Lfx}(t) = i_{Cfx}(t) + i_{injx}(t) = C_f \frac{d}{dt} v_{injx}(t) + i_{injx}(t)    (2)

v_{invx}(t) - v_{injx}(t) - R_f \, i_{Lfx}(t) - L_f \frac{d}{dt} i_{Lfx}(t) = 0    (3)

By applying the Clarke and Park transformations, formulas (2) and (3) are transformed into the dq synchronous rotating coordinate frame, giving (4) and (5):

\frac{d}{dt} v_{inj}^{(dq)}(t) = \frac{1}{C_f} i_{Lf}^{(dq)}(t) - \frac{1}{C_f} i_{inj}^{(dq)}(t) - j\omega \, v_{inj}^{(dq)}(t)    (4)

\frac{d}{dt} i_{Lf}^{(dq)}(t) = \frac{1}{L_f} v_{inv}^{(dq)}(t) - \frac{1}{L_f} v_{inj}^{(dq)}(t) - \frac{R_f}{L_f} i_{Lf}^{(dq)}(t) - j\omega \, i_{Lf}^{(dq)}(t)    (5)

Under the ideal power grid condition, the voltage vector is in the same direction as the d-axis. The grid voltage component in the d-axis direction is equal to its effective value, and the q-axis component is equal to zero. Therefore, the D component of the current vector becomes the active current component (d current), and the q axis component of the current vector becomes the reactive current component (q current). In order to improve the control performance of DVR, such as fast response performance and low steady-state error, it is necessary to improve the linearity of the controller. In this paper, a vector controller with coupling term as feed-forward term is used to drive DVR.
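As a small illustration of how the measured three-phase filter quantities can be moved into the synchronous dq frame used above, the following sketch implements the Clarke and Park transforms. The amplitude-invariant scaling and the function name are illustrative assumptions, since the paper does not state which convention it uses.

#include <cmath>

struct DQ { double d, q; };

// Amplitude-invariant Clarke transform (abc -> alpha/beta) followed by the
// Park transform (alpha/beta -> dq) with rotation angle theta from the PLL.
DQ abc_to_dq(double a, double b, double c, double theta)
{
    double alpha = (2.0 / 3.0) * (a - 0.5 * b - 0.5 * c);
    double beta  = (2.0 / 3.0) * (std::sqrt(3.0) / 2.0) * (b - c);
    DQ out;
    out.d =  alpha * std::cos(theta) + beta * std::sin(theta);
    out.q = -alpha * std::sin(theta) + beta * std::cos(theta);
    return out;
}

With theta locked to the grid voltage angle, a healthy 220 V (rms) grid yields d and q components close to the values assumed in Eq. (6) below.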

2 Control Design of DVR In order to improve the transient response of the DVR and control the active and reactive power of the compensation voltage separately, a DVR control strategy based on double-loop vector decoupling control is proposed in this paper. First, a software phase-locked loop is used to extract the grid voltage amplitude and phase angle [7], from which the direct-axis component vd and the quadrature-axis component vq of the grid voltage in the synchronous coordinate frame are obtained. When the grid is normal and the phase lock is correct, we have


\begin{cases} v_d = 220 \\ v_q = 0 \end{cases}    (6)

As shown in Fig. 2, in the synchronous coordinate frame [6], the voltage references vd and vq are generated by low-pass filtering. This information, together with the phase angle generated by the PLL, is used to reconstruct the stationary-frame voltage reference. By comparing this voltage reference with the instantaneous grid voltage, the command signal of the PWM modulator is obtained (Fig. 2).

Fig. 2. Detection and control scheme of voltage sag and sag compensation

The purpose of the controller is to keep the load voltage constant. The following assumptions are made:

(1) The grid-side current is constant, independent of the current and voltage changes of the output LC filter.
(2) The capacitor voltage and inductor current change linearly within a sampling period.
(3) Over the k-th sampling period, the average values of the capacitor voltage and inductor current are equal to half the sum of the actual value and the reference value, respectively.
(4) The reference value of the capacitor voltage and inductor current at the k-th sample equals the actual value at the (k + 1)-th sample.

To realize voltage vector control in a digital controller, formulas (4) and (5) need to be discretized. Formulas (8) and (9) are obtained by integrating over one sampling period and dividing by the period length:

i_{Lf}^{(dq)}(k) = i_{inj}^{(dq)}(k) + j\,\frac{\omega C_f}{2}\Big(v_{inj}^{(dq)}(k) + \bar v_{inj}^{(dq)}(k+1)\Big) + \frac{C_f}{T_s}\Big(\bar v_{inj}^{(dq)}(k+1) - v_{inj}^{(dq)}(k)\Big)    (8)

v_{inj}^{(dq)}(k) = v_{inv}^{(dq)}(k) - \Big(\frac{R_f}{2} + j\,\frac{\omega L_f}{2}\Big)\Big(i_{Lf}^{(dq)}(k) + \bar i_{Lf}^{(dq)}(k+1)\Big) - \frac{L_f}{T_s}\Big(\bar i_{Lf}^{(dq)}(k+1) - i_{Lf}^{(dq)}(k)\Big)    (9)

Here \bar v_{inj}^{(dq)}(k+1) represents the average value over the sampling period from k to k + 1. In order to eliminate the static error caused by non-linear factors such as measurement noise or non-ideal components, integral terms need to be introduced into the control system. Finally, the double-loop vector decoupling control block diagram is obtained (Fig. 3).

Fig. 3. Double-loop vector decoupling control block diagram of DVR
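The following sketch shows how the discretized relations above can be evaluated in code with complex dq quantities, assuming the forms of (8) and (9) given above and interpreting the left-hand side of (8) as the inductor-current reference produced by the outer loop. The structure, parameter names and this interpretation are illustrative assumptions, not the authors' implementation.

#include <complex>

typedef std::complex<double> cplx;

struct DvrParams { double Lf, Cf, Rf, Ts, w; };  // filter parameters, sample time, grid angular frequency

// One control step: given the measured dq quantities at sample k and the desired
// average injected voltage over the next period, compute the inductor-current
// reference (outer loop, Eq. (8)) and the inverter voltage command (inner loop,
// Eq. (9) solved for v_inv).
void dual_loop_step(const DvrParams &p,
                    cplx v_inj_k, cplx v_inj_avg_next,   // measured / desired injected voltage
                    cplx i_inj_k, cplx i_Lf_k,           // measured injected and inductor currents
                    cplx &i_Lf_ref, cplx &v_inv_cmd)
{
    const cplx j(0.0, 1.0);
    i_Lf_ref = i_inj_k
             + j * (p.w * p.Cf / 2.0) * (v_inj_k + v_inj_avg_next)
             + (p.Cf / p.Ts) * (v_inj_avg_next - v_inj_k);
    v_inv_cmd = v_inj_k
              + (p.Rf / 2.0 + j * (p.w * p.Lf / 2.0)) * (i_Lf_k + i_Lf_ref)
              + (p.Lf / p.Ts) * (i_Lf_ref - i_Lf_k);
}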

3 Simulation Verification In order to verify the performance of the DVR with the vector control strategy, a 50 Hz distribution system with a sensitive load is designed, as shown in Fig. 1. The rated phase voltage of the grid is 220 V, the rated power of the load side is 10 kW per phase, and the lowest power factor of the load is 0.8 (inductive). Through MATLAB simulation, the compensation performance of the vector decoupling control strategy under different grid voltage sags is verified. As shown in Fig. 4, at 0.04 s a symmetrical voltage sag occurs and the voltage of each phase drops to 0.7 times the rated voltage, lasting 0.06 s. For a three-phase unbalanced sag, vector decoupling control is also applicable. As shown in Fig. 5, the unbalanced sag occurs at 0.04 s: the A-phase voltage is 70% of the rated value, the B-phase voltage is 80% of the rated value, the C-phase voltage is 90% of the rated value, and the duration is 0.06 s.


Figure 6 illustrates the performance of the vector decoupling control strategy in response to a grid voltage swell. A symmetrical swell occurs at 0.04 s, and the three-phase voltages of A, B and C rise to 1.2 times the rated voltage, lasting 0.06 s.

Fig. 4. Power grid voltage balance sag, DVR compensation performance

Fig. 5. Power grid voltage asymmetric sag, DVR compensation performance


Fig. 6. Symmetrical voltage rise of power grid and DVR compensation performance

Fig. 7. DVR compensation performance for an asymmetric voltage sag/swell in the power grid

In Fig. 7, a sag/rise occurs in 0.04 s. The voltage of phase A is 1.2 times of the rated voltage, phase B is 0.9 times, and that of phase C is 0.7 times of the rated voltage, lasting for 0.06 s. The simulation results show that the output compensation voltage of DVR is zero when the power grid works normally, and the control strategy can drive the DVR to detect the voltage sag quickly when the power grid voltage fluctuates, and compensate the three-phase voltage component of the appropriate phase to smooth the load voltage and eliminate the power supply voltage anomaly caused by the three-phase fault.


Whether the disturbance is a symmetrical sag, an asymmetrical sag, a symmetrical swell, an asymmetrical swell or a mixed asymmetrical sag/swell, under vector decoupling control the DVR outputs the compensation voltage quickly and keeps the load-side voltage ideally symmetrical. Vector decoupling control therefore shows good adaptability.

4 Conclusion A dual-loop vector decoupling control strategy for voltage sag/swell suppression is proposed in this paper. The control strategy constructs a double-loop control in the synchronous rotating coordinate system: the inner loop is an inductor current loop and the outer loop is an output voltage loop. The simulation results show that the control strategy gives the DVR good adaptability when symmetrical or asymmetrical sags and swells occur. Moreover, compared with the traditional DVR control method, the response time is greatly reduced. On the load side, both the voltage waveform and the response time achieve satisfactory results. Acknowledgements. This work is supported by the key research project of natural science in colleges and universities in Anhui province (KJ2017A747) and the 2016 quality engineering project in Anhui province: virtual simulation experiment teaching center for industrial robot (2016xnzx007).

References
1. Ghosh A, Ledwich G (2002) Power quality enhancement using custom power devices. Kluwer Academic Publishers, United States
2. IEEE (1995) IEEE recommended practice for monitoring electric power quality. IEEE Std 1159
3. Hingorani NG (1995) Introducing custom power. IEEE Spectrum 32(6):41-48
4. Vilathgamuwa DM, Perera AADR, Choi SS (2008) Performance improvement of the dynamic voltage restorer with closed-loop load voltage and current-mode control. IEEE Trans Power Electron 17(5):824-834
5. Etxeberria-Otadui I, Viscarret U, Bacha S, Caballero M, Reyero R (2002) Evaluation of different strategies for series voltage sag compensation. In: Conference on Rec IEEE PESC 2002, pp 1797-1802
6. Ghosh A, Joshi A (2002) A new algorithm for the generation of reference voltage of a DVR using the method of instantaneous symmetrical components. IEEE Power Eng Rev 22(1):63-65
7. Liu JW, Choi SS, Chen S (2002) Design of step dynamic voltage regulator for power quality enhancement. IEEE Trans Power Delivery 18(4):1403-1409

Facial Micro-expression Recognition with Adaptive Video Motion Magnification Zhilin Lei(B) and Shenghong Li School of Cyber Security, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240, China [email protected]

Abstract. Facial micro-expression commonly has an extremely short duration and subtle motion. At the same time, the micro-expression databases are rarely available for research. These are not conducive to directly introduce the deep neural network which is currently outstanding in the field of image recognition into facial micro-expression recognition. Recently, some researchers use the video motion magnification technology to magnify micro-expressions and achieve a good performance of enhancing the recognition results. This paper follows this idea and introduces an innovative way to adjust the amplification rate adaptively instead of the previous manual way. In addition, we use the extended continuous frames, instead of a single apex frame, to extract a concatenate feature map for the final classification. We have demonstrated through a series of experiments on the CASME II database that our method can effectively improve the accuracy of facial micro-expression recognition.

Keywords: Micro-expression recognition · Video motion magnification · CASME II

1 Introduction

Facial micro-expressions (ME) are facial expressions that are naturally exposed for a very short time when people intend to conceal their real emotions [1]. The duration of a micro-expression is usually between 1/25 and 1/5 of a second, and it occurs only in specific facial areas [2]. Due to the subtleness and brevity of micro-expressions, it is difficult for the naked eye to directly notice their occurrence. At the same time, micro-expressions implicitly contain people's true emotions. Hence, facial micro-expression recognition has high application value in many fields such as clinical diagnosis and lie detection. The process of micro-expression recognition can be mainly divided into four stages: face recognition, preprocessing, facial feature extraction and classification


[2]. Currently, the face recognition methods are relatively mature for accurately locating the face area. Time interpolation model (TIM) [3] and Euler video magnification [4] are applied to the preprocessing stage to facilitate better extraction of features. Local binary pattern(LBP) [5] and its variants, such as LBP-TOP [6], are widely used in the feature extraction stage. Support vector machine (SVM) with RBF kernel [7] are often used in the final feature classification stage. In recent years, deep neural networks, represented by convolutional neural networks (CNN) [8], long-short-time memory networks (LSTM) [9] and etc, have made rapid progress in the field of image recognition. In many image classification tasks, features extracted by deep neural networks exhibit performance far beyond traditional manual features. In the area of micro-expression recognition, there are many methods based on CNN and LSTM have emerged [10,11]. These researches use deep features to characterize facial micro-expressions after preprocessing and use deep neural networks to make the final classification. In this paper, we used a deep neural network-based video motion magnification algorithm in the preprocessing stage of micro-expression video and proposed a method to adaptively generate appropriate magnification factors to ensure that each video sample is amplified to the appropriate extent. Next, we extend the micro-expression video to a fixed frame length and use VGG-16 [12] to extract the features of each frame and concatenate them into a feature map. Finally, we use a shallow CNN with rectangular kernels to get the final classification result. A series of comparative experiments on a typical database demonstrate the effectiveness of our method in improving the accuracy of micro-expression recognition.

2 Related Works

2.1 Traditional Features and Classifiers

In the research works of micro-expression recognition in recent years, the manual features represented by LBP-TOP and its variants are widely used. Currently, many popular micro-expression databases use LBP-TOP as the feature extractor in their baseline method. For each pixel in the image, LBP encodes a fixed-length binary code vector into a histogram based on the brightness of each pixel in a certain radius around the pixel. On this basis, LBP-TOP generates histograms of images in three directions (XY, XT, YT) and concatenates them into a single histogram, which is used to extract inter-frame information. Despite this, it is still difficult for LBP-TOP to adequately extract enough information of every frames and some of the temporal information between frames are missed. The length of the micro-expression video is extremely short and often inconsistent, so the temporal interpolation model (TIM) [3] is often used to align the video length in the preprocessing stage. In past researches, researchers have mainly adopted linear interpolation algorithms in TIM, which might be improved by nonlinear interpolation algorithms. Wang et al. [4] used Eulerian Video Magnification (EVM) to enhance the amplitude of the micro-expression in the preprocessing stage. In the EVM,


the Laplacian pyramid of the video frame sequence is first calculated, then the band of interest is selected by manually setting the bandpass filter and multiplied by a magnification factor. The magnified video frame sequence is obtained by re-synthesizing the Laplacian pyramid. In the magnified video, the motion of facial expressions can be seen with the naked eye, which is very advantageous for feature extraction. After obtaining the features of micro-expressions, SVM is used to classify the feature vectors because of its good mathematical foundation and robustness [4]. SVM is a binary classification algorithm, while all micro-expression databases contain several expression classes; generally, the one-against-one method is used to handle the multi-class problem with SVM.

2.2 Deep Neural Networks

In recent years, methods using deep learning techniques have also been introduced into micro-expression recognition research. Min et al. [10] proposed a method based on transfer learning. More than 10,000 images selected from several macro-expression databases were preprocessed to train a ResNet-18 [13] that was pre-trained on ImageNet [14]. After that, fine-tuning was carried out on CASME II. This method achieved a good accuracy of 75% under the Leave-One-Subject-Out (LOSO) evaluation scheme. However, this study still used only the apex frame to perform the recognition and did not consider the temporal information between frames. Khor et al. [11] proposed a model called Enriched Long-term Recurrent Convolutional Network (ELRCN). The Spatial Dimension Enrichment (SE) and Temporal Dimension Enrichment (TE) models are used to enrich the features of micro-expression sequences. The SE model stacks the optical flow images, optical strain images and grayscale images of a micro-expression sequence and extracts features with VGG-16, and an LSTM is used for the final classification. The TE model separately generates features of these three types of images with a pre-trained VGG-Face [15] and concatenates them to feed an LSTM for classification. Although the accuracy achieved by these two models was not very high under LOSO, they obtained better F1 and UAR scores.

3 Method

In this paper, we first perform preprocessing, such as nonlinear-interpolation-based frame expansion, on the micro-expression database, then use the learning-based video motion magnification algorithm [16] to amplify the motion of the micro-expressions, innovatively introducing an adaptive magnification factor. After obtaining the amplified micro-expressions, the frame sequence is passed through VGG-16 to extract a fixed-size feature map, and a shallow CNN with rectangular kernels is used for the final classification. To avoid over-fitting of the classification network, some macro-expression data are used to pre-train it. Figure 1 shows the process of our method.


Fig. 1. Proposed architecture

3.1 Adaptive Video Motion Magnification

In [16], Oh et al. proposed Learning Based Video Motion Magnification (LVMM), which uses a deep neural network to improve the traditional EVM. The architecture mainly includes three parts: encoder, manipulator, and decoder. The input frame sequence is divided into two parts after passing through several convolutional layers and residual blocks in the encoder. One part is used to extract texture representations and the other part is used to extract shape representations. The shape representations of adjacent frames are amplified by the manipulator at a given magnification factor. The amplified shape representations and the previous texture representations are input to the decoder, and after several times of upsampling and convolution layers, the amplified frame sequence is obtained. Compared to EVM, LLVM eliminates the need to manually design features or select filter bands throughout the whole process. Filters can be learned autonomously from existing databases but researchers still need to manually adjust the magnification factor. We propose a method for obtaining adaptive magnification factors. In the training stage, for each micro-expression frame sequence in the database, the appropriate magnification factor is obtained by the following steps. Firstly, the feature sequence of these frames is extracted by VGG. For each frame from the second one, the cross-entropy between its feature and the first frame’s feature is calculated. Such a cross-entropy sequence exhibits an upward convex trend consistent with the amplitude of the micro-expression, and numerically reflects the magnitude of the motion in the micro-expression. Then, we perform the same processing on plenty of the same kind of macro-expression to get a similar average cross-entropy sequence. We hope that the amplified micro-expression can be similar to the macro-expression in the value of cross-entropy. According to this principle, We can give a magnification factor label to each sample in the micro-expression database. Finally, we train a neural network with the VGG feature of the micro-expression frame sequence as the input and output the fitting result of the magnification factor which is adaptive based on the motion of the micro-expression. In the test stage, the features of a given micro-expression


frame sequence are first extracted by the VGG and input into this network to obtain an adaptive magnification factor, which is then sent to the LVMM network for video motion magnification.

3.2 CNN Architecture

In [17], it is shown that apex-frame-based methods, which are adopted in many micro-expression recognition studies, achieve good performance. However, we believe that in a micro-expression frame sequence, the frames other than the apex frame and the inter-frame changes still contain a wealth of emotionally relevant information. In [11], an LSTM was used in the classification stage to learn the temporal information between frames, but the final experimental results show that the improvement contributed by the LSTM itself is not very significant. In this paper, we want to extract the emotional information between micro-expression frames without using any RNN. We pass the n-frame sequence through VGG-16 pre-trained on ImageNet to get n 4096-dimensional features and concatenate these feature vectors into a feature map of size n * 4096. Because of the insufficient number of samples in the current databases, we designed a shallow CNN with rectangular convolution kernels that better extracts the information between adjacent frames and produces the final classification result from this feature map. To avoid overfitting and accelerate convergence, we collected 593 macro-expression sequences from CK+ [18], obtained their feature maps in the same way, and used them to pre-train the CNN we designed. Then fine-tuning is performed on CASME II.

4 Experiment

4.1 Databases and Preprocessing

CASME II [7] is a spontaneous micro-expression video database containing 247 samples from 26 Asian participants with an average age of 22.03. The video frame rate is 200 fps and it contains five types of micro-expressions: happiness, disgust, repression, surprise and others. It also gives the starting frame, apex frame and ending frame of the micro-expression in each video. The distribution of samples is shown in Table 1. The database contains a cropped version. The crop method selects the area from the eyebrow to the chin from the first frame of the video and crops the subsequent frames by this area. Our experiment is based on the cropped version of CASME II. CK+ [18] is a facial macro-expression database containing 593 macroexpression video samples from 123 participants, 327 of which are labeled as anger, contempt, disgust, fear, happiness, sadness and surprise. Each video ends at the apex frame. Based on the similarity with the magnified micro-expressions, we manually divide the 593 samples into five categories corresponding to CASME


Table 1. Sample distribution of CASME II

Class        CASME II
Happiness    32
Disgust      62
Others       99
Repression   27
Surprise     25
Total        245

II and crop them by the same method as CASME II to pre-train the CNN we design later. We need the subsequent feature map to have a fixed size, but the size of each sample in the databases is not necessarily the same. Since the feature space might be non-linear, we need to first perform frame expansion instead of resizing after extracting the feature map. For each sample frame sequence, we extend it to 140 frames as a new sample based on cubic spline interpolation.

Fig. 2. A sample in CASME II

4.2 Adaptive Video Motion Magnification

Figure 2 is the apex frame of a sample in the happy class of CASME II. It is almost impossible to see the motion of the corners of the mouth and forehead area. Figure 3 is the result of the sample being amplified by EVM. Based on [4], we make the magnification factor 20. The change of the corners of the mouth is more obvious, but the left side of the face has more distortion. Figure 4 is the same sample after LVMM at the same magnification factor. The changes in the corners of the mouth and the forehead are more obvious and the image has almost no distortion. Although a certain blur appears, the blur is smoother overall.


Fig. 3. Amplified by EVM

Fig. 4. Amplified by LVMM

Fig. 5. Under-amplified sample

Fig. 6. Adaptive-amplified result of Fig. 5


Fig. 7. Over-magnificent sample

Fig. 8. Adaptive-amplified result of Fig. 7

Figures 5 and 7 are the magnification results of two other samples obtained by LVMM at the same magnification factor. The first shows almost no change compared to the original sample, while the second shows very severe distortion. Figures 6 and 8 are the magnification results under the magnification factors obtained by our method, which shows that selecting an adaptive magnification factor improves the magnification result.

Table 2. Architecture of our shallow CNN

Layer     Configuration
conv1     Filter 6 * 140 * 128, stride 1 * 1, tanh
conv2     Filter 16 * 1 * 1, stride 1 * 1, tanh
fc1       256, tanh, batch normalization
fc2       256, tanh, batch normalization
softmax   5

Performance on CASMEII

In this experiment we use the LOSO cross-validation as the protocol to prevent bias caused by sample imbalance. Due to the extreme lack of training samples, we chose to design a simple shallow network that pre-trained on the CK+ to classify the obtained feature map. Through experiments, we found that using

Facial Micro-expression Recognition

2115

Table 3. Performance comparison Method

Recognition rate (%)

LBP-TOP + SVM 63.41 ELRCN-SE

47.15

ELRCN-TE

52.44

CNN-LSTM

60.98

EVM

75.30

Proposed method

77.50

the 140 * 128 kernal in the first layer can better extract temporal information. Table 2 shows the structure of the network, Table 3 compares the performance of our proposed method and some other methods, including LBP-TOP + SVM, EVM, ELRCN, and CNN-LSTM.

5

Conclusion

In this paper, we propose a method to recognize micro-expressions. We introduce the adaptive magnification factor to the learning based video motion magnification and apply it in the preprocessing stage of the CASME II database. We use an extended multi-frame feature map replaces the general apex frame and use a shallow CNN with rectangular convolution kernels for the final classification. We have found that the method can effectively extract the temporal information of the micro-expression. In future, we hope to extend our work by reducing the redundancy of the feature map.

References 1. Xu F, Zhang J-P (2017) Facial microexpression recognition: a survey. Acta Autom Sin 43(3):333–348 2. Takalkar M, Xu M, Wu Q, Chaczko Z (2018) A survey: facial micro-expression recognition. Multimed Tools Appl 77(15):19-301–19-325 3. Zhou Z, Zhao G, Guo Y, Pietikainen M (2012) An image-based visual speech animation system. IEEE Trans Circuits Syst Video Technol 22(10):1420–1432 4. Wang Y, See J, Oh Y-H, Phan RC-W, Rahulamathavan Y, Ling H-C et al (2016) Effective recognition of facial microexpressions with video motion magnification. Multimed Tools Appl 1–26 5. Ahonen T, Hadid A, Pietik¨ ainen M (2006) Face description with local binary patterns: application to face recognition. IEEE Trans Pattern Anal Mach Intell 28(12):2037–2041 6. Pfister T, Li X, Zhao G, Pietikainen M (2011) Recognising spontaneous facial micro-expressions. In: 2011 Proceedings of IEEE international conference on computer vision (ICCV). IEEE, pp 1449–1456 7. Yan W-J, Li X, Wang S-J, Zhao G, Liu Y-J, Chen Y-H et al (2014) CASME II: an improved spontaneous micro-expression database and the baseline evaluation. PLoS One 9:e86041

2116

Z. Lei and S. Li

8. Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: NIPS, pp 1106–1114 9. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780 10. Min P, Zhan P, Zhihao Z, Tong C (2018) From macro to micro expression recognition: deep learning on small datasets using transfer learning. In: 13th IEEE international conference on automatic face and gesture recognition (FG 2018) 11. Huai-Qian JS, Khor C, Raphael P, Weiyao L (2018) Enriched long-term recurrent convolutional network for facial micro-expression recognition. In: 13th IEEE international conference on automatic face and gesture recognition (FG 2018) 12. Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: ICLR 13. He K, Zhang X, Ren S et al (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770–778 14. Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L (2009) Imagenet: a large-scale hierarchical image database. In: Proceeding of the CVPR 15. Parkhi OM, Vedaldi A, Zisserman A (2015) Deep face recognition. In: BMVC 16. Oh T, Jaroensri R, Kim C, Elgharib MA, Durand F, Freeman WT, Matusik W (2018) Learning-based video motion magnification. CoRR arXiv:1804.02684 17. Li Y, Huang X, Zhao G (2018) Can micro-expression be recognized based on single apex frame? In: 2018 25th IEEE international conference on image processing (ICIP). IEEE, pp 3094–3098 18. Lucey P, Cohn JF, Kanade T, Saragih J, Ambadar Z, Matthews I (2010) The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: IEEE international conference on computer vision and pattern recognition workshops (CVPRW), pp 94–101

Computation Task Offloading for Minimizing Energy Consumption with Mobile Edge Computing Guangying Wang1(B) , Qiyishu Li1 , and Xiangbin Yu1,2 1

2

College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China [email protected] National Mobile Communications Research Laboratory, Southeast University, Nanjing, China

Abstract. Mobile edge computing (MEC) is envisioned as a promising technology for enhancing the computation capacities and prolonging the lifespan of mobile devices. In this paper, we study a single mobile user MEC system in which each base station is integrated with a MEC server in executing intensive computation tasks. The mobile user’s computation tasks can be partied into several parts to be offloaded to different MEC servers simultaneously. The mobile user’s energy consumption is minimized with the constraints of power and latency, and an iterative scheme is proposed to solve the optimization problem. The numerical results demonstrate the effectiveness of the proposed scheme. Keywords: Mobile edge computing Energy consumption

1 Introduction

Future wireless mobile networks are expected to support a number of computation-intensive applications such as virtual reality and autonomous driving. There is an increasing gap between the demand for complex programs and the availability of limited resources. Mobile edge computing (MEC) has emerged as a promising technique to improve the offloading efficiency and enhance the computation capacity of mobile devices [1]. MEC enables mobile users to offload intensive computation tasks to edge servers for remote execution, so that the computation efficiency and latency performance can be significantly improved [2].

To reap these benefits, extensive research works have investigated efficient computation offloading schemes. In [3], the authors modeled the computation offloading of mobile users as a decentralized game and designed a game-theoretic approach to achieve computation offloading. To solve the energy consumption minimization problem, the authors in [4] jointly optimized the offloading selection, radio resource allocation and computational resource allocation for a multi-user MEC system, and proposed a Reformulation-Linearization-Technique based Branch-and-Bound method. A unified MEC-WPT (wireless power transfer) system was considered in [5], and the authors developed an innovative framework to improve the MEC performance by minimizing the total energy consumption. The work in [6] focused on minimizing the weighted sum of energy consumption and execution duration, and the authors proposed a novel belief propagation algorithm to optimize the task allocation in a distributed manner. In [7], the authors studied the multi-user computation partitioning problem and designed an offline heuristic algorithm to minimize the average completion time. In [8], the authors focused on the design of an energy-efficient computation offloading mechanism for MEC systems. The authors in [9] studied orthogonal frequency-division multiple access (OFDMA) based multi-user computation offloading under different setups, with the objective of minimizing the users' sum-energy consumption. Based on the estimated execution time and energy consumption of the computation offloading, the authors in [10] proposed an offloading framework which reduces the energy consumption and shortens the response time. In [11], the authors considered energy-efficient resource allocation in a multi-user MEC system based on time-division multiple access (TDMA) and frequency-division multiple access (FDMA). The authors in [12] formulated the task offloading problem as a joint optimization of the radio resources together with the computational resources.

This paper studies a single-mobile-user mobile edge computing system in which each base station is equipped with a MEC server for executing intensive computation tasks. In order to improve the energy efficiency of the offloading system, we formulate a problem to minimize the mobile user's energy consumption under power and latency constraints. To solve the optimization problem more efficiently, a computation offloading scheme is proposed. The numerical results demonstrate the effectiveness of the proposed scheme.

The rest of the paper is organized as follows. The system model and problem formulation are introduced in Sect. 2. The proposed computation offloading scheme is presented in Sect. 3, and numerical results are shown in Sect. 4. Conclusions are finally drawn in Sect. 5.

2 System Model and Problem Formulation

As shown in Fig. 1, we consider a single mobile user's computation offloading scenario. The mobile user can offload part of its computation task to a group of edge servers. The MEC servers are denoted by $\mathcal{K}=\{1,2,\ldots,K\}$. The mobile user has a computation task $S_K$ to be executed within a given time duration, and partial task offloading is considered: the task can be partitioned into two parts, a part $s_k$ offloaded to MEC server $k$ and a part $S_K-s_k$ computed locally.


Fig. 1. System model

Without loss of generality, we use $g_k$ to denote the channel power gain from the mobile user to MEC server $k$ and assume that the channel gains are ordered as

$$g_1 > g_2 > \cdots > g_K \qquad (1)$$

Based on the above, the mobile user's minimum total transmit power [13] for offloading $s_k$ to the MEC servers is

$$P_k^{\mathrm{tot}} = W\sigma_0\sum_{k=1}^{K}\left(\frac{1}{g_k}-\frac{1}{g_{k-1}}\right)2^{\frac{1}{tW}\sum_{k=1}^{K}s_k}-\frac{W\sigma_0}{g_K} \qquad (2)$$

where $W$ denotes the channel bandwidth, $\sigma_0$ is the background noise power, and $t$ denotes the transmission time for task $s_k$ from the mobile user to the MEC servers. We use $f_k$ to denote the computation rate of MEC server $k$, and $f_u$ is the computation rate of the mobile user for local computation. Then, the total delay for the mobile user to complete its task is

$$T_u=\max_{k\in\mathcal{K}}\left\{\,t+\frac{s_k}{f_k},\;\frac{S_K-s_k}{f_u}\right\} \qquad (3)$$

where $t+\frac{s_k}{f_k}$ denotes the total delay when the mobile user offloads its workload to MEC server $k$, and $\frac{S_K-s_k}{f_u}$ denotes the mobile user's local computation delay for executing its remaining workload. In this paper, we focus on the minimization of the mobile user's total energy consumption under the latency and power constraints. The mobile user's total energy consumption consists of two parts, corresponding to offloading to the MEC servers and local computing. For offloading to the MEC servers, the energy consumption mainly includes the transmission consumption $tP_k^{\mathrm{tot}}$. For local computation, we model the energy consumption at the mobile user as $\sum_{k=1}^{K}\frac{S_K-s_k}{f_u}\,\eta(f_u)^3=\sum_{k=1}^{K}\eta\,(S_K-s_k)(f_u)^2$, where $\eta$ is a coefficient depending on the chip structure of the mobile user.


Therefore, the optimization problem can be formulated as

$$\min \; tP_k^{\mathrm{tot}}+\sum_{k=1}^{K}\eta\,(S_K-s_k)(f_u)^2 \qquad (4a)$$

$$\text{s.t.}\quad 0\le P_k^{\mathrm{tot}}\le P^{\max},\; k\in\mathcal{K} \qquad (4b)$$

$$0\le t\le T^{\max},\; k\in\mathcal{K} \qquad (4c)$$

$$0\le \frac{S_K-s_k}{f_u}\le T^{\max},\; k\in\mathcal{K} \qquad (4d)$$

$$\sum_{k=1}^{K}s_k\le S_K \qquad (4e)$$

where $P^{\max}$ denotes the maximum transmit power of the mobile user, and constraint (4b) means that the mobile user's transmission power cannot exceed this requirement. Constraints (4c) and (4d) guarantee that the mobile user's offloading transmission delay and local computation delay are no more than the latency requirement. Constraint (4e) means that the task offloaded to the MEC servers cannot exceed the task requirement $S_K$.

3 Efficient Computation Task Offloading Algorithm

Based on the above analysis, the objective function of optimization problem (4a) can be transformed into

$$E_t=t\left[W\sigma_0\sum_{k=1}^{K}\left(\frac{1}{g_k}-\frac{1}{g_{k-1}}\right)2^{\frac{1}{tW}\sum_{k=1}^{K}s_k}-\frac{W\sigma_0}{g_K}\right]+\sum_{k=1}^{K}\eta\,(S_K-s_k)(f_u)^2 \qquad (5)$$

For simplicity, we define $a=W\sigma_0$, $b=\frac{1}{g_k}-\frac{1}{g_{k-1}}$, $c=\frac{1}{W}\sum_{k=1}^{K}s_k$, $k\in\mathcal{K}$. Then the first derivative of $E_t$ with respect to $t$ is

$$\frac{\partial E_t}{\partial t}=a\sum_{k=1}^{K}b\,2^{\frac{c}{t}}-a\,\frac{c}{t}\ln 2\sum_{k=1}^{K}b\,2^{\frac{c}{t}}=a\left(1-\frac{c}{t}\ln 2\right)\sum_{k=1}^{K}b\,2^{\frac{c}{t}} \qquad (6)$$

The second derivative of $E_t$ with respect to $t$ is

$$\frac{\partial^2 E_t}{\partial t^2}=-a\ln 2\,\frac{c}{t^2}\left(1-\frac{c}{t}\ln 2\right)\sum_{k=1}^{K}b\,2^{\frac{c}{t}}+a\ln 2\,\frac{c}{t^2}\sum_{k=1}^{K}b\,2^{\frac{c}{t}}=a(\ln 2)^2\frac{c^2}{t^3}\sum_{k=1}^{K}b\,2^{\frac{c}{t}} \qquad (7)$$

Since $W\sigma_0>0$, $\frac{1}{g_k}-\frac{1}{g_{k-1}}>0$ and $\frac{1}{W}\sum_{k=1}^{K}s_k>0$, we can get $\frac{\partial^2 E_t}{\partial t^2}>0$. Therefore, $E_t$ is strictly convex in $t$. From the above, the optimization problem is convex and the optimal solution can be efficiently obtained by standard convex optimization solvers. Therefore, we propose an iterative algorithm to solve the optimization problem, as summarized in Algorithm 1.


Algorithm 1 The Optimization Algorithm

1: Initialization: transmission power $P_k^{\mathrm{tot}}$, $k\in\mathcal{K}$, computation task $S_K$, delay requirement $T^{\max}$, iteration step $t_0$ and precision $\varepsilon$
2: Sort $\{g_k\}$, $k\in\mathcal{K}$, in descending order
3: Calculate $\frac{1}{g_k}-\frac{1}{g_{k-1}}$, $k\in\mathcal{K}$
4: repeat
5: For $t=0:t_0:T^{\max}$, calculate $E_t$ and $E_{t+t_0}$ by formula (5)
6: Compare the results of $E_t$ and $E_{t+t_0}$
7: until $|E_{t+t_0}-E_t|\le\varepsilon$
8: Output $E_t$
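To make the procedure concrete, the following is a minimal Python sketch of Algorithm 1: it evaluates $E_t$ of Eq. (5) over the grid $t=t_0,2t_0,\ldots,T^{\max}$ and returns the minimizing point. The parameter values, the fixed task split s, and the convention $1/g_0=0$ in the power formula are illustrative assumptions, not the paper's exact simulation settings.

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not the paper's exact settings)
W = 1e6            # channel bandwidth (Hz)
sigma0 = 1e-13     # background noise power
eta = 1e-26        # chip-structure coefficient of the mobile user
f_u = 8e6          # local computation rate (bits/s)
S_K = 4e6          # total task size (bits)
g = np.array([1.7130e-7, 9.7653e-8, 4.9635e-8])   # channel power gains
s = np.array([1.0e6, 1.0e6, 1.0e6])               # assumed split of offloaded bits per MEC server
T_max, t0 = 4.0, 0.01                              # delay requirement and iteration step

def total_energy(t, s, g):
    """Mobile user's energy E_t of Eq. (5) for a given transmission time t."""
    g_sorted = np.sort(g)[::-1]                    # step 2: sort gains in descending order
    # 1/g_k - 1/g_{k-1}; 1/g_0 is taken as 0 here (an assumption, not stated in the paper)
    inv_diff = 1.0 / g_sorted - np.concatenate(([0.0], 1.0 / g_sorted[:-1]))
    p_tot = W * sigma0 * np.sum(inv_diff * 2 ** (np.sum(s) / (t * W))) - W * sigma0 / g_sorted[-1]
    e_offload = t * p_tot                          # transmission energy
    e_local = np.sum(eta * (S_K - s) * f_u ** 2)   # local-computation energy
    return e_offload + e_local

# Steps 4-8: one-dimensional search over t in (0, T_max]
ts = np.arange(t0, T_max + t0, t0)
energies = np.array([total_energy(t, s, g) for t in ts])
t_opt = ts[np.argmin(energies)]
print(f"minimum energy {energies.min():.4f} J at t = {t_opt:.2f} s")
```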

4 Numerical Results

In this section, we evaluate the performance of the proposed scheme by simulation. For the single mobile user case, we set up a 3-BS scenario with the three BSs located at (500, 0), (−500, 0) and (0, 500), respectively. Meanwhile, the mobile user is located within a circle with center (0, 0) and radius 100 m. The resulting random channel power gains from the mobile user to the BSs used in the simulations are $\{g_k\}_{k\in\mathcal{K}}=\{1.7130\times10^{-7},\,9.7653\times10^{-8},\,4.9635\times10^{-8}\}$. We set the computation task $S_K=4$ Mbits, delay requirement $T^{\max}=4$ s, and transmission power $P^{\max}=0.5$ J/s. The local computation rate is $f_u=8$ Mbits/s for the mobile user, and $\{f_k\}_{k\in\mathcal{K}}=\{15,15,15\}$ Mbits/s for the MEC servers. The computer used in the simulation is equipped with an AMD 2.10 GHz dual-core processor and 3.04 GB RAM, and the software is MATLAB R2014a. Some numerical results are shown to confirm our algorithm's effectiveness.

Fig. 2. The energy consumption versus delay requirement

In Fig. 2, the results of energy consumption versus delay requirement with computation task $S_K=4$ Mbits are shown. As expected, the energy consumption declines as the delay requirement loosens. At the same time, we obtain the minimum energy consumption of 0.35 J when the delay requirement is 3.85 s. After the minimum point, however, the energy consumption starts to increase because of the concentration of requests at the MEC servers.

Fig. 3. The energy consumption with different computation tasks

Figure 3 shows the energy consumption versus the delay constraint under different computation tasks $S_K$. In this simulation, we set three different computation tasks: $S_K=4$ Mbits, 6 Mbits and 8 Mbits. As shown in Fig. 3, the energy consumption under each task decreases as $t$ increases, and the energy consumption increases as the computation task grows, as expected. We also obtain the times for executing the three different computation tasks, $t_1=3.62$ s, $t_2=3.73$ s and $t_3=3.85$ s, at which each $E_t$ reaches its minimum value. The reason is that a larger computation task results in an increase in each MEC server's workload, and when the computation rate and delay requirement are fixed, the time decreases accordingly.

5 Conclusion

This paper studies a single-mobile-user mobile edge computing system in which each base station is equipped with a MEC server for executing intensive computation tasks. In order to improve the efficiency of the offloading system, we formulated a problem to minimize the mobile user's energy consumption under power and latency constraints. To solve the optimization problem more efficiently, an iterative algorithm is proposed. The numerical results demonstrate the effectiveness of the proposed scheme.

Acknowledgement. This work was supported by the Postgraduate Research & Practice Innovation Program of Jiangsu Province [No. (KYCX19 0191)].


References
1. Mach P, Becvar Z (2017) Mobile edge computing: a survey on architecture and computation offloading. IEEE Commun Surv Tutor 19(3):1628–1656
2. Wang S, Zhang X, Zhang Y et al (2017) A survey on mobile edge networks: convergence of computing, caching and communications. IEEE Access 5:6757–6779
3. Chen X (2015) Decentralized computation offloading game for mobile cloud computing. IEEE Trans Parallel Distrib Syst 26(4):974–983
4. Zhao P, Tian H, Qin C et al (2017) Energy-saving offloading by jointly allocating radio and computational resources for mobile edge computing. IEEE Access 5:11255–11268
5. Wang F, Xu J, Wang X et al (2018) Joint offloading and computing optimization in wireless powered mobile-edge computing systems. IEEE Trans Wirel Commun 17(3):1784–1797
6. Li J, Wu A, Chu S et al (2018) Mobile edge computing for task offloading in small-cell networks via belief propagation. In: IEEE international conference on communications (ICC), pp 1–6
7. Yang L, Cao J, Cheng H et al (2015) Multi-user computation partitioning for latency sensitive mobile cloud applications. IEEE Trans Comput 64(8):2253–2266
8. Zhang K, Mao Y, Leng S et al (2016) Energy-efficient offloading for mobile edge computing in 5G heterogeneous networks. IEEE Access 4:5896–5907
9. Mao Y, Zhang J, Song SH et al (2017) Stochastic joint radio and computational resource management for multi-user mobile edge computing systems. IEEE Trans Wirel Commun 16(9):5994–6009
10. Lin YD, Chu ETH, Lai YC et al (2015) Time-and-energy aware computation offloading in handheld devices to coprocessors and clouds. IEEE Syst J 9(2):393–405
11. You C, Huang K, Chae H et al (2017) Energy-efficient resource allocation for mobile-edge computing offloading. IEEE Trans Wirel Commun 16(3):1397–1411
12. Sardellitti S, Scutari G, Barbarossa S (2015) Joint optimization of radio and computational resources for multicell mobile-edge computing. IEEE Trans Sig Inf Process Netw 1(2):89–103
13. Wu Y, Qian L, Mao H et al (2018) Optimal power allocation and scheduling for non-orthogonal multiple access relay-assisted networks. IEEE Trans Mobile Comput 17(11):2591–2606

Soil pH and Humidity Classification Based on GRU-RNN Via UWB Radar Echoes

Chenghao Yang(B), Tiantian Wang, and Jing Liang

University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave., Chengdu 611731, China
[email protected]

Abstract. This paper proposes a new method to classify soil with different pH values and humidity levels based on GRU-RNN via ultra-wideband (UWB) radar echoes. Five categories of UWB soil echoes with different soil parameters, namely pH values and water contents, are collected and investigated with the GRU-RNN. The simulation results indicate that, compared with the LSTM-RNN, the GRU-RNN achieves better classification performance and a shorter execution time. This suggests that the GRU-RNN method is also suitable for the study of other soil parameters.

Keywords: GRU · RNN · UWB radar echoes · pH values · Humidity · Soil

1 Introduction

With the rise of deep learning in recent years, the application of deep learning in various traditional fields has become a hot topic of current research. In the practical application of smart agriculture, humidity as well as the pH value are extremely important soil parameters. For a long time, there have been many studies discussing the potential relationship between soil parameters such as soil pH or soil humidity and soil signal parameters. A common method is to analyze soil reflection echoes to map certain soil parameters [1]. S. Lambot proposed a ground penetrating radar (GPR) model for soil parameter inversion through UWB radar echoes, which is one of the most important studies on soil parameter retrieval. The study indicated that soils with different parameters such as pH values and water contents have different electrical properties [2]. By analyzing the soil echo signals, we can identify the dielectric properties of the soil from the GPR signal [3]. However, the above studies did not explicitly suggest an algorithm or scheme that can directly link soil echo signals and soil parameter information.


Based on this, many studies have previously explored the possibility of related schemes. For example, [4] applies an ANN (Artificial Neural Network) to analyze soil reflection echoes and retrieve specific values of soil humidity, and [5] applies long short-term memory (LSTM) to classify soil pH through ultra-wideband (UWB) radar echoes. But the above work can only identify the soil based on one parameter. So, we propose a GRU (Gated Recurrent Unit)-based deep learning network to classify soil with different parameters, specifically pH values and water contents, at the same time. Our contributions are summarized as follows:

– We collected five categories of soil echo signals from UWB radars with different soil parameters, specifically pH values and water contents, and collected 1600 samples of each soil as raw data. The data set is manually labeled by the measured soil pH and water content. The GRU network is used to classify the different soil categories. As far as we know, this is the first time that GRU-RNN has been used to classify soils with different pH values and water contents at the same time.
– Compared with the LSTM-based recurrent neural network used in related works, GRU achieves a higher accuracy than LSTM, and its operating speed is 1.29 times that of LSTM in the same hardware environment.

The rest of this article is organized as follows. In the second part of the article, the data collection scene and the hardware used in the experiment are introduced. In the third part, the labeling and preprocessing of the original data set are introduced, and the GRU-based RNN network is briefly described. In the fourth part, we analyze the GRU layer and training performance and compare it with the training performance of an RNN with the same network structure but using LSTM units. Finally, we conclude in the fifth section.

2 Field Experiment

The soil sample collection experiment was conducted in Chengdu, Sichuan Province, China from March 2019 to April 2019. We chose a bare soil plot of about 30 m² in the west of Chengdu. We used flat areas as the environment for collecting soil echo signals to ensure that there were no wide-ranging changes in the other properties of the bare soil. In the experiment, the pH of the soil was manually adjusted by adding acetic acid or caustic soda, and the water content of the soil was adjusted by adding different amounts of water. The ultra-wideband radar module P440 is used in this paper because of its advantages of easy portability, low energy consumption and good penetration. Our equipment is built as shown in Fig. 2. We installed the radar module on a wooden frame 0.9 m above the ground. The two antennas are mounted perpendicular to the ground, allowing the UWB radar signal to propagate perpendicular to the exposed ground. The reflected signals from the soil surface are combined in the rake receiver structure and then loaded into the PC via the USB port. The unit sample interval in the signal is 61 ps. Since there is no direct mapping between UWB radar echoes and soil pH and soil water content, it is necessary to manually label each collected soil echo signal. For each UWB radar measurement echo sample, we used the pH data collected by the pH 3000 and the water content value collected using the moisture meter as labels. Soil pH was measured using the pH 3000 as shown in Fig. 1c, and the water content was measured using the FieldScout TDR 300 as shown in Fig. 1d. Then we take the measured pH values and water contents as the labels of our data set.

Fig. 1. a Field environment; b P440-MRM; c pH 3000; d FieldScout TDR 300

3 Soil Classification

3.1 Data Preprocessing

In this experiment, we collected five soil samples with different water contents and pH values: soil (a) with moisture = 23.3% and pH = 8.04; soil (b) with moisture = 31.8% and pH = 9.04; soil (c) with moisture = 38.3% and pH = 12.26; soil (d) with moisture = 41.1% and pH = 3.5; and soil (e) with moisture = 41.8% and pH = 7.09. Each echo has a discrete time length of 480 ns. However, not the entire sample is an echo of the surface reflection of the soil. Therefore, we need to intercept the original echo signal to obtain a valid segment containing soil information. First, as shown in formula (1), we consider the coupling effect of the antenna and the propagation of the signal in the air. In the air, the transmission speed v of the signal is the same as the speed of light c, and the propagation distance s is the distance between the ultra-wideband radar and the soil surface, 0.9 m. The time delay τ between two sampling points is 61 ps.

Fig. 2. Complete UWB reflected signal

In Eq. (2), m represents the first part of the signal that needs to be intercepted, which is the coupling effect of the antenna and the propagation of the signal in the air.

$$t=\frac{2s}{v} \qquad (1)$$

$$m=\frac{t}{\tau} \qquad (2)$$

The second segment in the signal is the effective segment, as shown in Fig. 2. When the UWB radar is working, the relative dielectric constant $\varepsilon_r$ of the soil is between 4 and 40. Equation (3) represents the propagation speed of the radar signal in the soil.

$$v=\frac{c}{\sqrt{\varepsilon_r}} \qquad (3)$$

The radar signal travels under the soil over a distance of 0.5 m. Therefore, using Eqs. (1) and (2) again, we can obtain the number m of samples in the echo signal that propagate in the soil. We consider the sequence consisting of these sample points to be the active component of the signal and normalize it using Eq. (4).

$$x(t)=\frac{x(t)-\mathrm{mean}(X)}{\mathrm{std}(X)} \qquad (4)$$
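A minimal Python sketch of this preprocessing step (Eqs. (1)–(4)) is given below; the relative permittivity used to locate the in-soil segment is an assumed value within the 4–40 range quoted above, and the synthetic echo is only a stand-in for a real P440 measurement.

```python
import numpy as np

C = 3e8          # speed of light (m/s)
TAU = 61e-12     # sampling interval (s), from the radar module

def effective_segment(echo, h=0.9, depth=0.5, eps_r=16.0):
    """Cut out the soil-reflection part of one UWB echo and normalize it (Eqs. 1-4).

    eps_r is an assumed relative permittivity inside the 4-40 range quoted in the text.
    """
    # Eqs. (1)-(2): samples covering antenna coupling plus the round trip through the air
    t_air = 2.0 * h / C
    m_air = int(round(t_air / TAU))
    # Eq. (3): propagation speed in soil, then samples covering the 0.5 m in-soil path
    v_soil = C / np.sqrt(eps_r)
    m_soil = int(round(2.0 * depth / v_soil / TAU))
    segment = echo[m_air:m_air + m_soil]
    # Eq. (4): zero-mean, unit-variance normalization
    return (segment - segment.mean()) / segment.std()

# Example: one synthetic echo of 480 ns / 61 ps samples
echo = np.random.randn(int(480e-9 / TAU))
x = effective_segment(echo)
print(x.shape)
```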

3.2 GRU Algorithm for Soil Classification

LSTM and GRU were proposed as solutions to the short-term memory problem of RNNs [6]. They use an internal structure called a "gate" to adjust the information flow, and the gate structure determines which information in the sequence needs to be preserved. The LSTM structure has three gate structures: an input gate, a forget gate, and an output gate [7]. The structure of the GRU unit is shown in Fig. 3. The GRU unit has two gate structures, namely the reset gate and the update gate [8]. The update gate acts like the forget gate and input gate in LSTM: it determines which information to forget and which new information needs to be added. The reset gate is used to determine the extent to which the previous information is forgotten. In the GRU structure diagram, $r_t$ represents the output of the reset gate and $z_t$ indicates the output of the update gate. Equations (5)–(9) represent the forward pass of the GRU model.

$$r_t=\sigma\left(W_r\cdot[h_{t-1},x_t]\right) \qquad (5)$$

$$z_t=\sigma\left(W_z\cdot[h_{t-1},x_t]\right) \qquad (6)$$

$$\tilde{h}_t=\tanh\left(W_h\cdot[r_t * h_{t-1},x_t]\right) \qquad (7)$$

$$h_t=(1-z_t) * h_{t-1}+z_t * \tilde{h}_t \qquad (8)$$

$$y_t=\sigma\left(W_o\cdot h_t\right) \qquad (9)$$

The update gate is used to control the degree to which the status information of the previous moment is brought into the current state. The larger the value of the update gate is, the more the status information is brought in at the previous moment.

Fig. 3. GRU framework
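For illustration, the following numpy sketch implements the GRU forward pass of Eqs. (5)–(9) for a single echo sequence; the weight matrices are random placeholders rather than trained parameters, and the hidden size is an arbitrary choice.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_forward(x_seq, hidden, rng=np.random.default_rng(0)):
    """Run Eqs. (5)-(9) over a sequence; weights are random placeholders."""
    n_in = x_seq.shape[1]
    W_r = rng.standard_normal((hidden, hidden + n_in)) * 0.1
    W_z = rng.standard_normal((hidden, hidden + n_in)) * 0.1
    W_h = rng.standard_normal((hidden, hidden + n_in)) * 0.1
    W_o = rng.standard_normal((hidden, hidden)) * 0.1
    h = np.zeros(hidden)
    for x_t in x_seq:
        concat = np.concatenate([h, x_t])
        r_t = sigmoid(W_r @ concat)                              # Eq. (5): reset gate
        z_t = sigmoid(W_z @ concat)                              # Eq. (6): update gate
        h_tilde = np.tanh(W_h @ np.concatenate([r_t * h, x_t]))  # Eq. (7): candidate state
        h = (1.0 - z_t) * h + z_t * h_tilde                      # Eq. (8): state update
    return sigmoid(W_o @ h)                                      # Eq. (9): output

y = gru_forward(np.random.randn(300, 1), hidden=32)
print(y.shape)
```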

Since the GRU has fewer gate structures than the LSTM, the GRU is faster to compute. In this paper, the input feature length of the data set fed to the neural network is 300, and the classification vectors of the manually labeled samples are "1, 2, 3, 4", which correspond to radar echo sample types with different pH values and water contents. The softmax function is used in the output layer of the neural network, so that the scalar outputs can be mapped to a probability output. The function form is as follows:

$$\sigma(z)_j=\frac{e^{z_j}}{\sum_{i=1}^{K}e^{z_i}} \qquad (10)$$

We use formula (11) as the loss function of the multi-class task. Assuming that the number of target classification categories is $K$ and the target variable is $y\in\{1,2,\ldots,K\}$, using the softmax function, the discrete multi-category loss function can be expressed as follows:

$$J(\theta)=-\frac{1}{M}\sum_{i=1}^{M}\sum_{j=1}^{K}I\left\{y^{(i)}=j\right\}\log\frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^{K}e^{\theta_l^T x^{(i)}}} \qquad (11)$$

The accuracy rate is defined by the following formula:

$$A(\%)=\frac{\sum_{i=1}^{M}I\left\{y_r^i=y_p^i\right\}}{M} \qquad (12)$$

4 Simulation and Analysis of Soil Classification

In this section, we consider five categories of soil echo signal samples with different pH values and water contents, so the problem is equivalent to a classification problem with a target class number of 5. Figure 4 shows two kinds of soil echoes with different pH values and water contents. For each type of soil, we collect 1600 samples. We use a 4:1 ratio to divide the data set into a training set and a validation set.


Fig. 4. The time index of soil signal with different pH values a moisture = 23.3%, pH = 8.04; b moisture = 31.8%, pH = 9.04


Fig. 5. The training progress of two kinds of RNN. a GRU; b LSTM

Fig. 6. The loss of two kinds of RNN’s training progress. a GRU; b LSTM

During the experiment, we built two recurrent neural networks with the same structure under the Keras platform [9]. The two networks have the same structure, number of layers, and number of nodes in each layer [10]; the difference is that the two networks use LSTM and GRU units, respectively. The structure of the RNNs includes a gate-structure layer (LSTM or GRU), a batch-normalization layer, a fully connected layer and a softmax layer. We used the preprocessed data set to train the two networks and recorded the time spent on each epoch; the simulation results are shown in Table 1. The simulation was run on a GPU of model GTX 1080 Ti. Finally, we compare the performance of the two networks on the validation set. The training progress is shown in Figs. 5 and 6.
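A minimal Keras sketch of such a network (recurrent layer, batch normalization, fully connected layer, softmax output) is shown below; the layer sizes, optimizer and the random stand-in data are assumptions for illustration and do not reproduce the authors' exact configuration or results.

```python
import numpy as np
from tensorflow.keras import layers, models

def build_rnn(cell="GRU", feature_len=300, n_classes=5, units=64):
    """Recurrent layer (GRU or LSTM) -> batch norm -> dense -> softmax."""
    recurrent = layers.GRU(units) if cell == "GRU" else layers.LSTM(units)
    model = models.Sequential([
        layers.Input(shape=(feature_len, 1)),   # each echo treated as a length-300 sequence
        recurrent,
        layers.BatchNormalization(),
        layers.Dense(32, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Example with random stand-in data (the paper uses 1600 echoes per class and a 4:1 split)
x = np.random.randn(800, 300, 1)
y = np.random.randint(0, 5, size=800)
model = build_rnn("GRU")
model.fit(x, y, epochs=1, batch_size=64, validation_split=0.2, verbose=0)
```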

5 Conclusion

In this paper, we investigated five categories of soil with different pH values and water contents; the normalized raw data are classified using LSTM-RNN and GRU-RNN. Each type of soil has fixed pH and water content parameters.


Table 1. The accuracy of LSTM and GRU

Type            LSTM    GRU
Accuracy        0.920   0.957
Training time   45 s    35 s

We collected 1600 samples of UWB soil echoes for each kind of soil and used a 4:1 train-test split. We compared the accuracy and training time of the two kinds of RNN. The classification results show that the accuracy rates on soil echoes are 0.920 (LSTM) and 0.957 (GRU), respectively, and the training speed of the GRU network is about 1.29 times that of the LSTM. With a higher training speed and better classification performance, the GRU system can replace LSTM in some areas. In future work, this system can be extended to the complete soil parameter retrieval task and applied to large-area scenarios.

Acknowledgements. This work was supported by the National Natural Science Foundation of China (61671138, 61731006), and was partly supported by the 111 Project No. B17008.

References
1. Lambot S, Slob EC, Idesbald VDB, Stockbroeckx B, Vanclooster M (2004) Modeling of ground-penetrating radar for accurate characterization of subsurface electric properties. IEEE Trans Geosci Remote Sens 42(11):2555–2568
2. Njoku EG, Jackson TJ, Lakshmi V, Chan TK, Nghiem SV (2003) Soil moisture retrieval from AMSR-E. IEEE Trans Geosci Remote Sens 41(2):215–229
3. Lambot S, Slob EC, Van den Bosch I, Stockbroeckx B, Scheers B, Vanclooster M (2004) Estimating soil electric properties from monostatic ground-penetrating radar signal inversion in the frequency domain. Water Resour Res 40(4):1035–1042
4. Jing L, Liu X, Liao K (2017) Soil moisture retrieval using UWB echoes via fuzzy logic and machine learning. IEEE Internet Things J 99:1
5. Greff K, Srivastava RK, Koutnik J, Steunebrink BR, Schmidhuber J (2016) LSTM: a search space odyssey. IEEE Trans Neural Netw Learn Syst 28(10):2222–2232
6. Schmidhuber J (2015) Deep learning in neural networks: an overview. Neural Netw 61:85–117
7. Gers FA, Schmidhuber J, Cummins F (2002) Learning to forget: continual prediction with LSTM. In: International conference on artificial neural networks
8. Chung J, Gulcehre C, Cho KH, Bengio Y (2014) Empirical evaluation of gated recurrent neural networks on sequence modeling. Eprint arXiv
9. Manaswi NK (2018) Deep learning with applications using Python. Understanding and working with Keras, pp 31–43. https://doi.org/10.1007/978-1-4842-35164 (Chapter 2)
10. Rui F, Zuo Z, Li L (2017) Using LSTM and GRU neural network methods for traffic flow prediction. In: Youth academic conference of Chinese Association of Automation

Bit Error Rate Analysis of Space-to-Ground Optical Link Under the Influence of Atmospheric Turbulence

Xiao-Fan Xu(&), Ni-Wei Wang(&), and Zhou Lu

China Academy of Electronics and Information Technology, No. 11 Shuangyuan Road, Shijingshan District, Beijing 100041, China
[email protected], [email protected]

Abstract. Laser communication has the characteristics of wide bandwidth, high data rate, and low power consumption, which makes it an important way to realize the exchange of large amounts of information. However, the transmission quality is deeply affected by atmospheric absorption, scattering, turbulence, and background light, which bring certain challenges to its reliability. This paper first summarizes the mathematical models of the main influence factors and simulates the error rate of the incoherent optical link under the influence of beam wander, beam scintillation, and beam spreading. Finally, the paper further proposes a full-element BER analysis model including the influence of atmospheric turbulence, absorption and scattering, and background light.

Keywords: Space-to-ground optical link · Atmospheric turbulence · Bit error rate analysis model · Beam wander · Beam scintillation · Beam spreading

1 Introduction

In recent years, the demand for high-speed Internet, real-time big data streams, and high-definition video conferencing has been increasing day by day, which challenges traditional communication technologies. In the field of wireless communications, RF spectrum resources are increasingly scarce and their use is severely restricted, forcing people to use higher frequency bands. Compared with traditional radio frequency communication, wireless optical communication has rich bandwidth resources, low power consumption, small antenna size, and high security, and does not require the allocation of spectrum resources by international organizations [1], making it one of the best choices for high-capacity transmission. Especially for space communication, developed countries and regions such as the United States, Europe, and Japan are eager to conduct research and have carried out experiments on inter-satellite, space-to-ground, and deep space optical communication [2].

In order to better understand satellite optical communication, experts have studied the physical mechanisms of various influencing factors and mathematically modeled their influence for simulating the quality of the space-to-ground optical link. However, due to the complexity of the mathematical models, usually only the effect of beam wander is considered. In this paper, all three types of influences due to atmospheric turbulence, including beam wander, beam scintillation, and beam spreading, are used to model the bit error rate (BER) of incoherent optical communication, and further, a comprehensive model including absorption and scattering, background light, and atmospheric turbulence is proposed.

The remainder of this paper is organized as follows. In Sect. 2, we summarize the typical mathematical models for the different influence factors. In Sect. 3, we optimize the BER analysis model by introducing the additional influence of beam wander and beam spreading, compare the simulation results with the typically used model that only considers beam scintillation, and then propose the comprehensive model by further including the influence of absorption and scattering, background light, as well as the detector noise. Finally, we conclude our paper in Sect. 4.

2 Analysis of Influence Factors

Mathematical models that describe absorption and scattering, background light, and atmospheric turbulence are summarized in this section.

2.1 Influence of the Absorption and Scattering

In the atmospheric channel, absorption and scattering are among the important causes of signal loss. The absorption of laser light by the atmosphere is mainly caused by the interaction of atmospheric molecules with lasers of specific wavelengths. The scattering of the laser by the atmosphere is also wavelength dependent, and can be described by Beer-Lambert's law. The attenuation of clouds is usually modeled by the generalized Gamma distribution [3].

2.2 Influence of the Background Light

In space-to-ground optical communication, the large link distance leads to high signal attenuation, implying that the influence of the background light cannot be ignored. The main sources of the background noise are sunlight and starlight, as well as their scattering. The solar spectrum can be represented by a black body with a temperature of approximately 5778 K, and its irradiance spectrum is mainly concentrated in the visible range, peaking at 500 nm. At 850 nm, 1064 nm, and 1550 nm, the irradiance is decreased to 1/2, 1/3, and 1/10 of the peak, respectively.

2.3 Influence of the Atmospheric Turbulence

When the atmosphere is in a turbulent state, its refractive index changes randomly in space and time, which will cause constructive or destructive interference of the passing laser beam, resulting in random fluctuations in both beam intensity and phase. The theoretical basis for studying atmospheric turbulence is the Kolmogorov theory [4], which states that the inhomogeneity of the temperature produces wind, thus forming the vortices.


The refractive index structure parameter, $C_n^2$, is the key to constructing the atmospheric turbulence model. It defines the strength of the atmospheric turbulence and therefore varies with time, geographical location, altitude and other factors. The Gurvich model, the Submarine Laser Communication-Day (SLC-Day) model, the Hufnagel-Valley (HV) model, the HV-Night model, the Greenwood model, etc. have been proposed [5, 6]. The HV model is one of the most widely used models at present, and is suitable for daytime inland environments. The model contains variables for high-altitude wind speed and near-surface turbulence intensity, which allow it to describe various situations more accurately. The HV model is defined as [7]:

$$C_n^2(h)=A\exp(-h/100)+5.94\times10^{-53}(v/27)^2 h^{10}\exp(-h/1000)+2.7\times10^{-16}\exp(-h/1500) \qquad (2.1)$$

Depending on the relative size of the turbulence and of the laser beam, the effects of atmospheric turbulence are mainly divided into three categories:

• Beam wander. When the size of the turbulent vortex is larger than the size of the beam, the laser beam will be deflected by the turbulent vortex, which may lead to link failure. The displacement variance is a function of distance, wavelength, and initial beam size [8, 9]:

$$\sigma_r^2=2.07\int_{h_0}^{H}C_n^2(z)\,(L-z)^2\,W(z)^{-1/3}\,dz \qquad (2.2)$$

• Beam spreading. When the size of the turbulent vortex is smaller than the size of the beam, the beam is diffracted and scattered, leading to wavefront distortion. The beam spreading radius for short exposure time of a collimated Gaussian beam is [10]:

$$W_{st}=W\sqrt{1+1.33\,\sigma_R^2\,\Lambda^{5/6}\left[1-0.66\left(\frac{\Lambda_0^2}{1+\Lambda_0^2}\right)^{1/6}\right]} \qquad (2.3)$$

• Beam scintillation. When the size of the turbulent vortex is comparable to the size of the beam, the turbulent vortex works like a lens, converging or diverging the incident beam and leading to intensity fluctuations in both the time and space domains. This phenomenon is the main factor degrading the quality of free space optical communication. Under weak atmospheric turbulence conditions, the scintillation index for plane waves and downlinks can be expressed as [10]:

$$\sigma_I^2=\sigma_R^2\cong 2.24\,k^{7/6}\,(\sec\theta)^{11/6}\int_{h_0}^{h_0+L}C_n^2(h)\,h^{5/6}\,dh \qquad (2.4)$$
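As an illustration of how these turbulence quantities can be evaluated numerically, the sketch below computes the HV profile of Eq. (2.1) and approximates the integrals in Eqs. (2.2) and (2.4); the ground-level turbulence strength A, wind speed v, zenith angle, and beam-radius profile W(z) are assumed example values, not the paper's settings.

```python
import numpy as np
from scipy.integrate import quad

def cn2_hv(h, A=1.7e-14, v=21.0):
    """Hufnagel-Valley refractive index structure parameter, Eq. (2.1)."""
    return (A * np.exp(-h / 100.0)
            + 5.94e-53 * (v / 27.0) ** 2 * h ** 10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0))

wavelength = 1550e-9
k = 2 * np.pi / wavelength
h0, H = 0.0, 20e3          # integrate turbulence up to ~20 km, where Cn^2 becomes negligible
zenith = np.deg2rad(30.0)  # assumed zenith angle

# Eq. (2.4): weak-turbulence scintillation index for a plane-wave downlink
sigma_I2, _ = quad(lambda h: cn2_hv(h) * h ** (5.0 / 6.0), h0, H)
sigma_I2 *= 2.24 * k ** (7.0 / 6.0) * (1.0 / np.cos(zenith)) ** (11.0 / 6.0)

# Eq. (2.2): beam-wander variance over a path of length L with an assumed beam-radius profile W(z)
L = 36000e3
W_z = lambda z: 0.5 + 1e-5 * z
sigma_r2, _ = quad(lambda z: cn2_hv(z) * (L - z) ** 2 * W_z(z) ** (-1.0 / 3.0), h0, H)
sigma_r2 *= 2.07

print(f"scintillation index: {sigma_I2:.3e}, beam-wander variance: {sigma_r2:.3e} m^2")
```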


3 Analysis of Incoherent Space-to-Ground Optical Link

On-Off Keying (OOK) is one of the most widely used modulation schemes for space-to-ground optical communication, since it is less sensitive to the phase degradation caused by atmospheric turbulence than coherent modulation schemes.

3.1 Probability Density of the Received Light Intensity

In the case of weak atmospheric turbulence, the probability density of the beam wander displacement follows a Rayleigh distribution, and the probability density of the light intensity under beam scintillation follows a log-normal distribution. In addition, the beam spreading effect can be accounted for by replacing the beam radius with the short-exposure one. Under the combined effect of these phenomena, the probability density of the received light intensity is [11]:

$$p_W(I)=\int_0^{\infty}\frac{1}{\sqrt{2\pi\sigma_I^2(r,L)}\,I}\exp\left[-\frac{\left(\ln\frac{I}{\langle I(0,L)\rangle}+\frac{2r^2}{W_{st}^2}+\frac{\sigma_I^2(r,L)}{2}\right)^2}{2\sigma_I^2(r,L)}\right]\frac{r}{\sigma_r^2}\exp\left(-\frac{r^2}{2\sigma_r^2}\right)dr \qquad (3.1)$$

Previous studies of the probability density of the received light intensity usually only consider beam scintillation. In fact, beam spreading pushes the distribution towards smaller values, and the probability that the light intensity is lower than a certain value is greater than in the case where only beam scintillation is considered. For space-to-ground optical communication, we assume the satellite operates in geostationary orbit (GEO) at a height of 35,786 km and that the wavelength is 1550 nm. Further, we take the HV-21 model for the refractive index structure parameter, the modified von Karman spectrum for the scintillation index, and inner and outer scales of 10 mm and 5.5 m, which are typical inland values. The probability density of the received light intensity is obtained as shown in Fig. 1. It is not strictly consistent with the normal distribution, and its distribution probability for large light intensities is slightly larger than that for small ones.

3.2 BER Analysis Under the Influence of Atmospheric Turbulence

In OOK modulation, errors are mainly caused by inconsistency between the detector response to the signal and the detection threshold, for example, a response below the threshold for a "1" input, or above it for a "0" input. The BER of the system under the consideration of beam scintillation can be described as [12]:

$$\mathrm{BER}_0=\frac{1}{4}\,\mathrm{erfc}\left[\frac{0.23M_F-\sigma_I^2(0,L)/2}{\sqrt{2}\,\sigma_I(0,L)}\right] \qquad (3.2)$$


Fig. 1. Probability density of the received light intensity

When considering beam wander, the optical communication data rate (on the order of Gbps) is much faster than the beam wander frequency (below kHz), so the system error rate should be the ensemble average over the wander positions:

$$\mathrm{BER}=\frac{1}{4}\int_0^{\infty}\mathrm{erfc}\left[\frac{0.23M_F\,e^{-2r^2/W^2}-\sigma_I^2(0,L)/2}{\sqrt{2}\,\sigma_I(0,L)}\right]\frac{r}{\sigma_r^2}\exp\left(-\frac{r^2}{2\sigma_r^2}\right)dr \qquad (3.3)$$
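A short numerical sketch of Eqs. (3.2) and (3.3) follows; it reflects the reconstruction of Eq. (3.3) given above, and the margin factor M_F, scintillation index, wander variance and beam radius are placeholder values for illustration only.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def ber_scintillation_only(M_F, sigma_I):
    """Eq. (3.2): BER with only beam scintillation considered."""
    return 0.25 * erfc((0.23 * M_F - sigma_I ** 2 / 2.0) / (np.sqrt(2.0) * sigma_I))

def ber_with_wander(M_F, sigma_I, sigma_r, W):
    """Eq. (3.3): Rayleigh-weighted ensemble average over the beam-wander displacement r."""
    def integrand(r):
        arg = (0.23 * M_F * np.exp(-2.0 * r ** 2 / W ** 2) - sigma_I ** 2 / 2.0) / (np.sqrt(2.0) * sigma_I)
        rayleigh = (r / sigma_r ** 2) * np.exp(-r ** 2 / (2.0 * sigma_r ** 2))
        return erfc(arg) * rayleigh
    val, _ = quad(integrand, 0.0, 10.0 * sigma_r)   # truncate the Rayleigh tail
    return 0.25 * val

# Placeholder numbers purely for illustration
print(ber_scintillation_only(M_F=10.0, sigma_I=0.3))
print(ber_with_wander(M_F=10.0, sigma_I=0.3, sigma_r=0.2, W=1.0))
```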

Beam spreading is considered by replacing the beam waist with the short-exposure one. With the assumptions mentioned in the last subsection, and for a Gaussian beam without loss, the aperture of the on-board receiving terminal is assumed to be 250 mm and that of the ground emitting terminal 1 m, leading to a power attenuation of $3.4\times10^{-6}$. Therefore, when the transmit power is 1 W, the received power is about 3.4 µW, with an intensity of 69.3 µW/m². Setting the threshold to 20 µW/m², the relationship between the BER and the transmit power is simulated as shown in Fig. 2. It can be seen that the BER under the consideration of beam wander, beam scintillation, and beam spreading is larger than that when only beam scintillation is considered. In addition, the former decreases more slowly than the latter as the power increases. Using this model, Eq. (3.3), the relationship between the emission aperture and the BER can also be obtained, which is convenient for the optimization of the emission aperture.

3.3 Comprehensive Model

Based on the influence of atmospheric turbulence, the effects of atmospheric absorption and scattering and of background light are further considered. According to the previous analysis, the influence of fog, rain and snow on the optical communication can be understood as attenuation. Therefore, the total attenuation of the link is the integral of the attenuation coefficient over the entire link:


Fig. 2. BER Analysis of the OOK modulation system

$$a_{\mathrm{overall}}=\int_L \gamma_{\mathrm{overall}}\,dl \qquad (3.4)$$

The influence of the background light can be understood as a wrong estimation of a weak signal that does not satisfy the threshold, that is, "0" is erroneously judged as "1". For the OOK modulation system, the probabilities of misjudging "0" as "1" and "1" as "0" are [13]:

$$P_{1/0}=\int_{i_{th}}^{\infty}p_0(i)\,di,\qquad P_{0/1}=\int_{-\infty}^{i_{th}}p_1(i)\,di \qquad (3.5)$$

where $p_0(i)$ and $p_1(i)$ are the probability densities of the detector output $i$ when transmitting "0" and "1", respectively, and are subject to the Gaussian distribution:

$$p_0(i)=\frac{1}{\sqrt{2\pi}\,\sigma_0}\exp\left[-\frac{(i-m_0)^2}{2\sigma_0^2}\right],\qquad p_1(i)=\frac{1}{\sqrt{2\pi}\,\sigma_1}\exp\left[-\frac{(i-m_1)^2}{2\sigma_1^2}\right] \qquad (3.6)$$

When an APD detector is used:

$$m_0=\left(GeK_{bg}+I_{dc}\right)T_s,\quad m_1=\left(Ge\left(K_s+K_{bg}\right)+I_{dc}\right)T_s$$
$$\sigma_0^2=G^2Fe^2K_{bg}+\sigma_T^2,\quad \sigma_1^2=G^2Fe^2\left(K_s+K_{bg}\right)+\sigma_T^2 \qquad (3.7)$$


where $G$ is the APD gain, $K_{bg}$ the photon count of the background light, $I_{dc}$ the dark current, $T_s$ the signal light pulse time, $K_s$ the photon count corresponding to the received light pulse power, $F$ the additional noise factor, and $\sigma_T^2$ the thermal noise:

$$K_s=PT_s\eta/(h_{\mathrm{planck}}f),\quad K_{bg}=P_{bg}T_s\eta/(h_{\mathrm{planck}}f),\quad \sigma_T^2=2k_BTT_s/R_L \qquad (3.8)$$

The total BER of the system is then half of the sum of the error probabilities for "0" and "1", and is related to the received light intensity:

$$\mathrm{BER}(I)=\frac{1}{2}\left[\int_{i_{th}}^{\infty}\frac{1}{\sqrt{2\pi}\,\sigma_0}\exp\left(-\frac{(i-m_0)^2}{2\sigma_0^2}\right)di+\int_{-\infty}^{i_{th}}\frac{1}{\sqrt{2\pi}\,\sigma_1}\exp\left(-\frac{(i-m_1)^2}{2\sigma_1^2}\right)di\right] \qquad (3.9)$$

$$\mathrm{BER}=\int_0^{\infty}\mathrm{BER}(I)\,p_W(I)\,dI \qquad (3.10)$$

By substituting the received light intensity with $I/a_{\mathrm{overall}}$, the BER model of an incoherent OOK modulation system is constructed, considering atmospheric absorption and scattering, background light, atmospheric turbulence effects including beam wander, beam scintillation, and beam spreading, and detector receiving noise. According to this BER model, the system analysis can be completed by substituting the corresponding system models and different atmospheric conditions.
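The sketch below illustrates how the conditional error rate of Eqs. (3.5)–(3.9) can be evaluated for an APD receiver; all detector parameters, the background power and the decision threshold are assumed placeholder values, and the numbers are not calibrated to the paper's link budget.

```python
import numpy as np
from scipy.stats import norm

# Physical constants and assumed detector/system parameters (illustrative only)
E_CHARGE, H_PLANCK, K_B = 1.602e-19, 6.626e-34, 1.381e-23
G, F, I_DC = 100.0, 5.0, 1e-9          # APD gain, noise factor, dark current
ETA, FREQ = 0.8, 3e8 / 1550e-9         # quantum efficiency, optical frequency
T_S, T_NOISE, R_L = 1e-9, 300.0, 50.0  # pulse time, temperature, load resistance
P_BG = 1e-9                            # background-light power

def ber_given_power(P_rx, i_th):
    """Eqs. (3.5)-(3.9): BER of OOK with an APD receiver for received power P_rx."""
    K_s = P_rx * T_S * ETA / (H_PLANCK * FREQ)            # Eq. (3.8): signal photon count
    K_bg = P_BG * T_S * ETA / (H_PLANCK * FREQ)           # background photon count
    var_T = 2.0 * K_B * T_NOISE * T_S / R_L                # thermal noise
    m0 = (G * E_CHARGE * K_bg + I_DC) * T_S                # Eq. (3.7): mean for "0"
    m1 = (G * E_CHARGE * (K_s + K_bg) + I_DC) * T_S        # mean for "1"
    s0 = np.sqrt(G ** 2 * F * E_CHARGE ** 2 * K_bg + var_T)
    s1 = np.sqrt(G ** 2 * F * E_CHARGE ** 2 * (K_s + K_bg) + var_T)
    p_1_given_0 = norm.sf(i_th, loc=m0, scale=s0)          # "0" judged as "1"
    p_0_given_1 = norm.cdf(i_th, loc=m1, scale=s1)         # "1" judged as "0"
    return 0.5 * (p_1_given_0 + p_0_given_1)               # Eq. (3.9)

print(ber_given_power(P_rx=3.4e-6, i_th=2e-10))
```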

4 Conclusion

Mathematical analysis of the space-to-ground optical link is important for the design of the optical communication system, and can be used to optimize the system parameters. In this paper, we emphasize the importance of considering all effects due to atmospheric turbulence by comparing BER simulation results obtained with a model that considers only beam scintillation against those obtained with a model that considers beam scintillation, beam wander, and beam spreading. In addition, we propose a comprehensive model which considers atmospheric absorption and scattering, background light, atmospheric turbulence effects including beam wander, beam scintillation, and beam spreading, and detector receiving noise. The comprehensive model gives a better way to analyze the space-to-ground optical communication system with OOK modulation, and will be a versatile tool for system design.

Acknowledgements. This work is supported by Beijing Municipal Science and Technology Commission Research under Project Z17110005217001.


References
1. Chan VWS (2006) Free-space optical communications. J Lightw Technol 24(12):4750–4762
2. Xu XF, Lu Z (2018) Research status of mitigation techniques to assure the reliability of satellite-to-ground laser communications. J CAEIT 13(6):650–657
3. McCartney EJ (1976) Optics of the atmosphere: scattering by molecules and particles. Wiley, New York
4. Kolmogorov AN (1941) The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. Doklady Akademiia Nauk SSSR 30:301–305
5. Good RE, Belend RR, Murphy EA et al (1988) Atmospheric models of optical turbulence. In: Technical symposium on optics, electro-optics, and sensors, Orlando
6. Valley GC (1980) Isoplanatic degradation of tilt correction and short-term imaging systems. Appl Opt 19(4):574–577
7. Tatarskii VI (1961) Wave propagation in a turbulent medium. McGraw-Hill, New York
8. Kaushal H, Kumar V, Dutta A et al (2011) Experimental study on beam wander under varying atmospheric turbulence conditions. IEEE Photon Technol Lett 23(22):1691–1693
9. Dios F, Rubio JA, Rodríguez A et al (2004) Scintillation and beam-wander analysis in an optical ground station-satellite uplink. Appl Opt 43(19):3866–3873
10. Andrews LC, Phillips RL (2005) Laser beam propagation through random media, 2nd edn. SPIE Press, Bellingham
11. Jiang YJ (2010) Theoretical and experimental researches on influences of atmospheric turbulence in the satellite-to-ground laser communication link. Harbin Institute of Technology, Harbin
12. Toyoshima M, Jono T, Nakagawa K et al (2002) Optimum divergence angle of a Gaussian beam wave in the presence of random jitter in free-space laser communication systems. J Optic Soc Am A 19(3):567–571
13. Gagliardi RM, Karp S (1995) Optical communications. Wiley-Interscience, New York

Performance Analysis of Amplify-and-Forward Satellite Relaying System with Rain Attenuation

Qingquan Huang1,2, Guoqiang Cheng1(&), Lin Yang2, Ruiyang Xing3, and Jian Ouyang4

1 Communications Engineering College, Army Engineering University of PLA, Nanjing 210007, China
[email protected]
2 The 63rd Research Institute, National University of Defense Technology, Nanjing 21007, China
3 Beijing Institute of Tracking and Telecommunications Technology, Beijing, China
4 College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China

Abstract. Outage probability (OP) is an important performance metric of broadband satellite communication networks. This paper investigates an amplify-and-forward dual-hop satellite relaying system operating above 10 GHz, where the satellite links, which are mainly affected by rain attenuation, are assumed to follow the double-lognormal distribution. By considering the satellite beam pattern and path loss, we obtain a new analytical expression for the OP of the considered network. Numerical results validate the derived formula.

Keywords: Amplify-and-forward · Outage probability · Double-lognormal

1 Introduction

In recent years, satellite communications have received much attention owing to their high throughput and seamless coverage [1–7]. Dual-hop satellite relay technology has been widely used to realize communication between two terminals that are far apart [8]. The resulting dual-hop satellite relay network (DSRN) can extend the coverage effectively by enhancing the spatial diversity, and can alleviate the effects of fading. Thus, many works have investigated the performance of this network framework, such as [8–10]. In most existing works on DSRNs, the satellite-ground links are often assumed to follow Shadowed-Rician fading [9, 10] or the more general κ–μ fading [8]. However, for broadband satellite systems operated above 10 GHz, such as the Ku band, the large-scale fading of the satellite-terrestrial links is mainly determined by the rain attenuation, which should follow the double-lognormal distribution. In fact, the analysis over double-lognormal channels is mathematically intractable due to the presence of compound logarithmic functions. These observations motivate the work of this paper.


Specifically, we study an amplify-and-forward (AF) DSRN operated above 10 GHz. Here, the prevalent fading comes from rain attenuation and is hence usually modelled by a lognormal random process in dB (i.e., a double-lognormal distribution in linear form). This result is mainly based on the popular DVB-S2 system [11]. Although the PDF of double-lognormal random variables consists of multiple logarithmic functions, we here obtain an analytical OP expression for the considered DSRN. It is the first time that such a formula has been derived, and the analytical expression is validated by simulation.

2 System and Channel Models

2.1 System Model

In this article, we investigate a DSRN comprising a source (S), a satellite relay (R), and a destination (D). The system diagram is illustrated in Fig. 1. In this network, all of the nodes are assumed to be equipped with a single antenna. As in related works such as [12–14], we suppose that both the satellite uplink and downlink undergo double-lognormal fading. The whole communication occurs in two phases due to relaying. In the first phase, S sends the signal $x_1(t)$ with $E\big[|x_1(t)|^2\big]=1$ to the satellite, and the received signal at the satellite can be denoted by

$$y_1(t)=\sqrt{P_1}\,h_1x_1(t)+n_1(t) \qquad (1)$$

where $P_1$ denotes the transmit power at S and $h_1$ the uplink fading channel. Additionally, $n_1(t)$ is the additive white Gaussian noise (AWGN) at the satellite, following $n_1(t)\sim\mathcal{N}(0,\sigma_1^2)$.


Fig. 1. System diagram of the considered DSRN


In the second phase, since the AF relaying protocol is employed to assist the transmission, the satellite first amplifies the received signal $y_1(t)$ with a gain factor denoted by $G$, and then forwards the amplified signal to D. Thus, the corresponding received signal at D is given by

$$y_2(t)=\sqrt{P_2}\,h_2Gy_1(t)+n_2(t)=\sqrt{P_2}\,h_2G\left(\sqrt{P_1}\,h_1x_1(t)+n_1(t)\right)+n_2(t) \qquad (2)$$

In general, we suppose that the AWGN at D also follows $n_2(t)\sim\mathcal{N}(0,\sigma_2^2)$ and $\sigma_i^2=\kappa BT$, with $\kappa=1.38\times10^{-23}\,\mathrm{J/K}$ being the Boltzmann constant and $T=300\,\mathrm{K}$ the noise temperature. Here, the variable gain factor $G$ can be given by [15]

$$G=\sqrt{1\Big/\left(P_1|h_1|^2+\sigma_1^2\right)} \qquad (3)$$

According to (2) and (3), the output signal-to-noise ratio (SNR) at the destination is given as follows:

$$\gamma=\frac{P_2|h_2|^2G^2P_1|h_1|^2}{P_2|h_2|^2G^2\sigma_1^2+\sigma_2^2}=\frac{\dfrac{P_2|h_2|^2P_1|h_1|^2}{P_1|h_1|^2+\sigma_1^2}}{\dfrac{P_2|h_2|^2\sigma_1^2}{P_1|h_1|^2+\sigma_1^2}+\sigma_2^2}\triangleq\frac{\gamma_1\gamma_2}{\gamma_1+\gamma_2+1} \qquad (4)$$

where $\gamma_1\triangleq P_1|h_1|^2/\sigma_1^2$ and $\gamma_2\triangleq P_2|h_2|^2/\sigma_2^2$.

2.2 Channel Models

To realistically model the satellite channel, the effects of the satellite beam pattern, large-scale fading, and free-space path loss should all be considered. Mathematically, the fading channel factor of the $i$-th link ($i=1,2$) should be modeled as [1, 2]

$$h_i=\sqrt{C_i}\,Y_i \qquad (5)$$

with $C_i$ being the beam pattern and path loss coefficient given by

$$C_i=\frac{\lambda\sqrt{G_{i1}G_{i2}}}{4\pi d_i} \qquad (6)$$

where $\lambda$ denotes the carrier wavelength and $d_i$ is the distance from the satellite to S or D. Additionally, $G_{i1}$ and $G_{i2}$ denote the transmit and receive antenna patterns, respectively. Then we can rewrite $\gamma_1$, $\gamma_2$ as $\gamma_1\triangleq\bar{\gamma}_1Y_1$, $\gamma_2=\bar{\gamma}_2Y_2$ with $\bar{\gamma}_1=P_1C_1^2/\sigma_1^2$, $\bar{\gamma}_2=P_2C_2^2/\sigma_2^2$.


Since the system operates above 10 GHz, the large-scale fading $Y_i$ is mainly determined by the rain attenuation, which is usually modeled as a double-lognormal random variable [16], whose probability density function (PDF) is given as [12]

$$f_{Y_i}(y)=-\frac{1}{\sqrt{2\pi}\,\sigma_i\,y\ln y}\exp\left[-\frac{\left(\ln(-\ln y)-\beta_i\right)^2}{2\sigma_i^2}\right],\quad 0<y<1 \qquad (7)$$

Then the cumulative distribution function (CDF) can be calculated as

$$F_{Y_i}(y)=Q\left(\frac{\ln(-\ln y)-\beta_i}{\sigma_i}\right),\quad 0<y<1 \qquad (8)$$

where $\sigma_i$ is the scale parameter and $\beta_i=\mu_i+\ln(\ln 10)-\ln 10$, with $\mu_i$ being the lognormal location parameter.

3 Outage Performance Analysis

In general, the OP is defined as the probability that the SNR $\gamma$ is lower than a predetermined threshold $\gamma_{th}$, namely,

$$P_{\mathrm{out}}(\gamma_{th})=\Pr\{\gamma\le\gamma_{th}\}=\Pr\left\{\frac{\gamma_1\gamma_2}{\gamma_1+\gamma_2+1}\le\gamma_{th}\right\}=\Pr\left\{\frac{\bar{\gamma}_1\bar{\gamma}_2Y_1Y_2}{\bar{\gamma}_1Y_1+\bar{\gamma}_2Y_2+1}\le\gamma_{th}\right\} \qquad (9)$$

Since $0<Y_i<1$, we have

$$0<\frac{\bar{\gamma}_1\bar{\gamma}_2Y_1Y_2}{\bar{\gamma}_1Y_1+\bar{\gamma}_2Y_2+1}=\frac{\bar{\gamma}_1\bar{\gamma}_2}{\bar{\gamma}_1/Y_2+\bar{\gamma}_2/Y_1+1/(Y_1Y_2)}<\frac{\bar{\gamma}_1\bar{\gamma}_2}{\bar{\gamma}_1+\bar{\gamma}_2+1}$$

Thus, $P_{\mathrm{out}}(\gamma_{th})=1$ when $\gamma_{th}\ge\frac{\bar{\gamma}_1\bar{\gamma}_2}{\bar{\gamma}_1+\bar{\gamma}_2+1}$. When $\gamma_{th}<\frac{\bar{\gamma}_1\bar{\gamma}_2}{\bar{\gamma}_1+\bar{\gamma}_2+1}$, we can get

$$\begin{aligned}
P_{\mathrm{out}}(\gamma_{th})&=\Pr\left\{Y_2\le\frac{\gamma_{th}}{\bar{\gamma}_2}\right\}+\Pr\left\{\frac{\gamma_{th}}{\bar{\gamma}_2}<Y_2<\frac{\gamma_{th}(\bar{\gamma}_1+1)}{\bar{\gamma}_2(\bar{\gamma}_1-\gamma_{th})}\right\}+\Pr\left\{Y_1\le\frac{\gamma_{th}}{\bar{\gamma}_1}\,\frac{Y_2+1/\bar{\gamma}_2}{Y_2-\gamma_{th}/\bar{\gamma}_2},\;Y_2>\frac{\gamma_{th}(\bar{\gamma}_1+1)}{\bar{\gamma}_2(\bar{\gamma}_1-\gamma_{th})}\right\}\\
&=F_{Y_2}\!\left(\frac{\gamma_{th}(\bar{\gamma}_1+1)}{\bar{\gamma}_2(\bar{\gamma}_1-\gamma_{th})}\right)+\underbrace{\int_{\frac{\gamma_{th}(\bar{\gamma}_1+1)}{\bar{\gamma}_2(\bar{\gamma}_1-\gamma_{th})}}^{1}F_{Y_1}\!\left(\frac{\gamma_{th}}{\bar{\gamma}_1}\,\frac{y+1/\bar{\gamma}_2}{y-\gamma_{th}/\bar{\gamma}_2}\right)f_{Y_2}(y)\,dy}_{I_1}
\end{aligned} \qquad (10)$$
2144

Q. Huang et al.

Since I1 contains multiple logarithmic functions, the exact expression of integration I1 is intractable. In order to solve this problem, we here transform I1 into

Z1 I1 ¼

ð1  NÞFY1 0

ð1 þ cth Þ=c2 cth fY2 ðð1  NÞx þ NÞdx 1þ c1 ð1  NÞx þ Nð1 þ cth Þ=ðc1 þ 1Þ ð11Þ



þ 1Þ , then by resorting to [17, equation (25.5.33)], OP of the system can with N ¼ ccthððcc1c 2 1 th Þ be finally obtained as

8   P n cth c1 þ 1 > wi ð1  NÞfY2 ðð1  NÞxi þ NÞ þ F > Y2 c c c > 2 1 th < ; cth \ c þc1cc2þ 1   i¼1  1 2 c ð 1 þ c Þ= ð12Þ Pout ðcth Þ ¼ FY1 ccth 1 þ ð1NÞxi þ Nð1 thþ c 2Þ=ðc þ 1Þ > > 1 th 1 > : 1; cth  c þc1cc2þ 1 1

2

where xi ; wi denote the zeros of Legendre Polynomials and corresponding weight factors, respectively.

4 Numerical Results Next, we provide the simulation comparison to validate the accuracy of the derived formula. Here, we consider two fading scenarios for the double-lognormal fading satellite links, i.e. 3 dB ðli ¼ 0:6; ri ¼ 1Þ and 5.5 dB ðli ¼ 1:2; ri ¼ 1:01Þ average rain attenuation (ARA) [13]. For the convenience of presentation, we assume that average SNRs of the two hops satisfy c1 ¼ c2 ¼ c [8, 10]. Moreover, 106 channel realizations are adopted to present the results. As shown in Fig. 2, the accuracy of the obtained OP expression is demonstrated. Specifically, for both 3 and 5.5 dB ARA, the Monte Carlo (Mon) results match well with the analytical results (Ana), implying the validity of the derived closed-form expressions in (12). Moreover, the larger the average rain attenuation, the higher the OP.

Performance Analysis of Amplify-and-Forward Satellite

2145

Fig. 2. OP versus average SNR under different average rain attenuation

5 Conclusions This paper has studied the outage performance of an AF DSRN, where the satellite beam pattern, rain attenuation and free space path loss are considered. By using the double-lognormal distribution to characterize the satellite channels, we obtained a new analytical OP expression for this network with simulation comparison.

References 1. Lin Z, Lin M, Wang JB, Cola TD, Wang J (2019) Joint beamforming and power allocation for satellite-terrestrial integrated networks with non-orthogonal multiple access. IEEE J Sel Top Sign Process, pp 1–1 2. Lin M, Lin Z, Zhu W-P, Wang J-B (2018) Joint beamforming for secure communication in cognitive satellite terrestrial networks. IEEE J Sel Areas Commun 36(5):1017–1029 3. An K, Liang T, Zheng G, Yan X, Li Y, Chatzinotas S (2018) Performance limits of cognitive uplink FSS and terrestrial FS for Ka-band. IEEE Trans Aerosp Electron Syst, pp 1–1 4. Huang Q, Lin M, An K, Ouyang J, Zhu W-P (2018) Secrecy performance of hybrid satelliteterrestrial relay networks in the presence of multiple eavesdroppers. IET Commun 12(1):26– 34 5. An K, Liang T, Yan X, Zheng G (2018) On the secrecy performance of land mobile satellite communication systems. IEEE Access 6:39606–39620 6. Lin Z, Lin M, Wang JB, Huang Y, Zhu WP (2018) robust secure beamforming for 5G cellular networks coexisting with satellite networks. IEEE J Sel Areas Commun, pp 932–945

2146

Q. Huang et al.

7. Lin Z, Lin M, Ouyang J, Zhu W-P, Chatzinotas S (2018) Beamforming for secure wireless information and power transfer in terrestrial networks coexisting with satellite networks. IEEE Signal Process Lett 25(8):1166–1170
8. Zhang J, Li X, Ansari IS, Liu Y, Qaraqe KA Performance analysis of dual-hop DF satellite relaying over k–µ shadowed fading channels, pp 1–6
9. Miridakis NI, Vergados DD, Michalas A (2015) Dual-hop communication over a satellite relay and shadowed Rician channels. IEEE Trans Veh Technol 64(9):4031–4040
10. Arti MK (2017) A novel beamforming and combining scheme for two-way AF satellite systems. IEEE Trans Veh Technol 66(2):1248–1256
11. Morello A, Mignone V (2006) DVB-S2: the second generation standard for satellite broad-band services. Proc IEEE 94(1):210–227
12. Ahmad I, Nguyen KD, Pollok A, Letzepis N Capacity analysis of zero-forcing precoding in multibeam satellite systems with rain fading, pp 1–6
13. Enserink S, Panagopoulos AD, Fitz MP (2014) On the calculation of constrained capacity and outage probability of broadband satellite communication links. IEEE Wireless Commun Lett 3(5):453–456
14. Arnau J, Christopoulos D, Chatzinotas S, Mosquera C, Ottersten B (2014) Performance of the multibeam satellite return link with correlated rain attenuation. IEEE Trans Wireless Commun 13(11):6286–6299
15. An K, Lin M, Liang T, Ouyang J Secure transmission in multi-antenna hybrid satellite-terrestrial relay networks in the presence of eavesdropper, pp 1–5
16. ITU (2012) Tropospheric attenuation time series synthesis. Geneva, Switzerland
17. Abramowitz M, Stegun IA (1966) Handbook of mathematical functions. Applied Mathematics Series 55(62):39

Threat-Based Sensor Management For Multi-target Tracking
Yuqi Lan and Jing Liang
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
[email protected]

Abstract. Multi-sensor management for multi-target tracking is a theoretically and computationally challenging problem. A multi-sensor management algorithm is proposed within the partially observed Markov decision process (POMDP) framework, and the cardinality balanced multi-target multi-Bernoulli (CBMeMBer) filter is applied to track targets. The novelties lie in evaluating the threat degree of the targets using the analytic hierarchy process (AHP) and in the threat-based sensor selection algorithm. Considering the distance, speed and heading of targets, we use the AHP to evaluate the threat degree of targets at each sampling time. Based on the threat level of targets and the detection distance of sensors, a sensor-target matching algorithm is proposed. Numerical studies are presented for the dynamical system, and the simulation results show the feasibility of the method.

Keywords: Target tracking · AHP · Sensor management · CBMeMBer

1 Introduction

In multi-sensor multi-target tracking, where the sensor network is large, determining which target each sensor should track at every sampling time is essential. The two major approaches to this problem based on Bayes theory are information-based sensor management and task-based sensor management. Information-based methods mostly maximize the expected information gain from prior to posterior after taking a hypothesized sensor control action. Rényi divergence is used to quantify the information gain for sensor control with random set filters [1] and probability hypothesis density (PHD) filters [2]. Cauchy-Schwarz divergence is used in [3] as the information gain for tracking multiple targets, and a closed-form solution can be obtained under the assumption that the multi-target state is modeled as a Poisson random finite set. Information measures based on the information gain have been discussed in [4,5].


Task-based methods manage sensors by minimizing physical performance criteria such as estimation errors. The posterior Cramér-Rao lower bound (PCRLB) is used in [6] to control a multi-sensor array in multi-target tracking, and is also adopted in [7] to manage sensors for multi-target detection and tracking with particle swarm optimization. Gostar et al. [8] manage the sensors by minimizing the estimation error with a multi-Bernoulli filter. The Cramér-Rao bound (CRB) is applied to the sensor management problem in [9], and the posterior expected error of cardinality and state (PEEC) is exploited in [10].
The two aforementioned approaches can efficiently manage sensors for multi-target tracking. However, they ignore the different threat levels of the targets, which are practically important. For example, a guided missile or a bomber is at a high threat level, while the threat level of a surveillance plane is rather low. Targets with higher speed heading toward the monitoring center should also be taken into account. We adopt the AHP to quantify the threat degree of targets, and the proposed algorithm is used for sensor selection with a CBMeMBer filter in target tracking.
The paper is organized as follows. In Sect. 2, the threat-based sensor management algorithm for the sensor control problem is proposed. Numerical studies are presented in Sect. 3, and the paper is concluded in Sect. 4.

2 Threat-Based Sensor Management Method

Though many methods can be used to evaluate the target threat level, we exploit the AHP for its simplicity and its suitability for short sampling periods. Considering the distance between the target and the monitoring center, the speed of the target, and the heading of the target relative to the monitoring center, the analytic hierarchy process is used to evaluate the threat degree of targets. Based on the threat level of targets and the detection range of sensors, the sensor-target matching algorithm is proposed as follows.

Fig. 1. Hierarchical model of a target threat: the target threat P is decided by the distance from the target to the monitoring center (C1), the velocity of the target (C2), and the heading of the target (C3)

2.1 Target Threat Model

Hierarchical model. Considering the practical battlefield situation, the velocity of the target, its distance to the monitoring center, and its heading are taken as the three main factors that decide the threat level, as shown in Fig. 1. Specifically, the heading is the angle between the velocity vector and the distance vector.

P-C layer matrix. The P-C layer matrix is the pairwise comparison matrix of any two factors (Ci, Cj) affecting the threat level of the target (P). Its entry aij is the ratio of the influence of Ci to that of Cj on P. To quantify this ratio, aij is defined according to Table 1.

Table 1. Scale measuring the influence of factors on the target

Scalar aij        Definition
1                 Ci is the same as Cj
3                 Ci is a little more influential than Cj
5                 Ci is more influential than Cj
7                 Ci is much more influential than Cj
9                 Ci is absolutely more influential than Cj
2, 4, 6, 8        The ratio of Ci and Cj is between the above descriptions
1, 1/2, ..., 1/9  The ratio of Ci and Cj is the reciprocal of the above

From Table 1, we assign C1 = 8, C2 = 7, C3 = 3. Thus the P-C matrix for the target threat is

A = \begin{pmatrix} 1 & 8/7 & 8/3 \\ 7/8 & 1 & 7/3 \\ 3/8 & 3/7 & 1 \end{pmatrix}

Calculating the weight and consistency test. The eigenvector corresponding to the maximum eigenvalue of A is the desired weight vector of the factors. The consistency indicator of the P-C matrix is CI = (λ_max − n)/(n − 1), where λ_max and n are the maximum eigenvalue and the dimension of the P-C matrix A, respectively. In this case, CI = 0. Referring to the standard consistency values, the random consistency indicator RI for n = 3 (A is a 3 × 3 matrix) is 0.58, so the random consistency ratio is CR = CI/RI = 0 < 0.1. The inconsistency of the P-C matrix is therefore within the allowable range and its eigenvector can be adopted as the weight vector:

W = [ω1, ω2, ω3]^T = (0.7243, 0.6338, 0.2716)^T

where ω1, ω2, ω3 are the influences of distance, velocity and heading on the target threat.

Utility function matrix. The factors C cannot directly determine the threat level since they have different ranges. Let μij denote the impact of factor Cj on the threat level, where μ ∈ [0, 1]. The utility functions are as follows:


(1) The smaller the distance or the heading is, the higher the threat level of the target. The utility function for distance and heading is:

\mu_{ij} = 1 + \frac{c_j^{\min}}{c_j^{\max}} - \frac{c_{ij}}{c_j^{\max}}    (1)

(2) The higher the velocity is, the higher the threat level of the target. The utility function for velocity is:

\mu_{ij} = \frac{c_{ij}}{c_j^{\max}}    (2)

where

c_j^{\min} = \min_i \{c_{ij}\}, \quad c_j^{\max} = \max_i \{c_{ij}\}

Integrated quantized value. With the weight vector W and the utility function matrix μ, the threat level of the ith target can be expressed by the integrated quantized value

P_i = \sum_j \omega_j \mu_{ij}    (3)

A minimal sketch of this threat-evaluation step is given below.
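The following Python sketch reproduces the AHP weight computation and the threat scoring of Eqs. (1)-(3) for the P-C matrix given above; the factor values assigned to the example targets are made up purely for illustration and are not from the paper.

```python
import numpy as np

# P-C comparison matrix built from C1 = 8, C2 = 7, C3 = 3 (distance, velocity, heading)
A = np.array([[1, 8/7, 8/3],
              [7/8, 1, 7/3],
              [3/8, 3/7, 1]])

# AHP weights: eigenvector of the largest eigenvalue, plus consistency check
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals[k].real
w = np.abs(eigvecs[:, k].real)
w = w / np.linalg.norm(w)                  # (0.7243, 0.6338, 0.2716)
CI = (lam_max - 3) / (3 - 1)
CR = CI / 0.58                              # RI = 0.58 for n = 3
assert CR < 0.1, "P-C matrix is not consistent enough"

def threat_levels(c):
    """c: (num_targets, 3) array of [distance, velocity, heading] per target.
    Returns the integrated quantized value P_i of Eq. (3)."""
    cmin, cmax = c.min(axis=0), c.max(axis=0)
    mu = np.empty_like(c, dtype=float)
    mu[:, [0, 2]] = 1 + cmin[[0, 2]] / cmax[[0, 2]] - c[:, [0, 2]] / cmax[[0, 2]]  # Eq. (1)
    mu[:, 1] = c[:, 1] / cmax[1]                                                    # Eq. (2)
    return mu @ w

# hypothetical factor values for four targets: [distance m, speed m/s, heading rad]
c = np.array([[800., 15., 0.3], [1500., 9., 1.2], [400., 20., 0.1], [2500., 5., 2.0]])
print(threat_levels(c))
```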

2.2 Sensor Management Model

The sensor management method is based on the assumption that each sensor can track only one target at a time. The parameters of sensor i are x_{s,i} = [x_s, y_s, Ω_i]^T, i.e., its position and detection range. The key of sensor management for multi-target tracking is to determine the strategy U_k^* at each time k so that every target in the set T is tracked, where T is the set of targets whose threat degree exceeds the threshold, U^* is the collection of assignment variables u_k^{i,j} over sensors i ∈ (1, n_s) and targets j ∈ (1, n_k), and u_k^{i,j} = 1 when sensor i tracks target j at time k. The steps to determine U_{k+1}^* at time k are as follows (a sketch of this matching procedure is given after the steps):

Step 1: Calculate the threat degree of each target at time k; for target j, if th_j > δ, then t_j ∈ T, where T is the set of targets that should be tracked.
Step 2: For each sensor i, choose the target j ∈ T with the highest threat level within its detection range and set u_{k+1}^{i,j} = 1; choosing a target for each sensor yields the candidate strategy U'_{k+1}.
Step 3: If, for every target t_j ∈ T, \sum_{w=1}^{n_s} u_{k+1}^{w,j} ≥ 1, let U_{k+1} = U'_{k+1} and terminate the algorithm; otherwise, go to Step 4.
Step 4: For each target j with t_j ∈ T and \sum_{w=1}^{n_s} u_{k+1}^{w,j} = 0, let the nearest sensor w track target j, i.e., u_{k+1}^{w,j} = 1; then return to Step 3.
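A minimal sketch of the matching steps above, assuming a hypothetical threat threshold delta and Euclidean distances; it is illustrative rather than the authors' exact implementation.

```python
import numpy as np

def match_sensors(sensor_pos, sensor_range, target_pos, threat, delta):
    """Greedy threat-based sensor-target matching (Steps 1-4).
    Returns u[i, j] = 1 if sensor i tracks target j."""
    ns, nt = len(sensor_pos), len(target_pos)
    u = np.zeros((ns, nt), dtype=int)
    T = [j for j in range(nt) if threat[j] > delta]              # Step 1

    dist = np.linalg.norm(sensor_pos[:, None, :] - target_pos[None, :, :], axis=2)
    for i in range(ns):                                          # Step 2
        reachable = [j for j in T if dist[i, j] <= sensor_range[i]]
        if reachable:
            j_best = max(reachable, key=lambda j: threat[j])
            u[i, j_best] = 1

    for j in T:                                                  # Steps 3-4
        if u[:, j].sum() == 0:
            u[np.argmin(dist[:, j]), j] = 1                      # nearest sensor takes it
    return u

# hypothetical scene: 4 sensors, 4 targets
sensors = np.array([[1000., 1000.], [-1000., 750.], [-1000., 1500.], [200., 1200.]])
ranges = np.full(4, 1000.0)
targets = np.array([[500., 900.], [-800., 1100.], [0., 300.], [-1200., 1600.]])
threats = np.array([0.8, 0.6, 0.9, 0.4])
print(match_sensors(sensors, ranges, targets, threats, delta=0.5))
```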


Based on the description above, the sensor management procedure is shown in Fig. 2.

[Fig. 2 flowchart: start → predict the target state X_k at time k → calculate the threat levels to obtain the set T → randomly choose the initial sensor → target-sensor matching to obtain U_{k+1} → multi-target update with the measurement Z_k and U_{k+1} → end]

Fig. 2. Flow of the proposed sensor selection algorithm

3 Numerical Studies

In this section, we demonstrate a simulation of 4 sensors tracking 4 targets using the threat-based sensor selection method proposed in Sect. 2, with the target tracking implemented by the SMC-CBMeMBer filter.

3.1 System Setup

The monitoring center is located at (0, 1000). The initial information of the sensors is shown in Table 2.

Table 2. Information of sensors

Sensor i   Position (m)     Detection range (m)
1          (1000, 1000)     1000
2          (−1000, 750)     1000
3          (−1000, 1500)    1000
4          (200, 1200)      1000


The SMC-CBMeMBer filter is used for target tracking with the following nonlinear dynamical and measurement models. A nearly constant turn model with varying turn rate, together with noisy bearing and range measurements, is considered. The target state variable x_k = [\tilde{x}_k^T, ω_k]^T comprises the planar position and velocity \tilde{x}_k = [x_k, \dot{x}_k, y_k, \dot{y}_k]^T and the turn rate ω_k. The state transition model is

\tilde{x}_k = F(\omega_{k-1}) \tilde{x}_{k-1} + G w_{k-1}    (4)
\omega_k = \omega_{k-1} + \Delta v_{k-1}    (5)

where w_{k−1} ∼ N(·; 0, σ_w^2 I), v_{k−1} ∼ N(·; 0, σ_v^2 I), Δ = 1 s, σ_w = 15 m/s², and σ_v = π/180 rad/s. The observation is given by

z_k = \begin{pmatrix} \arctan(x_k / y_k) \\ \sqrt{x_k^2 + y_k^2} \end{pmatrix} + \varepsilon_k    (6)

where ε_k ∼ N(·; 0, R_k), R_k = diag([σ_θ^2, σ_r^2]^T), σ_θ = (π/180) rad, and σ_r = 5 m.
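A short sketch of the dynamic and measurement models in (4)-(6); the transition matrix F(ω) and the noise input matrix G below are the usual coordinated-turn forms and are assumptions, since the paper does not print them explicitly.

```python
import numpy as np

DT, SIGMA_W, SIGMA_V = 1.0, 15.0, np.pi / 180     # parameters quoted in the text

def F(w, dt=DT):
    """Standard coordinated-turn transition matrix for [x, xdot, y, ydot]."""
    if abs(w) < 1e-9:
        return np.array([[1, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, dt], [0, 0, 0, 1.0]])
    s, c = np.sin(w * dt), np.cos(w * dt)
    return np.array([[1, s / w,       0, -(1 - c) / w],
                     [0, c,           0, -s],
                     [0, (1 - c) / w, 1, s / w],
                     [0, s,           0, c]])

G = np.array([[DT**2 / 2, 0], [DT, 0], [0, DT**2 / 2], [0, DT]])

def step(x, w, rng):
    """One step of Eqs. (4)-(5)."""
    x_new = F(w) @ x + G @ rng.normal(0, SIGMA_W, 2)
    w_new = w + DT * rng.normal(0, SIGMA_V)
    return x_new, w_new

def measure(x, rng, sigma_theta=np.pi / 180, sigma_r=5.0):
    """Bearing/range measurement of Eq. (6)."""
    bearing = np.arctan2(x[0], x[2]) + rng.normal(0, sigma_theta)
    range_meas = np.hypot(x[0], x[2]) + rng.normal(0, sigma_r)
    return np.array([bearing, range_meas])

rng = np.random.default_rng(0)
x, w = np.array([200.0, 5.0, 800.0, -3.0]), 0.02
for _ in range(5):
    x, w = step(x, w, rng)
    print(measure(x, rng))
```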

3.2 Simulation Results

The distance factor is C1 = \sqrt{x_k^2 + (y_k - 1000)^2}, the velocity factor is C2 = \sqrt{\dot{x}_k^2 + \dot{y}_k^2}, and the heading is the angle between the velocity vector and the distance vector. The tracking time is 100Δ. The target tracks, sensors and monitoring center are shown in Fig. 3. The target survival and birth times are shown in Fig. 4, together with the estimates and measurements, and the target-sensor matching is shown in Fig. 5.

Fig. 3. Target trajectories and the sensors and monitoring center. Start/stop positions for each track are shown with ◦/.

The target-sensor matching is shown in Fig. 5, where target 0 means that the sensor does not track any target at that time. From the simulation results, the proposed method can effectively match the sensors with targets based on the target threat degree.


Fig. 4. Measurement, true tracks and SMC-CBMeMBer filter estimates.

Fig. 5. Target-Sensor matching. Showing 4 sensors track the exact target at each observing time, where target 0 means not tracking any target.

4 Conclusion

In this paper, we proposed a threat-quantification method and a threat-level-based sensor-selection algorithm for multi-target tracking. The AHP is used to evaluate the threat degree of a target from its distance, speed and heading. Based on the threat degrees of the targets, a target-sensor matching algorithm is proposed for sensor management. Numerical studies demonstrate the implementation of the proposed algorithm. Future work will involve more adaptive methods to quantify the threat degree, as well as moving-sensor control.

Acknowledgements. This work was supported by the National Natural Science Foundation of China (61731006, 61671138), and was partly supported by the 111 Project No. B17008.

References 1. Ristic B, Vo BN (2010) Sensor control for multi-object state-space estimation using random finite sets. Automatica 46(11):1812–1818 2. Ristic B, Vo BN, Clark D (2011) A note on the reward function for PHD filters with sensor control. IEEE Trans Aerosp Electron Syst 47(2):1521–1529 3. Hoang HG, Vo BN, Vo BT, Mahler R (2015) The Cauchy-Schwarz divergence for poisson point processes. IEEE Trans Inf Theory 61(8):4475–4485 4. Xiong N, Svensson P (2002) Multi-sensor management for information fusion: issues and approaches. Inf Fusion 3(2):163–186 5. Mark I, Kolba P, Collins LM (2007) Information-based sensor management in the presence of uncertainty. IEEE Trans Sig Process 55:2731–2735 6. Tharmarasa R, Kirubarajan T, Hernandez ML, Sinha A (2007) PCRLB-based multi-sensor array management for multitarget tracking. IEEE Trans Aerosp Electron Syst 43(2):539–555 7. Yan T, Han CZ (2017) Sensor management for multi-target detection and tracking based on PCRLB. In: 20th IEEE International conference on information fusion. IEEE Press, Piscataway, NJ, pp 136–141 8. Gostar AK, Hoseinnezhad R, Bab-Hadiashar A (2016) Multi-Bernoulli sensorselection for multi-target tracking with unknown clutter and detection profiles. Sig Process 119:28–42 9. Saurav S, Yimin DZ, Moeness GA (2018) Cramer-Rao type bounds for sparsityaware multi-sensor multi-target tracking. Sig Process 145(1):68–77 10. Wang XY, Hoseinnezhad R, Gostar A et al (2018) Multi-sensor control for multiobject Bayes filters. Sig Process 142:260–270

Research on Measurement Matrix Based on Compressed Sensing Theory
Zhihong Wang, Hai Wang, Guiling Sun, and Yi Xu
College of Electronic Information and Optical Engineering, Nankai University, Tianjin 300350, China
[email protected]

Abstract. The basic theories of compressed sensing and the measurement matrix are reviewed first, and then the incoherence condition, an equivalent of the Null Space Property and the Restricted Isometry Property for the measurement matrix, is introduced together with its theory and mathematical proof. On this basis, the construction methods and properties of several commonly used measurement matrices (the random Gaussian matrix, the random Bernoulli matrix, and the Toeplitz matrix) are introduced, and time-domain sparse signals are used for simulation analysis. Simulation results show that sparse signals can be reconstructed when the measurement dimension M satisfies certain conditions. Considering the hardware implementation and the storage space of the matrix, and borrowing the idea of the circulant matrix, this paper proposes a pseudo-random Bernoulli matrix. The simulation results show that the proposed matrix can reconstruct sparse signals, is hardware-friendly, and requires little storage space.

Keywords: Compressed sensing · Measurement matrix · Incoherence

In the traditional signal sampling method, in order to reconstruct the signal without distortion, the famous Nyquist sampling theorem [1] requires the sampling frequency to be no less than twice the highest frequency of the signal. In 2006, D. L. Donoho et al. formally proposed the concept of compressed sensing (CS) [2] in the journal IEEE Transactions on Information Theory. Compressed sensing theory breaks the bottleneck of the traditional sampling theorem with respect to sampling frequency. In CS theory, based on the sparsity or compressibility of the signal, data acquisition and compression are combined so that the original signal can be reconstructed with high probability from measurements whose dimension is far smaller than that of the original signal. Compressed sensing combines the sampling and compression processes to remove data redundancy, using non-adaptive linear projections and incoherent measurements to reconstruct the signal with high probability [3–7]. For a compressible signal, a measurement matrix of size M × N multiplies the N-dimensional signal to reduce it from high dimension to low dimension and to compress the signal information into a few larger components. Therefore, whether the measurement matrix structure is reasonable and whether it is hardware-friendly will


directly affect whether the data measurement and reconstruction can be implemented in hardware and applied to the actual project.

1 Coherence Condition

The premise of compressed sensing theory is that the signal is sparse or compressible, but the measurement matrix determines whether the signal can be compressed to the maximum extent and whether it can be reconstructed at the receiver. The measurement matrix reduces the dimension of the signal and compresses the information; only when the measurement vector contains enough information can the original signal be reconstructed with high probability at the receiver. Therefore, the measurement matrix should satisfy the Null Space Property (NSP) [8] and the Restricted Isometry Property (RIP) [9–11]. In practical applications it is difficult to prove that a matrix satisfies the NSP and RIP conditions, so an equivalent condition, the incoherence condition, is used. The incoherence condition requires that the row vectors {φ_i} of the measurement matrix Φ cannot be represented by the column vectors {ψ_j} of the sparse basis matrix Ψ, and vice versa. The incoherence condition can be measured by the degree of coherence, which is defined as follows [11]:

Definition 1. Suppose Φ = {φ_i}_1^N is the measurement matrix and Ψ = {ψ_j}_1^N is the sparse basis matrix. Their degree of coherence is defined as

\mu(\Phi, \Psi) = \sqrt{N} \max_{1 \le i, j \le N} |\langle \varphi_i, \psi_j \rangle|    (1)

The degree of coherence shows the correlation of {φ_i} and {ψ_j}. Since there must be a j satisfying |⟨φ_k, ψ_j⟩| ≥ 1/\sqrt{N}, we get μ(Φ, Ψ) ≥ 1. For any column vector ψ_j of Ψ,

\sum_{k=1}^{N} |\langle \varphi_k, \psi_j \rangle|^2 = \|\psi_j\|_2^2 = 1    (2)

so that μ(Φ, Ψ) ≤ \sqrt{N}. The range of the degree of coherence is therefore μ(Φ, Ψ) ∈ [1, \sqrt{N}], and Φ and Ψ are maximally incoherent when μ(Φ, Ψ) = 1. Theory shows that the smaller the coherence between the measurement matrix and the sparse basis matrix, the larger the achievable compression ratio and, under the same conditions, the higher the reconstruction accuracy. Pairs of matrices with low coherence that are currently suitable for CS are:


(1) The sparse basis matrix is the Fourier matrix Ψ_j(t) = (1/\sqrt{N}) e^{i 2\pi j t / N}, and the measurement matrix is the pulse basis Φ_k(t) = δ(t − k). In this case μ(Φ, Ψ) = 1, so the original signal can be compressed and reconstructed successfully.
(2) The sparse basis matrix Ψ is built from a wavelet basis and the measurement matrix Φ comes from noiselets.
(3) The coherence between noiselets and wavelet bases is very small: for example, the coherence between noiselets and the Haar wavelet is \sqrt{2}, between noiselets and the Daubechies D4 wavelet it is about 2.2, while between noiselets and the Fourier or pulse basis it is 1.
(4) When the sparse basis Ψ is any deterministic basis and the measurement matrix Φ is a randomly generated matrix, their coherence is about \sqrt{2 \log N}. In previous simulation tests, the measurement matrix is usually a random Gaussian matrix and the sparse basis is a deterministic basis.

The degree of coherence not only determines whether the original signal can be successfully reconstructed, but also affects the compression ratio of the measurement matrix, as shown in Theorem 1 [11].

Theorem 1. Suppose the sparse coefficient vector a of an N-dimensional signal x ∈ R^N under the basis Ψ is K-sparse. When M satisfies

M \ge C \mu^2(\Phi, \Psi) K \log N    (3)

then a can be reconstructed with high probability by solving the optimization problem min ||s||_1 s.t. Θs = y, so that the original signal x can be obtained, where C is a constant. It can be seen from Eq. (3) that a smaller μ(Φ, Ψ), i.e., lower coherence between Φ and Ψ, reduces the lower limit on the number of measurements M. With a smaller M the compression ratio of the signal is higher, which helps reduce the sampling frequency and ease the pressure on storage and transmission circuits.
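As an illustration of Definition 1 and the bound (3), the following sketch computes the coherence between a random Gaussian measurement matrix and the DCT basis; the choice of bases and the constant C = 1 are assumptions made only for demonstration.

```python
import numpy as np
from scipy.fft import dct

def coherence(Phi, Psi):
    """mu(Phi, Psi) = sqrt(N) * max |<phi_i, psi_j>| with unit-norm rows/columns."""
    N = Psi.shape[0]
    Phi_n = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)      # normalize rows
    Psi_n = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)      # normalize columns
    return np.sqrt(N) * np.max(np.abs(Phi_n @ Psi_n))

N, K = 256, 8
rng = np.random.default_rng(0)
Phi = rng.standard_normal((N, N))                 # random measurement directions
Psi = dct(np.eye(N), norm="ortho", axis=0)        # orthonormal DCT sparse basis
mu = coherence(Phi, Psi)
M_min = mu**2 * K * np.log(N)                     # bound (3) with C = 1 (assumed)
print(f"mu = {mu:.2f}, suggested M >= {M_min:.0f}")
```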

2 Common Measurement Matrices

(1) Random Gaussian matrix
The random Gaussian matrix is the most commonly used matrix in simulations. The construction is to randomly generate an M × N matrix whose entries are independent and identically distributed Gaussian variables with mean 0 and variance 1/\sqrt{M}:

\varphi_{i,j} \sim N\!\left(0, \frac{1}{\sqrt{M}}\right)    (4)

where 1 ≤ i ≤ M and 1 ≤ j ≤ N. The generated random Gaussian matrix satisfies the RIP condition [7, 12, 13]. When the row dimension of the measurement matrix


satisfies the condition M ≥ cK \log(N/K), where c is a small constant and K is the signal sparsity, the original signal can be reconstructed with high probability by a suitable reconstruction algorithm [14].

(2) Random Bernoulli matrix
The random Bernoulli matrix is generated similarly to the Gaussian matrix: an M × N matrix is generated whose entries independently obey the Bernoulli distribution

\varphi_{i,j} = \begin{cases} +\frac{1}{\sqrt{M}} & p = 1/2 \\ -\frac{1}{\sqrt{M}} & p = 1/2 \end{cases}    (5)

where p is the probability of each of the two values. The generation of the Bernoulli matrix is very similar to that of the Gaussian matrix; the literature [12] proves that it satisfies the RIP condition and that, like the Gaussian matrix, the original signal can be reconstructed with high probability when M ≥ cK \log(N/K).

(3) Toeplitz matrix and circulant matrix
The Toeplitz matrix and the circulant matrix have the forms

T = \begin{pmatrix} t_N & t_{N-1} & \cdots & t_1 \\ t_{N+1} & t_N & \cdots & t_2 \\ \vdots & \vdots & \ddots & \vdots \\ t_{N+M-1} & t_{N+M-2} & \cdots & t_M \end{pmatrix}, \quad
C = \begin{pmatrix} c_N & c_{N-1} & \cdots & c_1 \\ c_1 & c_N & \cdots & c_2 \\ \vdots & \vdots & \ddots & \vdots \\ c_{M-1} & c_{M-2} & \cdots & c_M \end{pmatrix}    (6)

It can be seen from Eq. (6) that in the Toeplitz matrix and the circulant matrix the elements on the same diagonal are equal, i.e. t_{i,j} = t_{i+1,j+1} and c_{i,j} = c_{i+1,j+1}, and that the Toeplitz matrix becomes a circulant matrix when t_i = t_{N+i}. The construction is as follows: first generate a random vector t = (t_1, t_2, ..., t_N), and then shift the generated vector row by row according to the matrix to be constructed. A Toeplitz matrix requires new elements to be appended, while a circulant matrix simply circulates the vector. Finally, the columns of the generated matrix are normalized to obtain a Toeplitz or circulant measurement matrix. This construction closely resembles a shift register in hardware, so these matrices are hardware-friendly.
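A sketch of the three constructions described above (Gaussian, Bernoulli, Toeplitz); the variance conventions follow Eqs. (4)-(5) and the column normalization follows the construction just described.

```python
import numpy as np
from scipy.linalg import toeplitz

def gaussian_matrix(M, N, rng):
    # i.i.d. zero-mean Gaussian entries, Eq. (4)
    return rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

def bernoulli_matrix(M, N, rng):
    # entries +-1/sqrt(M) with equal probability, Eq. (5)
    return rng.choice([1.0, -1.0], size=(M, N)) / np.sqrt(M)

def toeplitz_matrix(M, N, rng):
    # random generating vector, Toeplitz structure of Eq. (6), column-normalized
    t = rng.standard_normal(N + M - 1)
    T = toeplitz(t[N - 1:N + M - 1], t[N - 1::-1])   # M x N Toeplitz block
    return T / np.linalg.norm(T, axis=0, keepdims=True)

rng = np.random.default_rng(0)
for f in (gaussian_matrix, bernoulli_matrix, toeplitz_matrix):
    Phi = f(64, 256, rng)
    print(f.__name__, Phi.shape)
```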


Fig. 1. Reconstruction success rate of three common measurement matrices

3 Analysis of Common Measurement Matrix Performance

(1) Analysis of exact reconstruction success rate
Consider a signal of length N = 256 with sparsity level K ∈ {4, 8, 16, 32}. The number of measurements M ranges over [16, 256], increasing from 16 in steps of 8. To eliminate contingency, each test is repeated 100 times and the results are averaged. The exact reconstruction success rates of the random Gaussian matrix, the random Bernoulli matrix and the Toeplitz matrix are shown in Fig. 1. It can be seen from Fig. 1 that, for the same sparsity level K, the larger the number of measurements M, the more signals can be reconstructed accurately; and for the same M, the larger K is, the fewer signals can be reconstructed accurately. For the random Gaussian and random Bernoulli measurement matrices, the original signal can be reconstructed accurately when M ≥ cK \log(N/K). For the Toeplitz measurement matrix, the original signal can be reconstructed accurately when M is a c \log(N/e) multiple of the signal sparsity level (c and e are fixed constants, with e < 1).

(2) Analysis of signal reconstruction error
Given a time-domain sparse signal of length N = 512, the random Gaussian matrix, random Bernoulli matrix, Toeplitz and circulant matrices are used as measurement matrices, respectively. To eliminate contingency, each test is repeated 100 times and the results are averaged. The relative error, defined in Eq. (7), is used to measure the reconstruction error, and the reconstruction waveforms of several common matrices are shown in Fig. 2.

\eta = \frac{\|\hat{x} - x\|_2}{\|x\|_2} = \sqrt{\frac{\sum_{n=0}^{N-1} (\hat{x}_n - x_n)^2}{\sum_{n=0}^{N-1} x_n^2}}    (7)

It can be seen from Fig. 2 that for time-domain sparse signals, random Gaussian measurement matrix, random Bernoulli measurement matrix, Toeplitz and cyclic measurement matrix can reconstruct signal successfully. The reconstruction error is shown in Table 1, which can be seen that for the same signal, the relative error of the Toeplitz measurement matrix is smaller when the number of measurements is fixed.


Fig. 2. Results for time-domain sparse signals by several common measurement matrices

Table 1. Relative error of three common measurement matrices over different M

Measurement matrix                     M = 0.3N   M = 0.5N   M = 0.8N
Random Gaussian measurement matrix     1.0306     0.1815     6.0657e−16
Random Bernoulli measurement matrix    0.9807     0.1800     5.1359e−16
Toeplitz measurement matrix            0.9622     0.1797     4.4542e−16

4 Improved Bernoulli Matrix

The entries of the random Bernoulli measurement matrix follow the Bernoulli distribution of Eq. (5). Considering hardware implementation, Eq. (5) is modified here by changing the entry values to 1 and −1:

\varphi_{i,j} = \begin{cases} +1 & p = 1/2 \\ -1 & p = 1/2 \end{cases}    (8)

In a hardware circuit, a sequence consisting entirely of 1 and −1 can be generated by a shift register. A vector whose elements are 1 and −1 following the Bernoulli distribution is generated first, and the idea of the circulant matrix is then used to construct the remaining rows, giving a pseudo-random Bernoulli matrix whose elements obey the Bernoulli distribution and take only the values 1 and −1. The advantages of this matrix are that the ±1 values make it convenient for hardware implementation and that the circulant structure reduces the storage space required for the matrix. The pseudo-random Bernoulli matrix is simulated and analyzed, and its reconstruction success rate is shown in Fig. 3. It can be seen from Fig. 3 that the pseudo-random Bernoulli circulant matrix, which is easier to implement in hardware, can successfully reconstruct the signal; the relative reconstruction error is shown in Table 2.
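A sketch of the proposed construction: one ±1 Bernoulli seed row, circularly shifted to fill the remaining rows. Only the seed vector needs to be stored, which is the storage advantage mentioned above.

```python
import numpy as np

def pseudo_random_bernoulli_circulant(M, N, rng):
    """M x N measurement matrix whose rows are circular shifts of one +-1 seed row."""
    seed_row = rng.choice([1.0, -1.0], size=N)          # single +-1 Bernoulli vector
    Phi = np.vstack([np.roll(seed_row, i) for i in range(M)])
    return Phi

rng = np.random.default_rng(0)
Phi = pseudo_random_bernoulli_circulant(64, 256, rng)
print(Phi.shape, np.unique(Phi))                        # (64, 256) [-1.  1.]
```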

Fig. 3. Reconstruction success rate with pseudo-random Bernoulli matrix

Table 2. Relative error of three common measurement matrices and the pseudo-random Bernoulli circulant matrix

Matrix                              M = 0.3N  M = 0.4N  M = 0.5N     M = 0.6N     M = 0.7N     M = 0.8N
Gaussian                            1.0195    0.4221    0.1757       0.1771       6.2680e−16   6.3777e−16
Bernoulli                           0.8409    0.4464    0.1800       5.2044e−16   5.6501e−16   5.3934e−16
Toeplitz                            1.0907    0.3677    0.1797       4.8584e−16   5.1987e−16   4.9220e−16
Pseudo-random Bernoulli circulant   0.4923    0.3201    4.2638e−16   4.8074e−16   4.9503e−16   4.2746e−16


5 Conclusion

In this paper, the incoherence of the measurement matrix, an equivalent condition of the NSP and RIP, is introduced. On this basis, the properties and construction methods of several commonly used measurement matrices are described, and simulation tests on time-domain sparse signals are conducted with the Orthogonal Matching Pursuit (OMP) [15] reconstruction algorithm. The simulation results show that these common matrices achieve complete signal reconstruction when the number of measurements M satisfies certain conditions. Starting from the random Bernoulli matrix, and considering hardware-implementation friendliness and the idea of the circulant matrix, a pseudo-random Bernoulli measurement matrix is proposed. Its main advantages are that it can be implemented easily in hardware and requires little storage space. Simulation results show that the pseudo-random Bernoulli measurement matrix achieves complete reconstruction of the signal.

References 1. Hajela D (1990) On computing the minimum distance for faster-than-Nyquist signaling. IEEE Trans Inf Theory 36(2):289–295 2. Donoho DL (2006) Compressed sensing. IEEE Trans Inf Theory 52(4):1289–1306 3. Baraniuk RG (2007) Compressive sensing [lecture notes]. Sig Process Mag IEEE 24:118– 121 4. Candes EJ, Romberg J (2006) Quantitative robust uncertainty principles and optimally sparse decompositions. Found Comput Math 6:227–254 5. Candès EJ, Romberg J, Tao T (2006) Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. Inf Theory, IEEE Trans On 52:489–509 6. Candes EJ, Romberg JK, Tao T (2006) Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math 59:1207–1223 7. Candes EJ, Tao T (2006) Near-optimal signal recovery from random projections: universal encoding strategies. IEEE Trans Inf Theory 52:5406–5425 8. Cohen A, Dahmen W, DeVore R (2009) Compressed sensing and best k-term approximation. J Amer Math Soc 22:211–231 9. Candès EJ (2006) Compressive sampling. In: Proceedings of the international congress of mathematicians. pp 1433–1452 10. Sha W (2008) Introduction to compression sensing, University of Hong Kong 11. Donoho DL, Huo X (2001) Uncertainty principles and ideal atomic decomposition. IEEE Trans Inf Theory 47(7):2845–2862 12. Baraniuk R, Davenport M, DeVore R et al (2008) A simple proof of the restricted isometry property for random matrices. Constr Approx 28:253–263 13. Christensen O (2003) An introduction to frames and riesz bases. Birkhauser, Boston, Denmark 14. Wojtaszczyk P (2010) Stability and instance optimality for gaussian measurements in compressed sensing. Found Comput Math 10:1–13 15. Tropp J, Gilbert A (2007) Signal recovery from random measurements via orthogonal matching pursuit. Trans Inf Theory 53(12):4655–4666

PID Control of Electron Beam Evaporation System Based on Improved Genetic Algorithm
Wenwu Zhu
Department of Electrical Engineering, Anhui Technical College of Mechanical and Electrical Engineering, Wuhu, Anhui 241000, China
[email protected]

Abstract. According to the deposition rate collected by the crystal probe of the electron beam evaporation system, the output voltage of the crystal film-thickness controller is used to control the voltage across the filament of the electron gun, so as to adjust the output power of the electron beam. According to the change of deposition rate, closed-loop PID feedback control tuned by a genetic algorithm is used to change the filament voltage and stabilize the deposition rate. Experiments show that, with this rate-control algorithm for the electron beam evaporation process, the surface of the SiO2 film produced in the experimental coatings maintains a certain smoothness and a relatively stable evaporation rate is obtained. Other plating systems and coatings can be accommodated by adjusting the parameters of the algorithm, which can greatly improve the rate-control effect.

Keywords: Genetic algorithm · PID · Electron beam evaporation system

1 Introduction Electron beam evaporation coating is a widely used coating technology at present. It relies on the electron beam of electron gun to directly bombard the surface of heating material for evaporation coating, which has high thermal efficiency. According to the deposition rate collected by the crystal probe of the electron beam evaporation system, the output voltage of the crystal film thickness controller is used to control the voltage at both ends of the filament of the electron gun. According to the variation of the deposition rate, the filament voltage, the electron beam current and the output power are controlled by PID closed loop feedback to stabilize the deposition rate.

Foundation Project: Natural Science Research Project of Anhui University (KJ2015A385); 2016 Anhui Quality Project Foundation Project: Anhui University Teaching Research Project (2016jyxm0196).


2 System Mathematical Model

The electron beam evaporation system is a complicated control system. It is known to behave as a first-order inertial element with pure delay, which can serve as the basis for modeling the automatic control system. The deposition rate transfer function is [1]:

G_p(s) = \frac{K e^{-\tau s}}{T s + 1}    (1)

In electron beam evaporation coating, the output power, beam intensity, pre-melting power, pre-melting holding time and coating time are selected as parameters according to the equipment conditions and coating materials. A ZZ-800 electron beam evaporation coating machine was used to test the thickness of the SiO2 coating film. Many experiments were carried out under the same conditions: vacuum chamber background pressure 5.0 × 10⁻³ Pa, electron beam current 60 mA, evaporation time 30 s, and 60 cm between the evaporation crucible and the substrate. The step response curve method was used to determine the parameters K, T and τ of the deposition rate transfer function under these experimental conditions. With the system operating manually and balanced at a given value, a step signal is applied suddenly and the corresponding change of the output is recorded; this is the step response curve of the output. When testing the step response curve, the open-loop electron beam evaporation system is manually adjusted to run stably at around 1/3 of the output power used in the actual working condition and is then quickly adjusted to the set output power, i.e., a step input is applied, so that the output of the controlled object changes and finally settles at a value. Using this method, the step response curves of the deposition rate transfer function are obtained from the average of data recorded in many experiments under the same conditions. The three parameters K = 4, T = 12 and τ = 0.425 are then obtained graphically. Therefore, the mathematical model of the deposition rate transfer function of SiO2 thin films deposited by the electron beam evaporation system under the experimental conditions is:

G_p(s) = \frac{4 e^{-0.425 s}}{12 s + 1}    (2)

3 Tuning of Common PID Parameter Optimization Methods

PID control is widely used in industrial production, and the optimization of PID parameters is very important: with well-optimized parameters the whole control system can run stably and efficiently. There are many methods to optimize PID parameters; here the Ziegler-Nichols empirical formula, the CHR algorithm, the Cohen-Coon method and the ISTE optimal parameter tuning method are selected for comparison [2]. According to the characteristic parameters


K, T and τ of the mathematical model, the parameters KP, KI and KD of the corresponding PID controller can be calculated. The Z-N empirical formula method is an open-loop tuning method based on the step response of the controlled object. In practice the traditional Z-N algorithm has been continually improved; the so-called Chien-Hrones-Reswick (CHR) algorithm is one such variant. Cohen and Coon proposed a further refinement whose main principle is to place the dominant poles of the system so that the transition curve of the object decays at a 4:1 attenuation ratio, giving the optimal PID settings. The ISTE optimal tuning rule yields a system that, under the optimal criterion, suppresses large errors well and also restrains the deviation during the transition. The four PID parameter setting formulas are listed in Table 1.

Table 1. Four formulas for setting PID parameters

Setting method          Kp                          Ti                              Td
Z-N empirical formula   1.2T/(Kτ)                   2τ                              0.5τ
CHR                     0.6T/(Kτ)                   T                               0.5τ
Cohen-Coon              (T/(Kτ))(4/3 + τ/(4T))      τ(32 + 6τ/T)/(13 + 8τ/T)        4τ/(11 + 2τ/T)
ISTE optimal            (1.042/K)(τ/T)^(−0.897)     T/(0.987 − 0.238(τ/T))          0.385T(τ/T)^0.906

According to the above formulas, the PID parameters given by the various tuning methods are calculated as shown in Table 2.

Table 2. Four kinds of PID parameter tuning results

Setting method          Kp       Ki       Kd
Z-N empirical formula   8.4706   9.9654   1.8000
CHR                     4.2353   0.3529   0.9000
Cohen-Coon              9.5902   9.1562   1.4980
ISTE optimal            5.2140   0.4252   1.1679
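The conversion from the (Kp, Ti, Td) formulas of Table 1 to the (Kp, Ki, Kd) values of Table 2 (Ki = Kp/Ti, Kd = Kp·Td) can be sketched as follows for the identified model K = 4, T = 12, τ = 0.425; the resulting values agree with Table 2 up to small rounding differences in the Cohen-Coon row.

```python
# PID tuning formulas of Table 1 applied to G_p(s) = K*exp(-tau*s)/(T*s + 1)
K, T, tau = 4.0, 12.0, 0.425

rules = {
    "Z-N":        (1.2 * T / (K * tau), 2 * tau, 0.5 * tau),
    "CHR":        (0.6 * T / (K * tau), T, 0.5 * tau),
    "Cohen-Coon": ((T / (K * tau)) * (4 / 3 + tau / (4 * T)),
                   tau * (32 + 6 * tau / T) / (13 + 8 * tau / T),
                   4 * tau / (11 + 2 * tau / T)),
    "ISTE":       ((1.042 / K) * (tau / T) ** -0.897,
                   T / (0.987 - 0.238 * (tau / T)),
                   0.385 * T * (tau / T) ** 0.906),
}

for name, (Kp, Ti, Td) in rules.items():
    Ki, Kd = Kp / Ti, Kp * Td          # parallel-form gains, as in Table 2
    print(f"{name:11s} Kp={Kp:.4f}  Ki={Ki:.4f}  Kd={Kd:.4f}")
```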

Using MATLAB to simulate the PID parameters calculated by the above four methods, the transient-process response curves shown in Fig. 1 are obtained: Fig. 1a is the Z-N empirical formula method, Fig. 1b the CHR method, Fig. 1c the C-C method, and Fig. 1d the ISTE optimal method. Of the four methods, the Z-N empirical formula method and the C-C method both show large overshoot and long settling times, while the CHR and ISTE optimal methods have no overshoot but settle slowly, taking more than 10 s. It can be seen that none of the four PID parameter tuning methods achieves a satisfactory control result.

Fig. 1. Transition process response simulation curve

4 Genetic Algorithms to Optimize the Setting of PID Parameters

The genetic algorithm (GA) is an optimization algorithm: a directed random search in the coding space of the optimization problem. Its implicit parallelism and intrinsic robustness are what chiefly distinguish it from traditional optimization search algorithms, and it is simple and well suited to parallel processing. Here the genetic algorithm is used to optimize the parameters of the PID controller, combining the traditional Ziegler-Nichols algorithm with the GA: the parameters KP, KI and KD obtained by Ziegler-Nichols tuning are used as the starting point, the integral time constant is increased appropriately and the differential time constant is adjusted appropriately, and the PID parameters are then optimized and tuned. The specific steps are as follows:

(1) Selection of the initial population. Because the genetic algorithm cannot operate directly on the parameters, the parameters must first be encoded: the range of each parameter is determined, and the coding string is chosen according to the required system accuracy. Each binary string represents one parameter, and the concatenated strings form the operational objects of the algorithm, i.e. the initial population.

(2) Selection of the fitness function. In order to obtain satisfactory dynamic performance, Eq. (3) is used as the optimization objective function [3]:

J = \int_0^{\infty} \left( w_1 |e(t)| + w_2 u^2(t) \right) dt + w_3 t_u    (3)

where e(t) is the system error, u(t) is the controller output, t_u is the rise time, and w1, w2, w3 are weights. The fitness function is chosen as F = 1/J.

(3) Genetic manipulation. Different operators are applied for different operations: through selection, crossover and mutation, new individuals are generated, good individuals are retained, and the global optimum is finally found. The sample size is 30, Pc = 0.60, Pm = 0.001; the number of generations can be adjusted in the program, and new populations are generated by applying the basic genetic operators.

(4) Termination condition. The above steps are repeated until convergence, and the optimized PID parameters are output. The genetic algorithm process is shown in Fig. 2.

In the experiment, the sampling time is 30 s, the input command is a step signal, the parameter ranges are KP ∈ [0, 10], KI ∈ [0, 10] and KD ∈ [0, 2], and w1 = 0.999, w2 = 0.001, w3 = 2.0. After 100 generations of optimization, the optimized parameters KP = 9.7654, KI = 0.0098, KD = 0.3988 and the performance index J = 81.9836 are obtained. The simulation curve of the transient process response obtained with the genetic algorithm is shown in Fig. 2.
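A compact sketch of the GA tuning loop described in the steps above, using a discretized simulation of the FOPDT model (2) to evaluate the fitness F = 1/J. The real-coded chromosome, the Euler discretization and the population mechanics are simplifications of the paper's binary-coded scheme; the weights and parameter ranges follow the values quoted in the experiment.

```python
import numpy as np

K, T, TAU = 4.0, 12.0, 0.425            # FOPDT model of Eq. (2)
DT, T_END = 0.05, 30.0                   # simulation step and horizon
W1, W2, W3 = 0.999, 0.001, 2.0           # weights of Eq. (3)

def cost(gains):
    """J of Eq. (3) for a unit step response of the closed loop."""
    kp, ki, kd = gains
    n = int(T_END / DT)
    delay = int(round(TAU / DT))
    u_hist = np.zeros(n + delay)         # delayed control inputs
    y, integ, e_prev = 0.0, 0.0, 1.0
    J, t_rise = 0.0, T_END
    for step in range(n):
        e = 1.0 - y
        integ += e * DT
        u = kp * e + ki * integ + kd * (e - e_prev) / DT
        e_prev = e
        u_hist[step + delay] = u
        # first-order plant with input delay, explicit Euler
        y += DT * (-y + K * u_hist[step]) / T
        J += (W1 * abs(e) + W2 * u * u) * DT
        if y >= 0.95 and t_rise == T_END:
            t_rise = step * DT
    return J + W3 * t_rise

def ga_tune(pop_size=30, generations=100, pc=0.6, pm=0.001, seed=1):
    rng = np.random.default_rng(seed)
    lo, hi = np.array([0.0, 0.0, 0.0]), np.array([10.0, 10.0, 2.0])  # KP, KI, KD ranges
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(generations):
        fit = np.array([1.0 / cost(ind) for ind in pop])             # F = 1/J
        parents = pop[rng.choice(pop_size, pop_size, p=fit / fit.sum())]  # selection
        for i in range(0, pop_size - 1, 2):                           # arithmetic crossover
            if rng.random() < pc:
                a = rng.random()
                parents[i], parents[i + 1] = (a * parents[i] + (1 - a) * parents[i + 1],
                                              a * parents[i + 1] + (1 - a) * parents[i])
        mutate = rng.random(parents.shape) < pm                       # mutation
        parents[mutate] = rng.uniform(np.broadcast_to(lo, parents.shape),
                                      np.broadcast_to(hi, parents.shape))[mutate]
        pop = parents
    best = pop[np.argmin([cost(ind) for ind in pop])]
    return best, cost(best)

if __name__ == "__main__":
    gains, J = ga_tune(generations=20)   # fewer generations for a quick demo
    print("KP, KI, KD =", np.round(gains, 4), " J =", round(J, 4))
```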


Fig. 2. Simulation curve of genetic algorithm transition process response

5 Conclusion

According to the change of deposition rate, closed-loop PID feedback control tuned by the genetic algorithm is used to adjust the filament voltage and stabilize the deposition rate. Experiments show that, with this rate-control algorithm for the electron beam evaporation process, the surface of the SiO2 film produced in the experimental coatings maintains a certain smoothness and a relatively stable evaporation rate is obtained. Other plating systems and coatings can be accommodated by adjusting the parameters of the algorithm, which can greatly improve the rate-control effect.

References
1. Wang S (2008) Electron beam evaporation rate control. China Laser 35(10)
2. Wang W (2000) Summary of advanced tuning methods of PID parameters. J Autom 26(3)
3. Xi Y, Zeng G, Zhang J (2006) Digital PID parameter tuning based on improved genetic algorithm. J Xi'an Univ Technol 22(4)

Author Biography. Zhu Wenwu, male, born in Lujiang, Anhui Province, in November 1976; Associate Professor, Department of Electrical Engineering, Anhui Vocational and Technical College of Mechatronics. Research directions: electronic technology, intelligent control. Postal code: 241000.

Doppler Weather Radar Network Joint Observation and Reflectivity Data Mosaic
Qutie Jiela, Haijiang Wang, Jiaoyang He, and Debin Su
Chengdu University of Information Technology, Chengdu 610225, Sichuan, China
[email protected]

Abstract. This paper realizes the interpolation and three-dimensional mosaic of Doppler weather radar data. The Adaptive Barnes interpolation method is currently recognized as an effective algorithm for weather radar. The smoothing scheme of this algorithm is improved to preserve the good characteristics of the raw volume data. Compared with the commonly used interpolation methods, the improved smoothing scheme of the Adaptive Barnes interpolation yields better CAPPI data without an over-smoothing effect.

Keywords: Anisotropic filtering · Adaptive Barnes interpolation · Mosaic

1 Introduction

Using multiple Doppler radars cooperatively over the same target area can extend the coverage and improve the accuracy of CAPPI data inversion, and many scholars have reported results on radar network mosaics [1–6]. The distribution of precipitation echoes is inhomogeneous in all directions, so it is preferable to interpolate the scattered radar data onto a uniform grid at each height to facilitate the generation of derivative products. Researchers have summarized several commonly used interpolation methods, including Vertical-Horizontal Linear Interpolation (VHI), 8-point Linear Interpolation and Adaptive Barnes Interpolation. By observing and analyzing the distribution characteristics of weather radar data, two characteristics can be identified: direction affects the distribution of the radar data, and the density of radar data increases as the distance to the radar site decreases.

2 Smoothing Strategy

The Adaptive Barnes interpolation method adapts automatically to the density of the radar data and adopts a different scheme in each direction, which is called the direction-splitting characteristic. Equation (1) gives the details:

f = \frac{\sum_{k=1}^{N} w_k f_k}{\sum_{k=1}^{N} w_k}    (1)


In this equation, f_k is the value contributed to a grid point by the kth radar data point, and w_k is the weight of each data point, given by Eq. (2):

w_k = \begin{cases} \exp\!\left[ -\dfrac{(R_k - R_0)^2}{\lambda_r} - \dfrac{(\theta_k - \theta_0)^2}{\lambda_\theta} - \dfrac{(\varphi_k - \varphi_0)^2}{\lambda_\varphi} \right] & \text{where } f_k \text{ is valid} \\ 0 & \text{where } f_k \text{ is invalid} \end{cases}    (2)

λ_r, λ_θ and λ_φ are the smoothing parameters in the radial, elevation and azimuth directions, respectively. To interpret these three parameters intuitively, the weight function is split along the three unit direction vectors ê_r, ê_θ and ê_φ shown in Fig. 1.


Fig. 1. The coordinate system for weather radar

Here r is the radial distance from the grid point to the radar station and h is the vertical distance to the z plane. Because a Doppler weather radar scans conically, the spatial distribution of the radar base data is anisotropic, so different smoothing parameters are set along each unit direction vector to achieve different smoothing effects; this is called the anisotropic splitting technique. The common strategy is to fix these three smoothing parameters, in which case the smoothing effect of the weight function depends on the direction of the current grid point. In this paper a new strategy is proposed. With fixed parameters the filtering characteristic is blurred, because the smoothing is determined by the relative position of the grid point and the radar station; however, the smoothing terms in the different directions are related through the spatial structure of the data. The relationship can be expressed by Eqs. (3)-(4):

\lambda_r = R^2 \cdot \lambda_\varphi    (3)
\lambda_\theta = \cos^2\theta \cdot \lambda_\varphi    (4)

When λ_θ is set and λ_φ and λ_r are then computed from the above equations, the characteristics of the filter are determined by the data resolution in the elevation direction, and the components in the other directions are filtered accordingly. In the same way, if any one of the three smoothing parameters is fixed in advance, the corresponding filtering result is obtained. By observing the distribution characteristics of Doppler weather data, the data density in elevation is larger than in the azimuth direction, so we choose to set λ_θ and to filter the components in the azimuth and radial directions.
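The following sketch implements the weight of Eq. (2) with the parameter linking of Eqs. (3)-(4), i.e. only λ_θ is chosen and λ_φ and λ_r follow from it; the numerical values of λ_θ and the sample points are placeholders, not the paper's data.

```python
import numpy as np

def barnes_weight(R, theta, phi, R0, theta0, phi0, lam_theta):
    """Adaptive Barnes weight of Eq. (2) with lambda_phi and lambda_r tied to
    lambda_theta through Eqs. (3)-(4): lam_phi = lam_theta / cos^2(theta0),
    lam_r = R0^2 * lam_phi."""
    lam_phi = lam_theta / np.cos(theta0) ** 2       # from Eq. (4)
    lam_r = R0 ** 2 * lam_phi                       # from Eq. (3)
    return np.exp(-(R - R0) ** 2 / lam_r
                  - (theta - theta0) ** 2 / lam_theta
                  - (phi - phi0) ** 2 / lam_phi)

def interpolate(values, R, theta, phi, R0, theta0, phi0, lam_theta):
    """Weighted average of Eq. (1) over the valid radar bins."""
    valid = ~np.isnan(values)
    w = barnes_weight(R[valid], theta[valid], phi[valid], R0, theta0, phi0, lam_theta)
    return np.sum(w * values[valid]) / np.sum(w)

# toy example: bins around a grid point at R0 = 30 km, theta0 = 1 deg, phi0 = 45 deg
rng = np.random.default_rng(0)
R = 30e3 + rng.normal(0, 200, 20)
theta = np.deg2rad(1.0 + rng.normal(0, 0.5, 20))
phi = np.deg2rad(45.0 + rng.normal(0, 0.5, 20))
z = 20 + rng.normal(0, 2, 20)                       # reflectivity in dBZ
print(interpolate(z, R, theta, phi, 30e3, np.deg2rad(1.0), np.deg2rad(45.0),
                  lam_theta=np.deg2rad(1.5) ** 2))
```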

3 Verification

To test the smoothing effect of the new Adaptive Barnes interpolation scheme, the following experiment is carried out. Echo data detected by the Longyan, Quanzhou and Xiamen radars at around 8:30 am on June 3, 2019 are collected. On the horizontal plane the grid spacing is 200 m × 200 m, and in the vertical direction the grid resolution is 1 km. To compare the advantages and disadvantages of the various methods, the CAPPI images at 3 km height produced by each method are compared, as shown in Fig. 2.

Fig. 2. (a–b) The CAPPI image of Longyan by each method


Figure 2a, b show the CAPPI images of Longyan obtained by 8-point interpolation and by the Adaptive Barnes interpolation. The 8-point linear interpolation method performs well in terms of echo size and intensity, but it provides no smoothing and no noise suppression. The scanning mode of the CINRAD-SC radar is PPI (plan position indication), with elevations from about 0.5° to 19.5°, an azimuth spacing of about 1° and a radial resolution of 200 m, so we choose to set λ_θ. In contrast, the improved Adaptive Barnes interpolation shows better quality in detail. Figure 2c, d are the CAPPI images of Xiamen and Quanzhou obtained by the Adaptive Barnes interpolation, and Fig. 2e, f are reflectivity mosaics obtained by the nearest-neighbor method and by the index weight function method. Because of the characteristics of the nearest-neighbor method, its CAPPI image lacks continuity and contains many abrupt transitions, so obvious seams appear in the north of the image, whereas the index weighting method fuses the multiple CAPPI images effectively.

4 Conclusion

In this paper, an improved smoothing scheme for the Adaptive Barnes interpolation method is proposed. It retains the structural characteristics of the weather data and lets the smoothing parameters of the weight function adjust the filtering effect flexibly according to the spatial structure of the data. We use this scheme to interpolate data from three CINRAD-SC radars onto CAPPI grids and carry out the 3D mosaic. By comparison, the scheme shows a distinct advantage in smoothness and in preserving other precipitation characteristics, and the grid data of the several radars can be fused effectively by the index weight method to obtain a continuous three-dimensional reflectivity analysis field.

References 1. Serafin RJ, Wilson JW (2008) Operational weather radar in the United States: progress and opportunity. Bull Am Meteor Soc 81(3):501–518 2. Zhang J, Howard K, Langston C (2004) Three- and four-dimensional high-resolution national radar mosaic. Proc Erad 105–108 3. Askelson MA, Aubagnac JP, Straka JM (2010) An adaptation of the barnes filter applied to the objective analysis of radar data. Mon Weather Rev 128(9):3050–3082 4. Weygandt SS, Shapiro A Droegemeier KK (2002) Retrieval of model initial fields from single-doppler observations of a supercell thunderstorm. Part I: Single-doppler velocity retrieval. Mon Weather Rev 130(130):433 5. Xiao YJ, Liu LP (2006) Study of methods for interpolating data from weather radar network to 3-D grid and mosaics. Acta Meteor Sinica 64(5):647–656 6. Ray PS, Wagner KK, Johnson KW et al (1978) Triple-Doppler observations of a convective storm. J Appl Meteorol 17(8):1201–1212 7. Doswell Charles A (1977) Obtaining meteorologically significant surface divergence fields through the filtering property of objective analysis. Mon Weather Rev 105(7):885–892

Numerical Calculation of Combustion Characteristics in Diesel Engine
Xudong Wang¹, Chunhua Xiong¹, Feng Wang², Gaojun An¹, and Dongkai Ma¹
¹ Institute of Military New Energy Technology, Beijing 102300, China
² Equipment Department of Army, Beijing Martial Delegate Agency, Beijing 10012, China
[email protected]

Abstract. A type of vehicle diesel engine was taken as the research object, and numerical simulations of the intake, fuel injection, mixture formation and combustion characteristics were carried out with a CFD software package, including the droplet, evaporation and breakup models. The distributions of fuel-air equivalence ratio and static temperature are computed and compared in order to study the combustion behavior of different fuels in the vehicle diesel engine.

Keywords: Diesel engine · Combustion characteristics · Numerical calculation · Computer simulation

1 Introduction

The diesel combustion process can generally be studied experimentally or by numerical simulation. Experimental studies have long periods, high cost and poor adaptability; moreover, because of the complex structure of the cylinder interior and its harsh working environment, suitable sensors often cannot be installed and long-term test conditions are not available, so it is often difficult to obtain comprehensive in-cylinder combustion information. With the wide application of computers and the development of modern computational methods, numerical simulation provides an effective and accurate tool for studying the diesel engine combustion process.

2 Diesel Engine In-Cylinder Spray and Combustion Model

CFD simulation of in-cylinder diesel spray and combustion normally requires establishing physical and mathematical models that reflect the nature of the engineering problem: the problem is abstracted and simplified, the differential equations relating the relevant variables are established, and the corresponding solution conditions are obtained.


2.1 Spray Mixing Process Gas Flow Turbulence Model

When the k–ε standard model is used to solve the flow and heat transfer problems, the governing equations include the continuity equation, the momentum equation, the energy equation, the k equation and the ε equation [1].

Breakup Model

The WAVE model in computer simulation software is used to simulate the fracture process of the jet surface. The model considers that the droplet splitting is due to the unstable growth of the droplet surface, and the outlet fuel droplets are dispersed under the liquid-gas normal shear interaction of the surrounding air. To form small droplets, the breaking time formula of the WAVE basic model is: s¼

X K C R

3:726  C  r KX

ð1Þ

In the formula: The maximum generation rate of surface waves; The wavelength of the surface wave; Constant for adjusting the breaking time; The radius of the droplet before separation, i.e. the radius of the mother droplet. After separation, the stabilizer droplet radius is: ( rs ¼ min 

 0:33 3pr2 Urel 2X   0:33 3r 2 K 4

ð2Þ

In the formula: Urel —relative speed of gas and liquid. The radius of the mother droplet changes according to the following rules: dr=dt ¼ ðr  rs Þ=s

2.3

ð3Þ

Spary-Wall Model

Using the Walljet1 spary-wall model in computer simulation software, the Walljet1 spary-wall model believes that the droplets will rebound or collide in the form of fluid jets after impacting the wall, depending on the size of the Weber. The division between these two results uses the critical Weber number. 2.4

Evaporation Model

The Dukowicz model was used to simulate the evaporation process of the fuel. The model characterizes the evaporation process of the fuel by describing the heat and mass transfer of the fuel. The model makes the following assumptions for fuel droplets:

Numerical Calculation of Combustion Characteristics in Diesel

2175

(1) Spherical symmetry model; (2) Quasi-steady state gas film; (3) It has a uniform temperature and uniform physicochemical properties along the diameter of the droplet; there is a gas-liquid equilibrium. 2.5

Turbulent Combustion Model

The combustion was simulated using the Eddy Dissipation Model. The basic idea of the vortex fracture model is that the turbulent combustion zone is filled with burnt gas and unburned gas groups, and the chemical reaction takes place at the interface of the two gas masses. The average chemical reaction rate depends on the turbulent action of the unburned gas mass. The rate of breaking down into smaller air masses, and the breaking rate is proportional to the decay rate of turbulent kinetic energy, i.e. proportional to e=k. Therefore, the expression of the chemical reaction rate Rfu is Rfu ¼ A

  Cpr Co min Cfu ; 2 ; B S 1þS j

e

ð4Þ

In the formula: A, B Cfu Co2 Cpr S

Model parameters, which are determined by the flame structure, the reaction between fuel and oxygen; The mass concentration of the fuel; The mass concentration of oxygen; The mass concentration of the combustion products; The ratio of oxygen to fuel mass at stoichiometric ratio.

3 Diesel Engine Cylinder Working Process Modeling 3.1

Computational Area Meshing

In order to take into account the practical effects of calculation and the reasonable application of computing resources in the actual calculation, the corresponding system is simplified: since the combustion chamber is a symmetric x-type combustion chamber, it can be based on the number of nozzles of the injector [2]. Simplify the combustion chamber and calculate part of the combustion chamber. The basic parameters of the diesel engine and its injection system are shown in Table 1. The combustion chamber is meshed by the computer simulation software ESE module. The injector nozzle of the engine is of the hole type, and there are 8 nozzle holes arranged uniformly in the circumferential direction. For the convenience of calculation, the combustion chamber can be regarded as the piston top. The center is symmetrically distributed, so only 1/8 of the combustion chamber is taken when generating the combustion chamber calculation grid, and the grid is encrypted at the boundary [3]. The ESE module automatically generates a moving mesh between the

2176

X. Wang et al. Table 1. Basic parameters of diesel engine and its injection system Spray hole diameter/mm fuel supply advance angle/(°) Plunger diameter/mm Injection opening pressure/MPa fuel pressure/Mpa Spray cone angle/(°)

0.35 28 13 20.6 0.25–0.3 140

piston and the cylinder head, so that the mesh size between the piston and the cylinder top is stretched and compressed with the relative movement of the piston and the cylinder, and the number of meshes is also changed accordingly. The mesh size and number inside the combustion chamber are fixed and move with the piston. Figure 1 is a three-dimensional mesh model.

Fig. 1. Three-dimensional grid

3.2 Calculation of Boundary Conditions

(1) Calculation conditions. The calculation condition is the maximum-torque operating point of the diesel engine at a speed of 1400 r/min. The calculation starts at the intake valve closing angle and ends at the exhaust valve opening angle [4]; according to the valve timing of the diesel engine, the calculation interval is −130° to 130° CA.
(2) Initial conditions of the gas in the cylinder. The turbulent kinetic energy and turbulence length scale of the flow field in the combustion chamber at intake valve closure are determined as follows:


① Turbulent kinetic energy:

TKE = (3/2)\,(0.25\,c_m)^2   (5)

c_m = 2\,h_v\,(n/60)   (6)

where h_v is the piston stroke and n is the diesel engine speed.

② Turbulence length scale:

TLS = h/2   (7)

where h is the valve lift.

According to the above formulas, the turbulent kinetic energy and turbulence length scale of the flow field in the combustion chamber at intake valve closure are calculated as TKE = 147.015 and TLS = 0.0039 [5].
(3) Wall boundary conditions. ① The temperatures of the cylinder head bottom surface, the piston top surface and the cylinder wall are taken from empirical values as 554, 583 and 410 K, respectively. ② The inner and outer surfaces of the compensation volume are set as adiabatic walls. ③ The outer surfaces of the piston top, the cylinder wall and the compensation volume are treated as static walls with zero velocity. ④ The piston top surface and the inner surface of the compensation volume move at the piston velocity [6].
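A minimal sketch of the initial-condition calculation in Eqs. (5)–(7) is given below. The piston stroke and valve lift values are hypothetical placeholders, since the paper only reports the resulting TKE and TLS; the engine speed of 1400 r/min is taken from the text.

```python
def initial_turbulence(stroke_m, speed_rpm, valve_lift_m):
    """Initial turbulent kinetic energy (Eqs. 5-6) and length scale (Eq. 7)."""
    c_m = 2.0 * stroke_m * speed_rpm / 60.0   # mean piston speed, Eq. (6)
    tke = 1.5 * (0.25 * c_m) ** 2             # Eq. (5)
    tls = valve_lift_m / 2.0                  # Eq. (7)
    return tke, tls

# Hypothetical stroke and valve lift; engine speed 1400 r/min as in the paper
print(initial_turbulence(stroke_m=0.15, speed_rpm=1400, valve_lift_m=0.0078))
```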

4 Analysis of Calculation Results

The spray and combustion process in the diesel engine cylinder is calculated with the simulation software. The calculated cylinder pressure curve is compared with the experimental values in Fig. 2.


Fig. 2. Comparison of simulation results and test results

It can be seen from Fig. 2 that the maximum error between the calculated and measured results is less than 8.3%, and the average error is 4.39%. The accuracy of the model therefore meets the requirements, and the simulation and analysis of in-cylinder spray and combustion can be carried out on this basis.

4.1 Diesel Engine Performance

Different types of diesel fuel have different hydrocarbon contents and densities. The fuel pump of this diesel engine is a plunger-type positive-displacement pump, so the volume of fuel supplied per cylinder per cycle is constant [7] and the mass of fuel supplied is proportional to the density of the fuel sample. The performance of the diesel engine is compared for No. −35 automobile diesel and No. −35 light diesel; the fuel property settings are shown in Table 2.

Table 2. Fuel properties
  Fuel               C content/%  H content/%  Density/(kg/m³)  Cycle fuel supply/mg
  Automobile diesel  86.46        13.54        832              186.96
  Light diesel       85.8         14.2         799              179.44

Changing the fuel property model and simulating the working process of the engine again, the cylinder pressure curves are compared in Fig. 3: both the cylinder gas pressure and the indicated power with automobile diesel are greater than with light diesel.


Fig. 3. Cylinder pressure

4.2 Combustion Characteristics

The fuel-air equivalence ratio describes the ratio of fuel to air in the mixture: the greater the equivalence ratio, the richer the fuel-air mixture. Under the same working conditions and the same injected volume of diesel fuel, the equivalence ratio obtained with automobile diesel is greater than that obtained with light diesel, as shown in Figs. 4 and 5.

Fig. 4. Equivalence ratio of automobile diesel

Fig. 5. Equivalence ratio of light diesel


During combustion, the heat released causes the in-cylinder gas temperature to rise sharply. For a given cylinder volume, the gas temperature depends on the heat released by the fuel per unit time. Since the injection law is the same, the mass of automobile diesel injected into the cylinder is higher than that of light diesel, so the heat release per unit time is larger and the in-cylinder temperature is higher, as shown in Figs. 6 and 7.

Fig. 6. Cylinder temperature of automobile diesel

Fig. 7. Cylinder temperature of light diesel

5 Summary

In order to investigate the combustion characteristics of a vehicle diesel engine with two different fuels, a vehicle diesel engine was taken as the research object and its spray and combustion process was calculated using simulation software. The numerical results show that the cylinder pressure and indicated power are higher when the engine is fueled with automobile diesel than with light diesel.


References
1. Wang F, Zheng Z, Xiao J (2018) Numerical simulation analysis of pilot injection and post injection on emission characteristics during low temperature combustion process of diesel engines. Comput Simul 8:383–387
2. Liu H, Hu B, Chao J (2016) Effects of different alcohols additives on solubility of hydrous ethanol diesel fuel blends. Fuel 184:440–448
3. Wang Y, He Z (2010) Investigation on numerical simulation of methanol-internal combustion engine. Comput Simul 7:258–353
4. Lin J (1987) Numerical calculation of working process of internal combustion engine. Beijing Institute of Technology Press, Beijing
5. Ling-Ge S, Zhong-Chang L, Yong-Qiang H, Zhao-Jie S, Guang-Yong Z (2011) Discussion of transient response optimization strategies of turbocharged diesel engine under EGR step change operation. In: CDCIEM, pp 423–427
6. Wang F (2004) Computational fluid dynamics analysis. Tsinghua University Press, Beijing
7. Tan W (2010) Three-dimensional numerical simulation of the working process of direct injection gasoline engine in cylinder. Dalian University of Technology

A NOMA Power Allocation Strategy Based on Genetic Algorithm

Lu Yin1,2(&), Wang Chenggong1, Mao Kai3, Bao Kuanxin1, and Bian Haowei4

1 Jiangsu Key Laboratory of Wireless Communications, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
[email protected]
2 Engineering Research Center of Health Service System Based on Ubiquitous Wireless Networks, Nanjing University of Posts and Telecommunications, Ministry of Education, Nanjing, China
3 School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
4 School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, China

Abstract. NOMA technology uses power-domain non-orthogonal multiplexing to enable multiple users to occupy the entire frequency band simultaneously to transmit signals. In order to maximize the total transmission rate of the system, an effective method is to use a genetic algorithm for NOMA power allocation. In this paper, the NOMA downlink system model is constructed, and the objective function and constraints are analyzed. A NOMA power allocation strategy based on the genetic algorithm is proposed. The algorithm distributes user power based on the criterion of maximizing the total transmission rate, and it has a random search capability and relatively low search complexity. The simulation results show that when the system transmit power or the number of multiplexed users is fixed, the proposed algorithm outperforms the fixed power allocation algorithm in total transmission rate. The total system transmission rate of the genetic algorithm is similar to that of the full-space search algorithm, while, as the number of multiplexed users increases, its computational complexity is much lower.

Keywords: Non-orthogonal multiple access · Total system transmission rate · Genetic algorithm · Computational complexity

1 Introduction

As a new access method, NOMA technology has become one of the 5G core candidate technologies [1]. NOMA technology can be divided into power-domain multiplexing NOMA, code-domain multiplexing NOMA and other NOMA schemes [2–4]. Designing the transmitter scheme and improving system performance according to the performance evaluation indices of the NOMA system have become key research issues [5–8]. In the NOMA system, the power allocation has a great impact on the user


throughput performance, which not only affects the total system throughput but also has a great impact on the throughput of each user. Therefore, a reasonable power allocation algorithm plays an important role in the NOMA system in effectively reducing the multiple access interference between user signals and increasing the throughput of the system [9]. Literature [10] studied a power and spectrum allocation method based on user deployment scenarios in a cell, dividing users into cell-edge and cell-center users and allocating more power to cell-edge users to reduce inter-cell interference; more bandwidth resources are allocated to increase the average throughput and cell-edge throughput of the entire cell, and the spectrum resource allocation is based on Fractional Frequency Reuse (FFR). In [11], a fixed power allocation algorithm is proposed. Because the algorithm only performs power allocation according to a fixed geometric ratio and ignores the current channel state of the user, its shortcoming is low system performance, while its advantage is low computational complexity. The fractional power allocation algorithm proposed in [12] distributes the user power according to the path loss ratio of the user; it has lower throughput performance and lower computational complexity than the full-space search algorithm. The structure of this paper is as follows: Sect. 2 introduces the downlink NOMA system model, Sect. 3 proposes a power allocation algorithm based on the genetic algorithm, Sect. 4 presents simulations and analysis, and Sect. 5 gives a summary.

2 System Model

Figure 1 shows the system model of the NOMA downlink, which describes the whole downlink process of a cell: transmit-signal superposition based on power allocation and signal reception based on serial interference cancellation (SIC).

Fig. 1. Downlink NOMA system model diagram


In Fig. 1, the number of orthogonal subcarriers used by the system for transmission is N, the total bandwidth of the system is B, and the bandwidth of a single subcarrier is B/N. There are K users requesting communication services in the cell; the power allocated to the i-th user on subcarrier n is denoted p_{i,n}, and the signal transmitted by the i-th user on the n-th subcarrier is x_{i,n}. The total transmit power p_{tot} of the system is constant, i.e. the sum of the transmit powers of all users cannot exceed the total transmit power of the system. Multiple users share the same time-frequency resource unit, and the number of users superimposed on the n-th subcarrier is denoted k_n. After superimposing k_n users, the superposed signal x_n transmitted by the base station on subcarrier n is

x_n = \sum_{i=1}^{k_n} \sqrt{p_{i,n}}\, x_{i,n}.   (1)

The received signal of user UE_k on subcarrier n can be expressed as

y_{k,n} = h_{k,n} x_n + w_{k,n} = h_{k,n} \sum_{i=1}^{k_n} \sqrt{p_{i,n}}\, x_{i,n} + w_{k,n},   (2)

where h_{k,n} represents the complex channel gain of user k on subcarrier n. Equation (2) can be expanded as

y_{k,n} = h_{k,n} \sqrt{p_{k,n}}\, x_{k,n} + h_{k,n} \sum_{i=1, i \neq k}^{k_n} \sqrt{p_{i,n}}\, x_{i,n} + w_{k,n},   (3)

where h_{k,n}\sqrt{p_{k,n}}\,x_{k,n} is the desired signal of user k, h_{k,n}\sum_{i=1,i\neq k}^{k_n}\sqrt{p_{i,n}}\,x_{i,n} is the interference from the other users' signals, and w_{k,n} is the superposition of Gaussian white noise and other-cell interference. In the NOMA downlink, the optimal ordering for serial interference cancellation is by signal-to-interference-plus-noise ratio, i.e. according to P_{i,n}|h_{i,n}|^2 \big/ \big(|h_{i,n}|^2 \sum_{j=1}^{K} P_{j,n} + N_{0,j}\big). The user eliminates the signals of other users after correctly decoding them, and thereby correctly decodes its own signal. After SIC detection, the signal-to-interference-plus-noise ratio of user UE_m on the n-th subcarrier is

SINR_{m,n} = \frac{p_{m,n}\,\Gamma_{m,n}}{1 + \sum_{i=1}^{m-1} p_{i,n}\,\Gamma_{m,n}},   (4)


where \Gamma_{m,n} = |h_{m,n}|^2 / \sigma_n^2 represents the signal-to-noise ratio of user m on the n-th subcarrier. According to Eq. (4), the transmission rate of user UE_m on the n-th subcarrier after SIC processing at the receiver in the NOMA downlink is

R_{m,n} = \frac{B}{N} \log_2(1 + SINR_{m,n}).   (5)

The total transmission rate on the n-th subcarrier is

R_n = \sum_{k=1}^{K} \frac{B}{N}\, s_{k,n} \log_2(1 + SINR_{k,n}).   (6)

If user k is allocated on subcarrier n, then s_{k,n} = 1; otherwise s_{k,n} = 0. It can be seen from Eqs. (4) and (5) that the base station can flexibly control the transmission rate of each user by adaptively controlling the users' transmit power ratios. The power allocation scheme is therefore very important, as it directly affects the cell capacity, the channel capacity of the edge users, and the fairness among users.
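To make Eqs. (4)–(6) concrete, the following minimal Python sketch computes the per-user SINRs and rates on a single subcarrier under the SIC decoding order. The channel gains, power split and bandwidth used here are hypothetical example values, not parameters from the paper.

```python
import numpy as np

def noma_subcarrier_rates(p, gamma, B, N):
    """Per-user rates on one subcarrier, Eqs. (4)-(5).

    p[m]     : power allocated to user m (users listed in SIC decoding order)
    gamma[m] : per-user SNR |h_m|^2 / sigma^2
    """
    rates = []
    for m in range(len(p)):
        interference = sum(p[i] * gamma[m] for i in range(m))   # i = 1..m-1 in Eq. (4)
        sinr = p[m] * gamma[m] / (1.0 + interference)
        rates.append(B / N * np.log2(1.0 + sinr))
    return rates

# Hypothetical example: 2 users multiplexed on one subcarrier
rates = noma_subcarrier_rates(p=[8.0, 2.0], gamma=[0.05, 1.2], B=10e6, N=3)
print(rates, sum(rates))   # Eq. (6) with s_{k,n} = 1 for both users
```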

3 Power Allocation Algorithm

The target criterion of the power allocation algorithm studied in this paper is to maximize the total transmission rate of the K users, which can be expressed as

\max \sum_{n=1}^{N} \sum_{k=1}^{K} s_{k,n} R_{k,n}.   (7)

The power allocated to the users satisfies the constraints

C1: p_{k,n} \ge 0, \ \forall k, n; \quad C2: \sum_{n=1}^{N} \sum_{k=1}^{K} p_{k,n} < p_{tot}; \quad C3: \sum_{k=1}^{K} s_{k,n} = k_n, \ \forall n.   (8)

The objective function of this paper is nonlinear and subject to non-continuous constraints. Since the genetic algorithm operates directly on the structural objects, it is not limited by requirements on derivatives or function continuity, and it has inherent implicit parallelism and good global optimization ability. In order to solve this constrained nonlinear optimization problem effectively, this paper uses the genetic algorithm to solve the user power allocation problem and maximize the objective function. The basic idea of the genetic-algorithm-based power allocation is to search, starting from an initial power allocation matrix generated according to the power constraints.


In the genetic algorithm, the candidate solutions form a population: each individual corresponds to a user power allocation, with s_{k,n} determined by the subcarrier assignment of the users. According to the target criterion of maximizing the total transmission rate of the system, the conditionally optimal power allocation matrix is obtained through several iterative evolutions. The general flow of the genetic algorithm (encoding, population initialization, selection, crossover and mutation until the termination condition is met) is shown in Fig. 2.

Fig. 2. Genetic algorithm flow chart

In this algorithm, users are randomly assigned to the subcarriers with at most 2 users per subcarrier. The basic steps of the genetic-algorithm-based power allocation are as follows (a simplified sketch of the procedure is given after the list):

(1) Initialize the number of system users K, the number of orthogonal subcarriers N in the cell, and the cell radius;
(2) Generate noise and determine the channel gains h_{k,n} according to the number of users and the cell radius;
(3) Either fix the total transmit power of the system and examine the relationship between the total transmission rate and the number of multiplexed users, or fix the number of users and examine the relationship between the total transmission rate and the total transmit power;


(4) According to the total transmit power and the number of users, distribute the power evenly among the users and determine the initial power allocation matrix P;
(5) Determine s_{k,n} according to whether each user is allocated on the subcarrier, and process the objective function using the genetic algorithm (building-block hypothesis) to obtain the optimal power allocation matrix P that maximizes the objective function;
(6) Calculate the total system transmission rate under the current system conditions from the obtained optimal power allocation matrix P;
(7) Re-initialize the number of system users K, the cell radius R, the number N of orthogonal subcarriers and the corresponding power allocation matrix P for the next scenario.
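The sketch below illustrates the flavour of such a genetic search for the power split of the users multiplexed on one subcarrier. It is a minimal example, not the authors' implementation: the population size, selection, averaging crossover, mutation scheme and the fitness (the single-subcarrier sum rate of Eqs. (4)–(6)) are assumptions made for illustration, and the bandwidth value is a placeholder.

```python
import numpy as np
rng = np.random.default_rng(0)

def sum_rate(p, gamma, B=10e6, N=3):
    """Fitness: total rate of the users on one subcarrier (Eqs. 4-6)."""
    rate = 0.0
    for m in range(len(p)):
        interference = sum(p[i] * gamma[m] for i in range(m))
        rate += B / N * np.log2(1.0 + p[m] * gamma[m] / (1.0 + interference))
    return rate

def ga_power_split(gamma, p_total, pop=40, gens=200, mut=0.1):
    """Evolve power vectors that sum to p_total and maximize the sum rate."""
    K = len(gamma)
    population = rng.random((pop, K))
    population *= p_total / population.sum(axis=1, keepdims=True)        # satisfy C2
    for _ in range(gens):
        fitness = np.array([sum_rate(ind, gamma) for ind in population])
        parents = population[np.argsort(fitness)[-pop // 2:]]            # selection
        children = (parents[rng.integers(len(parents), size=pop)] +
                    parents[rng.integers(len(parents), size=pop)]) / 2   # crossover
        children += mut * rng.standard_normal(children.shape)            # mutation
        children = np.clip(children, 1e-6, None)                         # C1
        population = children * p_total / children.sum(axis=1, keepdims=True)
    best = max(population, key=lambda ind: sum_rate(ind, gamma))
    return best, sum_rate(best, gamma)

print(ga_power_split(gamma=[0.05, 1.2], p_total=10.0))
```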

4 Simulation and Analysis

A. Simulation parameters

The simulation platform built in this section studies the effects of five power allocation algorithms on the total transmission rate of the system. The parameters used in the simulation are shown in Table 1.

Table 1. Simulation parameters
  Simulation parameter             Value
  Cell radius                      200 m
  Carrier frequency                2 GHz
  Number of carriers               3
  Total power limit                10 W
  Minimum power allocation unit    0.1 W
  Channel model                    Frequency-selective Rayleigh fading channel
  Channel estimation               Ideal

B. Simulation results and analysis

It can be seen from Fig. 3 that in the NOMA system, whether the fixed power allocation algorithm, the full-space search algorithm or the genetic algorithm studied in this paper is used, the total system transmission rate increases with the number of multiplexed users N. For a fixed total system power, the total transmission rate of the proposed genetic algorithm is similar to that of the full-space search algorithm and better than those of the fractional, fixed and average power allocation algorithms.


Fig. 3. Total system transmission rate versus the number of multiplexed users N

It can be seen from Fig. 4 that, for the same number of multiplexed users, the total transmission rate of the system increases with the transmit power. The total transmission rate of the genetic-algorithm-based power allocation approximates that of the full-space search algorithm and is better than that of the fractional power allocation algorithm.


Fig. 4. Total transmit power and transmission rate diagram


Fig. 5. Comparison of computational complexity

Figure 5 compares the computational complexity of the full-space search algorithm and the proposed algorithm. It can be seen from the figure that the computational complexity of both algorithms increases with the number of system users, and once the number of users reaches a certain level, the complexity of the full-space search algorithm is much higher than that of the genetic algorithm. Thus the system performance of the genetic algorithm is similar to that of the full-space search algorithm, while its computational complexity remains low when the number of users is large, and its performance is better than that of the fixed power allocation algorithm. Therefore, taking performance and complexity together, the power allocation algorithm studied in this paper is preferable to the full-space search, fractional, fixed and average power allocation algorithms.

5 Conclusions

In this paper, NOMA power allocation based on the genetic algorithm is studied. The genetic search is population-based: it can automatically acquire and guide the optimized search space without predetermined rules and adaptively adjust the search direction, so it has a fast and randomized search capability. In the search process, the users allocated on the subcarriers are searched under the constraints so as to meet the requirements of the objective function. The procedure of the genetic algorithm is simpler than the full-space search algorithm, and its search complexity is low. In this paper, users are randomly assigned to the subcarriers and the total system capacity is maximized; the simulations fix the total system power and the number of subcarriers. The simulation results show


that the genetic-algorithm power allocation method studied in this paper achieves a total system capacity higher than that of the fixed power allocation algorithm and comparable to that of the full-space search power allocation algorithm, at a much lower computational complexity.

Acknowledgements. This work was supported by the National Natural Science Foundation of China (61271236), the Major Projects of Natural Science Research of Jiangsu Provincial Universities (17KJA510004), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX18_0907).

References
1. Rec. ITU-R M.2083-0: IMT Vision - Framework and overall objectives of the future development of IMT for 2020 and beyond. http://www.itu.int/rec/R-REC-M.2083, Sept 2015
2. Dai L, Wang B, Yuan Y, Han S, Chih-Lin I, Wang Z (2015) Non-orthogonal multiple access for 5G: solutions, challenges, opportunities, and future research trends. IEEE Commun Mag 53(9):74–81
3. Wang B, Wang K, Lu Z, Xie T, Quan J (2015) Comparison study of non-orthogonal multiple access schemes for 5G. In: 2015 IEEE international symposium on broadband multimedia systems and broadcasting. Ghent, pp 1–5
4. Zeng J, Li B, Su X, Rong L, Xing R (2015) Pattern division multiple access (PDMA) for cellular future radio access. In: 2015 international conference on wireless communications & signal processing (WCSP). Nanjing, pp 1–5
5. Ding Z, Yang Z, Fan P, Poor HV (2014) On the performance of non-orthogonal multiple access in 5G systems with randomly deployed users. IEEE Signal Process Lett 21(12):1501–1505
6. Yang Z, Ding Z, Fan P, Karagiannidis GK (2016) On the performance of non-orthogonal multiple access systems with partial channel information. IEEE Trans Commun 64(2):654–667
7. Xu P, Ding Z, Dai X, Poor HV (2015) A new evaluation criterion for non-orthogonal multiple access in 5G software defined networks. IEEE Access 3:1633–1639
8. Yang MJ, Hsieh HY (2015) Moving towards non-orthogonal multiple access in next-generation wireless access networks. In: IEEE ICC, pp 5633–5638
9. Gao X (2017) Research on power allocation algorithm of non-orthogonal multiple access system based on SIC. Chongqing University of Posts and Telecommunications
10. Lan Y, Benjebbour A, Li A, Harada A (2014) Efficient and dynamic fractional frequency reuse for downlink non-orthogonal multiple access. In: 2014 IEEE 79th vehicular technology conference (VTC Spring). Seoul, pp 1–5
11. Saito Y, Kishiyama Y, Benjebbour A et al (2013) Non-orthogonal multiple access (NOMA) for cellular future radio access. In: Vehicular technology conference (VTC Spring), 2013 IEEE 77th. IEEE, Dresden, pp 1–5
12. Wei Z, Ng DWK, Yuan J (2016) Power-efficient resource allocation for MC-NOMA with statistical channel state information. In: 2016 IEEE global communications conference (GLOBECOM). Washington, DC, pp 1–7

AUG-BERT: An Efficient Data Augmentation Algorithm for Text Classification

Linqing Shi(B), Danyang Liu, Gongshen Liu, and Kui Meng

School of Cyber Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
{shilinqing,danyliu,lgshen,mengkui}@sjtu.edu.cn

Abstract. We propose a BERT-based data augmentation method for labeled sentences called Aug-BERT. New sentences are generated by stochastically selecting words and replacing them with other words predicted by Aug-BERT. After a two-stage fine-tuning, consisting of CLP and L-MLM, BERT becomes Aug-BERT, which can predict the stochastically selected words according to both the context and the label. Relying on BERT's deep bidirectional language model and on the label information incorporated via the label-segment embedding, Aug-BERT can generate sentences of high quality. Experiments on six different text classification tasks show that our method outperforms most of the others and clearly improves the performance of classifiers.

Keywords: Aug-BERT · Replace · CLP

1 Introduction

Machine learning is a data-driven subject. The performance of a machine learning model is usually limited by the size of the dataset, because models are prone to overfitting and lose their generalization ability. One effective way to address this issue is data augmentation: the size of the dataset multiplies without the manual effort of collecting and annotating raw data. Data augmentation has been widely used in areas such as speech and computer vision. One common feature of these two areas is that the original signals are continuous before being sampled, so simple rules such as shifting, rotation, reflection, noising, random cropping and random resizing can be used for augmentation. However, none of these techniques can be adopted for text. Text is composed of an indefinite number of tokens and should be viewed as a discrete signal; techniques designed for continuous signals would result in meaningless or even incorrect data. One common method for text is to substitute words


with their synonyms. Synonyms can be selected through handcrafted methods [8, 17] or word-similarity calculation [13]; however, the data generated by these methods lacks diversity. To improve diversity, a bidirectional language model can be used to predict certain words in a sentence [5]. Recently, conditional BERT was proposed for data augmentation [15]. Conditional BERT is based on BERT [2], a pretrained deep bidirectional transformer encoder. The original BERT has the ability to predict masked words from the bidirectional context, because it is pretrained with an objective named the "masked language model". Conditional BERT contextual augmentation makes full use of the context from both sides with an objective named the "conditional masked language model" (C-MLM). Instead of predicting the randomly selected masked words only from the context, which may make the predicted words incompatible with the label, C-MLM considers both the label and the context. In order to feed the label information into the model, the segment embedding is removed and a label embedding is added.

Fig. 1. Two cases for conditional BERT. In the left case, to predict the masked word 'vivid', the label 'positive' is needed. However, if the word 'cinematic' is masked, as shown in the right case, the label 'positive' contributes little to the prediction, because 'cinematic' is a neutral word; in this case, the model cannot learn how to make use of the label.

C-MLM has been proven effective for data augmentation; however, the information provided by the label may be wasted during training, as shown in Fig. 1. Besides, the embedding scheme is not compatible with the original BERT. The original BERT contains token embeddings, segment embeddings and position embeddings. During pretraining, the segment embeddings of all the words of the first sentence are E_A, and only E_A and E_B are used because only two sentences are involved. Conditional BERT's label embedding means that an arbitrary label must be embedded onto the single input sentence, and for multi-class classification more than two kinds of label embeddings are involved. This paper focuses on replacement-based data augmentation methods for text classification. Starting from a limited set of sentence-label pairs, many more sentence-label pairs are generated with Aug-BERT. The data augmentation model is based on BERT, which we fine-tune before using it for augmentation. The main contributions of this paper can be summarized as follows:

1. We propose a method to feed the label into the data augmentation model while the segment embedding is retained.
2. We propose a fine-tuning method that makes the data augmentation model pay attention to the label and make use of both the label and the context.
3. The experimental results show that our data augmentation method outperforms most of the others.

2 Related Work

Wang and Yang [13] propose to use neighbouring words in continuous representations to create new instances for each word in a tweet to augment the dataset. Zhang et al. [17] extract all replaceable words from the given text and randomly select r of them to be replaced with synonyms from WordNet [8]. Kolomiyets et al. [6] propose to replace only the headwords, under the task-specific assumption that temporal trigger words usually occur as headwords; the words with top-K scores are selected and replaced, where the scores are given by a language model over a fixed-length context, named the Latent Words LM [1]. In the area of machine translation, Fadaee et al. [3] propose to improve performance via data augmentation: they replace words in a source sentence with rare words only, and the corresponding word in the translated sentence is also replaced using a word alignment method and a unidirectional language model. Kobayashi [5] proposes a fill-in-the-blank approach for augmenting text classification data by replacing words in the sentence with a language model; in order to make sure that the generated sentence is compatible with the original label, a conditional constraint is introduced. The work most similar to ours is Wu et al. [15], who fine-tune BERT for data augmentation so that it works like a masked language model while the label information is included.

3 Data Augmentation

3.1 Masked Language Model

The masked language model (MLM) is an objective used during the pretraining phase of BERT and resembles the Cloze task [12]. During training, 15% of the input tokens are randomly selected and replaced with the [MASK] token. After some tokens are masked, the model predicts the masked tokens according to the context. The learning objective of MLM is to minimize the negative log-likelihood of the predicted tokens, as shown in Eq. (1), where C stands for the context, M stands for the set of all masked tokens and V stands for the vocabulary:

L_1 = -\sum_{i=1}^{M} \log P(m_i^* = m_i \mid C), \quad m_i, m_i^* \in [1, 2, \ldots, |V|]   (1)

3.2 Aug-BERT

In this section, we describe how Aug-BERT works. Based on the structure of BERT, we first design a method of feeding the label into the model. In order to take into account both the characteristics of BERT and the additional label, we propose the label-segment embedding. As shown in Fig. 2, the segment embedding is partially kept: the segment embeddings of all tokens except [CLS] are still used to mark the whole sentence as the first sentence.


Fig. 2. The fine-tune process of Aug-BERT. The left part shows the CLP and the right part shows the L-MLM. CLP requires the model to predict whether the label is compatible with the sentence. L-MLM requires the model to predict the masked token based on the label and context.

The original segment embedding of the [CLS] token is modified to represent the label, so the overall embedding consists of the token embedding, the label-segment embedding and the position embedding.

Fine-Tune #1: Correct Label Prediction (CLP). CLP is the key for Aug-BERT to learn to pay attention to the label, and it requires a text classification dataset. We denote the sentence as X = x_1, x_2, \ldots, x_{Len}, where every token in X can be found in the vocabulary V and Len is the length of the input sentence. The corresponding label is denoted y, with y \in Y, where Y is the set of all labels in the dataset. The sentence-label pair cannot be used directly: CLP is the task of predicting whether an input label y^* is compatible with the input sequence X, where y^* \in Y is randomly generated with the probability shown in Eq. (2):

P(y^* = y_i) = \frac{1}{|Y|}, \quad i \in \{1, 2, \ldots, |Y|\}   (2)

L_2 = -\log P(y^* = y)   (3)
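A minimal PyTorch-style sketch of the CLP objective is given below. It only illustrates the idea of Eqs. (2)–(3): a candidate label is drawn uniformly and a binary classifier over the [CLS] representation decides whether that label matches the true one. The head, hidden size and the binary-cross-entropy formulation are hypothetical simplifications, not the authors' exact architecture.

```python
import random
import torch
import torch.nn as nn

class CLPHead(nn.Module):
    """Binary head deciding whether the injected label matches the sentence."""
    def __init__(self, hidden_size=768):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, cls_vector):
        return self.classifier(cls_vector).squeeze(-1)   # logit

def clp_loss(head, cls_vector, true_label, labels):
    """Eq. (2): draw y* uniformly from Y; Eq. (3)-style loss on the match."""
    y_star = random.choice(labels)
    target = torch.tensor([float(y_star == true_label)])
    logit = head(cls_vector)
    return nn.functional.binary_cross_entropy_with_logits(logit, target), y_star

# Toy usage with a random [CLS] vector standing in for the encoder output
head = CLPHead()
cls_vec = torch.randn(1, 768)
loss, sampled = clp_loss(head, cls_vec, true_label="positive",
                         labels=["positive", "negative"])
print(sampled, loss.item())
```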

As shown in Fig. 2, both the generated label y ∗ and the input sentence X are fed into the model. The model needs to predict whether the generated label y ∗ equals the real label y. The loss can be represented in Eq. 3. Fine-Tune #2: Labeled Masked Language Model (L-MLM) L-MLM is a modified version of MLM. The model needs to predict masked tokens according to the context and the correct label. As shown in Fig. 2, 15% of the input tokens are masked. These tokens are selected randomly. The objective is represented in Eq. 4. During L-MLM, the model learns how to combine the information of


label and context to predict the masked tokens. This process mirrors the final data augmentation step.

L_3 = -\sum_{i=1}^{M} \log P(m_i^* = m_i \mid C, y), \quad m_i, m_i^* \in [1, 2, \ldots, |V|]   (4)

Data Augmentation via the Labeled Masked Language Model. After the two fine-tuning phases, Aug-BERT can be used for data augmentation. We denote the dataset we have as D; at the start, the original dataset D_0 is all we have, so D = D_0. The sentence-label pairs from D are fed into Aug-BERT: the randomly masked tokens and the correct label are given to the model, and Aug-BERT predicts the masked tokens based on the unmasked tokens and the label. New sentences are obtained by combining the predicted tokens with the unmasked tokens, and they share the same label as the original input sentence. These new sentences and labels form the augmented dataset D'. D' is added to D, and the data augmentation is repeated several times.
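The replacement loop described above can be sketched as follows with the HuggingFace transformers library (an assumption; the paper used the earlier pytorch-pretrained-BERT package). Note that this sketch uses a plain pretrained masked-LM head rather than the label-conditioned Aug-BERT, since the label-segment embedding is the authors' custom modification and is not reproduced here; the 15% masking rate and arg-max sampling are illustrative choices.

```python
import random
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def augment(sentence, mask_prob=0.15):
    """Randomly mask tokens and replace them with the model's predictions."""
    enc = tokenizer(sentence, return_tensors="pt")
    ids = enc["input_ids"].clone()
    # choose positions to mask (skip [CLS]/[SEP] at the ends)
    candidates = list(range(1, ids.shape[1] - 1))
    masked = [i for i in candidates if random.random() < mask_prob] or [random.choice(candidates)]
    ids[0, masked] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = model(input_ids=ids, attention_mask=enc["attention_mask"]).logits
    for i in masked:
        ids[0, i] = int(logits[0, i].argmax())
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(augment("the movie was vivid and touching"))
```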

4 Experiments

4.1 Metrics and Implementation Details

Aug-BERT is a data augmentation method for text classification, so it is reasonable to evaluate its performance by the improvement it brings on different text classification tasks. In order to compare our method with others, classifiers based on LSTM-RNN or CNN with dropout are adopted, and the reported accuracies are averaged over eight models trained from different seeds. To run the experiments quickly, we fine-tune the relatively small model BERT_BASE for data augmentation. BERT_BASE contains 110 M parameters, with L = 12 transformer layers, hidden size H = 768 and A = 12 self-attention heads; detailed information can be found in the original paper [2]. The pretrained BERT and code refer to the PyTorch version implemented by HuggingFace.¹ The implementation of the classifiers follows Kobayashi's open-source code.²

4.2 Datasets

For comparison, six classification datasets are adopted. Following Kim [4], if a dataset has no validation data, 10% of its data will be used for validation.

¹ https://github.com/huggingface/pytorch-pretrained-BERT
² https://github.com/pfnet-research/contextual_augmentation


SST-1: Stanford Sentiment Treebank: movie reviews with one sentence per review, labeled by Socher et al. [11]. Five fine-grained labels are involved: very positive, positive, neutral, negative, very negative.
SST-2: Same as SST-1 but with only two labels: positive and negative.
Subj: Subjectivity dataset, with the task of determining whether a sentence is subjective or objective [9].
TREC: A question classification dataset with six question types [7].
MPQA: A dataset of annotated short phrases with opinion polarity [14].
RT: Another movie-review sentiment dataset with 2 classes [10].

4.3

Baselines

The performance improvement obtained via Aug-BERT is compared with the following baseline methods ("w/" means "with"):
• w/synonym: Words are randomly replaced with synonyms from WordNet [8].
• w/context: Kobayashi [5] proposed using a bidirectional language model for contextual augmentation, where each word is replaced with some probability.
• w/context + label: The contextual augmentation method with a label-conditional language model architecture.
• w/BERT: Words are randomly masked, and BERT predicts the masked words according to the context [5].
• w/C-BERT: Words are randomly masked, and BERT predicts the masked words according to the context and the label [15]. The label is fed into the model via a label embedding, and the segment embedding is removed.

4.4 Results

The results are listed in Table 1. They show that our Aug-BERT improves the accuracy of the classifiers the most. Comparing C-BERT with Aug-BERT (L-MLM) shows that the label-segment embedding causes almost no loss of performance, and, equipped with CLP, Aug-BERT improves the performance further.

5 Conclusion

In this work, we propose a replacement-based data augmentation method named Aug-BERT. A label-segment embedding is incorporated so that the label information can be fed into the model, and we propose a two-stage fine-tuning method consisting of CLP and L-MLM. Experimental results show that the proposed method outperforms other data augmentation methods and clearly improves the performance of classifiers. Compared with C-BERT, Aug-BERT pays more attention to the label and generates sentence-label pairs of higher quality. In addition, the segment embedding is partially kept, which means the method can be extended to sentence-pair augmentation in the future.


Table 1. Accuracies of models for various benchmarks. The accuracies are averaged over eight models trained from different seeds. Lines marked Aug-BERT are ours; lines marked with "*" are experimental results from Kobayashi [5] and Wu [15]. (L-MLM) means only one-stage fine-tuning is involved; (CLP + L-MLM) means the two-stage fine-tuning is involved.

  Model                       SST-1  SST-2  Subj.  MPQA  RT    TREC  Avg.
  CNN*                        41.3   79.5   92.4   86.1  75.9  90.0  77.53
  w/synonym*                  40.7   80.0   92.4   86.3  76.0  89.6  77.50
  w/context*                  41.9   80.9   92.7   86.7  75.9  90.0  78.02
  w/context+label*            42.1   80.8   93.0   86.7  76.1  90.5  78.20
  w/BERT*                     41.5   81.9   92.9   87.7  78.2  91.8  79.00
  w/C-BERT*                   42.3   82.1   93.4   88.2  79.0  92.6  79.60
  w/Aug-BERT (L-MLM)          42.2   82.9   93.1   88.3  78.8  92.3  79.62
  w/Aug-BERT (CLP + L-MLM)    42.5   83.4   93.5   88.8  79.9  93.1  80.20
  RNN*                        40.2   80.3   92.4   86.0  76.7  89.0  77.43
  w/synonym*                  40.5   80.2   92.8   86.4  76.6  87.9  77.40
  w/context*                  40.9   79.3   92.8   86.4  76.6  89.3  77.62
  w/context+label*            41.1   80.1   92.8   86.4  77.0  89.3  77.62
  w/BERT*                     41.3   81.4   93.5   87.3  78.3  89.8  78.60
  w/C-BERT*                   42.6   81.9   93.9   88.0  78.9  91.0  79.38
  w/Aug-BERT (L-MLM)          42.1   82.1   93.3   88.2  79.1  90.6  79.23
  w/Aug-BERT (CLP + L-MLM)    42.9   83.0   93.5   88.7  79.7  91.5  79.88

Acknowledgments. This research work has been funded by the National Natural Science Foundation of China (Grant No. 61772337, U1736207), and the National Key Research and Development Program of China NO. 2016QY03D0604.

References
1. Deschacht K, Moens MF (2009) Semi-supervised semantic role labeling using the latent words language model. In: Proceedings of the 2009 conference on empirical methods in natural language processing, vol 1. Association for Computational Linguistics, pp 21–29
2. Devlin J, Chang M-W, Lee K, Toutanova K (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805
3. Fadaee M, Bisazza A, Monz C (2017) Data augmentation for low-resource neural machine translation. arXiv preprint arXiv:1705.00440
4. Kim Y (2014) Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882
5. Kobayashi S (2018) Contextual augmentation: data augmentation by words with paradigmatic relations. arXiv preprint arXiv:1805.06201
6. Kolomiyets O, Bethard S, Moens M-F (2011) Model-portability experiments for textual temporal analysis. In: Proceedings of the 49th annual meeting of the Association for Computational Linguistics: Human Language Technologies: short papers, vol 2. Association for Computational Linguistics, pp 271–276
7. Li X, Roth D (2002) Learning question classifiers. In: Proceedings of the 19th international conference on computational linguistics, vol 1. Association for Computational Linguistics, pp 1–7
8. Miller GA (1995) WordNet: a lexical database for English. Commun ACM 38(11):39–41
9. Pang B, Lee L (2004) A sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts. In: Proceedings of the 42nd annual meeting on Association for Computational Linguistics. Association for Computational Linguistics, p 271
10. Pang B, Lee L (2005) Seeing stars: exploiting class relationships for sentiment categorization with respect to rating scales. In: Proceedings of the 43rd annual meeting on Association for Computational Linguistics. Association for Computational Linguistics, pp 115–124
11. Socher R, Perelygin A, Wu J, Chuang J, Manning CD, Ng A, Potts C (2013) Recursive deep models for semantic compositionality over a sentiment treebank. In: Proceedings of the 2013 conference on empirical methods in natural language processing, pp 1631–1642
12. Taylor WL (1953) Cloze procedure: a new tool for measuring readability. J Bull 30(4):415–433
13. Wang WY, Yang D (2015) That's so annoying!!!: a lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using #petpeeve tweets. In: Proceedings of the 2015 conference on empirical methods in natural language processing, pp 2557–2563
14. Wiebe J, Wilson T, Cardie C (2005) Annotating expressions of opinions and emotions in language. Lang Resour Eval 39(2–3):165–210
15. Wu X, Lv S, Zang L, Han J, Hu S (2019) Conditional BERT contextual augmentation. In: International conference on computational science. Springer, pp 84–95
16. Xie Z, Wang SI, Li J, Lévy D, Nie A, Jurafsky D, Ng AY (2017) Data noising as smoothing in neural network language models. arXiv preprint arXiv:1703.02573
17. Zhang X, Zhao J, LeCun Y (2015) Character-level convolutional networks for text classification. In: Advances in neural information processing systems, pp 649–657

Coverage Performance Analysis for Visible Light Communication Network

Juan Li and Xu Bao(B)

School of Computer and Communications Engineering, Jiangsu University, Zhenjiang, China
[email protected], [email protected]

Abstract. As a candidate wireless communication technology for future indoor scenarios, visible light communication (VLC) not only provides green illumination but can also transmit data at high speed without occupying licensed spectrum resources, so it can offload data traffic from existing radio-frequency (RF) networks. However, due to the rectilinear propagation of visible light, the VLC coverage cannot always satisfy the traffic demand. Several factors such as the transmitted power, the number of users, the transmit/receive angles, and the multiple access scheme affect the VLC coverage probability. In this paper, we utilize the user quality of experience (QoE) as the evaluation metric and propose a new coverage model, named the QoE probability coverage model, defined as the LED coverage area, projected on the ground, within which a user can achieve a satisfying QoE with a certain probability. We investigate how factors including the non-orthogonal multiple access (NOMA) scheme, the number of users and the user density affect the QoE probability coverage area. The results may guide the design of VLC network deployment, multiple access protocols and handover schemes.

Keywords: Visible light communication (VLC) · Non-orthogonal multiple access (NOMA) · Coverage probability · Quality of experience (QoE)

1 Introduction

Visible light communication (VLC) [1–3] is an optical wireless communication technology that utilizes high-frequency flicker pulses emitted by white light-emitting diodes (LEDs) during illumination to perform light-intensity modulation, thereby providing high-speed data transmission. Compared with traditional wireless communication technologies, VLC is green and healthy, produces no electromagnetic radiation and suffers no electromagnetic interference, so it can be used in places sensitive to electromagnetic interference such as hospitals. At the same time, VLC does not need to apply for wireless spectrum, and has


the advantages of high transmission rate, large bandwidth, and good security. Due to the rectilinear propagation of the VLC signal, achieving full indoor coverage is problematic. The factors affecting coverage include the multiple access method, the user density and the number of users. In [4], the coverage probability of a typical user is defined as the probability that its instantaneous signal-to-interference-plus-noise ratio (SINR) exceeds the target SINR threshold. For a typical user, the received interference and signal power depend strongly on the receiver's FOV; for a VLC-only network, the coverage probability of a typical user depends on the probability that at least one optical base station (OBS) lies within the user's FOV. Vavoulas et al. [5] examine the impact of some key factors on VLC coverage, which contribute to a network deployment with a reliable degree of coverage; their analysis also considers the impact of different modulation methods. In [6], two techniques are applied in a MIMO-VLC system for a comprehensive investigation of both the illumination and the communication coverage, in order to improve the communication coverage performance. In this paper, we study a new coverage model, which utilizes the user quality of experience (QoE) as the evaluation metric, named the QoE probability coverage model. It is defined as the LED coverage area, projected on the ground, within which a user can achieve a satisfying QoE with a certain probability. We discuss the impact of factors such as user density and number of users on this coverage model when the non-orthogonal multiple access (NOMA) scheme is used. The results provide guidance for the network deployment, multiple access protocols and handover schemes of future VLC systems.

2 System Model

In a typical indoor environment the strongest diffuse component is at least 7 dB lower than the weakest line-of-sight (LOS) component [7], so this paper only considers the LOS channel. We consider a scenario with a single LED in an indoor environment, as shown in Fig. 1, and assume that there are N users in the coverage of the cell. For the signal received at the position of the i-th user, the angle of irradiance and the angle of incidence are denoted by \varphi_i and \psi_i, respectively. The semi-angle at half illuminance is \varphi_{1/2}, the FOV angle is \psi_{FOV}, the vertical distance from the LED to the horizontal receiving plane is L, and the Euclidean distance between the LED and the i-th user is d_i. For the VLC channel model, the LED is assumed to follow the Lambertian radiation pattern, whose order is m = -1/\log_2(\cos(\varphi_{1/2})). In the LOS link, the direct-current (DC) channel gain between the LED and the i-th user is written as [8]

h_i = \begin{cases} \dfrac{A(m+1)}{2\pi d_i^2}\, \cos^m(\varphi_i)\, T_s(\psi_i)\, g(\psi_i)\, \cos(\psi_i), & 0 < \psi_i \le \psi_{FOV} \\ 0, & \psi_i > \psi_{FOV} \end{cases}   (1)
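As an illustration of Eq. (1), the sketch below evaluates the LOS DC channel gain for a user at horizontal distance r from the cell centre, using the parameter values listed later in Table 1; the horizontal distance itself is an arbitrary example.

```python
import numpy as np

def los_channel_gain(r, L=2.5, A=1e-4, phi_half_deg=60.0, psi_fov_deg=60.0,
                     Ts=1.0, n_ref=1.5):
    """DC channel gain of Eq. (1) for a user at horizontal distance r [m]."""
    m = -1.0 / np.log2(np.cos(np.radians(phi_half_deg)))   # Lambertian order
    d = np.sqrt(r**2 + L**2)
    cos_angle = L / d                      # cos(phi_i) = cos(psi_i) = L/d
    psi = np.degrees(np.arccos(cos_angle))
    if psi > psi_fov_deg:
        return 0.0
    g = n_ref**2 / np.sin(np.radians(psi_fov_deg))**2      # concentrator gain
    return (A * (m + 1) / (2 * np.pi * d**2)) * cos_angle**m * Ts * g * cos_angle

print(los_channel_gain(r=1.5))
```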


Fig. 1. System model.

where the Euclidean distance is d_i = \sqrt{r_i^2 + L^2}, T_s(\psi_i) is the gain of the optical filter, and g(\psi_i) = n^2/\sin^2(\psi_{FOV}) is the gain of the optical concentrator [8], with n the refractive index. Without loss of generality, assume that the users are ordered by their channel qualities, |h_1|^2 \le |h_2|^2 \le \cdots \le |h_i|^2 \le \cdots \le |h_N|^2, i.e., the i-th user always has the i-th weakest instantaneous channel. According to the power-domain NOMA principle, the superposed signal transmitted by the LED is given by [9]

x = \sum_{i=1}^{N} \sqrt{P_T}\, a_i s_i + I_{DC},   (2)

where the total transmitted power is P_T = \sum_{i=1}^{N} P_i, \ \sum_{i=1}^{N} a_i^2 = 1, and P_i and a_i are the transmitted power and power allocation factor of the i-th user, respectively. I_{DC} is the DC bias added to the LED to ensure a positive instantaneous intensity. After removing the DC term, the signal received by the i-th user is formed by the contributions of all signals transmitted from the LED and can be written as [9]

y_i = \sqrt{P_T}\, h_i \Big( \underbrace{\textstyle\sum_{j=1}^{i-1} a_j s_j}_{SIC} + \underbrace{a_i s_i}_{SIGNAL} + \underbrace{\textstyle\sum_{l=i+1}^{N} a_l s_l}_{INTERFERENCE} \Big) + z_i,   (3)

where zi denotes the real-valued Gaussian noise with zero mean and variance σi2 = N0 B. SIC is carried out at the i-th user to remove the message signal for the other users with poorer channel conditions [the SIC term in Eq. (3)].


SIGNAL is the message signal intended for the i-th user [the SIGNAL term in Eq. (3)], while INTERFERENCE is the interference caused to the i-th user by the signals of users with better channel conditions [the INTERFERENCE term in Eq. (3)]. According to Eq. (3), the achievable data rate of the i-th user is given by

R_i = \begin{cases} \log_2\!\Big(1 + \dfrac{(h_i a_i)^2}{\sum_{j=i+1}^{N} (h_i a_j)^2 + 1/\rho}\Big), & i = 1, \ldots, N-1 \\ \log_2\!\big(1 + \rho\,(h_i a_i)^2\big), & i = N, \end{cases}   (4)

where \rho = P/(N_0 B) represents the transmit signal-to-noise ratio (SNR).

3 QoE Probability Coverage Model

In this section, we use power-domain NOMA as the multiple access method to analyze the coverage performance. We introduce the quality of experience (QoE) as the evaluation metric; QoE ties user perception, experience and expectations to application and network performance, typically expressed through quality-of-service parameters. A widely used quantification of QoE is the Mean Opinion Score (MOS) [10], which can be determined from subjective ratings by real users or predicted from objective measurements of properties of the delivered content such as audio, video or files. The MOS takes five values from 1 to 5, indicating the user's satisfaction degree: Bad, Poor, Fair, Good and Excellent, respectively [10]. For an information-transfer service, user satisfaction depends solely on the provided data rate. Assuming each user has a given rate expectation corresponding to the best user satisfaction, the model in [11] gives the following logarithmic relationship between MOS and throughput:

MOS = b_1 \log_{10}(b_2 R),   (5)

where R is the data rate of the service and the parameters b_1 and b_2 are obtained from the upper and lower rate expectations of the service. The QoE probability coverage model is defined as the LED coverage area, projected on the ground, within which a user can achieve a satisfying QoE with a certain probability. The coverage probability of the i-th user can be expressed as

P_{cov} = \Pr[QoE_i \ge Q_t],   (6)

where QoE_i represents the QoE satisfaction level of the i-th user, i.e. the MOS in Eq. (5), \Pr[\cdot] denotes the probability of an event, and Q_t is the QoE threshold that a user needs to meet within the coverage area. Section 2 gave the channel gain expression in Eq. (1). Substituting \cos(\varphi_i) = \cos(\psi_i) = L/\sqrt{r_i^2 + L^2} and d_i = \sqrt{r_i^2 + L^2}, the channel gain can be rewritten as

h_i = \frac{A}{(r_i^2 + L^2)^{(m+3)/2}} \cdot \frac{m+1}{2\pi}\, T_s(\psi_i)\, g(\psi_i)\, L^{m+1}.   (7)


To simplify the calculation, we define

\Gamma = \frac{A(m+1)\, T_s(\psi_i)\, g(\psi_i)\, L^{m+1}}{2\pi},   (8)

so that the relationship between the coverage probability and the radius r_i becomes

P_{cov} = \Pr\!\left[ r_i \le \left( \left( \frac{\xi}{\Gamma^2 \rho \big( a_i^2 - \xi \sum_{j=i+1}^{N} a_j^2 \big)} \right)^{\!-\frac{1}{m+3}} - L^2 \right)^{\!\frac{1}{2}} \right],   (9)

where \xi = 2^{\,10^{Q_t/b_1}/(B\, b_2)} - 1 is the SINR value required to reach the QoE threshold. We assume that the radius of the coverage circle is the maximum coverage radius r_{max} and that users are uniformly distributed in the coverage area, with r_i the distance between the i-th user and the coverage centre. The uniform-distribution probability density function (PDF) of the i-th user is

f(r_i) = \frac{2 r_i}{r_{max}^2}.   (10)

Using Eq. (10) and evaluating Eq. (9) by standard probability arguments, the coverage probability of the system can be written as

P_{cov} = \int_{0}^{\zeta} \frac{2 r_i}{r_{max}^2}\, dr_i = \frac{\zeta^2}{r_{max}^2},   (11)

where \zeta is a shorthand for the right-hand side of the condition in Eq. (9):

\zeta = \left( \left( \frac{\xi}{\Gamma^2 \rho \big( a_i^2 - \xi \sum_{j=i+1}^{N} a_j^2 \big)} \right)^{\!-\frac{1}{m+3}} - L^2 \right)^{\!\frac{1}{2}}.   (12)
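The sketch below evaluates Eqs. (9)–(12) numerically, using the Table 1 parameters and the fixed power allocation a_i^2 = 2(N+1-i)/(N(N+1)) described in Sect. 4. The MOS parameters b_1 and b_2 are not given numerically in the paper, so the values used here (and the example call) are hypothetical placeholders.

```python
import numpy as np

# Table 1 parameters
L, A, Ts, n_ref = 2.5, 1e-4, 1.0, 1.5
phi_half = psi_fov = np.radians(60.0)
P, B, N0 = 0.5, 10e6, 1e-21

m = -1.0 / np.log2(np.cos(phi_half))                        # Lambertian order
g = n_ref**2 / np.sin(psi_fov)**2                           # concentrator gain
Gamma = A * (m + 1) * Ts * g * L**(m + 1) / (2 * np.pi)     # Eq. (8)
rho = P / (N0 * B)                                          # transmit SNR

def coverage_probability(Qt, r_max, N, i, b1=2.2, b2=1e-6):  # b1, b2: placeholders
    """P_cov of Eqs. (9)-(12) for the i-th user (i = 1 is the weakest)."""
    a2 = np.array([2.0 * (N + 1 - k) / (N * (N + 1)) for k in range(1, N + 1)])
    xi = 2.0 ** (10.0 ** (Qt / b1) / (B * b2)) - 1.0         # required SINR
    denom = Gamma**2 * rho * (a2[i - 1] - xi * a2[i:].sum())
    if denom <= 0:
        return 0.0                                           # QoE unreachable
    zeta_sq = (xi / denom) ** (-1.0 / (m + 3)) - L**2        # zeta^2, Eq. (12)
    if zeta_sq <= 0:
        return 0.0
    return min(zeta_sq / r_max**2, 1.0)                      # Eq. (11)

print(coverage_probability(Qt=4.0, r_max=4.0, N=2, i=2))
```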

4 Simulation Results

In this section, we evaluate the performance of the proposed QoE probability coverage model by performing Monte Carlo simulations for different parameter configurations. The simulation parameters are listed in Table 1. We consider a fixed power allocation (FPA) strategy for NOMA, in which the power allocation factor of the i-th user is a_i^2 = 2(N+1-i)/(N(N+1)), where N is the total number of users in the coverage area. Under this scenario, the users within the coverage area are modeled as a homogeneous Poisson point process with density parameter \lambda, and the number of users N is related to the maximum coverage radius r_{max} and \lambda by N = \lambda \pi r_{max}^2.

Table 1. Simulation parameters
  Parameter name, notation                     Value
  Vertical separation between LED and PDs, L   2.5 m
  Total signal power, P                        0.5 W
  Refractive index, n                          1.5
  Optical filter gain, T_s                     1
  Poisson distribution parameter, λ            0.2
  LED semi-angle at half power, φ_1/2          60°
  PD FOV, ψ_FOV                                60°
  PD detection area, A                         1 cm²
  Signal bandwidth, B                          10 MHz
  Noise PSD, N_0                               10⁻²¹ A²/Hz
  Area of room                                 25 m²

We discuss the impact of different maximum coverage radii and different user densities on the performance of the QoE probability coverage model. As can be seen from Fig. 2, the analytical results are consistent with the simulation results. An increase of the coverage radius means that the coverage area expands, and for a given user density, a larger coverage area contains more users. Since the power-domain NOMA scheme divides power among the served users, the coverage probability decreases as the coverage radius increases when the QoE threshold and the total power are fixed. In Fig. 3, the coverage probability is calculated for different maximum coverage radii and numbers of users. With a fixed number of users, an increase of the maximum coverage radius leads to a decrease of the coverage probability; because the user locations are random, the NOMA scheme allocates more power to users that are farther away and have lower channel gains. We also see that, in the model we set up, only 6 users can access the network. In Fig. 4, the effect of the QoE threshold and the coverage probability on the QoE probability coverage model is studied. The QoE threshold has a large impact on the coverage: for the same coverage probability, when Q_t is reduced from 4.0 to 3.5, the maximum coverage radius changes by 1 m, and when Q_t is reduced from 3.5 to 3.0, r_max again changes by 1 m. Therefore, when the system needs to accommodate more users, the threshold can be moderately lowered. At the same time, the coverage probability and the maximum coverage radius show an approximately linear relationship over part of the range; this can be used in future work to predict users' access to the system and thus reduce the delay and poor experience caused by invalid access attempts.



Fig. 2. QoE coverage probability for different maximum coverage radius (Qt = 4, λ = 0.2).


Fig. 3. QoE coverage probability versus the number of users for different maximum coverage radii (Qt = 4).


Fig. 4. QoE coverage probability (left vertical axis) and QoE threshold (right vertical axis) for different maximum coverage radius (λ = 0.2).

5 Conclusion

In this paper, we mainly discuss the coverage performance of an indoor VLC system. Using QoE as the evaluation metric, we propose a QoE coverage probability model and study how factors including the NOMA scheme, the number of users, and the user density affect the QoE coverage area. Our results can provide insight for research on multiple access and handover algorithms, helping to enhance network efficiency and reduce handover delay.

References
1. Komine T, Nakagawa M (2004) Fundamental analysis for visible-light communication system using LED lights. IEEE Trans Consum Electron 50(1):100–107
2. Grubor J, Randel S, Langer KD, Walewski JW (2008) Broadband information broadcasting using LED-based interior lighting. J Lightw Technol 26(24):3883–3892
3. Haas H, Yin L, Wang Y, Chen C (2016) What is LiFi? J Lightw Technol 34(6):1533–1544
4. Tabassum H, Hossain E (2018) Coverage and rate analysis for co-existing RF/VLC downlink cellular networks. IEEE Trans Wirel Commun 17(4):2588–2601
5. Vavoulas A et al (2015) Coverage aspects of indoor VLC networks. J Lightw Technol 33(23):4915–4921
6. Chen C et al (2017) On the coverage of multiple-input multiple-output visible light communications (invited). J Opt Commun Netw 9(9):D31–D41
7. Zeng L, O'Brien D, Minh H, Faulkner G, Lee K, Jung D, Oh YJ, Won ET (2009) High data rate multiple input multiple output (MIMO) optical wireless communications using white LED lighting. IEEE J Sel Areas Commun 27(9):1654–1662
8. Kahn J, Barry J (1997) Wireless infrared communications. Proc IEEE 85(2):265–298
9. Yin L et al (2016) Performance evaluation of non-orthogonal multiple access in visible light communication. IEEE Trans Commun 64(12):5162–5175
10. Anandkumar A, Michael N, Tang AK, Swami A (2011) Distributed algorithms for learning and cognitive medium access with logarithmic regret. IEEE J Sel Areas Commun 29:731–745
11. Courcoubetis CA, Dimakis A, Reiman MI (2001) Providing bandwidth guarantees over a best-effort network: call-admission and pricing. In: Twentieth joint conference of the IEEE computer and communications societies (INFOCOM), proceedings, vol 1, pp 459–467

An Intelligent Garbage Bin Based on NB-IoT

Yazhou Guo1, Ming Li1, Kai Mao2, Zhuoan Ma1, and Yin Lu3,4(&)

1 Bell Honors School, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
2 School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
3 Jiangsu Key Laboratory of Wireless Communications, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
[email protected]
4 Engineering Research Center of Health Service System Based on Ubiquitous Wireless Networks, Nanjing University of Posts and Telecommunications, Ministry of Education, Nanjing, China

Abstract. Most garbage bins in public places are traditionally fixed and are limited by their number, capacity, untimely cleaning, uneven placement and other factors, which makes them inconvenient to use [1]. This paper designs an intelligent garbage bin. Based on NB-IoT communication technology, it realizes self-moving garbage collection, self-detection of the garbage level in the bin, temperature and humidity monitoring, and other functions. The design is divided into three parts: an intelligent robot module, a sensor circuit module and an NB-IoT communication module. The intelligent robot module is mainly responsible for the movement and obstacle avoidance of the intelligent garbage bin and realizes traversal and surround behavior in the configured environment [2]. The sensor module is the core of the hardware system and realizes obstacle-avoidance evaluation, capacity monitoring, heat-source induction and other functions. The NB-IoT module connects to the platform to feed back the bin contents, temperature, humidity, battery level and other information in a timely manner. This design combines convenience and practicability and can improve the sanitary conditions of public places. Keywords: Intelligent garbage bin · NB-IoT · Single chip

1 Introduction

As an indispensable commodity in daily life, the garbage bin plays an important role in improving people's quality of life, environmental quality and even the national image. In household life, the traditional fixed garbage bin can no longer meet people's needs for intelligent and humanized living appliances. Garbage cleaning in public places is carried out by full-time cleaners [3]. Because of the limited number of garbage bins, limited capacity, untimely cleaning and uneven placement, people often fail to use the existing bins.

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 2208–2216, 2020 https://doi.org/10.1007/978-981-13-9409-6_268


The failure of the traditional garbage bin to fulfil its function not only seriously damages the environmental sanitation of public places, but also has a negative impact on people's physical and mental health [3]. These accumulated disadvantages have given rise to a new type of garbage bin. This paper designs an intelligent garbage bin based on NB-IoT, which uses an STM32F103RET6 as the processing core and integrates infrared and ultrasonic sensors to realize self-movement and garbage collection, self-detection of the garbage level in the bin, temperature and humidity monitoring, and NB-IoT communication, effectively avoiding the phenomenon of unused bins [4]. The structure of this paper is as follows. Section 2 gives a general overview of the system design. Section 3 describes the hardware design. Section 4 describes the main structure of the software. Section 5 concludes the paper.

2 System Overall Design

The intelligent garbage bin adopts a modular design and is mainly divided into three modules: a control module, a drive module and a detection module. The hardware of the three modules is designed separately. The software of the robot is written in C, integrating the software and hardware of the whole intelligent garbage bin (Fig. 1).

(Block diagram: the STM32F103RET6 master control unit connects the power module, temperature and humidity detection module, infrared induction module, motor drive module, NB-IoT communication module, ultrasonic volume detection, intelligent mobile system, display module and GPS module.)

Fig. 1. System overall design block diagram

The main control module takes the STM32F103RET6 as the processing core and integrates the infrared induction module, temperature and humidity detection module, motor drive module, NB-IoT communication module and other modules. In the application scenario, the intelligent garbage bin can traverse the working environment and move intelligently. The infrared sensor is used to open and close the garbage dumping port automatically. The ultrasonic capacity monitoring module and the temperature and humidity detection module monitor the state of the garbage bin in real time [5]. The information is exchanged with the back-end manager through the NB-IoT communication module and displayed by the LED display module (Fig. 2).

(Diagram: each garbage bin's main controller communicates through an NB-IoT module and NB-IoT base station with the core network, cloud platform and intelligent terminal.)

Fig. 2. Overall framework of NB-IoT terminal module

3 Hardware Design

3.1 Infrared Sensor

The infrared induction module is composed of an infrared sensor, an LED signal lamp, a capacitor, a transistor and other components. When someone approaches the garbage bin, the output level of the infrared module goes to 0 and charges the capacitor; once the voltage across the capacitor rises to the turn-on voltage of the transistor, the transistor conducts, the LED signal lamp on the lid of the barrel lights up, and the garbage inlet pops open [6]. When no one is in front of the garbage bin, the infrared sensor receives no reflected signal, the LED does not light up, and the garbage disposal port remains closed, which is the normal state. When someone merely walks past the bin (an interference signal), the delay introduced by the capacitor prevents the signal received by the infrared module from triggering the switch, so the circuit rejects such interference.

3.2 Ultrasonic Sensor

The transmitter of the sensor system consists of two parts: an oscillator and a power amplifier. It outputs a high-voltage pulse train of a certain width, which the HC-SR04 module converts into acoustic energy. The receiver of the sensor system adopts a UCM40R, which converts the ultrasonic modulated pulses into an alternating voltage signal; after two stages of operational amplification, the signal is fed to the LM567 tone-decoder IC with a phase-locked loop. The IO port TRIG is used to trigger the ranging module, which sends eight 40 kHz square-wave pulses and then detects whether a signal returns. If a signal returns, the IO port ECHO outputs a high level, and the duration of the high level is the time from transmission to reception of the ultrasound [7]. The distance is then calculated with the following formula.

$$d = \frac{1}{2}\,(t \times v) \qquad (1)$$

In this formula, d denotes the distance, t the high-level time, and v the speed of sound (340 m/s); the measurement period is kept above 60 ms to prevent the transmitted signal from interfering with the echo signal [7].
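For illustration only, the following C sketch converts an echo high-time into a distance according to Eq. (1); the echo time value and the function name are hypothetical, and on the real hardware t would come from timing the ECHO pulse.

#include <stdio.h>

/* Distance from the echo pulse, Eq. (1): d = (t * v) / 2, where t is the
 * high-level (echo) time and v = 340 m/s is the speed of sound. */
static double echo_to_distance_m(double echo_time_us)
{
    const double v = 340.0;                 /* sound speed, m/s */
    double t = echo_time_us * 1e-6;         /* microseconds to seconds */
    return 0.5 * t * v;                     /* halve: out-and-back path */
}

int main(void)
{
    double d = echo_to_distance_m(1470.0);  /* ~1470 us -> about 0.25 m */
    printf("distance = %.3f m\n", d);
    /* In the actual system a new measurement is triggered no more often
     * than every 60 ms so the previous echo cannot disturb the next one. */
    return 0;
}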

3.3 Intelligent Mobile and Obstacle Avoidance

The intelligent garbage bin is driven by four wheels; the single-chip microcomputer outputs two PWM waves with adjustable duty cycles, and forward, backward and turning motions are realized by differentially adjusting the two driving wheels. A photoelectric encoder disk is mounted on each of the left and right motors to accurately measure wheel speed and travel distance and to precisely control wheel speed and position. The measured values are returned to the microcontroller and fed into a digital PI regulator, which adjusts the PWM duty cycle output by the microcontroller and hence the motor speed, forming a speed-measurement negative-feedback system (Fig. 3).

Fig. 3. Concept map of intelligent dumpster
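The following C sketch illustrates one possible form of the digital PI speed regulator described above for a differential drive; the gains, targets and encoder readings are invented placeholder values, not parameters from the actual system.

#include <stdio.h>

/* Hypothetical PI speed regulator for one drive wheel: the encoder
 * (photoelectric code disk) supplies the measured speed, and the output is
 * the PWM duty cycle sent to the motor driver. */
typedef struct {
    double kp, ki;      /* proportional and integral gains */
    double integral;    /* accumulated error */
    double duty;        /* current PWM duty cycle, 0..1 */
} pi_ctrl_t;

static void pi_update(pi_ctrl_t *c, double target_speed, double measured_speed)
{
    double err = target_speed - measured_speed;
    c->integral += err;
    c->duty = c->kp * err + c->ki * c->integral;   /* position-form PI */
    if (c->duty > 1.0) c->duty = 1.0;              /* clamp duty cycle */
    if (c->duty < 0.0) c->duty = 0.0;
}

int main(void)
{
    /* Two independent controllers: steering comes from the speed difference
     * between the left and right wheels (differential drive). */
    pi_ctrl_t left  = {0.02, 0.001, 0.0, 0.0};
    pi_ctrl_t right = {0.02, 0.001, 0.0, 0.0};

    for (int step = 0; step < 5; ++step) {
        double meas_l = 40.0 + step;        /* placeholder encoder readings */
        double meas_r = 42.0 + step;
        pi_update(&left,  50.0, meas_l);    /* same target -> go straight */
        pi_update(&right, 50.0, meas_r);
        printf("step %d: duty L=%.3f R=%.3f\n", step, left.duty, right.duty);
    }
    return 0;
}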

3.4 Motor Drive

The motor driver is an L298N, a high-voltage, high-current full-bridge driver controlled by standard logic-level signals. It uses a dual H-bridge motor driver chip: each H-bridge supplies 2 A, the power stage is supplied with 2.5–48 V, and the logic stage is supplied with 5 V and accepts 5 V TTL levels [8].

3.5 Temperature and Humidity Sensor

The temperature and humidity sensing part adopts a DHT11 digital temperature and humidity sensor with calibrated digital signal output. It contains a resistive humidity sensor and an NTC temperature sensor connected to a high-performance 8-bit microcontroller, and it provides a single-wire serial interface in a four-pin, single-row package.

4 Software Design

4.1 Software Design of Main Controller

The core of the main controller is an STM32F103RET6 single-chip microcomputer. Its software mainly comprises the main program, a timer-interrupt service routine and a serial-port receive-interrupt service routine (Fig. 4). The main program includes initialization of the microcontroller and sensors, initialization of the NB-IoT M5310-A communication module, and the definition of the sensor measurement mode and measurement cycle as well as the NB-IoT frequency band and working mode. The main program also includes the obstacle-avoidance and alarm routines, which judge the distance to obstacles and adjust speed and direction in time [9], monitor the capacity of the garbage bin in real time, and upload alarm information to the cloud platform. The timer-interrupt service routine collects the data measured by each sensor and, each time a set time constant elapses, uploads the data packet to the OneNET platform through the MQTT protocol. The serial-port receive-interrupt service routine is mainly used for issuing commands.
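A simplified C sketch of the timer-interrupt logic is shown below; all sensor and NB-IoT driver functions are hypothetical stubs, and the payload format is only an assumed example of the data packet uploaded to OneNET over MQTT.

#include <stdio.h>

/* Hypothetical stubs standing in for the real drivers. */
static double dht11_read_temperature(void)  { return 23.5; }
static double dht11_read_humidity(void)     { return 41.0; }
static double ultrasonic_read_fill_cm(void) { return 12.0; }
static void   nbiot_publish(const char *payload) { printf("MQTT publish: %s\n", payload); }

static volatile unsigned tick_count = 0;
#define UPLOAD_PERIOD_TICKS 600   /* illustrative timing constant */

/* Called from the periodic timer interrupt on the STM32: every time the
 * timing constant elapses, collect the sensor values and hand them to the
 * NB-IoT module for upload. */
void timer_isr(void)
{
    if (++tick_count < UPLOAD_PERIOD_TICKS)
        return;
    tick_count = 0;

    char payload[96];
    snprintf(payload, sizeof payload,
             "{\"temp\":%.1f,\"humi\":%.1f,\"fill_cm\":%.1f}",
             dht11_read_temperature(), dht11_read_humidity(),
             ultrasonic_read_fill_cm());
    nbiot_publish(payload);       /* data packet to OneNET via MQTT */
}

int main(void)
{
    for (int i = 0; i < 2 * UPLOAD_PERIOD_TICKS; ++i)
        timer_isr();              /* simulate two upload periods */
    return 0;
}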

4.2 NB-IoT Module Workflow

The NB-IoT M5310-A module has three normal working modes: Active, Idle and PSM. When the system communicates with the OneNET platform, it uses eDRX (extended discontinuous reception) to prolong the DRX cycle and extend the receiving interval to 10.24 s, saving power and suiting smart-home applications with long connections [10]. In Idle mode, the module checks a paging channel periodically to detect whether upstream packets need to be uploaded. The main controller wakes the NB-IoT module by sending AT commands so that data can be uploaded to the OneNET platform. The workflow of the NB-IoT module is shown in Fig. 5.

4.3 ONENET Platform

This system uses the OneNET IoT platform as the cloud platform. The NB-IoT module uploads the received data packets and alarm information to the cloud platform. After integration on the cloud platform, the system displays the data streams on the Web side in time order and can push alarm information to the staff for processing [11, 12].


(Flow: initialize the master control unit and the NB-IoT M5310-A; when the timing constant elapses, read the measurement data and send it to the NB-IoT module; if the obstacle distance is below the alarm threshold, commutate the motor; if the remaining capacity is below the alarm threshold, send an alert command to the NB-IoT module; then reset the timing constant.)

Fig. 4. Main program flow chart

Before that, we need to add products on the OneNET platform, configure the product devices according to the protocol implemented in the NB-IoT communication module, obtain the corresponding authentication information of the communication device (such as IMSI and IMEI) through AT commands, and conduct online testing on the Web side (Fig. 6). After the test is completed, the type of data stream displayed on the Web side can be chosen. The data flows are shown in Figs. 7 and 8.
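The sketch below illustrates, in C, the AT configuration sequence of Fig. 6 (reading the garbled "AT+CGAIT?" as the standard attach query AT+CGATT?); the UART helper functions are hypothetical stand-ins for the real serial driver that talks to the M5310-A module.

#include <stdio.h>

static void uart_send_line(const char *cmd)   { printf(">> %s\n", cmd); }
static int  uart_expect_ok(void)              { return 1; /* pretend "OK" arrived */ }

static int configure_nbiot(void)
{
    /* Commands from Fig. 6: module check, IMSI query, signal quality,
     * network registration and attach status, then the data service. */
    const char *seq[] = { "AT", "AT+CIMI", "AT+CSQ", "AT+CEREG?", "AT+CGATT?" };
    for (size_t i = 0; i < sizeof seq / sizeof seq[0]; ++i) {
        uart_send_line(seq[i]);
        if (!uart_expect_ok())
            return -1;            /* ERROR: restart the sequence */
    }
    return 0;                     /* ready to start the data service */
}

int main(void)
{
    return configure_nbiot() == 0 ? 0 : 1;
}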

(Flow: initialize the NB-IoT M5310-A; when an upload service is required, establish an MQTT (MQ Telemetry Transport) connection.)

Fig. 5. NB-IoT module workflow diagram

(AT command sequence: AT → AT+CIMI → AT+CSQ → AT+CEREG? → AT+CGATT?, then the data service; an ERROR response returns to the start.)

Fig. 6. AT instruction configuration and cloud platform process

5 Conclusion

The NB-IoT-based intelligent garbage bin designed in this work is built on sensor modules and new Internet of Things technology. It has a reasonable structural design, clear innovation, high integration and low power consumption, it has strong practical value and market prospects, and it can have a positive effect on the sanitary conditions of home life and the public environment [13]. At present, the research work only validates the feasibility of the design scheme through simple examples; further research and development oriented to market users is still needed to provide a more humanized and practical application experience.


Fig. 7. Temperature detection data flow

Fig. 8. Capacity monitoring data flow

"Promoting garbage classification and promoting green development" has become a new theme of the times. Therefore, we will continue to develop a garbage-classification function for the bin: an embedded platform for image recognition and processing will be built inside the bin to automatically identify, through machine-learning algorithms, the type of garbage deposited, and a spherical robot arm will sort it into the appropriate area of the bin.

Acknowledgements. This work is supported by the provincial key (national) project of Nanjing University of Posts and Telecommunications Innovation Training Program in 2018 (No. SZDG2018039) and the 4th National University Internet of Things Technology and Application "Sanchuang" Competition in 2018 (No. 18A147).

References
1. Zhou Q, Guan F, Lin L (2016) Design of a self-turning and compressible multi-functional intelligent garbage bin. Machinery 43(5):51–54
2. Xin Z, Lu H, Hu L (2011) Design of intelligent garbage bin system based on Internet of Things. Instrum Users 18(6):37–39
3. Zhu S, Cui Z, Shuai M (2015) Design and implementation of garbage bin intelligent management system based on Internet of Things technology. Internet Things Technol 5(12):53–55
4. Lian X, Zhou D, Cheng K (2010) Design and implementation of air conditioning remote control system based on NB-IoT. Meas Control Technol 37(315(05)):58–62
5. Li M, Wang C (2017) Green automatic intelligent sorting bin. Sci Technol Inf (16)
6. Xiong J, Lu W, Ji X (2016) Design of an intelligent classification garbage bin system. Dev Innov Mech Electr Prod 29(5):27–29
7. Wang P (2011) Intelligent infrared automatic garbage bin design. J Chengde Petrol College 13(3):40–43
8. Cui L, Lin J, Fu J, Liu J (2019) A combined localization algorithm for NB-IoT system in NLOS environment. Mod Electron Technol (11)
9. Salucci M, Anselmi N, Goudos S et al (2019) Fast design of multiband fractal antennas through a system-by-design approach for NB-IoT applications. EURASIP J Wireless Commun Netw 2019(1)
10. Lin YB, Tseng HC, Lin YW et al (2018) NB-IoT talk: a service platform for fast development of NB-IoT applications. IEEE Internet Things J, 1–1
11. Soussi ME, Zand P, Pasveer F et al (2017) Evaluating the performance of eMTC and NB-IoT for smart city applications
12. Zhou Y, Zhang Q, Ou J et al (2018) The design of passive NB-IoT system based on wireless power transmission technology. IOP Conf Ser Mater Sci Eng 452:042029
13. Beyene YD, Jantti R, Tirkkonen O et al (2017) NB-IoT technology overview and experience from cloud-RAN implementation. IEEE Wirel Commun 24(3):26–32

Research on X-Ray Digital Image Defect Detection of Wire Crimp

Yanwei Wang1(&) and Jiaping Chen2

1 Harbin Institute of Petroleum, Harbin 150027, China
[email protected]
2 College of Engineering, Heilongjiang Bayi Agriculture University, Daqing 163319, China

Abstract. In order to build a safe and stable power station, this paper provides a defect identification method to detect the quality of crimping. We detect the difference between the insertion positions of the steel core and the aluminum wire in the strain clamp and measure the length and other characteristic parameters. This is meaningful for evaluating crimp quality and supports a qualitative analysis of the quality of wire crimping. Keywords: Wire crimping · X-ray image · Defect detection

1 Background

Wire crimping is widely used in power systems. Previously, crimping was performed by compression and hydraulic pressing. Since the wire carries both mechanical tension and current, it is inconvenient to disassemble a fitting after installation [1]. In recent years, many accidents in China have been caused by crimped wires: when the line is under heavy load, the wires are locally heated and damaged, especially in the case of ice coating. The quality of the crimping is therefore related to the safe and stable operation of the power system [2]. To inspect the crimped wire without disassembling it, an X-ray wire-crimp detection device is used: a mobile X-ray device projects an image onto an imaging plate. Because the tested wire, the base material and the fitting structure attenuate the rays differently, the X-rays transmitted through the inspected device are absorbed to different extents, so the X-ray image reflects the crimping quality and can be used to determine whether the crimping is qualified. A database of X-ray digital imaging sample images of crimp quality is constructed, and in the detected image, by means of machine learning and by comparing the detected image with images of defect-free samples, the crimp quality is evaluated and a qualitative analysis of the defect is given [3, 4].

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 2217–2222, 2020 https://doi.org/10.1007/978-981-13-9409-6_269


2 Measurement of Typical Characteristic Parameters

The indentation depth of the wire is measured as the distance between the measured points A and B: the coordinates of the two points are obtained and the depth is expressed by formula (1):

$$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2} \qquad (1)$$

Here d is the distance in relative pixel units; the actual distance is D = d · k, where k is the measurement ratio constant. To measure the press-in depth, the operator clicks the first button, clicks the starting point of the distance to be measured on the image (recorded as point A), then drags the mouse to the end point and clicks again (recorded as point B). The press-in depth of the wire is then measured, and the corresponding information is displayed in the measurement information dialog box, as shown in Fig. 1.

Fig. 1. Schematic diagram of depth measurement of wire press-in
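A minimal C sketch of this pixel-to-physical conversion is given below; the coordinates and the calibration constant k are invented example values.

#include <math.h>
#include <stdio.h>

/* Indentation depth from two picked points, following Eq. (1):
 * d = sqrt((x1-x2)^2 + (y1-y2)^2) in pixels, and the physical depth
 * D = d * k, where k is the measurement ratio constant from calibration. */
static double pixel_distance(double x1, double y1, double x2, double y2)
{
    return sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2));
}

int main(void)
{
    double k = 0.05;                                         /* mm per pixel, assumed */
    double d = pixel_distance(120.0, 86.0, 120.0, 131.0);    /* points A and B */
    printf("d = %.1f px, D = %.2f mm\n", d, d * k);
    return 0;
}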

2.1 Tension Clamp Detection

Detection of the tension clamp with a vernier caliper cannot be carried out completely reliably because of factors such as operator fatigue, which can lead to accidents, whereas X-ray inspection can minimize such damage. The tension clamp and the wire are made of aluminum, and when the crimp quality is inspected by X-ray, the different positions of the tension clamp can be examined.

Fig. 2. X-ray image of tensile clamp

X-ray imaging is a non-contact internal detection method. Figure 2 shows the X-ray image of the tension clamp, and Fig. 3 shows the depth detection value of the tension clamp.


Fig. 3. Tensile clamp length calculated by X-ray inspection

When the steel strand is completely inserted into the steel pipe, the X-ray penetration thickness is the sum of the thicknesses of the steel core, the aluminum wire and the steel wire. When the steel strand is not completely inserted into the steel pipe, the penetration thickness is only that of the aluminum wire plus the steel wire, and the transmitted ray intensities in the two cases are compared.

2.2 Detection of Steel Core in the Connecting Pipe

For the steel core inserted into steel anchors, both ends of the wire are crimped to the same size in the range 0–50 mm; for steel anchor sizes of 60–100 mm, the steel core is inserted into the steel anchor at one end of the wire, and the other end is crimped according to the experimentally set size. Because of human factors in the crimping process, there is a certain error in the actual crimp size. To obtain the relationship between the steel-core crimp size and the tensile force of the wire, the actual crimp size of the steel core must be measured accurately. Figure 4 shows an X-ray inspection image of the steel core (Figs. 5 and 6).

Fig. 4. X-ray inspection steel core

Fig. 5. Steel core length measurement


Fig. 6. Steel core depth

2.3 Measurement of Other Characteristic Parameters

Other characteristic parameters include the wire penetration depth, the steel anchor penetration depth, and the press-in gap. The gray-scale histogram is used to find the gray-level jump points from which the wire penetration depth is calculated. Gray-scale histograms are one of the simplest and most useful tools in digital image processing, and they describe the gray-level content of an image. The histogram of any image contains considerable information, and some types of images can be fully described by their histograms. The gray histogram is a function of the gray value that describes the number of pixels having each gray value in the image; the abscissa indicates the gray level of the pixel, and the ordinate the frequency at which that gray level appears.

Fig. 7. Automatic measurement


As shown in Fig. 7, the image is scanned vertically from top to bottom and from left to right. The first gray-level jump point is recorded as x1 and the second as x2; the point at (x2 − x1)/2 is denoted A. A window with a width of 2 pixels and a height of (x2 − x1)·k (where k is the measurement ratio constant) is generated and moved along the horizontal line.

Fig. 8. Vertical grayscale graph

Fig. 9. Horizontal grayscale


The accumulated gray value inside the window is counted continuously as the window slides. When the accumulated value first jumps upward, the center of the window is B; as the window continues to slide, when the accumulated value first jumps downward, the center of the window is C. The distance between B and C is the insertion depth, as shown in Figs. 8 and 9.
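The following C sketch illustrates the sliding-window jump detection on a synthetic one-dimensional gray-level profile; the thresholds and data are invented for illustration and do not come from the paper.

#include <stdio.h>

#define W 40

int main(void)
{
    /* Synthetic row profile: a bright band from column 12 to 27. */
    int profile[W];
    for (int i = 0; i < W; ++i)
        profile[i] = (i >= 12 && i < 28) ? 200 : 30;

    /* A 2-pixel-wide window slides along the row, accumulating gray values;
     * the first upward jump marks B and the first downward jump marks C,
     * so the insertion depth is C - B (in pixels). */
    int prev = profile[0] + profile[1];
    int b = -1, c = -1;
    for (int x = 1; x + 1 < W; ++x) {
        int acc = profile[x] + profile[x + 1];   /* window sum */
        if (b < 0 && acc - prev > 100)
            b = x;                               /* first upward jump */
        else if (b >= 0 && c < 0 && prev - acc > 100)
            c = x;                               /* first downward jump */
        prev = acc;
    }
    if (b >= 0 && c >= 0)
        printf("B = %d, C = %d, insertion depth = %d px\n", b, c, c - b);
    return 0;
}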

3 Chroma and Contrast Adjustment

Image characteristics are extracted using the two parameters of chromaticity and contrast. The brightness of the image is essentially the brightness of each pixel, and the brightness of each pixel is essentially the magnitude of its RGB value: when the RGB value is 0 the pixel is black, and when it is 255 the pixel is at its brightest and appears white. Contrast is the difference between different pixels: the larger the difference, the more obvious the contrast. The chromaticity and contrast are obtained according to the field consistency rule.
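As a small illustration of these two parameters, the C sketch below computes a per-pixel brightness and a simple contrast span for a made-up four-pixel image; it is only one possible reading of the description above, not the paper's exact procedure.

#include <stdio.h>

int main(void)
{
    /* Brightness of a pixel is taken as its RGB magnitude (0 = black,
     * 255 = brightest white); contrast is the spread between the darkest
     * and brightest pixels. The 4-pixel image is an invented example. */
    unsigned char rgb[4][3] = { {30, 30, 30}, {200, 190, 210},
                                {120, 120, 120}, {250, 255, 245} };
    int min_b = 255, max_b = 0;
    for (int i = 0; i < 4; ++i) {
        int b = (rgb[i][0] + rgb[i][1] + rgb[i][2]) / 3;   /* per-pixel brightness */
        if (b < min_b) min_b = b;
        if (b > max_b) max_b = b;
    }
    printf("brightness range: %d..%d, contrast span = %d\n",
           min_b, max_b, max_b - min_b);
    return 0;
}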

4 Conclusion

According to the above measurements, the pixel size of the wire indentation depth is obtained through the mapping relationship. Although the dimensions in the image are expressed in pixel units, they have a linear relationship with the actual physical distance; the measurement ratio between the two is the constant k, which is determined by actual calibration, so the actual wire penetration depth can be determined.

Acknowledgements. This work was supported by the Innovative Talents Training Funding No. 2017IM010500 and crosswise tasks No. FHCL2018JH01.

References
1. Ma P, Shi T et al (2017) Diagnose analysis of wire breakage based on miniaturized X-ray detection equipment. Yunnan Electr Power 45(02):34–36
2. Wei W, Qi Y et al (2018) Application of X-ray nondestructive flaw detection technology in transmission line's press fittings. J Shanghai Jiaotong Univ 52(10):1189–1194
3. Zhang Z, Jia B et al (2016) Application of X-ray detection device in overhead transmission line compression type-fitting quality inspection. Hebei Electr Power 35(04):43–46
4. Yuan Z, Lu W et al (2018) Reason analysis of strain clamp fracture of large cross-section conductor in ±500 kV HCDC transmission line. Guangxi Electr Power 41(02):32–36

Architecture and Key Technology Challenges of Future Space-Based Networks

Ni-Wei Wang(&), Xiao-Fan Xu, Ying-Yuan Gao, Yue Cui, Fei Xiao, and Zhou Lu

China Academy of Electronics and Information Technology, No. 11 Shuangyuan Road, Shijingshan District, Beijing 100041, China
[email protected], [email protected]

Abstract. With the increasing demands of users, the ground network alone cannot meet user expectations. Considering the advantages of satellite networks, an innovative network is presented that integrates satellite networks, the Internet and mobile wireless networks into one network. In this paper, the architecture of the future space-based network is proposed, which can be divided into a physical structure and a virtual structure. In addition, a three-layer and two-domain technical architecture for forming this network is discussed. Keywords: Space-based networks · Satellite communication · Ground network · Geostationary Earth Orbit (GEO) · Medium Earth Orbit/Low Earth Orbit (MEO/LEO)

1 Introduction

In the past few decades, with the increasing demand for ground telecommunications, aviation/maritime communications and spatial data transmission, and with rising expectations for user experience, the network architecture has been evolving along with booming Internet and Internet of Things applications. With the rapid development of fiber-optic technology, the ground network has matured and can provide daily services for users. However, in sparsely populated areas the cost of deploying ground networks is relatively high, and when a natural disaster occurs the ground network is easily destroyed and cannot provide communication services to users. Compared with the ground network, satellite networks have many advantages, e.g. wide coverage and strong robustness [1–3]. Therefore, to meet global service demand while taking into account individual needs for regional enhancement and rapid response, multi-level hybrid networks with Geostationary Earth Orbit (GEO) and Medium Earth Orbit/Low Earth Orbit (MEO/LEO) satellites have been proposed, e.g. WGS, O3b, Iridium Next and OneWeb. These networks adopt 'star network' or 'skynet network' structures to guarantee multi-level three-dimensional coverage and to serve various users.

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 2223–2229, 2020 https://doi.org/10.1007/978-981-13-9409-6_270


With the continuous integration and expansion of different networks, an obvious developing tendency is to integrate satellite networks, the Internet and mobile wireless networks into one network [4–6]. This integrated network can provide comprehensive and efficient services through the collaboration of different subsystems and the utilization of multi-dimensional and multi-modal information.

2 Architecture of Space-Based Networks

In order to meet future application requirements, and based on the design concept of 'infrastructure sharing, versatile functional services, application-specific systems', the space-based network adopts an architecture of 'devices access the network, information goes into the cloud, services reach the end', using advanced theories and technologies such as artificial intelligence, big data, cloud computing and software-defined networking [7–9]. This can provide on-demand, customized network and information service capabilities under time, space and task constraints, ensuring that information advantages, decision-making advantages and even overall advantages can be secured globally in the future. The future space-based network is designed around a rasterized information network concept covering ubiquity, application transparency and service customization. It has self-healing protection, business integration, plug-and-play and dynamic control features to achieve coverage wherever it is needed.

From the perspective of 'node + link', the architecture of the space-based network is shown in Fig. 1. Heterogeneous space-based nodes providing transmission and switching, information sensing, navigation and positioning, computing and processing, and sharing services are distributed across the network and interconnected with the supporting ground facilities, and the nodes interact with each other through microwave, laser and other links. At the same time, based on high-performance space computing, resource virtualization and network slicing, and following the concept of 'physically decentralized, logically centralized', resources are virtualized at both the node and the network level to provide users with services such as intelligent information transmission and distribution, network awareness, and positioning and navigation. The space-based network can thus realize space–ground integration, converged networking, flexible intelligence and ubiquitous services.

Different from the existing 'backbone network + access network' system, the future space-based network adopts full software definition and node and network resource virtualization technology. Oriented to user task requirements, it can be flexibly networked through inter-satellite links, and its topology can be dynamically reconfigured to implement situational awareness, requirement analysis, data transmission and service provisioning. The computing and storage resources of each physical node in the network form a virtual control node through the network cloud structure, which grasps the resource distribution information and service demand information of the entire network and can efficiently and flexibly schedule node resources and network resources and provide the corresponding services.


Fig. 1. The architecture of future space-based networks

3 Physical Structure

The space-based nodes are composed of GEO nodes and MEO/LEO nodes. Each space-based node is based on a common platform, and its functional payload configuration varies according to the characteristics of the different orbital heights, so different service capabilities are provided. In general, GEO nodes and MEO/LEO nodes can operate independently while also providing interconnection and interoperability, offering users various services. The difference is that GEO nodes have strong on-board processing capability and multiple functions, and can directly provide users with high-speed, high-performance, high-precision transmission and other services for advanced special needs, whereas MEO/LEO nodes can form sub-clusters to provide users directly with low-rate, low-latency services that meet daily needs. In addition, GEO nodes and MEO/LEO nodes can be interconnected to form a network with stronger service capability, providing full-time, global, high-speed, high-performance services.

3.1 GEO Function Nodes

A comprehensive heterogeneous space-based node includes several configurations of transmission, sensing, navigation and positioning, computational processing, and shared service capabilities. To support diverse, flexible and efficient node functions, the GEO node is equipped with a high-speed laser communication terminal to realize all-optical networking and support dense wavelength-division multiplexing, in addition to a high-band microwave communication terminal. At the same time, the GEO node can perform transparent forwarding, sub-band switching, packet switching and IP routing, and supports all-optical switching to further enhance the switching capacity. High-performance nodes are configured with high-performance computing modules, and computing resources are scheduled according to the corresponding computing task requirements to achieve high-performance computing.

3.2 MEO/LEO Function Nodes

These nodes support the integrated design of transmission, sensing and navigation functions. Each satellite node is limited by its load-carrying capacity and usually focuses on providing a single function; its communication and information processing capabilities mainly meet its own needs, but functions can be switched according to business needs through in-orbit hardware or software reconfiguration of functional modules. Although MEO/LEO nodes are weaker than GEO nodes, they are numerous, functionally complementary and flexible in networking. For given user task requirements, the MEO/LEO functional nodes required for the task can be organized, through the scheduling management of the control node, into a suitable satellite constellation. The payloads of MEO/LEO nodes are low-cost and miniaturized; their communication and computing payloads have lower performance than those of GEO nodes, so the overall service capability is mainly improved through networking, which also provides anti-destruction characteristics.

4 Logical Structure

4.1 Virtual Nodes

For a physical node, the intrinsic properties of the space-based node are configured through software-defined technology, adjusting indicators such as coverage area, power and frequency bandwidth; this effectively decouples the satellite platform from the network functions and achieves flexible in-orbit reconfigurability. According to different tasks, the node is decomposed and configured with corresponding general function modules, including sensing, transmission, computing, storage, security and control. For a series of demanding tasks, the corresponding functional modules in multiple space-based and ground-based nodes are connected and cooperate through the network. A relatively fixed virtual node is built for a certain period of time and presents its interface externally as a single node. The virtual node is highly flexible and can adjust its composition according to specific requirements to form a new virtual node while retaining its external network identifier, which simplifies external access and operation.

4.2 Virtual Networks

Through virtualization, network slicing and other technologies, software-defined logical networks are superimposed on the existing networks on the basis of virtualized nodes. Without modifying the original networks, physical networks and user virtual networks are realized by defining logical networks on top of them. At the same time, by virtualizing the infrastructure, resources can be flexibly scheduled for different services, providing a dynamically adjustable platform for cloud access at the service layer.

5 Architecture of Space-Based Network Technology From the technical point of view, according to the idea of ‘network integration, functional service, application customization, and management intelligence’, adopting the concept of ‘cloud service’, space-based network is divided into three levels: resource layer, service layer and application layer. The two functional domains of security protection and operation and maintenance management support space-based network, forming a ‘three-layer and two-domain’ technical architecture, as shown in Fig. 2. Through functional decoupling and hierarchical domain aggregation, an integrated resource, cloud service, security protection, and operation and maintenance management system is formed to provide a unified network and background for various application systems.

(Diagram: application layer, service layer and resource layer, supported by the security protection domain and the operation and maintenance control domain; the service layer provides network services and a cloud service platform offering detection, mapping, positioning, access, information, connection and transmission services, and the resource layer spans communication and computing facilities (network, computing, storage, sensing, data) with node, network and resource virtualization and resource management.)

Fig. 2. The technology architecture of future space-based networks


5.1 Resource Layer

The resource layer includes the infrastructure, data resources and so on. The infrastructure provides a variety of hardware and software resources, including network communication facilities and computing and storage facilities. Data resources mainly include the network, computing, storage, sensing, data and space–time reference resources of various platforms such as space-based, land-based and sea-based platforms. Through resource virtualization, stateless network technology and service encapsulation, the various network resources are virtualized, encapsulated and processed so that decentralized resources are connected into a 'physically distributed, logically unified' resource pool, achieving multi-network interconnection and providing end-to-end information transmission capabilities.

5.2 Service Layer

According to the heterogeneous characteristics of the space-based information network and the idea of 'distributed coordination and unity', the pooled resources are used to design and package multi-source, multi-dimensional service components to build a flexible, maintainable and scalable multi-center distributed network and information service cloud platform. In addition, the service layer provides various types of network and information services for users according to the idea of resource pooling and cloud aggregation. Network services mainly include random access, free interconnection and secure transmission. Information services include basic data services and integrated intelligence services: the two types of basic data cover space-based data such as reconnaissance, mapping, meteorology and location, and through professional, integrated data processing, various types of intelligence such as the battlefield environment, target surveillance and the comprehensive situation are formed.

5.3 Application Layer

The application layer highlights the concept of 'flexible expansion'. It combines the basic network and service platform functions and extends them to the end user, constructs a unified application platform, breaks tasks down into different services, and invokes virtualized service components through a unified application interface. The application layer adopts modular and component-based integration mechanisms to share services such as communication, computing and software.

5.4 Operation and Maintenance Control Domain

Operation and maintenance management and control adopts a 'graded control, cross-domain joint' approach. For task decomposition, the operation and maintenance control domain can dynamically schedule time–frequency and multi-dimensional network resources and carry out collaborative management functions flexibly. In addition, it provides the abilities to realize self-management of the space-based network, provide application services for users, and achieve global situational awareness, dynamic resource planning and operation and maintenance support.


This domain includes resource planning, system configuration, status monitoring, fault location, operation and maintenance support and other functional elements.

5.5 Security Protection Domain

According to the idea of 'endogenous security and dynamic empowerment', the space-based network seamlessly embeds security functions into network nodes to establish a network-wide dynamic security defense system, including password confidentiality, security protection, and authentication and authorization functions. It can therefore achieve cross-domain dynamic defense and provide comprehensive security support across networks.

6 Conclusions

In this paper, an innovative future space-based network is proposed, which has the advantages of both the ground network and the satellite network. The architecture of the future space-based network can be divided into a physical structure and a virtual structure. Based on this network structure, the corresponding key technologies are discussed, and a 'three-layer and two-domain' technical architecture is described in detail.

Acknowledgements. This work is supported by Beijing Municipal Science and Technology Commission Research under Project Z17110005217001.

References
1. Cola TD et al (2017) Network and protocol architectures for future satellite systems. Foundations and Trends in Networking 12(1–2):1–161
2. Hu Y, Li VOK (2001) Satellite-based internet: a tutorial. IEEE Commun Mag 39(3):154–162
3. Toyoshima M et al (2008) Ground-to-satellite laser communication experiments. IEEE Aerosp Electron Syst Mag 23(8):10–18
4. Wu M, Wu W et al (2016) Tentative plan for overall structure of integrated information network. Satell Network 3:30–36
5. Roca IEC et al (2016) Revocation mechanism for hierarchical clustered structure in space-air-ground integrated network. In: 2016 3rd international conference on information science and control engineering
6. Lee J, Kang S (2000) Satellite over satellite (SOS) network: a novel architecture for satellite network. In: Proceedings of IEEE INFOCOM, pp 15–321
7. Liu J, Shi Y et al (2018) Space-air-ground integrated network: a survey. IEEE Commun Surv Tutorials 20(4)
8. Nazari S, Du P et al (2016) Software defined naval network for satellite communications (SDN-SAT). In: MILCOM 2016 Track 4—system perspectives
9. Bertaux L et al (2015) Software defined networking and virtualization for broadband satellite networks. IEEE Commun Mag 53(3):54–60

Filter Bank Design for Subband Adaptive Microphone Arrays

Hongli Jia(&)

Harbin Engineering University, Harbin, Heilongjiang, China
[email protected]

Abstract. In this paper, complex (DFT) modulated filter banks are described. Based on the research evidence and findings collected, the paper focuses on four parts: the history of complex (DFT) modulated filter banks, the theory of complex (DFT) modulated filter banks, their implementation in MATLAB programs, and their applications. To explain the impact of these aspects on complex (DFT) modulated filter banks, evidence based on experts' opinions and identified facts is used. Keywords: DFT · Matlab · Subband adaptive microphone

1 Introduction

Complex (DFT) modulated filter banks are used broadly in many communication areas, ranging from speech, audio, image and video processing to multicarrier modulation (xDSL) and feature detection [1]. In complex (DFT) modulated filter banks, the input signal may be complex-valued data that appears disconnected from daily life, but such complex data are mathematically equivalent to the underlying physical problem. Therefore, complex (DFT) modulated filter banks can be used to solve science and engineering problems.

2 Theory

2.1 Complex (DFT) Modulated Filter Banks

A classic M-channel maximally decimated filter bank with the corresponding nearly ideal filter responses is shown in Fig. 1 [2]. The complex (M-channel DFT) modulated filter bank performs a two-step decimation of the subband signals: analysis processing and synthesis processing. MDFT filter banks are modified complex-modulated, critically subsampled filter banks based on DFT filter banks. They combine key features of DFT filter banks, such as linear-phase analysis and synthesis filters and an efficient realization, with nearly perfect or perfect reconstruction (PR).

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 2230–2240, 2020 https://doi.org/10.1007/978-981-13-9409-6_271


Fig. 1. M-channel maximally-decimated filter bank

Fig. 2. Complex modulated filter bank

A classic M-channel DFT complex modulated filter bank is shown in Fig. 2 [3]. It includes analysis and synthesis filters, together with decimation of the sampling rate by a given factor and the corresponding expanders.

(1) Modulated linear-phase analysis and synthesis filters. One of the key features of DFT filter banks is the ability to realize linear-phase analysis and synthesis filters by means of an appropriate complex modulation of a low-pass prototype filter. To achieve this, the modulated filters are derived in two steps: in a first step, from a real-valued zero-phase low-pass prototype filter with a prescribed transition band, complex modulated zero-phase filters are derived. DFT modulated filter banks have two advantages [4]: only the low-pass prototype filter needs to be designed, and the filter banks can be implemented using the FFT.

(2) The polyphase implementation of a critically sampled DFT filter bank, with $P_k(z)$ and $Q_k(z)$ being polyphase components of the prototypes. The analysis and synthesis filters, $H_k(z)$ and $G_k(z)$, are derived from two low-pass prototype filters, $P(z)$ and $Q(z)$, via complex modulation (Fig. 3):


Fig. 3. DFT polyphase filter bank with critical subsampling

$$H_k(z) = P\bigl(W_M^k z\bigr) \ \leftrightarrow\ h_k(n) = p(n)\,W_M^{kn}, \qquad G_k(z) = Q\bigl(W_M^k z\bigr) \ \leftrightarrow\ g_k(n) = q(n)\,W_M^{kn}.$$
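As an illustration of this modulation rule, the C sketch below generates the analysis filters h_k(n) = p(n)W_M^{kn} from a toy prototype, assuming the usual convention W_M = exp(-j2π/M); the prototype itself is only a placeholder, not a designed low-pass filter.

#include <complex.h>
#include <math.h>
#include <stdio.h>

#define M_CH 8        /* number of channels M */
#define LP   16       /* prototype length */

int main(void)
{
    const double PI = 3.14159265358979323846;
    double p[LP];
    for (int n = 0; n < LP; ++n)                 /* toy low-pass prototype (Hamming window) */
        p[n] = 0.54 - 0.46 * cos(2.0 * PI * n / (LP - 1));

    /* h_k(n) = p(n) * W_M^{kn}: each band filter is a frequency-shifted
     * copy of the low-pass prototype. */
    double complex h[M_CH][LP];
    for (int k = 0; k < M_CH; ++k)
        for (int n = 0; n < LP; ++n)
            h[k][n] = p[n] * cexp(-I * 2.0 * PI * k * n / M_CH);

    printf("h[1][3] = %.4f%+.4fj\n", creal(h[1][3]), cimag(h[1][3]));
    return 0;
}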

3 Implementation

The complex (DFT) modulated filter banks are implemented and tested using basic MATLAB functions [5]. The MDFT filter banks combine key features of DFT filter banks, such as linear-phase analysis and synthesis filters. In this part, they are tested using MATLAB.

3.1 Analysis of the DFT Filter Banks

The function is a DFT filter bank producing K complex subbands decimated by a factor N. The real input signal is decomposed by complex bandpass filters derived from a prototype filter h [6].

Input parameters:
x       input signal (length must be a multiple of N)
h       prototype filter (real)
K       number of channels (0 … 2pi)
N       decimation ratio

Output parameters:
X       rows contain the decimated subband signals (complex valued)
x_out   last Lh-1 samples of the input signal (may be used to initialize the next block of data)

3.2 Synthesis of the DFT Filter Banks

Input parameters:
Y        K complex subband signals in rows
h        prototype filter (real)
K        number of channels (0 … 2pi)
N        decimation ratio
Y_init   initialization of the state vectors (if not given, Y_init is zero-padded)

Output parameters:
y        reconstructed real fullband signal
Y_out    last Lh/N-1 samples of the subband signals (may be used to initialize the next block of data)

K complex subband signals in rows prototype filter (real) number of channels (0 … 2pi) decimation ratio initialization of state vectors (if not given, Y_init is zero-padded) parameters: reconstructed real fullband signal last Lh/N-1 samples of subband signals (may be used to initialize next block of data)

Some Programs About Different Function of the Complex (DFT) Modulated Filter Banks

(1) Expand(X,m) expands a matrix X by factor m by inserting zero columns. function Y = Expand(X,m,mode); if exist(‘mode’), Y = zeros(size(X,1),size(X,2)*m); else Y = zeros(size(X,1),size(X,2)*m−m + 1); end; Y(:,1:m:size(Y,2)) = X; (2) Rearrange the column vector c to the shape of the matrix B. function A = ReArrange(B,c); A = B; [M,N] = size(B); for n = 1:N, A(:,n) = c((n−)*M + 1:n*M); end; (3) Circularly shift the rows of matrix A by s entries. Positive s shifts right, while negative s shifts left. function B = Shift(A,s); [m,n] = size(A); s = mod(s,n); B = [A(:,n−s + 1:n) A(:,1:n−s)];

(4) DFTPolyEntries2(K,N) calculates the necessary indices for the polyphase implementation of a DFT filter bank.

Input parameters:
K    number of channels
N    decimation ratio

Output parameters:
V    sample block with polyphase indices
D    sample block with delays

function [V,D] = DFTPolyEntries2(K,N);
M = lcm(K,N); L = M/N; J = M/K; B = N/J;
K1 = []; N1 = [];
for j = 1:J, K1 = [K1 diag(ones(1,K))]; end;
for l = 1:L, N1 = [N1; diag(ones(1,N))]; end;
V1 = K1*diag(0:M-1)*N1;
V = V1(1:B:K,1:B:N);
D = 1;

3.4 Test of the Program

Test the use of DFTPolyAnalysisCmplx() and DFTPolySynthesisCmplx(). Using filter coefficients for prototype filter for 8 channel filter bank and decimation by 14: K = 16, N = 14, Lp = 448 function h ¼ filter16 14 448ðÞ; h ¼ ½7e  05; 1e  04; 1e  04; 9e  05; 8e  05; 6e  05; 4e  05; 1e  05; 7e  06; 3e  05;  6e  05; 8e  05; 1e  04; 1e  04; 1e  04;  1e  04;  1e  04; 1e  04; 1e  04; 1e  04;  7e  05; 4e  05; 1e  05; 2e  05; 6e  05; 1e  04; 1e  04; 1e  04; 2e  04; 2e  04; 2e  04; 2e  04;

Filter Bank Design for Subband Adaptive Microphone Arrays

2e  04; 2e  04; 1e  04; 1e  04; 8e  05; 2e  05;  3e  05; 9e  05; 1e  04; 2e  04; 2e  04;  3e  04; 3e  04; 3e  04; 3e  04; 3e  04;  3e  04; 2e  04; 2e  04; 1e  04; 6e  05; 2e  05; 1e  04; 2e  04; 2e  04; 3e  04; 4e  04; 4e  04; 5e  04; 5e  04; 5e  04; 4e  04; 4e  04; 3e  04; 2e  04; 1e  04; 2e  05; 1e  04;  2e  04; 3e  04; 4e  04; 6e  04; 6e  04;  7e  04; 7e  04; 7e  04; 7e  04; 6e  04;  5e  04; 4e  04; 2e  04; 1e  04; 7e  05; 2e  04; 4e  04; 6e  04; 7e  04; 9e  04; 1e  03; 1e  03; 1e  03; 1e  03; 9e  04; 8e  04; 6e  04; 4e  04; 2e  04; 1e  05; 2e  04; 4e  04;  7e  04; 9e  04; 1e  03; 1e  03; 1e  03;  1e  03; 1e  03; 1e  03; 1e  03; 1e  03;  8e  04; 5e  04; 1e  04; 1e  04; 5e  04; 8e  04; 1e  03; 1e  03; 1e  03; 1e  03; 1e  03; 2e  03; 1e  03; 1e  03; 1e  03; 1e  03; 9e  04; 5e  04; 4e  05; 4e  04; 9e  04; 1e  03;  1e  03; 2e  03; 2e  03; 2e  03; 2e  03; 2e  03;  2e  03; 2e  03; 1e  03; 1e  03;  9e  04; 4e  04; 2e  04; 8e  04; 1e  03; 2e  03; 2e  03; 3e  03; 3e  03; 3e  03; 3e  03; 3e  03; 3e  03; 2e  03; 2e  03; 1e  03; 1e  03; 1e  04; 7e  04; 1e  03; 2e  03; 3e  03;  3e  03; 4e  03; 4e  03; 5e  03; 5e  03;  4e  03; 4e  03; 3e  03; 3e  03; 2e  03;  9e  04; 2e  04; 1e  03; 2e  03; 4e  03; 5e  03; 6e  03; 6e  03; 7e  03; 7e  03; 7e  03; 7e  03; 6e  03; 5e  03; 4e  03; 2e  03;

2235

2236

H. Jia

6e  04; 1e  03; 3e  03; 5e  03; 7e  03;  9e  03; 1e  02; 1e  02; 1e  02; 1e  02;  1e  02; 1e  02; 1e  02; 9e  03; 6e  03; 3e  03; 3e  04; 4e  03; 9e  03; 1e  02; 2e  02; 2e  02; 3e  02; 3e  02; 4e  02; 4e  02; 5e  02; 5e  02; 5e  02; 6e  02; 6e  02; 6e  020 ; h ¼ ½h fliplrðhÞ; Script file to test DFTPolyAnalysisCmplx and DFTPolySynthesisCmplx p = filter16_14_448; K = 16; N = 14; Lp = length(p); Lx = 16*14*5; x = randn(1,16*14*5) + sqrt(-1)*randn(1,16*14*5); %x = sin(2*pi*0.1*(1:Lx)) + sqrt(-1)*sin(2*pi*0.13*(1:Lx)); X = DFTPolyAnalysisCmplx(x,p,K,N); y = DFTPolySynthesisCmplx(X,p,K,N); clf; subplot(211); plot(real(x(1:201))); hold on; plot(real(y(Lp−N + 1:Lp−N + 1+200)),’r–’); subplot(212); plot(imag(x(1:201))); hold on; plot(imag(y(Lp−N + 1:Lp−N + 1+200)),’r–’); It is the results of test. 4

1

2

0.5

0

0

-2

-0.5

-4

0

50

100

150

200

250

-1

4

1

2

0.5

0

0

-2

-0.5

-4

0

50

100

150

200

250

-1

0

50

100

150

200

250

0

50

100

150

200

250


4 Application

In this paper, one application is described in detail: filter bank design for subband adaptive microphone arrays.

4.1 Filter Bank Design for Subband Adaptive Microphone Arrays

Another application of complex (DFT) modulated filter banks presented in this paper is filter bank design for subband adaptive microphone arrays, based on oversampled uniform DFT filter banks. The proposed method consists of two steps. In the first step, the analysis filter bank is designed in such a way that the aliasing terms in each subband are minimized, contributing to minimal aliasing at the output without aliasing cancellation. In the second step, the synthesis filter bank is designed to match the analysis filter bank: the analysis–synthesis response is optimized while all aliasing terms in the output signal are individually suppressed, rather than relying on aliasing cancellation. Both design steps include constraints on the signal delay [7].

4.2 Synthesis Filter Bank Design

Similar to the analysis filter bank design, the synthesis filter bank design reduces to the design of a single synthesis prototype filter. Under perfect reconstruction, the aliasing terms cancel at the final summation of the subband branches. In the proposed synthesis design, the aliasing distortion in the output signal is instead controlled by minimizing the aliasing terms individually. The proposed objective function of the design method is

$$\varepsilon_g(\mathbf{g}) = c_g(\mathbf{g}) + d_g(\mathbf{g}), \qquad (1)$$

where the total response error is defined as

$$c_g(\mathbf{g}) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\bigl|A_0(e^{j\omega}) - e^{-j\omega\tau_T}\bigr|^2\,d\omega, \qquad (2)$$

with $\tau_T$ the desired total analysis–synthesis filter bank delay, and where

$$d_g(\mathbf{g}) = \sum_{d=1}^{D-1}\sum_{m=0}^{M-1}\frac{1}{2\pi}\int_{-\pi}^{\pi}\bigl|A_{m,d}(e^{j\omega})\bigr|^2\,d\omega \qquad (3)$$

is the residual aliasing distortion.


The quadratic form of the total response error is

$$c_g(\mathbf{g}) = \mathbf{g}^T\mathbf{E}\,\mathbf{g} - 2\,\mathbf{g}^T\mathbf{f} + 1, \qquad (4)$$

where $\mathbf{g} = [g(0), \ldots, g(L_g-1)]^T$. The $L_g \times L_g$ Hermitian matrix $\mathbf{E}$ is defined as

$$\mathbf{E} = \frac{1}{2\pi}\int_{-\pi}^{\pi}\mathbf{W}^H(e^{j\omega})\,\mathbf{h}\,\mathbf{h}^T\,\mathbf{W}(e^{j\omega})\,d\omega, \qquad (5)$$

and the $L_g \times 1$ vector $\mathbf{f}$ is defined as

$$\mathbf{f} = \frac{1}{2\pi}\int_{-\pi}^{\pi}\mathrm{Re}\bigl\{e^{j\omega\tau_T}\,\mathbf{W}^T(e^{j\omega})\,\mathbf{h}\bigr\}\,d\omega. \qquad (6)$$

The matrix $\mathbf{W}(z)$ is defined as

$$\mathbf{W}(z) = \frac{1}{D}\sum_{m=0}^{M-1}\mathbf{U}_{m,0}(z), \qquad (7)$$

with

$$\mathbf{U}_{m,d}(z) = \boldsymbol{\phi}_h\bigl(zW_M^m W_D^d\bigr)\,\boldsymbol{\phi}_g^T\bigl(zW_M^m\bigr), \qquad (8)$$

where $\boldsymbol{\phi}_g(z) = [1, z^{-1}, \ldots, z^{-L_g+1}]^T$. Calculating the integral, assuming that $\tau_T$ is a multiple of $M$, the matrix entries $E_{i,j}$ are given by

$$E_{i,j} = \frac{M^2}{D^2}\sum_{n=-\infty}^{\infty} h(nM-i)\,h(nM-j), \qquad (9)$$

and the vector entries $f_i$ are given by

$$f_i = \frac{M}{\pi D}\,h(\tau_T - i). \qquad (10)$$

The quadratic form of the residual aliasing distortion term in (3) is given by

$$d_g(\mathbf{g}) = \mathbf{g}^T\mathbf{P}\,\mathbf{g}, \qquad (11)$$

where the $L_g \times L_g$ Hermitian matrix $\mathbf{P}$ is defined as

$$\mathbf{P} = \frac{1}{2\pi D^2}\sum_{d=1}^{D-1}\sum_{m=0}^{M-1}\int_{-\pi}^{\pi}\mathbf{U}_{m,d}^H(e^{j\omega})\,\mathbf{h}\,\mathbf{h}^T\,\mathbf{U}_{m,d}(e^{j\omega})\,d\omega. \qquad (12)$$


Calculating the integral in (12), the following expression for the matrix entries P_{i,j} is obtained:

P_{i,j} = \frac{M}{D^2} \sum_{l} h^*(l + j) \, h(l + i) \, u(i - j)    (13)

where

u(n) = D \sum_{k=-\infty}^{\infty} \delta(n - kD) - 1    (14)

The optimal prototype filter, in terms of minimal total response error and minimal energy in the aliasing components, is found by minimizing the weighted objective function

\xi_g(h) = \gamma_g(h) + \nu \, \delta_g(h)    (15)

A weighting factor \nu is introduced in order to emphasize either the total response error (0 < \nu < 1) or the residual aliasing distortion (\nu > 1). In quadratic form,

\xi_g(h) = g^T (E + \nu P) g - 2 g^T f + 1    (16)

g_{opt} = \arg\min_g \xi_g(h)    (17)

The solution g_{opt} can be found by solving the set of linear equations

(E + \nu P) g = f    (18)
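As a small illustration of how (18) is used, the following Matlab sketch (not taken from the original report) builds toy stand-ins for E, P and f and solves for the synthesis prototype; in the actual design these quantities would come from numerical evaluation of (5), (6) and (12) for a given analysis prototype h.

% Toy illustration of solving (E + nu*P) g = f, Eq. (18).
% E, P and f are random stand-ins here; in practice they are computed
% from the analysis prototype via Eqs. (5), (6) and (12).
Lg = 64;                                    % synthesis prototype length (example)
T1 = randn(Lg); E = T1*T1.' + Lg*eye(Lg);   % symmetric positive definite stand-in
T2 = randn(Lg); P = T2*T2.' + Lg*eye(Lg);
f  = randn(Lg,1);
nu = 1;                                     % weighting factor of Eq. (15)
g  = (E + nu*P) \ f;                        % synthesis prototype coefficients
xi = g.'*(E + nu*P)*g - 2*g.'*f + 1;        % objective value, Eq. (16)
disp(xi)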

5 Conclusion

The MDFT filter bank is a complex modulated M-channel filter bank with a two-step decimation of the subband signals. This paper has described the history and theory of complex (DFT) modulated filter banks and, in the implementation part, developed Matlab programs implementing complex (DFT) modulated filter banks. The MDFT filter bank combines key features of DFT filter banks, such as linear-phase analysis and synthesis filters. Finally, the paper has listed several applications of complex (DFT) modulated filter banks and presented one application in detail: filter bank design for subband adaptive microphone arrays. For this application, a design method for uniform DFT filter banks with reduced delay has been described for oversampled subband adaptive microphone arrays. The design aims at minimizing the aliasing components individually, in order to bound the aliasing effects when phase alterations are applied by adaptive beamformers.

References
1. Chatlani N, Soraghan J (2011) EMD-based filtering (EMDF) of low-frequency noise for speech enhancement. IEEE Trans Audio Speech Language Process 20(4):1158–1166
2. Yu Y (2018) Research on speech enhancement algorithms in near-field environments based on microphone arrays. Dissertation, Hunan University
3. Cho H, Kim SW (2010) Variable step-size normalized LMS algorithm by approximating correlation matrix of estimation error. Signal Process 90(9):2792–2799
4. Jin W (2018) Research on microphone array speech enhancement algorithms in driving environments. Dissertation, Beijing Jiaotong University
5. Zhu X (2017) Research on speech enhancement methods based on microphone arrays. Dissertation, Northeast Petroleum University
6. Haan JMD, Grbić N, Claesson I, Nordholm SE (2003) Filter bank design for subband adaptive microphone arrays [Online] 11(1). Available URL: http://ieeexplore.ieee.org.ezproxy.uow.edu.au:2048/iel5/89/26485/01179374.pdf?tp=&arnumber=1179374&isnumber=26485. Accessed 23 May 2006
7. Petraglia MR, Piber PRV (1999) Prototype filter design for oversampled subband adaptive filtering structures [Online] 3:138–141. Available URL: http://ieeexplore.ieee.org.ezproxy.uow.edu.au:2048/iel5/6311/16885/00778804.pdf?tp=&arnumber=778804&isnumber=16885. Accessed 24 May 2006

Communication System Based on DFT Spread Spectrum Technology to Reduce the Peak Average Power Ratio of CO-OFDM System Yaqi Wang1,2 and Yupeng Li1,2(&) 1

Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin 300387, China [email protected] 2 College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin 300387, China

Abstract. This paper uses Discrete Fourier Transform spread spectrum (DFT-Spread) technology to spread the information over the orthogonal subcarriers of an orthogonal frequency division multiplexing (OFDM) signal in the frequency domain, which can effectively reduce the PAPR of a coherent optical orthogonal frequency division multiplexing (CO-OFDM) system. Theoretical analysis and simulation of the DFT-Spread CO-OFDM system show that this method can effectively reduce the PAPR of the coherently detected CO-OFDM signal. At the same time, compared with a traditional OFDM system, it only adds the computation of one Fourier transform (FFT), so its complexity is lower than that of the traditional selective mapping (SLM) and partial transmit sequence (PTS) PAPR-reduction algorithms.

Keywords: CO-OFDM · DFT-Spread · PAPR · Complexity

1 Introduction

With the popularization of high-bandwidth services such as broadband wireless communication, cloud computing and big data, ever higher requirements have been placed on optical transmission networks. Since the beginning of the 21st century, with the development of advanced modulation schemes and digital signal processing (DSP) technologies, coherent optical communication has become the main application and development direction of optical transmission networks. Coherent optical orthogonal frequency division multiplexing (CO-OFDM) has very wide application prospects in high-rate, large-capacity and long-distance transmission systems. CO-OFDM combines the advantages of coherent optical detection and OFDM, but it also inherits the high peak-to-average power ratio (PAPR) problem of OFDM [1–3]. Because the devices in a CO-OFDM system exhibit nonlinear effects, the accumulated nonlinearity grows with the transmission distance; large peak signals passing through the nonlinear region of these devices suffer serious distortion, which causes intermodulation interference between subcarriers and out-of-band radiation, destroys the orthogonality between subcarriers and severely degrades the transmission performance of the system [2, 3]. However, due to the limitations of current optical device manufacturing technology, the difficulty and cost of manufacturing optical devices with high linearity are very high [1–3]. Therefore, techniques must be adopted to reduce the peak-to-average power ratio of the signal so as to improve the overall performance of the system, and it is of great significance to study how to suppress the PAPR in CO-OFDM systems.
DFT-S-OFDM is one of the uplink modulation solutions of 3GPP-LTE. It maps the data symbols into the frequency domain and spreads the information of each symbol over all of the allocated subcarriers, so that every subcarrier carries a contribution from every information symbol. This effectively reduces the probability of large peaks in the OFDM signal, lowering the PAPR of the OFDM system and improving its tolerance to nonlinearity [4–6]. Against this technical background, the DFT-Spread technique is adopted in this paper to reduce the PAPR of the CO-OFDM system.

2 System Theoretical Analysis

2.1 Principle of the CO-OFDM System

At the transmitter, the serial user bit stream is first converted into low-rate parallel binary streams, which are mapped with a modulation format such as QAM or QPSK. An IFFT then transforms the frequency-domain symbols into the time domain. A synchronization training sequence is inserted for the receiver, pilots are inserted for channel estimation and phase recovery at the receiver, and a cyclic prefix is added to improve robustness against ISI and ICI. After D/A conversion, the digital signal becomes the analog CO-OFDM baseband signal. The baseband signal is up-converted and modulated onto the optical carrier by the I and Q branches of an MZM-based modulator, completing the electro-optical conversion, and the optical signal is launched into the optical fiber channel. Owing to chromatic dispersion (CD), polarization mode dispersion (PMD) and nonlinear effects in the channel, the received signal energy is attenuated, so an optical fiber amplifier is introduced to compensate the power loss of the optical signal in the fiber. At the receiver, a photodetector in the optical-to-electrical conversion module converts the optical signal back to the baseband OFDM signal. Finally, after the inverse processing of the transmitter, including cyclic prefix removal, training sequence removal, pilot removal, FFT, channel equalization and synchronization, the signal is demapped and restored to the original serial binary data stream (Fig. 1).
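A compressed Matlab sketch of the electrical baseband part of this transmitter chain is given below for illustration; the parameter values are examples, and pilot and training-sequence insertion are omitted for brevity.

N     = 256;  cpLen = 32;                                  % subcarriers, cyclic prefix length
bits  = randi([0 1], 2*N, 1);                              % serial input bits
sym   = (2*bits(1:2:end)-1 + 1j*(2*bits(2:2:end)-1))/sqrt(2);   % QPSK mapping
tx    = ifft(sym);                                         % frequency domain -> time domain
txCP  = [tx(end-cpLen+1:end); tx];                         % add cyclic prefix
% txCP would then be D/A converted and used to drive the optical I/Q (MZM) modulator.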


Fig. 1. CO-OFDM system structure block diagram

2.2 Analysis of PAPR

Since an OFDM signal is the superposition of many independently modulated subcarrier signals, when these subcarriers happen to be in phase the superimposed signal produces a large peak and hence a large instantaneous power, resulting in a high peak-to-average power ratio (PAPR), defined as

PAPR(dB) = 10 log10( max|x(n)|^2 / E{|x(n)|^2} )

where x(n) is the signal after the IFFT. In the analysis, the complementary cumulative distribution function (CCDF) of the PAPR is adopted to describe the peak statistical distribution of the OFDM signal. From the perspective of probability statistics, the CCDF can be expressed as

P{PAPR > p} = 1 − P{PAPR < p} = 1 − (1 − e^{−p})^N

When N is large, the theoretical maximum PAPR is about 10 log10 N, but the probability of reaching the theoretical peak in practice is very small; that is, the actual PAPR is much smaller than the theoretical value. For example, when N = 256, the theoretical maximum PAPR is about 24 dB, but in practice it is only about 12 dB.

2.2.1 DFT-Spread-OFDM
DFT-S-OFDM uses an additional DFT to spread the spectrum. Let the total number of subcarriers be N, let {x_m, m = 0, 1, ..., M−1} be the input signal, let {A_k, k = 0, 1, ..., M−1} be the frequency-domain samples of the input signal after the DFT, let {C_k', k' = 0, 1, ..., N−1} be the frequency-domain data {A_k} after subcarrier mapping, and let {y_n, n = 0, 1, ..., N−1} be the signal obtained from {C_k'} by the IFFT (Fig. 2).
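To make the two expressions above concrete, the following Matlab sketch (illustrative only, not the simulation code used later in this paper) estimates the PAPR of QPSK-modulated OFDM symbols and compares the empirical CCDF with the analytical expression 1 − (1 − e^{−p})^N.

N    = 256;                                  % number of subcarriers
Nsym = 1e4;                                  % number of OFDM symbols
paprdB = zeros(1,Nsym);
for s = 1:Nsym
    X = (2*randi([0 1],1,N)-1 + 1j*(2*randi([0 1],1,N)-1))/sqrt(2);  % QPSK symbols
    x = ifft(X);                             % OFDM time-domain symbol
    paprdB(s) = 10*log10(max(abs(x).^2)/mean(abs(x).^2));
end
p       = 4:0.25:12;                         % PAPR thresholds in dB
ccdfEmp = arrayfun(@(th) mean(paprdB > th), p);     % empirical CCDF
ccdfTh  = 1 - (1 - exp(-10.^(p/10))).^N;     % analytical CCDF
semilogy(p, ccdfEmp, 'b', p, ccdfTh, 'r--');
xlabel('PAPR threshold p (dB)'); ylabel('CCDF');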


Fig. 2. The principle diagram of the DFT spread-OFDM

Fig. 3. Subcarrier mapping

The subcarrier mapping determines which part of the spectrum resource is used to transmit data, while the remaining subcarriers are left empty or filled with zeros. Figure 3 shows two subcarrier mapping schemes. Figure 3a corresponds to localized (centralized) mapping, in which the output of the DFT is mapped onto contiguous subcarriers. Figure 3b shows distributed mapping, in which the output of the DFT is mapped onto equally spaced subcarriers. The distributed mapping inserts L − 1 zeros after each frequency-domain sample, so that N = M · L. With {C_k', k' = 0, 1, ..., N−1} denoting the frequency-domain data after subcarrier mapping, the mapped frequency-domain signal can be expressed as

C_k' = A_{k'/L},  if k' = LK, 0 ≤ K ≤ M−1;  C_k' = 0, otherwise    (1.1)

and {y_n, n = 0, 1, ..., N−1} is the time-domain data after the IFFT of {C_k'}:

y_n = (1/(ML)) Σ_{k=0}^{M−1} A_k e^{j(2π/M)nk},  n = 0, 1, ..., ML − 1    (1.2)

It can be seen from Eqs. 1.1 and 1.2 that the data {y_n} generated by the IFFT is a scaled, shifted repetition of the mapped data {x_m}. Therefore, compared with a general OFDM system, the PAPR of DFT-S-OFDM is significantly reduced.
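The spreading operation of Eqs. 1.1 and 1.2 can be sketched in Matlab as follows; this is an illustrative sketch rather than the authors' simulation code, QPSK is used here purely for simplicity (the simulations in Sect. 3 use 16-QAM), and the mapping shown is the localized variant of Fig. 3a.

M = 128;  L = 2;  N = M*L;                    % DFT points, spreading factor, subcarriers
x = (2*randi([0 1],M,1)-1 + 1j*(2*randi([0 1],M,1)-1))/sqrt(2);   % data symbols
A = fft(x);                                   % M-point DFT (spreading)
C = zeros(N,1);  C(1:M) = A;                  % localized subcarrier mapping
y = ifft(C);                                  % DFT-S-OFDM time-domain symbol
yRef = ifft([x; zeros(N-M,1)]);               % conventional OFDM carrying the same symbols
papr = @(v) 10*log10(max(abs(v).^2)/mean(abs(v).^2));
fprintf('PAPR OFDM: %.2f dB, DFT-S-OFDM: %.2f dB\n', papr(yRef), papr(y));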


2.2.2 Computational Complexity
The computational complexity is measured here by the number of complex multiplications required to process one OFDM symbol. For DFT-S-OFDM, the M QAM symbols pass through an M-point FFT and, after subcarrier mapping, an N-point IFFT. The SLM technique generates S statistically independent, differently phased vectors {X_1, ..., X_S} carrying the same OFDM data of length N, performs an IFFT on each of them, computes the PAPR of each candidate and transmits the one with the most suitable (lowest) PAPR. The PTS technique divides the original OFDM frequency-domain data vector X into K non-overlapping sub-blocks; after serial-to-parallel conversion, an IFFT is carried out on each sub-block X_k, the time-domain blocks are multiplied by weighting coefficients {b_1, ..., b_K}, and the results are summed, y = Σ_{k=1}^{K} b_k · IFFT(X_k). Table 1 compares the amount of computation of OFDM systems using the three PAPR-reduction algorithms with that of traditional OFDM. It can be seen that DFT-S-OFDM has the lowest computational complexity.

Table 1. Computational complexity of the different PAPR-reduction algorithms

System        Calculating amount                                  M = 178, N = 256, k = 2   M = 128, N = 256, k = 2   M = 68, N = 256, k = 2
OFDM          (1/2) N log2 N                                      1024                      1024                      1024
DFT-S-OFDM    (1/2) M log2 M + (1/2) N log2 N                     1690                      1572                      1087
SLM           k · (1/2) N log2 N                                  2048                      2048                      2048
PTS           Σ_{k=1}^{K} b_k · (1/2)(N/K) log2(N/K)              (b1 + b2) · 448           (b1 + b2) · 448           (b1 + b2) · 448
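For comparison with the entries of Table 1, the SLM procedure described above can be sketched in Matlab as follows; the number of candidates U and the random phase vectors are illustrative assumptions, and this is not the implementation used to produce Fig. 8.

N = 256;  U = 4;                                    % subcarriers, number of SLM candidates
X = (2*randi([0 1],1,N)-1 + 1j*(2*randi([0 1],1,N)-1))/sqrt(2);   % one OFDM data vector
papr = @(v) 10*log10(max(abs(v).^2)/mean(abs(v).^2));
best = inf;
for u = 1:U
    phases = exp(1j*2*pi*rand(1,N));                % random phase rotation vector
    cand   = ifft(X .* phases);                     % candidate time-domain symbol
    best   = min(best, papr(cand));                 % keep the lowest-PAPR candidate
end
fprintf('SLM: lowest PAPR among %d candidates = %.2f dB\n', U, best);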

3 System Simulation

The simulation combines Optisystem 8.0 and Matlab. The simulated DFT-S-CO-OFDM transmission system is shown in Fig. 4. The original input data is an m-sequence of length 4095 and the mapping format is 16-QAM. A pilot sequence is added at the zero frequency, the number of DFT spread-spectrum points is 68, 128 or 178, and a cyclic prefix is added to each OFDM symbol. In the DFT-S-OFDM system, the upper and lower branches of the optical I/Q modulator are driven separately, the DC bias voltage is 1.5 V, and the MZM works in its linear region. The modulation rate of the DFT-S-CO-OFDM signal is 10 Gb/s, the central frequency of the laser is 193.1 THz, and the linewidth is 100 kHz. The system uses standard single-mode fiber (SMF) with a dispersion coefficient of 16 ps/(nm·km), an attenuation coefficient of 0.2 dB/km, an effective area of 80 μm² and a nonlinear coefficient of 2.6 × 10⁻²⁰ m²/W. An erbium-doped fiber amplifier (EDFA, noise figure 6 dB) is added to compensate the power loss caused by channel attenuation; no dispersion compensation module is included and the photodetector noise is neglected. The receiver uses coherent detection, with a local laser whose parameters are the same as those of the transmitter laser coupled with the input signal, and a PIN photodiode with a bandwidth of 15 GHz. The DFT-S-OFDM demodulation process is the reverse of the modulation process (Fig. 5).

Fig. 4. Modulation block diagram

Fig. 5. Demodulation block diagram

4 Simulation Results

4.1 PAPR Under Different Algorithms

Figures 6 and 7 show the CCDF curves of OFDM and DFT-S-OFDM symbols, respectively, where N is the number of subcarriers and M is the number of DFT points. For a given PAPR threshold, changing these parameters changes the CCDF, i.e. the probability that a symbol exceeds the PAPR threshold. Figure 8 compares the CCDF curves of the different algorithms with different parameters, where U is the number of IFFT-transformed vectors in the SLM algorithm and V is the number of sub-blocks into which the OFDM symbol is segmented in the PTS algorithm. For a given PAPR threshold, the CCDF is lowest when the number of DFT points is 68.

Fig. 6. CCDF of OFDM

Fig. 7. CCDF of DFT-S-OFDM


Fig. 8. CCDF under different algorithms

Fig. 9. CCDF of OFDM and DFT-spread OFDM


Fig. 10. Channelless COOFDM system block diagram

4.2 System Analysis Without Channel

Figure 9 shows the CCDF curves of traditional OFDM with N = 128 and 256 subcarriers and of DFT-S-OFDM with M = 68 and 128 DFT points, simulated in the back-to-back configuration of Fig. 10. It can be seen from Fig. 9 that using DFT-Spread in the back-to-back model significantly reduces the PAPR of the system symbols; for N = 256 and M = 128 the PAPR is reduced by about 1.5 dB. According to the simulation, the bit error rate of the system is zero, so the original data is recovered accurately.

5 Conclusion

This paper has studied the influence of DFT-Spread technology on the PAPR of a 16-QAM CO-OFDM system. Simulation results show that the method can effectively reduce the PAPR of the OFDM signals in a CO-OFDM system. Meanwhile, compared with a traditional OFDM system, the method only adds the computation of one Fourier transform (FFT), and its complexity is lower than that of the SLM and PTS techniques.

Acknowledgements. This work is supported by the Natural Science Foundation of Tianjin under Grant 18JCQNJC70900.

References 1. Chang SH, Kang H-S, Moon S-R et al (2016) Optimization of coherent optical OFDM transmitter using DP-IQ modulator with nonlinear response. Optical Fiber Technol 30: 112–115 2. Yi X, Qiu K (2011) Estimation and compensation of sample frequency offset in coherent optical OFDM systems. Opt Express 19(14):13503


3. Armstrong J (2009) OFDM for optical communications. J Lightwave Technol 27(3):189–204 4. Armstrong J (2002) Peak-to-average reduction for OFDM by repeated clipping and frequency domain filtering. Elect Lett 38(5):246–247 5. Jayalath ADS, Tellambura C (2000) The use of interleaving to reduce the peak to average power ratio of an OFDM signal. In: Global telecommunications conference, Nov 27–Dec 1, 2000. San Francisco, CA, USA. IEEE, pp 82–86 6. Telefonaktiebolaget LM Ericsson (publ) (2018) “Time domain in continuous DFT-S-OFDM for sidelobes reduction” in patent application approval process (USPTO 20180241600). Telecommun Wkly

Low-Complexity Channel Estimation Method Based on ISSOR-PCG for Massive MIMO Systems Cheng Zhou1(B) , Zhengquan Li1,2 , Qiong Wu1,2,3 , Yang Liu1,4 , Baolong Li1 , Guilu Wu1 , and Xiaoqing Zhao1 1

Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi 214122, China [email protected],[email protected], {lzq722,qiongwu,lblong,wugl}@jiangnan.edu.cn 2 National Mobile Communication Research Laboratory, Southeast University, Nanjing 210096, China 3 Department of Electronic Engineering, Tsinghua University, Beijing 100084, China 4 The National Key Laboratory of millimeter wave, Southeast University, Nanjing 210096, China

Abstract. With the growth of the number of antennas at the base station (BS) in massive multiple-input multiple-output (MIMO) systems, the channel capacity and spectral efficiency also increase. Conventional channel estimation methods, such as the classical minimum mean square error (MMSE) estimator, involve a large-size matrix inversion with enormous computational complexity, especially in massive MIMO systems with large antenna arrays. To reduce the complexity caused by the matrix inversion, a low-complexity channel estimation scheme is proposed based on the improved symmetric successive over-relaxation preconditioned conjugate gradient (ISSOR-PCG) method, which avoids computing the matrix inverse directly. A simple way is also introduced to obtain a near-optimal relaxation parameter for the proposed scheme by utilizing the channel asymptotic orthogonality in massive MIMO systems. Analysis shows that the proposed channel estimator can reduce the complexity effectively compared with the MMSE channel estimator. Simulation results illustrate that the proposed scheme obtains near-optimal performance with respect to the classical MMSE estimation method and outperforms other baseline schemes as the number of iterations increases.

Keywords: Massive MIMO · MMSE · Iterative method · ISSOR-PCG


1 Introduction

The number of wirelessly connected devices and their data demands have been increasing with each passing day. Especially in highly dense locations such as city centres or sports stadiums, it can be challenging for the base station (BS) to provide enough frequency resources and capacity for each user terminals using conventional technologies. However, for massive multiple-input multiple-output (MIMO) system in wireless communication area, the BS antennas are usually great larger than the number of active terminals, can achieve higher spectral efficiency and serve several terminals at the same time and frequency resources simultaneously [1, 2]. It is widely recognized as a key technology for the future wireless communication systems [3, 4]. Channel estimation is an important part for massive MIMO systems, accurate channel state information (CSI) is important at the BS. However, realizing the accurate CSI in practical for massive MIMO systems is difficult because the increased number of antennas and time-varying channels. Conventional minimum mean square error (MMSE) estimator can achieve optimal performance, but it requires matrix inversion in large dimension, whose complexity is enormous for massive MIMO systems. Recently, many researchers paid much attention to massive MIMO channel estimation technology. For example, a low-complexity polynomial expansion channel estimation algorithm is proposed to avoid matrix inversion [5]. However, the scheme has to address with complicated parameter optimization problems. In order to search orthogonal directions to modify the filter parameters better rather than the negative gradient directions, a conjugate gradient principle is proposed in [6]. To make the most of the channel statistics such as slow variation and spatially common sparsity of the channel, the author in [7] proposed a compress sensing aided channel estimation scheme. And with the concept of deep learning has attracted many researchers attention for it’s potential applications in wireless communication. In order to address the problem of enormous complexity and complicated spatial structures in massive MIMO system, a novel framework that combines deep learning with massive MIMO is proposed [8]. More and more researchers are trying to utilizing the artificial intelligence technology to boost the performance of communication systems [9–11]. In this paper, based on the improved symmetric successive over relaxation preconditioned conjugate gradient (ISSOR-PCG) method, a low-complexity channel estimation scheme is proposed to degrade the computational complexity. And also improve the convergence rate, which could avoid the high dimension matrix inversion. Then, a simple way is introduced to determine the near-optimal relaxation parameter by utilizing the channel asymptotic orthogonality in massive MIMO systems. The analysis shows that the proposed channel estimation method outperforms other methods as shown in the simulation results with the number of iterations increase. Also the proposed scheme can achieve close performance to the classical MMSE scheme by increasing the iterations. The rest of this paper is organized as follows. In Sect. 2, the system model is briefly described. Then, the proposed low-complexity iterative channel estima-


tion scheme is addressed in Sect. 3. In Sect. 4, some simulation results are shown to illustrate the superiority of the proposed estimation scheme. Conclusions are summarized in Sect. 5. Notation: Vectors are represented by boldface small letters, boldface capital letters denote matrices, superscripts (·)T , (·)H , (·)−1 , (·)∗ represent the transpose, conjugate transpose, inversion and conjugate operators, respectively. And  · 2 ,  · F denote the 2-norm and the Frobenius norm operators, respectively. | · | denotes the absolute operator. Finally, IN is the N × N unit matrix.

2 System Model

A typical massive MIMO uplink system working in the time division duplex (TDD) mode is considered in this paper, with Nr BS antennas communicating with Nt single-antenna terminals, as shown in Fig. 1. The number of BS antennas is much larger than the number of active users. The signals from all users are first mapped to constellation symbols Θ and then transmitted from the Nt antennas. The transmitted signals are represented by the vector x ∈ C^{Nt×1}, and the channel matrix H ∈ C^{Nr×Nt} is a flat Rayleigh fading channel whose entries follow the distribution CN(0, 1). At the BS, the received signal vector y ∈ C^{Nr×1} is
y = Hx + n

(1)

where vector n ∈ C Nr ×1 represents the complex additive white Gaussian noise (AWGN), whose elements follow CN (0, σ 2 I).
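A minimal Matlab sketch of this uplink model is given below; the antenna numbers, QPSK mapping and noise variance are illustrative assumptions and not necessarily the values used in Sect. 4.

Nr = 100; Nt = 10;                                   % BS antennas, single-antenna users
H  = (randn(Nr,Nt) + 1j*randn(Nr,Nt))/sqrt(2);       % flat Rayleigh fading, CN(0,1) entries
x  = (2*randi([0 1],Nt,1)-1 + 1j*(2*randi([0 1],Nt,1)-1))/sqrt(2);   % QPSK symbols
sigma2 = 0.1;                                        % noise variance (example)
n  = sqrt(sigma2/2)*(randn(Nr,1) + 1j*randn(Nr,1));  % AWGN, CN(0, sigma2*I)
y  = H*x + n;                                        % received signal, Eq. (1)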

Fig. 1. Massive MIMO system model on the uplink

3 The Proposed Low-Complexity ISSOR-PCG Method

In this section, we first briefly review the conventional MMSE channel estimator. Then, the ISSOR-PCG based iterative channel estimation scheme is introduced. After that, the way to optimize the relaxation parameter is proposed. Finally, a complexity analysis is given to demonstrate the superiority of the proposed method over conventional channel estimation methods.

3.1 Conventional MMSE Channel Estimator

According to [5], the terminals first transmit the pilot signals to the BS, and the received matrix Y at the BS equals Y = HP + N

(2)

where the pilot matrix P ∈ C^{Nt×B}, B denotes the length of the pilot sequences, and the disturbance matrix N ∈ C^{Nr×B} is complex Gaussian distributed with vec(N) ∼ CN(vec(N̄), S), where the mean disturbance is denoted by N̄ ∈ C^{Nr×B} and the covariance matrix by S ∈ C^{NrB×NrB}. Vectorizing the received matrix in (2) yields
ỹ = P̃h + ñ

Proposed ISSOR-PCG Channel Estimation

Considering Eq. (4), we can find that computing the matrix A−1 directly requires O(M 3 ) complexity, it’s too large especially in massive MIMO systems, and the matrix A is Hermitian positive definite according to [12], so inspired by this, we  into solving linear equation, i.e. can convert the problem of computing A−1 y  . SSOR method in [13] is considered to avoid the computation of A−1 . As = y  based on ISSOR-PCG method can The main steps to solve the equation As = y be described as  , g(0) = W−1 r(0) , z(0) = (1) Initialization s(0) ∈ n , set r(0) = As(0) − y −Vg(0) , and d(0) = W−T z(0) . Assume k = 0.

Low-Complexity Channel Estimation Method

2255

(2) If s(k) is not convergent, do αk =

(g(k) , Vg(k) )

(5)

(d(k) , 2z(k) − Vd(k) )

s(k+1) = s(k) + αk d(k) g

(k+1)

=g

(k)

+ αk (d

(k)

βk =

+W

−1

(k)

(z

(6) − Vd

(k)

(g(k+1) , Vg(k+1) ) (g(k) , Vg(k) )

z(k+1) = −Vg(k+1) + βk z(k) (k+1)

d

=W

−T (k+1)

z

))

(7) (8) (9) (10)

(3) Set k = k + 1, then go back to step (2), or stop the iteration if s(k) is convergent. The Hermitian matrix A can be decomposed as A = D + L + LH

(11)

where D denotes the diagonal matrix of A, L and LH represent the strictly lower and upper triangular component of A, respectively. If we use W and V as the decomposed SSOR, then we can obtain the preconditioner as W=

(2 − ω)D 1 (D + ωL), V = ω ω

(12)

where ω is the relaxation parameter and 0 < ω < 2, k denotes the number of iterations, which contributes a lot to the convergence rate. After several itera tions based on above equations, the obtained vector s can approximate A−1 y to avoid computing high dimension matrix inversion directly, which will reduce the complexity a lot. And we can also see the proposed method convergence rate indicated by (12) will be influenced by the parameter ω. So in the next sub-part, a way to optimize the relaxation parameter for the proposed method is introduced. 3.3

Relaxation Parameter and Complexity Analysis

According to [13], the optimal relaxation parameter ωopt for SSOR method is ωopt =

2  1 + 2(1 − ρ(BJ ))

(13)

where ρ(BJ ) denotes the spectral radius of the Jacobi matrix BJ , which can be modeled as ρ(BJ ) = ρ[D−1 (L + LH )] = ρ[D−1 (A − D)] (14) = ρ[D−1 A − I] = ρ[D−1 A] − 1

2256

C. Zhou et al.

For massive MIMO systems, matrix D−1 can be approximated by N1r I, based on the random matrix theory, the spectral radius of A can be well approximated by [14]  Nt 2 ) (15) ρ[A] = Nr (1 + Nr Based on what has been discussed above, as the Eq. (15) shown, the optimal relaxation parameter is only correlated with antennas at the BS and users, so the relaxation parameter ω opt based on the ISSOR-PCG method can be concluded to approximate optimal relaxation parameter ωopt as ω opt = where a = (1 +



Nt 2 Nr )

2  1 + 2(1 − a)

(16)

− 1. According to (5)–(10), computing this part requires

2

2M + 13M complex multiplications, so we can conclude the proposed channel estimator complexity is O(k(2M 2 +13M )), and in [15], the complexity of Taylorbased polynomial expansion is O(N M 2 ), where N is the polynomial order, and the complexity of Kapteyn-based polynomial expansion is O(KN M 2 ), where K is the truncated Bessel function order.

4

Simulation Results

In this section, the normalized mean square error (NMSE) is adopted to evaluate the proposed channel estimation performance based on ISSOR-PCG, and compared with other iterative methods. The conventional MMSE channel estimator is adopted as the benchmark for comparison to illustrate the performance better. A typical massive MIMO up-link system is considered with Nr = 100 antennas at the BS, and Nt = 10 single-antenna terminals, the length of pilot sequence is assumed B = 10. Figure 2 compares the NMSE performance of different channel estimation methods such as CG-based, SSOR-based, Taylor-based, Kapteyn-based, the proposed ISSOR-PCG and conventional MMSE under p = 0, ω = 0.5 with SNR = 5 dB, where p denotes the pilot contamination parameter. In this figure, we assume that there is no pilot contamination. We can see that the NMSE performance of all methods is reducing as the number of iterations are increasing, except MMSE estimator. And the convergence rate of the proposed scheme is more quickly than CG-based and SSOR-based schemes. The performance of the proposed ISSOR-PCG method outperforms other methods and also approaches to the MMSE method with the iterations increase. Note that the proposes method has better convergence rate than the base line schemes.

Low-Complexity Channel Estimation Method

2257

5 CG[6] SSOR[13], p=0, w=0.5 ISSOR−PCG, p=0, w=0.5 Taylor−based[15], p=0 Kapteyn−based[15], p=0 MMSE[5], p=0

NMSE(dB)

0

−5

−10

−15

5

10

15

20

Iteration−numbers(N)

Fig. 2. Comparison of NMSE performance of different methods with p = 0, ω = 0.5

5

Conclusion

In this paper, a low-complexity channel estimation method based on ISSORPCG is proposed to degrade the computational complexity in massive MIMO systems by avoiding the high dimension matrix inversion directly. And how to approximate the optimal relaxation parameter is also introduced, which is only determined by the number of antennas at the BS and terminals. Analysis shows that the proposed method is capable of reducing the complexity from O(M 3 ) to O(M 2 ), and simulation results demonstrate that the proposed method outperforms the CG-based, SSOR-based, Taylor-based and Kapteyn-based estimators, and achieves very close performance to MMSE estimator with several number of iterations. Acknowledgements. This work is supported in part by the National Natural Science Foundation of China (No. 61571108 and No. 61701197), the open research fund of National Mobile Communications Research Laboratory of millimeter wave, Southeast University (No. 2018D15), Project funded by China Postdoctoral Science Foundation (No. 2018M641354), the open research fund of the National Key Laboratory of millimeter wave, Southeast University (No. K201918), Postgraduate Research and Practice Innovation Program of Jiangsu Provence (No. SJCX180646).

References 1. Rusek F, Persson D, Lau BK, Larsson EG, Marzetta TL, Edfors O, Tufvesson F (2013) Scaling up MIMO: opportunities and challenges with very large arrays. IEEE Signal Process Mag 30(1):40–60 2. Lu L, Li G, Swindlehurst A, Ashikhmin A, Zhang R (2014) An overview of massive MIMO: benefits and challenges. IEEE J Sel Topics Signal Process 8(5):742–758

2258

C. Zhou et al.

3. Wu S et al (2018) A general 3-D non-stationary 5G wireless channel model. IEEE Trans Commun 66(7):3065–3078 4. Larsson EG et al (2014) Massive MIMO for next generation wireless systems. IEEE Commun Mag 52(2):186–195 5. Shariati N, Bjornason E, Bengtsson M, Debbah M (2013) Low-complexity polynomial channel estimation in large-scale MIMO with arbitrary statistics. IEEE Trans Veh Technol 54(4):1932–4553 6. Liu Y et al (2012) Complex adaptive LMS algorithm employing the conjugate gradient principle for channel estimation and equalization. Circuits Syst Signal Process 31(3):1067–1087 7. Zhang R, Zhao H, Zhang J (2018) Distributed compressed sensing aided sparse channel estimation in FDD massive MIMO system. IEEE Access 6:18383–18397 8. Huang H, Yang J, Huang H, Song Y, Gui G (2018) Deep learning for superresolution channel estimation and DOA estimation based massive MIMO system. IEEE Trans Veh Technol 67(9):8549–8560 9. Golub GH, Van Loan CF (2012) Matrix computations, 4th edn. JHU Press 10. Bjorck A (2015) Numerical methods matrix computation. Spring, New York 11. Gen L, Chun-an T, Lian-chong L (2013) High-efficiency improved symmetric successive over-relaxation preconditioned conjugate gradient method for solving largescale finite element linear equations. Appl Math Mech 34(10):1225–1236 12. Sun Y, Li Z, Zhang C, Zhang R, Yan F, Shen L (2017) Low complexity signal detector based on SSOR iteration for large-scale MIMO systems. In: 2017 9th international conference on wireless communications and signal processing (WCSP). Nanjing, pp 1–6 13. Xie T, Dai L, Gao X, Dai X, Zhao Y (2016) Low-complexity SSOR-based precoding for massive MIMO systems. IEEE Commun Lett 20(4):744–747 14. Bai Z, Silverstein JW (2010) Spectral analysis of large dimensional random matrices. Science Press, Beijing 15. Li Z, Wang B, Sun Y, Yan F, Xing S, Shen L (2017) Low-complexity channel estimation based on weighted kapteyn series expansion for massive MIMO systems. In: 2017 9th international conference on wireless communications and signal processing (WCSP). Nanjing, pp 1–5

Ship Classification Methods for Sentinel-1 SAR Images Jia Duan(&), Yifeng Wu, and Jingsheng Luo AVIC Leihua Electronic Technology Research Institute, Wu’xi 214031, China [email protected]

Abstract. Based on the publicly opened SAR dataset of ships, methods for ship classification have been presented in this paper. For comparison, a joint feature based method for ship classification for SAR is described first. In this method, features for SAR ship classification are concluded, in which density of RCS and main-structure feature have been proposed to discriminate ships. Afterwards, a modified LeNet based method has been presented for SAR ship classification, by restricting the size of convolutional window and layers according to the properties of SAR. Experiments are conducted on the real measured data to show the effectiveness of the methods above. And by comparing the methods, the proper method for SAR ship classification has been concluded, as well. Keywords: SAR

 Ship classification  CNN  Feature extraction

1 Introduction Synthetic Aperture Radar (SAR) has become an important source for maritime surveillance, due to its all-weather, day and night operating, wide coverage capabilities [1]. In recent years, thanks to the exploitation of space areas, a few amounts of satellite SAR systems have been launched, making it possible to conduct marine researches, such as sea ice detection [2], sea wind retrieval [3], ship detection and classification [4], etc. Among the launched satellites, Sentinel-1 satellites are designed by the European Space Agency, which is able to provide C-band SAR images at medium to high resolution. Since it is publicly available, there are a lot of researches conducted based on the acquired data. Ship classification based on SAR images are one of the most important fields for maritime surveillance. By identifying the class of ships, it is capable to promise maritime traffic safety, fisheries control and military border control, etc. In general, traditional SAR ship classification methods are consisted of four stages such as detection, discrimination, feature extraction and classification. The first two steps are utilized to obtain target region of interest (ROI), on which the feature extraction and classification can be performed. Feature extraction is the key step of ship classification. The more discriminative the feature is, the easier the class can be decided. Hence, there are lots of researches available, focusing on extracting discriminative features for SAR images of ships [5–7]. Basically, there are two kinds of ship features for SAR, including the geometric features and physical features. The geometric features are utilized to discriminate the shapes of ships targets. Typical geometric features include © Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 2259–2269, 2020 https://doi.org/10.1007/978-981-13-9409-6_274

2260

J. Duan et al.

length, width, centerline, bounding box, edge, etc. Physical features including polarimetric and scattering feature, indicate the different physical mechanisms resulting from the reciprocity of target and environment. Among them, the superstructure of ship is one of the most useful features for ship classification. Nowadays, with the rapid development of artificial intelligence, the deep learning methods are bringing into take place of human conducted feature extraction to improve the performance of ship classifications in SAR [8]. Such convolutional neural networks (CNN) as LeNet, VGG are validated as useful structures for SAR ship recognition [9]. However, for both traditional feature-based classification and deep learning based methods, there are few researches compares their behaviors. Motivated by this, we compare the performance of traditional feature based methods and CNN based methods on a publicly opened dataset-OPENSAR in this paper [10]. The OPENSAR ship is a dataset dedicated to Sentinel-1 ship interpretation, which provides 11346 SAR ship chips in over 17 classes. In this way, it is capable to perform different ship classification methods due to its large amount of samples. Firstly, we summarize the features useful for ship classification and the feature extraction methods. Afterwards, modified LeNetbased CNN is presented for ship classification. By changing the basic structures and layers, we are trying to explore the effects of changing structures on classification. Finally, by comparing the performance of traditional feature based ship classification method and CNN based ones, some conclusions are drawn.

2 Multi-feature Extraction of Ships for SAR In general, the features of SAR ship chips can be categorized in two classes. For SAR ships, the first class is geometric features, which are consisted of point features, linear features and region features. The second class is defined as physical features which include the polarimetric features and electro-magnetic features, respectively. The detailed features are shown in Fig. 1. In order to extract multi-features for SAR ship classification, a multi-feature extraction method is proposed here. The main procedures are shown in Fig. 2. (1) Image Segment and Binary Conversion. Although we have SAR chips, it is necessary to separate the ship targets from the clutter background. Usually, a two parameter based CFAR is applied to extract ships in SAR. Afterwards, supposing the target pixels to be 1, while background ones to be 0, a binary image of SAR ship can be obtained easily. (2) Statistical Moment Features Extraction. Based on the extracted SAR ship and binary image, the statistical moment features can be computed according to their definitions with great ease. (3) Geometric Feature Extraction. As can be seen from Fig. 2 clearly, the centerline extraction progress is the key step of multi-feature extraction, based on which other features can be extracted easily. There are a lot of centerline extraction methods for ships in SAR available, among which the

Ship Classification Methods for Sentinel-1 SAR Images

2261

Fig. 1. Main features of SAR ships

Fig. 2. Flowchart of SAR ships multi-feature extraction

approach proposed in [11] is validated to be excellent in most cases. This approach utilizes the Hough Transform to acquire line groups. By accumulating the line groups with different ship width, a joint centerline and width estimation can be obtained. Based on the extracted centerline, the minimum bounding box can be estimated by computing the line vertical to the centerline across the intersection point of edge and centerline.

2262

J. Duan et al.

Afterwards, the shape parameters such as length, width, ratio of length to width, shape complexity, etc. (4) Physical Features Extraction. Different from ground targets such as cars, it is difficult to extract attributed scattering centers. Instead, we usually utilize the density of RCS and main-structure to represent the distribution of ship targets. • Density of RCS The density of RCS is defined as the distribution of RCS in different parts of the ship. Usually, the density of RCS can be computed following the minimum bounding box extraction. By computing the intersection of edge and centerline, the length can be obtained. Defining part number as M, it is possible to segment the shape of the ship into M parts along the centerline. By computing the energy in each region, the density of RCS can be expressed as qdens ¼ ½ E1 . . . EM . For normalization, it can also be written as qdens ¼ ½ E2 . . . EM =E1 . By computing this, it is possible to know the distribution of ships’ superstructures, which is an important feature to discriminate different types of ship. For example, for passengers, superstructures are located at the middle of ships. While for bulk carriers, superstructures are located at the bottom of ships. • Main-structure The main-structure of ship is defined as the combination of strong scatters and edge. The edge can be detected easily after image segmentation. Then, a clean-based strong scatter extraction method is proposed in this paper. ① Let p = 1, the original image matrix be SO . Then, the position of scattering center p can be obtained by searching  the maximum value point in the image. Denote the position as np ; mp with amplitude Ap . Construct Fourier  n    m   basis Fp ¼ exp j2p Np  N2 þ 1: N2 exp j2p Np  M2 þ 1: M2 , in which M and N are the sample number of azimuth and range, respectively. Then,     the strong scatterer p can be reconstructed as Sp ¼ Ap norm FFT Fp , in which FFT and norm mean the two dimensional Fourier Transform and normalization operator, respectively. ② Update the image by SO ¼ SO  Sp and p ¼ p þ 1. The remained energy of image can be computed by summing the energy of SO . Let S0 be the energy of original image. If SO =S0 [ Thre, repeat above operations. Otherwise, skip to step 3. ③ The image of strong scatter can be Pcomputed by summarizing the extracted scattering centers such as Rc ¼ p Sp . Afterwards, the super-structure of ship in SAR can be obtained by adding Rc with edge image. For ship classification, a PCA based feature selection is always operating after feature extracting. Afterwards, the classifiers can be trained based on the extracted features. According to different amounts of samples and different targets, different classifier can be employed. Among which, SVM is always utilized for its perfect performance in most cases [12].

Ship Classification Methods for Sentinel-1 SAR Images

2263

3 Deep CNN Networks for SAR Ship Classification Typically, there are four kinds of operators in CNN, including convolutional operator, pooling operator, activation operator and fully connected layer. The convolutional operator is defined as zi0 ;j0;c ¼

M X N X

Wi0 ;j0;c xi0 þ i;j0 þ j

ð1Þ

m¼1 n¼1

In which, Wi0 ;j0;c and xi0 þ i;j0 þ j are the convolutional window and image in pixel ði0 ; j0 Þ, M and N are the length and width of convolutional window, respectively. The convolutional operator is utilized to express different kinds of features. The pooling operator is actually an under-sampling operator to reducing the amount of parameters. In this way, the optimization of network can be easily employed. In most cases, there are two pooling operators which are mostly used, namely the maximum and average operator such as yi0 ;j0;c ¼

max

1  i  M;1  j  N

zi0 ;j0;c

or

yi0 ;j0;c ¼

M X N X

, zi0 ;j0;c

MN

ð2Þ

i¼1 j¼1

For activation, ReLU is widely used, which has the following form y ¼ maxð0; xÞ. In this paper, we proposed a CNN network based on the famous LeNet-5 according to the properties of SAR image. First, let’s review the basic structure of LeNet-5. It is first invented by Yann LeCun in1998 to recognize written digits [13]. It is consisted of seven layers. There are two convolutional layers, two pooling layers and three fully connected layers, respectively. The first layer is a convolutional layer with six feature spectrum in size 5  5. Each feature spectrum share same convolutional window and bias. Following the first convolutional layer, the second layer is a pooling layer with 2  2 window size and stride 1. Afterwards, a convolutional layer is constructed. There are 16 convolution windows with size 5  5 and stride 1. Similarly, a convolutional layer is followed by a pooling layer. The pooling layer down samples the inputs by 2  2 window size. In order to classify the ships in SAR, we proposed a modified LeNet based on the properties of SAR. • Convolutional kernel As is known, the convolutional window defines which kind of features we can extracted. If the convolutional size is too small, the network may not able to extract features of targets. If the size is too large, detailed information may be lost, as well. Therefore in each convolutional layer, we should define the size of the convolutional window. First, for ship classification, we can have a basic range of ship targets size. Therefore, we can obtain the ships have the amount of pixels are between ½jLmin =qj : jLmax =qj. The symbols | |, q and L mean the ceil operator, resolution and length of ship, respectively. Since the special imaging theory of SAR, speckle noise

2264

J. Duan et al.

exists inevitably. Convolutional window size small than three may be affected by the speckle. Hence, the convolutional size can be written as Eq. (3). C1 ¼ maxfjLmin =2qj; 3g

ð3Þ

Similarly, the maximum size of convolutional window can be given as CM ¼ minfjLmax =2qj; N=2g

ð4Þ

where, N is denoted by the size of the SAR image; M is the maximum layer of the network. The convolutional window of mth layer can be written as Cm , C1  Cm1  Cm  CM . After deciding the size of the convolutional window, its amount is also important for the network. However, there are not sufficient theories available to define the proper number of convolutional windows. In general, we often choose 16, 32, and numbers such like that. • Basic layer Here, we define a basic layer is consisted of 1-3 convolutional layers, following by a pooling layer. As is known from the VGG network [14], the combination of two 3  3 convolutional layers has an effective receptive field of 5  5. What’s more, with deeper layers the expressiveness of network can be increased. When the network does not work well, it is suggested to use more small layers instead. In this way, the amount of parameters can be reduced as well. Afterwards, a ReLU is applied to prohibit overfitting and accelerate the training progress, followed by a max pooling or average pooling operation. • The amount of layer The question how many basic layers are proper for certain works, has been a difficult problem since its invention. This problem is usually caused by the “black box” property of neutral networks, which prevented researchers to understand its inherent mechanism. Resultantly, there are not sufficient theories to determine the layers of a CNN. In most cases, we continue the “trial and error” steps until a satisfactory result has been achieved. Here, we first assume there are two basic layers M0 just the same as the original LeNet. Then, a relatively small amount of training samples are chosen to train the constructed network and testify the performance. If the result is not ideal, we update the basic layers as M0 = M0 + 1. It is to be mentioned that the constructed network is not optimized. According to available researches, several operations can be done to compress the network. In this paper, this will not be discussed in detail. • Full connected layer Following the basic layers, a full-connected layer is always constructed to pull the feature maps into a vector. Finally, the softmax is applied to classify ships in SAR.

Ship Classification Methods for Sentinel-1 SAR Images

2265

• Training For CNN training, the BP algorithm is usually used. In order for a fast convergence, the initialization is always important. When the convergence is staggered, smaller learning rate should be applied until convergence.

4 Experimental Results and Analysis In order to compare available methods for SAR ship classification, the original feature based method and the modified LeNet are both conducted on the OPENSAR dataset. 4.1

OpenSARShip Dataset

As mentioned before, the OpenSAR Ship dataset contains 11346 ship chips from 41 Sentinel-1 SAR images, collected in Shanghai, Shenzhen, Tianjin, Yokohama and Singapore, respectively. The Sentinel-1 satellite works in C wavelength and dualpolarization. In order to label the types of ships in SAR, AIS is often utilized. The distribution of classes of SAR ships in this data set has been shown in Fig. 3.

Fig. 3. Numbers of ship chips of different ship types

Clearly, there are lots of ship chips in the dataset. However, the numbers of different types of ships are very diverse. The huge gaps in number may lead to the degradation in performance of machine learning based classification methods. Hence, in this paper, before feature extraction or ship classification, we should pre-processing the dataset first. First, the images in this dataset should be normalized to a standard size. Hence, a standard size N  N is defined. Down-sampling and interpolation are always used in this operation. Then, the numbers of different types of ships are diverse greatly, which results in error discrimination with great ease. Hence, we should uniform the numbers of training samples in advance. For ships whose number is relatively large, it is recommended to randomly select some of them. For ships whose number is relatively small, the sampling extension can be applied. Such extension operations include rotation, translation, adding noise and filtering. Since the SAR image is sensitive to incident angle, it is recommended that the angle of rotation should be no bigger than 2.5°.

2266

4.2

J. Duan et al.

Feature Extraction and SVM Based Classification

For traditional feature based classification method, the feature extracting process is important. In this paper, besides traditional feature, the density of RCS and Mainstructure are proposed to discriminate different kinds of ships. In order to show their properties in improving the classification, the features are shown in Fig. 4. As is shown, the features are different in different types of ships. Thus, it is deduced that the features can be effective in classification. It has to be mentioned here, since the numbers of several types of ships are too small to be used as a reference. We mainly concentrate on studying the performance of such types of ships as cargo, dredging, fishing, tanker and tug. Then, the extracted physical features joint with geometric and statistical features are pulled into feature vectors. Then, the feature vectors of training ships are imported to train SVM classifier. Here, we use the libsvm toolbox [15]. The RBF kernel is utilized to projecting the feature vector into another space. The function ‘SVMcgForClass’ is used to find the proper parameter for punishing and kernel. Substituting these parameters into training progress, the training accuracy can only achieves 74.5%, the accuracy of testing samples are 65.6131%. This is mainly due to the fact that the images of different types of ships are very difficult to discriminate from each other. In this case, the training accuracy cannot achieve an ideal result, not to mention the testing accuracy. Therefore, more complex and nonlinear classifier is needed to express the relationship of different types of ships. That’s why we study the deep CNN network now.

Fig. 4. Features of different types of ships

Ship Classification Methods for Sentinel-1 SAR Images

4.3

2267

Ship Classifier Based on Modified LeNet

As discussed in Sect. 3, a modified LeNet can be utilized to classify the ships. Due to the diversity of ship numbers, data equalization is very important before training progress.

Fig. 5. Ship classifying networks

To illustrate this, we first use a CNN structure similar to LeNet to classify the ships as Fig. 5a shows. The classifier is first applied to the original dataset. Then, data enlarging is applied to ships whose number of training samples is smaller than 100. Finally, we cut down the number of ships which is too much. The testing accuracies of three cases are listed in Table 1, which show the importance of the equality number of different training ship types. It can be concluded that, for both traditional feature-based methods and deep learning base methods, the more equal the ship numbers are, the more accurate the testing is. Hence, it is recommended to preprocess the dataset first before training. Table 1. Comparison of different training samples Types of ships Number of training samples Case 1 Case 2 Case 3 Cargo 3956 3956 1960 Dredging 55 165 165 Fishing 95 190 190 Search 28 168 168 Tanker 835 835 835 Tug 145 145 145 Total 5114 5459 3463

Number of testing Testing samples Case 1 1949 0.96 25 0.28 31 0.74 13 1.00 835 0.40 31 0.39 2884 0.779

accuracy Case 2 0.94 0.2 0.71 1.00 0.45 0.39 0.782

Case 3 0.93 0.16 0.68 1.00 0.80 0.29 0.876

2268

J. Duan et al.

As mentioned above, the structure of modified LeNet makes difference in SAR ship classification. It is generally known that smaller convolutional kernels and deeper layers will lead to a more accurate precision. To testify this, we conducted our experiment by using structures as Fig. 5b, c show. The final testing accuracies are shown in Table 2. As can be seen, by using the combination of two 3  3 convolutional layers to take place of one 5  5 convolutional layer, the performance of classification has been improved to some degree. However, with deeper layers, the performance degrades slightly, which means the structure of the network is already complex enough to express the relationships of classification. Therefore, in this case, the structure v2 is chosen to classify the ship targets in SAR. What’s more, compared with traditional feature based classification methods, the CNN based classifiers are better at dealing with complex cases. This is suited for the case in sentinel-1 SAR ships classifying.

Table 2. Comparison of different structures Types of ships

Number of training samples

Number of testing samples

Cargo Dredging Fishing Search Tanker Tug Total

1822 165 190 168 835 145 3157

1949 25 31 13 835 31 2871

Testing accuracy Structure Structure (a) (b) 0.93 0.94 0.16 0.16 0.74 0.77 1.00 1.00 0.80 0.77 0.29 0.39 0.876 0.881

Structure (c) 0.95 0.24 0.68 1.00 0.73 0.52 0.873

5 Conclusions In this paper, methods for ship classification in SAR have been studied. First, the traditional feature based method has been illustrated, in which the features of ships in SAR has been concluded. Second, a modified LeNet for ship classification is given by refining the convolutional window and layers according the SAR properties. Finally, the experiments have been conducted on the OPENSAR dataset. In these experiments, it is validated that the equalities of training samples, the layers of CNNs, and the sizes of convolutional windows make difference in SAR ship classification.

References

1. Wang Y (2017) Maritime surveillance with undersampled SAR. IEEE Geosci Remote Sens Lett 14(8):1423–1427
2. Wakabayashi H (2013) Sea ice detection in the sea of Okhotsk using PALSAR and MODIS data. IEEE J Sel Top Appl Earth Observations Remote Sens 6(3):1516–1523
3. Rana FM, Adamo M, Blonda P (2018) LG-mod multi-scale approach for SAR sea surface wind directions retrieval. In: IEEE international geoscience and remote sensing symposium, pp 3216–3219
4. Wei J, Li P, Yang J (2014) A new automatic ship detection method using-band polarimetric SAR imagery. IEEE J Sel Top Appl Earth Observations Remote Sens 7(4):1383–1393
5. Haitao L, Wu S, Lai Q et al (2017) Capability of geometric features to classify ships in SAR imagery. Image Signal Process Remote Sens XXII:1–8
6. Jiang M, Yang X, Dong Z et al (2016) Ship classification based on superstructure scattering features in SAR images. IEEE Geosci Remote Sens Lett 13(5):616–620
7. Lang H, Wu S (2017) Ship classification in moderate-resolution SAR image by naive geometric features-combined multiple kernel learning. IEEE Geosci Remote Sens Lett 14(10):1765–1769
8. Chen S, Wang H, Xu F et al (2016) Target classification using the deep convolutional networks for SAR images. IEEE Trans Geosci Remote Sens 54(8):4806–4817
9. Wang Y, Wang C, Zhang H (2018) Ship discrimination with deep convolutional neural networks in SAR images. In: IEEE international geoscience and remote sensing symposium, pp 1–4
10. Huang L, Liu B, Li B et al (2018) OpenSARShip: a dataset dedicated to Sentinel-1 ship interpretation. IEEE J Sel Top Appl Earth Observations Remote Sens 11(1):195–210
11. Pastina D, Spina C (2009) Multi-feature based automatic recognition of ship targets in ISAR. IET Radar Sonar Navig 3(4):406–423
12. Lang H, Zhang J, Zhang X et al (2016) Ship classification in SAR image by joint feature and classifier selection. IEEE Geosci Remote Sens Lett 13(2):212–216
13. Lecun Y, Bottou L, Bengio Y et al (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324
14. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. In: ICLR. Comput Sci 1–15
15. Fan RE, Chen PH, Lin CJ (2005) Working set selection using second order information for training support vector machines. J Mach Learn Res 6:1889–1918

Wheat Growth Assessment for Satellite Remote Sensing Enabled Precision Agriculture

Yuxi Fang1, He Sun1, Yijun Yan1, Jinchang Ren1(&), Daming Dong2, Zhongxin Chen3, Hong Yue1, and Tariq Durrani1

1 Department of Electronic and Electrical Engineering, University of Strathclyde, 204 George Street, Glasgow G1 1XW, UK
[email protected]
2 National Engineering Research Center for Information Technology in Agriculture, Beijing, China
3 Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences, Beijing, China

Abstract. In this paper, a backpropagation (BP) neural network algorithm and a multiple factor regression (MR) algorithm are presented to improve the prediction of wheat growth. By applying the BP neural network and the MR algorithm, the corresponding Leaf Area Index (LAI) and Soil Plant Analysis Development (SPAD) values can be regressed from Thematic Mapper (TM) data. The experimental results demonstrate that the designed framework performs well, effectively predicting the desired parameters and providing a promising solution for crop growth monitoring. To find a better solution for crop growth monitoring, the performance of the BP neural network and the MR algorithm has been compared.

Keywords: Precision agriculture · Wheat growth assessment · Multiple factor linear regression · Artificial neural network

1 Introduction

In the last few decades, crop growth monitoring has become an essential issue in the macro-management of agricultural production. With the indispensable information provided by crop growth monitoring, the crop yield can be estimated well before the harvest, which offers the prospect of improving the yield. Nowadays, a wide range of applications have been proposed along with the development of remote sensing techniques [1–3]. Lelong et al. [1] mounted a filter and a digital camera on a UAV, monitored wheat experimental fields in southwestern France, and analyzed the relationship between spectral indices and the biophysical parameters measured in the field based on spectral images in the visible-near infrared band. Due to their straightforward accessibility, remote sensing data from satellites have also been applied to the prediction of crop growth. Yang et al. [2] estimated winter crop yield in

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 2270–2277, 2020 https://doi.org/10.1007/978-981-13-9409-6_275


the North China Plain by assimilating multitemporal Landsat TM images into the Geographic Information System (GIS)-based Environment Policy Integrated Climate (EPIC) model. Furthermore, Zhao [3] applied the Normalized Difference Vegetation Index (NDVI) curve retrieved from satellite images to monitor the yield of soybean. Wang et al. [4] used ecological parameters and remote sensing data to construct a winter wheat remote sensing quality model based on research on the relationship between different vegetation indexes (VIs) at different growth stages of winter wheat. At present, various models have been implemented to predict crop growth, such as the Random Forest (RF) [5], Artificial Neural Networks (ANN) [6], the Support Vector Machine (SVM) [7], and ridge regression [8]. Since each regression method has its own characteristics, it is difficult to find one approach that is superior to the others. For example, the BP neural network relies on its initial weights, and improper weights may damage the prediction performance. Therefore, a hybrid approach is desired to obtain better prediction results without much computational cost. In this paper, the BP neural network and the MR algorithm are compared to optimize the wheat growth prediction model. The inputs of the designed framework are the vegetation parameters based on TM data, and the outputs are the estimated LAI and SPAD. The two designed models, the BP neural network and the MR algorithm, can be optimized by employing the training dataset. With the trained models, the corresponding LAI and SPAD can be predicted. By comparing the prediction results with the results measured on the ground, the feasibility and validity of the two designed algorithms for vegetation prediction can be verified. What is more, the merits and disadvantages of the two algorithms can be examined. The rest of this paper is organized as follows. Section 2 introduces the related techniques in our methodology, including a review of the BP neural network and the MR algorithm. In Sect. 3, the details of our framework are elaborated. Experimental results and discussions are presented in Sect. 4. Finally, some concluding remarks are drawn in Sect. 5.

2 Related Work

2.1 BP Neural Network

The BP neural network is one of the most popular regression approaches; it constantly revises the weights of the network so that the error function decreases along the direction of the negative gradient and the expected output can be acquired [6]. The raw data samples enter through the input layer, are processed by the hidden layer, and the outputs are obtained from the output layer. If the actual output of the output layer matches the expected output, the learning algorithm terminates; otherwise, error back propagation starts. During back propagation, the output error, which reflects the difference between the actual output and the expected output, is propagated back along the original path and distributed through the hidden layer to the input layer, assigning an error signal to each unit of each layer. These error signals are used as the basis for correcting the weights of each unit. This calculation is carried out by the gradient descent


method. After continuously adjusting the weights of each layer of neurons, the error signal is reduced to a minimum. This process of constant weight adjustment is the training process of the network: forward propagation of the input data and back propagation of errors are repeated, and the weights are adjusted, until convergence.

2.2 MR Algorithm

When there are many input parameters in the system, the BP neural network may not effectively reflect the mapping between the input and the output. A simple but efficient way to address this is the MR algorithm. Different from commonly used single-factor linear regression, the MR algorithm can handle the multiple-factor problem and find a suitable model to represent the relationship between the inputs and the output. For example, the relationship between the desired output y and its determining inputs x1, x2, . . . , xp can be denoted as:

y = a1 x1 + a2 x2 + · · · + ap xp    (2.1)

With the chosen training samples, this function can be solved, the corresponding coefficients a1, a2, . . . , ap can be calculated, and the obtained model can be utilized to predict the desired parameters.
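A minimal sketch of fitting Eq. (2.1) by least squares is given below. The data are synthetic stand-ins (the real inputs would be the band intensities and vegetation indices of Table 1), so the coefficient values have no agronomic meaning.

```python
import numpy as np

# Hypothetical training data: rows are plots, columns are the input factors
# x1..xp; y is the target quantity (LAI or SPAD).
rng = np.random.default_rng(1)
X = rng.uniform(size=(100, 5))                     # p = 5 input factors
true_a = np.array([0.8, -0.3, 1.2, 0.5, 0.1])
y = X @ true_a + 0.05 * rng.normal(size=100)       # noisy linear response

# Least-squares solution of y = a1*x1 + ... + ap*xp (an intercept column could
# be appended to X if a constant term is wanted).
a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated coefficients:", np.round(a_hat, 3))

# Prediction for new samples uses the fitted coefficients.
X_new = rng.uniform(size=(3, 5))
print("predictions:", X_new @ a_hat)
```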

3 Proposed Framework

3.1 Data Preparation

The dataset in our framework was captured from the Lixia River area, Yangzhou City, Jiangsu Province, China. The area is located at 32°25′ N, 119°31′ E, and the total area of the test site is 50 hm². The experimental data come from LANDSAT satellite data acquired on May 7, 2015 and from ground manual measurements.

Fig. 1. The experimental field and cell distribution


Table 1. The parameters and formulas used in our framework

Parameters                                        Formula
Redness intensity                                 R
Greenness intensity                               G
Blueness intensity                                B
Near infrared intensity                           NIR
Normalized difference vegetation index (NDVI)     (NIR − R)/(NIR + R)
Ratio vegetation index (RVI)                      NIR/R
Green ratio vegetation index (GRVI)               NIR/G − 1
Normalized redness intensity (NRI)                R/(R + G + B)
Normalized greenness intensity (NGI)              G/(R + G + B)

The experimental field was divided into 1078 plots, of which 147 representative breeding plots were measured with a SPAD-502 chlorophyll content analyzer and an LAI-2200C plant canopy analyzer; the winter wheat canopy chlorophyll and leaf area index information was collected. The cell distribution is shown in Fig. 1. In our dataset, the first four bands (TM1, TM2, TM3, TM4) of the Landsat TM data correspond to the visible blue, green, red and near infrared bands. Vegetation index parameters such as NDVI, RVI, GRVI, NRI and NGI can be computed from these band values. The parameters and related formulas are shown in Table 1.
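The index computation of Table 1 is straightforward to express in code; the sketch below is a generic NumPy implementation with illustrative pixel values, not the authors' processing chain, and the small epsilon guard is an added assumption to avoid division by zero.

```python
import numpy as np

def vegetation_indices(R, G, B, NIR):
    """Compute the index parameters of Table 1 from the first four TM bands.
    Inputs are arrays (or scalars) of band intensity/reflectance values."""
    R, G, B, NIR = map(np.asarray, (R, G, B, NIR))
    eps = 1e-12                       # guard against division by zero
    total = R + G + B + eps
    return {
        "NDVI": (NIR - R) / (NIR + R + eps),
        "RVI": NIR / (R + eps),
        "GRVI": NIR / (G + eps) - 1,
        "NRI": R / total,
        "NGI": G / total,
    }

# toy example for a single pixel
print(vegetation_indices(R=0.12, G=0.10, B=0.08, NIR=0.45))
```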

Fig. 2. Schematic diagram of the network structure

3.2 Designed Method

Since the estimated vegetation parameters (R, G, B, NIR and NDVI) are used as inputs and the desired outputs are the LAI and SPAD, the numbers of nodes in the input layer and output layer are set to 5 and 2, respectively.


In a neural network, the hidden layer can approximate a non-linear function with arbitrary accuracy if there are enough hidden nodes. Therefore, a three-layer, multi-input and two-output BP network with one hidden layer is adopted to establish the prediction model. There is no definitive formula for determining the number of neurons in the hidden layer; in our experiment it is set to 6 empirically. The diagram of the neural network structure is shown in Fig. 2. In our framework, the neural network and the multiple factor regression are implemented in MATLAB. The training sample data are normalized beforehand; the hidden-layer transfer function is set to tansig, the training function to trainlm, and the performance (error) function to the mean square error (MSE). In our experiments, the maximum number of iterations is set to 5000, the error goal is defined as 1e-7, and the learning rate is 0.01. After the training process is completed, the predicted LAI and SPAD can be obtained. For the multiple regression part, the corresponding coefficients are calculated from the training samples, and the LAI and SPAD can then be predicted by the obtained model.
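For readers without MATLAB, the 5-6-2 network can be approximated as sketched below using scikit-learn. This is only an analogous stand-in: scikit-learn offers no Levenberg-Marquardt trainer, so the default optimizer is used, the data are synthetic placeholders, and the hyperparameters simply mirror those quoted above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in data: 5 inputs (R, G, B, NIR, NDVI), 2 outputs (LAI, SPAD).
rng = np.random.default_rng(2)
X = rng.uniform(size=(147, 5))
Y = np.column_stack([2 + 3 * X[:, 4], 30 + 20 * X[:, 3]]) + rng.normal(scale=0.1, size=(147, 2))

X_train, Y_train, X_test, Y_test = X[:100], Y[:100], X[100:], Y[100:]
scaler = StandardScaler().fit(X_train)          # normalize inputs before training

# 5-6-2 network with tanh hidden units, mirroring the tansig choice in the text.
net = MLPRegressor(hidden_layer_sizes=(6,), activation="tanh",
                   learning_rate_init=0.01, max_iter=5000, tol=1e-7, random_state=0)
net.fit(scaler.transform(X_train), Y_train)

pred = net.predict(scaler.transform(X_test))
rmse = np.sqrt(np.mean((pred - Y_test) ** 2, axis=0))
print("RMSE (LAI, SPAD):", rmse)
```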

4 Results

As discussed above, five parameters, namely R, G, B and NIR from the TM data and the estimated NDVI, are used as the inputs, and the measured LAI and SPAD are employed as the ground truth. There are 147 sets of data in total; 100 sets are employed to train the designed BP neural network and the MR model, and the remaining 47 sets are used as test samples.

Fig. 3. The comparison between the measured LAI and the predicted LAI from the test samples


Fig. 4. The comparison between the measured SPAD and the predicted SPAD

From Fig. 3, we can clearly see that the predicted result of the MR is much closer to the measured LAI, which indicates that the MR prediction is more consistent and reliable, whereas the prediction of the BP is more erroneous and scattered. What is more, the predicted result of the MR is more convergent and stable, which suggests that the MR is less affected by measurement error. When the measured data show significant changes or potential measuring errors, the MR is more stable than the BP, which indicates the advantage of the MR when the amount of data is large. In Fig. 4, the predicted results of the BP and the MR for SPAD are shown. It can be observed that the predicted results from both methods are smoother than the measured result, which may be caused by incorrect measurements. However, the prediction of the MR is still more accurate than that of the BP, which supports the efficiency of our proposed framework. To better demonstrate the performance of the proposed framework, we compare the Root Mean Square Error (RMSE) between the ground truth and the predicted results of the BP neural network and the MR. From Table 2, we can see that the MR is more accurate than the BP neural network, which again shows the advantage of the MR algorithm.

Table 2. RMSE between model output and measured data

       LAI      SPAD
BP     0.5508   2.6367
MR     0.5456   2.6348


5 Conclusion

By designing the parameters of the input layer, hidden layer and output layer of the network, a neural network structure suitable for vegetation prediction is constructed. Besides, we have employed the MR algorithm to enhance the prediction result. After the training process, the obtained models can reflect the characteristics of surface vegetation and achieve good prediction results. In our proposed framework, the relationship between the input and the desired output can be better investigated by the BP neural network and MR algorithms, which provides a better solution for crop growth monitoring. For future work, we will try to derive more valuable information from the raw data samples and improve the prediction accuracy with novel feature extraction and selection methods, such as folded principal component analysis (PCA) [9], sparse representation [10], the segmented autoencoder [11], singular spectrum analysis [12], the curvelet transform [13], and deep learning based methods [14], etc.

Acknowledgements. The authors wish to thank the support from the Newton Network+ project: VIP-STB (Scale-up Village to County/Province Level to support Science and Technology at Backyard Programme).

References

1. Lelong CCD, Burger P, Jubelin G, Roux B, Labbé S, Bare F (2008) Assessment of unmanned aerial vehicles imagery for quantitative monitoring of wheat crop in small plots. Sensors. https://doi.org/10.3390/s8053557
2. Yang P, Zhou Y, Chen Z, Zha Y, Wu W, Shibasaki R (2006) Estimation of regional crop yield by assimilating multi-temporal TM images into crop growth model. IGARSS. https://doi.org/10.1109/IGARSS.2006.584
3. Zhao Y (2014) Crop growth dynamics modeling using time-series satellite imagery. Proc SPIE. https://doi.org/10.1117/12.2070387
4. Wang D, Li Y, Fan W, Qin Q (2012) Monitoring wheat quality protein content in critical period based division by remote sensing. IGARSS. https://doi.org/10.1109/IGARSS.2012.6352085
5. Hao P, Zhan Y, Wang L, Niu Z, Shakir M (2015) Feature selection of time series MODIS data for early crop classification using random forest: a case study in Kansas, USA. Remote Sens. https://doi.org/10.3390/rs70505347
6. Kaul M, Hill RL, Walthall C (2005) Artificial neural networks for corn and soybean yield prediction. Agric Syst. https://doi.org/10.1016/j.agsy.2004.07.009
7. Mathur A, Foody GM (2008) Crop classification by support vector machine with intelligently selected training data for an operational application. Int J Remote Sens. https://doi.org/10.1080/01431160701395203
8. Herrera JM, Häner LL, Holzkämper A, Pellet D (2018) Evaluation of ridge regression for country-wide prediction of genotype-specific grain yields of wheat. Agric Meteorol. https://doi.org/10.1016/j.agrformet.2017.12.263


9. Zabalza J, Ren J, Yang M, Zhang Y, Wang J, Marshall S, Han J (2014) Novel folded-PCA for improved feature extraction and data reduction with hyperspectral imaging and SAR in remote sensing. ISPRS J Photogrammetry Remote Sens. https://doi.org/10.1016/j.isprsjprs.2014.04.006
10. Sun H, Ren J, Zhao H, Yan Y, Zabalza J, Marshall S (2019) Superpixel based feature specific sparse representation for spectral-spatial classification of hyperspectral images. Remote Sens 11:536
11. Zabalza J, Ren J, Zheng J, Zhao H, Qing C, Yang Z, Du P, Marshall S (2016) Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging. Neurocomputing. https://doi.org/10.1016/j.neucom.2015.11.044
12. Zabalza J, Ren J, Wang Z, Marshall S, Wang J (2014) Singular spectrum analysis for effective feature extraction in hyperspectral imaging. IEEE Geosci Remote Sens Lett. https://doi.org/10.1109/LGRS.2014.2312754
13. Qiao T, Ren J, Wang Z, Zabalza J, Sun M, Zhao H, Li S, Benediktsson JA. Effective denoising and classification of hyperspectral images using curvelet transform and singular spectrum analysis. IEEE Trans Geosci Remote Sens. https://doi.org/10.1109/tgrs.2016.2598065
14. Yan Y, Zhao H, Kao F, Vargas VM, Zhao S, Ren J (2018) Deep background subtraction of thermal and visible imagery for pedestrian detection in videos. BICS. https://doi.org/10.1007/978-3-030-00563-4_8

An Improved ToA Ranging Scheme for Localization in Underwater Acoustic Sensor Networks

Jinwang Yi1(&), Zhipeng Lin1, Fei Yuan2, Xianling Wang1, and Jiangnan Yuan1

1 Xiamen University of Technology, Xiamen, China
[email protected]
2 Key Laboratory of Underwater Acoustic Communication and Marine Information Technology, Xiamen University, Xiamen, China

Abstract. Location information is a crucial requirement in underwater acoustic sensor networks. Since ToA-based localization is commonly used among the many ranging-based localization techniques, the problem of improving the performance of ToA ranging ought to be carefully considered. In this paper, we propose an improved ToA ranging scheme for localization in UASNs. Based on a two-step process, this scheme can acquire accurate detection of the earliest arrival time and an estimate of the clock offset. Experimental results with real sea-trial data confirm that our proposed scheme improves the accuracy of ToA ranging in both respects, and this method is then employed for underwater localization.

Keywords: Underwater acoustic sensor networks · Time of arrival · Time synchronization · Ranging

1 Introduction

Underwater acoustic sensor networks (UASNs) have aroused increasing interest in the realm of underwater acoustic research because of their potential in a wide variety of civil and military applications, such as marine observation, oceanic exploration, maritime national defence and security, etc. [1]. Location information, especially for mobile underwater submersibles, is a crucial requirement in UASNs and is widely used in underwater missions [2]. This has led to the development and evolution of many ranging-based localization techniques, including time of arrival (ToA), time difference of arrival (TDoA), angle of arrival (AoA) and received signal strength indication (RSSI) [3]. The ToA of a transmitted signal is a commonly used measurement for localization and can benefit from information sharing and fusion between sensor nodes when applied to UASNs [4]. In the case of ToA, the accuracy of ranging between two nodes is a critical factor for localization in UASNs [5]. Therefore, the problem of improving the performance of ToA ranging ought to be carefully considered.

© Springer Nature Singapore Pte Ltd. 2020 Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 2278–2284, 2020 https://doi.org/10.1007/978-981-13-9409-6_276


In this paper, we propose an improved ToA ranging scheme for localization in UASNs. Based on a two-step process, this method can acquire accurate ranging measurements with the compensation of clock offset. Experimental results with real sea-trial data indicate that our proposed scheme has excellent performance in both detection of the earliest arrival time and estimate of clock offset. The rest of this paper is organized as follows. We first review the design challenges in Sect. 2. The proposed scheme of improved ToA ranging is then completely described in Sect. 3. Experimental results are presented in Sect. 4. Finally, conclusions are offered in Sect. 5.

2 Design Challenges

The intrinsic drawbacks of UASNs, such as node mobility, sparse deployment, energy constraints and the complex underwater channel, make the problem of ToA-based localization challenging.

1. Node mobility
Underwater sensor nodes often have inherent mobility in UASNs, which is caused by the dynamic characteristics of the ocean environment [6]. Even in static underwater systems, sensor nodes tend to undergo some degree of mobility due to water currents or wind. Besides, mobile platforms, which have attracted growing attention, such as autonomous underwater vehicles (AUVs), remotely operated vehicles (ROVs), gliders and floats, naturally aggravate the mobility of sensor nodes. As a result, the displacement of node positions during ranging may range from several tens of meters up to a hundred meters. Hence the mobility of sensor nodes should be considered to improve the accuracy of ToA ranging.

2. Energy consumption
The energy efficiency of underwater sensor nodes is a crucial constraint, since nodes are battery operated and hard to replenish [7]. The operating time of a sensor node is severely restricted by the limited power supply. Therefore, to address energy efficiency, ranging with a limited message exchange overhead needs to be developed in UASNs.

3. Complex underwater channel
The underwater acoustic channel, which is characterized by limited bandwidth, long propagation delays and multipath fading, is an extremely complex channel that varies with time, space and frequency [8], and underwater acoustic communication has long been known to be quite challenging [9]. Hence, in order to accomplish more accurate ranging in UASNs, these impedimental effects of the underwater channel should be taken into account when designing a ToA ranging estimation scheme.


3 Description of the Improved ToA Ranging Scheme

To achieve a better elucidation, the problem formulation for ToA ranging is addressed first. As shown in Fig. 1, a localization sensor network which consists of three reference beacons with known positions and one mobile underwater submersible is considered here. Without loss of generality, underwater nodes are equipped with depth sensors, so the three-dimensional position estimation is converted into a two-dimensional localization problem in UASNs. Theoretically, an unknown position of the mobile submersible can be calculated via the trilateration method, with the locus of possible points at the intersection of circles. However, due to the errors of ToA ranging, the distance measurements carry ranging ambiguity. The rings, or the circles with ranging errors, do not intersect at a unique point but within a possible area with ranging uncertainty e, as shown in Fig. 1. Thus, a ToA ranging scheme with energy efficiency and higher accuracy is significant for localization in UASNs. Assume that a signal was transmitted by any one of the reference beacons at time tj and received by the mobile underwater submersible at time tk. With knowledge of the speed of sound c, a measurement of ToA ranging between these two nodes is constrained by Eq. (1):

d̂ = c × (tk − tj) = c × Δt    (1)

If all sensor nodes in UASNs had the same notion of time, ranging would be a straightforward process. But the clocks in sensor nodes are not identical and usually tick at slightly different rates, referred to as clock drift. As time elapses, the induced time differences between sensor nodes, referred to as clock offset, become significant and cannot be neglected. By relating the clock offset and clock drift of the sensor nodes to the measured time of flight Δt, the following equation can be derived from Eq. (1):

Δt = (tk − tj) + s + g × (tk − tj)    (2)

where s denotes the clock offset between a beacon and a mobile submersible at time tj, and g is the relative clock drift between these two nodes. Because reference beacons with known positions are usually deployed on the sea surface or can synchronize with each other through acoustic communications, the beacons are time synchronized to GPS or to their common notion of time. The estimation of clock offset is then simplified to estimating the time difference of the local clock of the mobile underwater submersible with respect to the global clock of the reference beacons. Furthermore, as mentioned in our prior work [10], the effect of clock drift is negligible when an accurate clock is used. As a result, a simplified expression for the measured time of flight is given by Eq. (3):

Δt = (t̂k − tj) + s    (3)

where t̂k indicates the earliest arrival time to be determined at the receiver, and tj is the send time, which is incorporated in the transmitted message as a time stamp.


Fig. 1. ToA-based localization

To address these challenges, in this paper we propose an improved ToA ranging scheme for localization. It calculates the time of flight with a two-step process, which consists of detection of the earliest arrival time and estimation of the clock offset. Here, the one-way travel time between the transmitters and the receivers is employed to diminish the energy budget of the nodes and the effect of node mobility. The range between any one of the reference beacons and the mobile underwater submersible can then be deduced by measuring the one-way travel time as follows. First, an adaptive matched filter with an adjustable threshold is adopted, considering the practicality and feasibility of methods for detecting the earliest arrival time. The main advantages of the proposed matched filter, namely its low complexity and the fact that it dispenses with any a priori information, make it suitable for signal detection in the underwater acoustic channel. An optimal estimate of the earliest arrival time can be obtained by maximizing the cross correlation between the received waveform and the known template signal. Next, with the purpose of re-establishing a common notion of time between a reference beacon and the mobile underwater submersible, time synchronization is of vital importance. As stated above, the key task in this step is to estimate the clock offset. Considering the requirements of ranging accuracy and energy consumption, a periodic signaling strategy is employed: the mobile underwater submersible is periodically synchronized to a reference beacon by message exchange between them. That is, the estimate of the clock offset is easily worked out and can be updated periodically. To meet the trade-off between accuracy and energy efficiency, the intuition is to adaptively adjust the time interval of the signaling.
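A minimal sketch of the first step is given below. It is not the authors' adaptive detector: the fixed threshold ratio, the sampling rate, the synthetic chirp template and the noise level are all illustrative assumptions, and the "earliest arrival" is simply taken as the first correlation lag exceeding the threshold.

```python
import numpy as np

def earliest_arrival(received, template, fs, threshold_ratio=0.6):
    """Cross-correlate the received waveform with the known template and return
    the first time (in seconds) whose normalized correlation exceeds a threshold.
    threshold_ratio is an illustrative, adjustable detection threshold."""
    corr = np.abs(np.correlate(received, template, mode="valid"))
    corr /= corr.max() + 1e-12
    above = np.flatnonzero(corr >= threshold_ratio)
    return None if above.size == 0 else above[0] / fs

# Toy example: an LFM-like chirp template buried in noise with a known delay.
fs = 48_000
t = np.arange(0, 0.05, 1 / fs)
template = np.cos(2 * np.pi * (7_000 * t + (8_000 / 0.05) * t ** 2 / 2))  # 7-15 kHz sweep
rx = 0.3 * np.random.default_rng(3).normal(size=fs)                       # 1 s of noise
delay_samples = 12_000
rx[delay_samples:delay_samples + template.size] += template
print("detected arrival (s):", earliest_arrival(rx, template, fs))        # close to 0.25 s
```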


4 Experimental Results

To validate the performance, data obtained from sea trials were applied to the proposed scheme. The experimental deployment consisted of five reference beacons equipped with GPS units and one mobile underwater submersible that moved in a circular area of 2 km diameter. Throughout the experiment, the beacons were time synchronized to GPS and their positions were known from GPS. When localization occurred, the corresponding reference beacon transmitted an LFM signal in the 7–15 kHz range, and the mobile underwater submersible passively received the acoustic signals. All the signals received within the 2.5-hour experiment were processed post-facto using the proposed scheme. In the first step, a matched filter was exploited to determine the earliest arrival time of each received signal. For the sake of clarity, Fig. 2a illustrates a partial time snippet of the whole received signal by a graphical representation of the short-time Fourier transform. Accordingly, Fig. 2b shows that the earliest arrival time of a received signal is accurately determined via the matched filter.

Fig. 2. Detection of the earliest arrival time. a A partial time snippet of received signal illustrated by short-time Fourier transform, b detection result of a matched filter

In the second step, the clock offset is estimated to compensate for the significant error in the measured time of flight. As shown in Fig. 3a, due to the lack of time synchronization the measured time-of-flight error is around 202 s, which would seriously deteriorate the subsequent ranging performance. Therefore, the compensation of the clock offset is a vital procedure. With our proposed scheme, an accurate estimate of the clock offset is achieved, and the ranging error is distinctly reduced by the compensation, as depicted in Fig. 3b.
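The compensation itself is a one-line correction once the offset has been estimated. The numbers below are made up for illustration only, and the sign convention (the submersible clock assumed to run ahead of the beacon clock, so the offset is subtracted) is an assumption for this sketch, not a statement of the authors' definition.

```python
# Minimal illustration of the second step of the scheme.
c = 1500.0                 # nominal speed of sound in water (m/s)
t_send = 10.000            # beacon time stamp t_j (s), illustrative
t_arrival = 212.300        # locally detected earliest arrival time (s), illustrative
s_hat = 202.004            # estimated clock offset between the two clocks (s), illustrative

tof = (t_arrival - t_send) - s_hat     # compensated one-way time of flight
print("time of flight: %.3f s, range: %.1f m" % (tof, c * tof))
```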


Fig. 3. Estimate of clock offset. a Measured time of flight without the compensation of clock offset, b measured time of flight with the compensation of clock offset

5 Conclusion In this paper we have presented an improved ToA ranging scheme for localization in UASNs, which can acquire accurate ranging measurements with the compensation of clock offset. With real sea-trial data, experimental results show that our proposed scheme has excellent performance in both detection of the earliest arrival time and estimate of clock offset.


Acknowledgements. This work was supported by the National Natural Science Foundation of China (61701422, 61801412) and the Natural Science Foundation of Fujian Province of China (2017J01785).

References

1. Han G, Zhang C, Shu L, Sun N, Li Q (2013) A survey on deployment algorithms in underwater acoustic sensor networks. Int J Distrib Sens Netw 4:1–11
2. Luo J, Han Y, Fan L (2018) Underwater acoustic target tracking: a review. Sensors 18(1):112–148
3. Jamalabdollahi M, Zekavat S (2017) ToA ranging and layer thickness computation in nonhomogeneous media. IEEE Trans Geosci Remote Sens 55(2):742–752
4. Diamant R (2016) Clustering approach for detection and time of arrival estimation of hydrocoustic signals. IEEE Sens J 16(13):5308–5318
5. Qu F, Wang S, Wu Z, Liu Z (2016) A survey of ranging algorithms and localization schemes in underwater acoustic sensor network. China Commun 3:66–81
6. Tan HP, Diamant R, Seah WKG (2011) A survey of techniques and challenges in underwater localization. Ocean Eng 38(14):1663–1676
7. Gong Z, Li C, Jiang F (2018) AUV-aided joint localization and time synchronization for underwater acoustic sensor networks. IEEE Signal Process Lett 25(4):477–481
8. Tuna G, Gungor VC (2017) A survey on deployment techniques, localization algorithms, and research challenges for underwater acoustic sensor networks. Int J Commun Syst 30(3):e3350
9. Liu J, Wang Z, Zuba M, Peng Z, Cui JH, Zhou S (2012) JSL: Joint time synchronization and localization design with stratification compensation in mobile underwater sensor networks. In: IEEE communications society conference on sensor, mesh and ad hoc communications and networks
10. Yi J, Mirza D, Kastner R, Schurgers C, Roberts P, Jaffe J (2015) ToA-TS: time of arrival based joint time synchronization and tracking for mobile underwater networks. Ad Hoc Netw 34:211–223

Performance Analysis of Three-Layered Satellite Network Based on Stochastic Network Calculus

Ying Zhou1,2, Xiaoqiang Di1,2(B), Ligang Cong1,2, Weiwu Ren1,2, Weiyou Liu1,2, Yuming Jiang3, and Huilin Jiang4

1 School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
[email protected]
2 Jilin Province Key Laboratory of Network and Information Security, Changchun, China
3 Norwegian University of Science and Technology (NTNU), Trondheim, Norway
4 Institute of Space Optoelectronic Technology, Changchun University of Science and Technology, Changchun, China

Abstract. The multi-layered satellite network is one of the effective methods for dealing with the problem of global network coverage. However, due to its complicated structure and the frequent handover of satellites, signal transmission can be affected, resulting in randomness of the quality of service (QoS) of the links. This poses a challenge for link performance evaluation. To tackle the challenge, this paper uses network calculus to analyze the performance of inter-satellite links within a three-layered satellite network composed of high, medium and low orbits, and finds that the multi-layered satellite network performs better than the GEO architecture under a low average channel transmission rate. In addition, the performance of the three-layered satellite network, which decreases as the LEO inter-satellite distance increases, achieves the minimum network delay when the MEO orbital height is about 7000 km. Our analysis results provide a reference for the establishment of the inter-satellite links of the three-layered satellite network.

Keywords: Three-layered satellite network · Network calculus · Inter-satellite links

1 Introduction

2286

Y. Zhou et al.

[1] and LEO/GEO [2]; (2) Three-layered network architecture: LEO/HEO/GEO [3], LEO/MEO/GEO [4]. Different from the ground network, the nodes in the satellite network move constantly, and ISL changes frequently with the changing of the relative positions between the satellites. Time synchronization among between the layers in the satellite network has to be done by exchanging time information over ISL. Therefore, to provide the basis for establishing ISL, the design and analysis of ISL must be considered first. In [5], the performance of LEO link quality was evaluated by using mathematical tools, which provided insights on how different parameters may affect the data transmission performance of the link. In [6], the method of establishing inter-layered links between MEO and LEO was discussed, while literature [7] analyzed the geometric parameters of the inter-layered and the ISL between LEO satellites and MEO satellites, and an analytic expression was presented considering how spatial geometric parameters of the ISL varied with time. However, there is scarcity in the performance analysis of the ISL of three-layered satellite networks composed of LEO/MEO/GEO. Therefore, this paper was carried out research on that topic on the basis of a mathematical model. There are several methods for analyzing the performance of the satellite network with fading channels, such as the queuing theory method [8], the effective bandwidth method [9] and the stochastic network calculus (SNC) method [10]. The effective bandwidth method does not often consider the impact of background traffic, so the measured bottleneck bandwidth is not the effective one. The results of traditional queuing theory are usually presented as mean values rather than probabilistic distribution, especially when the network structure is complicated [11]. SNC analyzes the flow control in the network from the perspective of the lower bound, which is more suitable for the model proposed in the paper. A probabilistic network calculus with the moment generating function (MGF) was proposed in [12], and an end-to-end service performance through MGF of the service process is studied in [13]. Therefore, the SNC with MGF is used to analyze the performance of three-layered satellite network architecture. Currently, the Finite State Markov Channel (FSMC) model [14] and the twostate Gilbert (G-E) model [15,16] are widely used for modeling fading channels. In [14], Rayleigh fading channel was represented by FSMC model, and an analysis method was developed. In [16], the two-state Markov chain model was used to study the block error behavior of Rayleigh fading channel. This paper applies the above two channel models to the three-layered satellite network. This paper analyzed the performance of the three-layered satellite network. Firstly, based on the fading characteristics of the channel, the Rayleigh Fading was used to model it. Since the fading channel between the MEO layer and GEO layer has evident characteristics with large amount of fading factors, it was mapped into the FSMC model. Besides, GEO is relatively stationary to the ground, and ISL is mainly affected by the loss of free space outside the atmosphere, so the ISL in the GEO layer was mapped into the two-state G-E model. The SNC with MGF was used to analyze the system performance, and the service curve of the whole network was presented. Finally, taking into account the coverage of the satellite signals, the three-layered satellite network architecture

Performance Analysis of Three-Layered Satellite Network

2287

was compared with the single-layered GEO architecture, and the influence of LEO inter-satellite distance and MEO orbit height on the performance of threelayered satellite architecture is analyzed. The paper is structured as follows: In Sect. 2, three-layered satellite network architecture is presented. Section 3 calculates the parameters involved in satellite network communication and gives the arrival model and service model used in this paper. Numerical results are discussed in Sect. 4. Finally, Sect. 5 concludes the entire paper.

2

System Model

This paper establishes an interplanetary laser link between different layers of satellites in the three-layered satellite network. The LEO/MEO/GEO three layered satellite network system is shown in Fig. 1.

Fig. 1. LEO/MEO/GEO three-layered satellite system

To analyze the performance of three-layered satellite network, we map it to a tandem queuing model, as shown in Fig. 2. The ISL communication channel adopts the Rayleigh fading channel model. Since the influence of Doppler frequency shift between MEO and GEO is serious, and the signal transmission of the two is interfered by many factors, so, it is mapped to the FSMC model to provide service β1 and β3 . GEO is relatively stationary to the ground, and ISL is mainly transmitted in free space outside the atmosphere, which is less affected by Doppler frequency shift, so, the two-state G-E model is used as the channel

2288

Y. Zhou et al.

Fig. 2. Three-layered satellite network mapping to a tandem queuing model

model for service β2 . Hence, the service curve of the whole system is:  providing  β = β1 β2 β3 .

3

System Model Analysis

In this section, the system model is analyzed and the service curve for the threelayered satellite network is derived. 3.1

Finite-State Markov Channel Model (FSMC)

In this paper, The FSMC model is used to describe the fading between MEO and GEO. Here a set of states S = {s1 , s2 , s3 , . . . , sk } is defined, where k is denoted as the number of states of the fading channel [14]. When the channel is in state Sk , the steady-state probability πk satisfies Γk+1

πk =

Fγ (γ) dγ = exp(−

Γk+1 Γk ) − exp(− ). γ γ

(1)

Γk √

where Fγ (γ) = γ1 e−γ γ , SNR is defined as γ = 10 log10 [PM tr /N0 · WM ], Γk < γ < Γk+1 and γ is the average SNR, PM tr represents the average transmission power in the MEO-GEO channel, N0 is noise power spectral density and WM represents the channel bandwidth.

Performance Analysis of Three-Layered Satellite Network

2289

The Level Crossing Rate (LCR) is a statistic used to describe the fading rate (Defined as: the average number of times that the instantaneous signal-to-noise ratio M passes a given threshold with a positive/negative slope)  Γ 2πΓ fm exp(− ). (2) N (Γ ) = γ¯ γ¯ Here, fm = (fc · v)/c, fm characterizes the maximum Doppler frequency shift, v is the relative motion speed of the satellite, c is light speed, and fc is the carrier frequency caused by the relative motion between satellites. Γ = (Γ0 , Γ1 , . . . , ΓK )T is the threshold vector of SNR with k + 1 elements. Equal probability method is a simple and effective way to select SNR threshold, 1 , Γ0 < Γ1 < · · · < ΓK and Γ0 = 0, ΓK = ∞. The transition which is to let πk = K probabilities of adjacent states k and k − 1 can be obtained by Pk,k+1 ≈

N (Γk+1 )Tp N (Γk )Tp , k = 1, 2, . . . , K − 1, Pk,k−1 ≈ , k = 2, 3, . . . , K πk πk

⎧ ⎪ ⎨ P1,1 = 1 − P1,2 , Besides, Pk,k = PK−1,K−1 = 1 − PK−1,K−2 , ⎪ ⎩ Pk,k = 1 − Pk,k−1 − Pk,k+1 , k = 2, 3, . . . , K − 1. (3) Let R(θ) denotes the service rate diagonal matrix, which can be defined as R(θ) = diag(eθrs1 (θ) , eθrs2 (θ) , eθrs3 (θ) , . . . , eθrsK (θ) ), where r(τ ) = rsτ (τ ) represents the service rate of the Markov process at time τ . π = [π1 , π2 , . . . , πK ] ∈ R1×K is the stationary state distribution vector. 1 is a column vector made up of ones. Then, the service curves between MEO and GEO are given as β1 = β3 = βM (0, t) = ln[ 3.2

π(R(θ)P )t−1 R(θ)1 ]. Eeθ

(4)

Two-State G-E Model

The two-state G-E model is used as the service model for the GEO layer. The service rate is only provided in the good state Rgood . Define the exponential decay rate k in a Rayleigh fading channel as k = λ0 + μ, ξ is a fixed threshold, 2 2 so, μ = ke−ξ , λ0 = k(1 − e−ξ ) . The average transmission power between GEO is PGtr , N0 represents the noise power spectral density and WG is the channel bandwidth. The service rate Rgood is defined as Rgood = WG log2 (1 +

PGtr ξ 2 ). N 0 WG

The average channel transmission rate R is 1 ( (λ0 − μ + Rgood θ)2 + 4λ0 μ − λ0 − μ + Rgood θ). R= 2θ

(5)

(6)

2290

Y. Zhou et al.

The service curve for a single GEO node is defined as βG1 = ln

πi e(Pi +θR)t 1 . Eeθ

(7)

P11 P12 , P11 = −μ, P12 = μ, P21 = λ0 , P22 = −λ0 , and P21 P22 λ0 , μ > 0, π = (λ0 /(λ0 + μ), μ/(λ0 + μ)).

Here Pi =

3.3

Three-Layered Satellite Network Service Curve

Referring to [17], considering the flow arrival between LEO and MEO layers, each MEO satellite can receive multiple data flows of multiple satellites in LEO layer at the same time, and these data streams form a convergent flow A before reaching the node, which has stochastic arrival curve αi . The Poisson model is used for the arrival model of the system. If the arrival process of aggregated flow N is given, and λ is the average arrival rate of the flow, the stochastic arrival curve with violation probability ε0 can be expressed as: αi = N λt +



N λt ln

1 . ε0

(8)

We derive the service curve of n GEO nodes with the help of min-plus algebra: β2 = inf2≤i≤n [ln

N πi e(Pi +θRi )t 12 − λi t] − εi . θ i=1

(9)

  The service curve of the whole system is β = β1 β2 β3 . According to the characteristics of the tandem network system and the solution method of the service curve, the following formula can be derived by β=

N πi e(Pi +θRi )t 12 ln[π(R(θ)P )−2 R2 (θ)1k ] + inf2≤i≤n [ln − λi t] − εi . (10) θ θ i=1

Referring to [10, 13], the backlog and delay bounds can be calculated. In the next section, the experimental results will be analyzed.

4

Numerical Analysis

In this section, the analysis results of the backlog and delay bounds in the threelayered satellite architecture are given. We set the SNR to 10 dB and Doppler frequency shift 29 kHz. Figure 3 compares the backlog and delay of the three-layered satellite architecture with that of the single layer GEO architecture in the context of low average channel transmission rate. As can be seen from the Fig. 3, with the increase of arrival rate, the backlog and delay of the two architectures are increasing.

Performance Analysis of Three-Layered Satellite Network

2291

However, the backlog and delay of the three-layered satellite architecture are always smaller than that of the single-layered GEO architecture. Furthermore, in Fig. 3b, compared with the rapid growth trend of the single-layered satellite architecture, the growth trend of the three-layered one is slower, and tends to be flat later on. Therefore, when the average channel transmission rate is low, the network performance of the three-layered satellite architecture is significantly better than that of the single-layered GEO architecture.

(a)

(b)

190

280 270

delay (ms)

backlog (kb)

170

150

250 240 230

130

110

260

\LEO-MEO-GEO \GEO

1

3

5

7

10

220 210

\LEO-MEO-GEO \GEO

1

3

5

7

10

Fig. 3. Comparison of the two architectures regarding delay bound/backlog bound

Because the inter-satellite distance of LEO on the same orbital plane is relatively stable, it has little influence on the network transmission performance. We analyze the influence of the variation of the inter-satellite distance in different orbits on the backlog and delay under three-layered satellite architecture. As shown in Fig. 4, we set three different inter-satellite distance to 2033.91 km, 4144.31 km and 6117.54 km, respectively. As can be seen, with the increase of the distance between the two satellites, the performance of the whole link keeps declining. The relative distance of the satellite is directly related to the receiving power of the satellite, so if we want to establish a well ISL, we need to dynamically adjust the receiving power of the satellite when the inter-satellite distance increases. Figure 5 shows the effects of MEO orbital altitude changes on backlog and delay performance in the three-layered satellite architecture. As can be seen in Fig. 5a, with the increase of MEO orbital altitude, the backlog of the three-

2292

Y. Zhou et al.

(a) 160

(b)

150

320

delay (ms)

140

backlog (kb)

330

130 120 110

90 1

3

5

7

300 290

\Inter-satellite distance=2033.91 \Inter-satellite distance=4144.31 \Inter-satellite distance=6117.54

100

310

280

10

\ inter-satellite distance=2033.91 \ inter-satellite distance=4144.31 \ inter-satellite distance=6117.54

1

3

5

7

10

(a) 170

(b)

420

delay (ms)

Fig. 4. Relation of the inter-satellite distance and delay bound/backlog bound

360

backlog(kb)

160 150 140

300

130 120 5000

10000

15000

20000

The orbital altitude of MEO (km)

240 5000

10000

15000

20000

The orbital altitude of MEO (km)

Fig. 5. Delay bound/backlog bound at different orbital heights of MEO

layered satellite network increases gradually, while Fig. 5b illustrates that the delay of the three-layered satellite network reaches its minimum at the MEO orbital height of about 7000 km. Specifically, when the MEO orbital altitude is lower than 7000 km, the delay decreases gradually, while it slowly increases at the altitude higher than 7000 km.

5

Conclusion

In this paper, we establish the three-layered satellite network architecture and analyze its backlog and delay. We map the complex three-layered satellite network architecture to the queuing system, and apply SNC to obtain the backlog and delay bounds of the whole system. Numerical results have been presented, which shows that when the average channel transmission rate is low, the network performance of the three-layered satellite architecture is significantly bet-

Performance Analysis of Three-Layered Satellite Network

2293

ter than that of the single-layered GEO architecture. In addition, the performance of three-layered satellite architecture decreases with the increase of LEO inter-satellite distance, and reaches the optimum at the MEO orbital height of 7000 km. This paper is a preliminary exploration of performance evaluation of three-layered satellite networks. Therefore, we only analyze the influence of some parameters on network performance, lacking the attention of Doppler frequency shift and SNR. Based on the work in this paper, we will mainly analyze the impact of these two parameters on the three-layered satellite network in the future, and propose how to adjust the parameters to achieve the best performance.

References 1. Gavish B (1997) Leo/meo systems-global mobile communication systems. Telecommun Syst 8(2–4):99–141 2. Horsham G, Schmidt G, Gilland J (2010) Establishing a robotic, leo-to-geo satellite servicing infrastructure as an economic foundation for exploration. In: AIAA SPACE 2010 conference & exposition, p 8897 3. Yin ZZ (2010) Performance analysis of inter-layer isls in triple-layered leo/heo/geo satellite networks. Comput Eng Appl 46(12):9–13 4. Shi W, Gao D, Zhou H, Feng B, Li H, Li G, Quan W (2018) Distributed contact plan design for multi-layer satellite-terrestrial network. China Commun 15(1):23– 34 5. Wang M, Li J, Jiang Y, Di X (2015) Stochastic performance analysis for leo intersatellite link based on finite-state markov chain modeling. In: 2015 4th international conference on computer science and network technology (ICCSNT), vol 1. IEEE, pp 1230–1235 6. Kimura K, Karasawa Y (1998) Satellite communication system having doublelayered earth orbit satellite constellation with two different altitudes. US Patent 5,722,042 7. Wang Y, Wang L, Zhang N (2004) Inter-layer isl spatial parameters in leo/meo double-layer satellite network. China Space Sci Technol 24(1):26–30 8. Le L, Hossain E (2008) Tandem queue models with applications to qos routing in multihop wireless networks. IEEE Trans Mob Comput 7(8):1025–1040 9. Wang Q, Wu D, Fan P (2011) Effective capacity of a correlated rayleigh fading channel. Wirel Commun Mob Comput 11(11):1485–1494 10. Jiang Y, Liu Y (2008) Stochastic network calculus, vol 1. Springer 11. Li Z, Gao Y, Salihu BA, Li P, Sang L, Yang D (2015) Network calculus delay bounds in multi-server queueing networks with stochastic arrivals and stochastic services. In: (2015) IEEE global communications conference (GLOBECOM). IEEE, pp 1–7 12. Chang C-S (2001) Performance guarantees in communication networks. Eur Trans Telecommun 12(4):357–358 13. Fidler M (2006) An end-to-end probabilistic network calculus with moment generating functions. In: 200614th IEEE international workshop on quality of service. IEEE, pp 261–270 14. Zhang Q, Kassam SA (1999) Finite-state markov model for rayleigh fading channels. IEEE Trans Commun 47(11):1688–1692

2294

Y. Zhou et al.

15. Fidler M (2006) Wlc15-2: a network calculus approach to probabilistic quality of service analysis of fading channels. IEEE Globecom, 1–6 16. Zorzi M, Rao RR, Milstein LB (1998) Error statistics in data transmission over fading channels. IEEE Trans Commun 46(11):1468–1477 17. Wang M, Di X, Jiang Y, Li J, Jiang H, Yang H (2016) End-to-end stochastic qos performance under multi-layered satellite network. In: International conference on space information network. Springer, pp 182–201

Robust Sensor Geometry Design in Sky-Wave Time-Difference-of-Arrival Localization Systems He Ma1,2(B) , Xing-peng Mao1,2 , and Tie-nan Zhang1 2

1 Harbin Institute of Technology, Harbin, People’s Republic of China The Key Laboratory of Marine Environmental Monitoring and Information Processing Ministry of Industry and Information, Harbin, China [email protected]

Abstract. This paper studies the sensor geometry design problem of sky-wave time-difference-of-arrival (TDOA) localization systems under non-line-of-sight (NLOS) scenario where signals are reflected by ionosphere-layer before arriving at sensors. Traditionally, the optimal sensor geometries for line-of-sight (LOS) scenarios have been derived. However, ionosphere-layer heights (IHs) are generally inaccurately known. IH errors can severely degrade localization performance but the joint estimation of IHs and target location is conventionally an illconditioning problem. To solve this problem, we propose a grouped sensor geometry, which enables the joint estimation of IHs and target location. In this way, we improve the robustness against IH errors in sky-wave TDOA localization. Theoretical analysis and performance comparison validate that the superiority of our proposed grouped sensor geometry. Keywords: Time-difference-of-arrival · Sensor geometry design · Ionosphere-layer height · Cramer-rao bound · Fisher information matrix (fim)

1

Introduction

The localization of signal source via time-difference-of-arrival (TDOA) measurements is a passive localization technique which is widely applied in radar, sonar, etc [1]. As the result, extensive research has been performed for TDOA estimation [2,3] as well as TDOA-based localization [4–7]. Generally, cramer-rao bound (CRB) determines the lowest localization error and can be used to evaluate the efficiency of passive localization systems [8]. Hence, a series of works on CRB-based sensor geometry design have been published [9–11]. However, the joint estimation of target location and IHs is traditionally an illconditioning problem. Hence it is impossible to alleviate IH errors by retrieving Key Program of National Natural Science Foundation of China under Grant 61831009. c Springer Nature Singapore Pte Ltd. 2020  Q. Liang et al. (Eds.): CSPS 2019, LNEE 571, pp. 2295–2303, 2020 https://doi.org/10.1007/978-981-13-9409-6_278

2296

H. Ma et al.

IHs from TDOA measurements. To solve the problem, a grouped sensor geometry scheme is proposed. The main contribution of this paper is the proposition of grouped sensor geometry. To achieve this, we exploit the identical IH assumption based on [12], and replace the non-grouped sensor geometries by grouped sensor geometries. As the results, the grouped sensor geometries make localization systems robust to IH errors. The outline of this paper is as follows. In Sect. 2, the measurement model, and the CRB without IHs are given. The problem of geometry design is also described in Sect. 2. In Sect. 3, the grouped sensor geometry scheme is proposed. In Sect. 4, the CRB of grouped sensor geometry scheme is derived. In Sect. 5, performance analysis and simulation results are given. Conclusions are drawn in Sect. 6.

2

Basic Fundamentals

2.1

Signal Model

In this paper a basic assumption is that the time delay of the transmitting path from signal source to sensor is equivalent to that of a single-layer spherical reflection path, which is shown in Fig. 1. In Fig. 1, A represents a sensor, B represents a signal source and C represents the equivalent reflection point. Additionally, the IHs from signal source to all sensors are assumed to stay constant during the observation time.

Fig. 1. Illustration of spherical reflection path in sky-wave TDOA localization.

Under these assumptions, AC + BC is the equivalent transmitting path. According to the cosine theorem, it is obtained that 2

2

1

AC = BC = (OA + OC − 2OA OC cos(∠AOC) 2 )

(1)

Suppose s1 , · · · , sM are 3-D position vectors of M sensors and u is the 3-D position vector of signal source. According to (1), the transmitting delay from

Robust Sensor Geometry Design in Sky-Wave

u to sm (m = 1, · · · , M ) is τm



2 = (R02 + (R0 + Hm )2 − 2R0 (R0 + Hm ) c

1−

L2m 1 )2 4R02

2297

(2)

where Lm = ||u − sm ||2 , || ||2 stands for the Euclidean norm, R0 is the earth radius and c is the light speed. Then the TDOA measured by the sensor pair (m1 , m2 ) is τ˜m1 ,m2 = τm1 ,m2 + Δτm1 ,m2 = τm1 − τm2 + Δτm1 ,m2

(3)

where Δτm1 ,m2 is the measurement noise. For convenience, further define the range difference measurement vector by ˜ = c × [˜ d τ2,1 , τ˜3,1 , τ˜4,1 , · · · τ˜M −1,1 , τ˜M,1 ]T 2.2

(4)
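As a quick numerical illustration of the measurement model in Eqs. (1)–(4), the following Python sketch evaluates the spherical-reflection delay and builds a noisy range-difference vector. The sensor positions, IH values and noise generation used here are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

R0 = 6378e3   # earth radius (m), consistent with the value used later in Sect. 5
C = 3e8       # speed of light (m/s)

def skywave_delay(u, s, H):
    """Transmitting delay of Eq. (2) from source position u to sensor s with IH H."""
    L = np.linalg.norm(np.asarray(u) - np.asarray(s))        # L_m = ||u - s_m||_2
    root = np.sqrt(1.0 - L**2 / (4.0 * R0**2))
    path = 2.0 * np.sqrt(R0**2 + (R0 + H)**2 - 2.0 * R0 * (R0 + H) * root)
    return path / C

def range_difference_vector(u, sensors, heights, sigma_tau=50e-9, rng=None):
    """Noisy range-difference vector of Eqs. (3)-(4), referenced to the first sensor."""
    rng = np.random.default_rng() if rng is None else rng
    tau = np.array([skywave_delay(u, s, h) for s, h in zip(sensors, heights)])
    tdoa = tau[1:] - tau[0] + sigma_tau * rng.standard_normal(len(tau) - 1)
    return C * tdoa
```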

2.2 Cramér-Rao Bound Without Ionosphere-Layer Height Errors

Recall that the signal source is located on the earth surface; then only the X-Y coordinates of the target are unknowns of interest, which constitute $\bar{u} = u(1\!:\!2, 1)$. The Fisher matrix of $\tilde{d}$ is

$$\mathrm{Fim}(\tilde{d}) = \sigma_d^{-2} I_{\bar{M} \times \bar{M}} \qquad (5)$$

where $\sigma_d$ is the standard deviation of the range-difference measurement noise, and (5) is based on the assumption that the noises of different sensor pairs are independent identically distributed Gaussian, with $\sigma_d = c \times \sigma_\tau$ and $\bar{M} = M(M-1)/2$. When the IHs are accurately known, the Fisher matrix of $\bar{u}$ is

$$\mathrm{Fim}(\bar{u}) = G\,\mathrm{Fim}(\tilde{d})\,G^T \qquad (6)$$

where $G = \partial d^T / \partial \bar{u}$ and $\partial$ is the partial derivative symbol. The CRB of $\bar{u}$ is

$$\mathrm{CRB}(\bar{u}) = \mathrm{tr}\left\{\left(G\,\mathrm{Fim}(\tilde{d})\,G^T\right)^{-1}\right\} \qquad (7)$$

2.3 Problem of Designing Sensor Geometries

The problem of designing sensor geometries for sky-wave TDOA localization systems is described as follows. The joint estimation of target location and IHs is traditionally an ill-conditioned problem. To show this, note that the IHs from a signal source to different sensors are independent, so the total unknowns include $M$ IHs and the two elements in $\bar{u}$. By measuring the TDOAs for all pairs, we obtain $\bar{M}$ measurements. However, there are only $M-1$ valid TDOA measurements, because $d(M\!:\!\bar{M}, 1)$ can be linearly expressed by $d(1\!:\!M-1, 1)$, where $d$ is the true part of $\tilde{d}$. Since the number of unknowns ($M+2$) is larger than the number of valid measurements ($M-1$), the joint estimation of the IHs and $\bar{u}$ is an ill-conditioned problem. To solve this ill-conditioned problem, the grouped sensor geometry is proposed, so that the joint estimation problem becomes over-determined.

3 Design of Grouped Sensor Geometry

3.1 Proposition of Grouped Sensor Geometry Scheme

The optimal sensor geometries for 2-D and 3-D line-of-sight (LOS) scenarios have been derived in [9], which show that uniform angular arrays (UAAs) attain the minimum CRB. Refs. [10, 11] are extensions of [9]: [10] analyzes the impact of a constrained angular domain on the CRB, and [11] discusses the relationship between maximizing the trace of the Fisher matrix and minimizing the CRB. Different from the above works, this paper considers a non-line-of-sight (NLOS) scenario, where signals are reflected by the ionosphere layer before arriving at the sensors. Following [13], we utilize the single-layer spherical reflection model. Because ionosphere-layer heights (IHs) are mainly determined by factors including the signal frequency, the ionosphere-layer status and the positions of the source and sensors, they vary with time and are usually inaccurately known [14]. This gives rise to the problem of designing sensor geometries for sky-wave TDOA localization systems: the joint estimation of target location and IHs is conventionally an ill-conditioned problem, so it is impossible to alleviate IH errors by retrieving the IHs from the TDOA measurements. To solve the problem, a grouped sensor geometry scheme is proposed. It is based on the assumption that the IHs from a source to different sensors are identical when the maximum distance between these sensors is less than a threshold. This is supported by [12], where Bourgeois et al. observe that the IH from a moving target to a static sensor barely changes within 20 min for a target speed of around 100 m/s; therefore the IHs from a static sensor to different positions along such a trajectory are approximately identical, which supports the assumption. In this way, the sensors in a group share an identical IH, while different groups can be placed far apart to obtain a large baseline. Consequently, the joint estimation of IHs and target location becomes an over-determined problem, and the localization performance is robust to IH errors if the measurement noises are reasonably small (the distance threshold is set to 30 km in this paper). A typical grouped sensor geometry is depicted in Fig. 3. To construct a grouped sensor geometry, divide the $M$ sensors into $N_{group}$ groups, place the sensors of each group within a range of 30 km, and place different groups far apart to achieve a large maximum baseline. For convenience, we only discuss cases where the number of sensors per group is the same.

3.2 Model and Analysis

Next, the measurement model for grouped sensor geometries is given. Define $H = [H_1, \cdots, H_{N_{group}}]^T$, where $H_n$ represents the IH for the $n$th sensor group, $n = 1, \cdots, N_{group}$. Let $s_{n,m}$ be the $m$th sensor in the $n$th group, where $m = 1, \cdots, M_{group}$. Then the range difference measured by the sensor pair $s_{n_1,m_1}$ and $s_{n_2,m_2}$ is

$$\tilde{d}_{(n_1,m_1),(n_2,m_2)} = d_{n_1,m_1} - d_{n_2,m_2} + c\,\Delta\tau_{(n_1,m_1),(n_2,m_2)} \qquad (8)$$

where $\Delta\tau_{(n_1,m_1),(n_2,m_2)}$ is the measurement noise and

$$d_{n_1,m_1} = 2\left(R_0^2 + (R_0 + H_{n_1})^2 - 2R_0(R_0 + H_{n_1})\sqrt{1 - \frac{L_{n_1,m_1}^2}{4R_0^2}}\right)^{1/2} \qquad (9)$$

with $L_{n_1,m_1} = \|u - s_{n_1,m_1}\|_2$. Stacking $\tilde{d}_{(n_1,m_1),(n_2,m_2)}$ for all sensor pairs, we obtain the range-difference measurement vector $\tilde{d}_{group}$. With the grouped sensor geometry, the joint estimation of the IHs and $\bar{u}$ becomes an over-determined problem under mild conditions: with $M-1$ valid measurements, the unknowns include the $N_{group}$ IHs and the two elements in $\bar{u}$, so the joint estimation problem is over-determined when $M - 1 \ge N_{group} + 2$, which can easily be satisfied. If the TDOA measurement errors are reasonably small (e.g., around tens of ns), it is easy to verify that the IHs can be estimated with accuracies of around several kilometers using four sensors uniformly distributed within 30 km. Hence, the localization performance of the proposed grouped sensor geometry is robust to IH errors. Besides, the maximum baseline of a grouped geometry can be as large as that of a non-grouped sensor geometry, so we expect that the efficiency in alleviating measurement errors does not decrease significantly.

4 Cramér-Rao Bound of Grouped Sensor Geometry Scheme

The derivation of the CRB with IH errors is inspired by [15], where the CRB with inaccurately known sensor positions is derived. Without loss of generality, we derive the CRB with IH errors for the grouped sensor geometry. As preparations, define $G = \partial d^T / \partial \bar{u}$, $\bar{G} = \partial d^T / \partial H$ and $Q_d = \sigma_d^2 I_{M \times M}$. Then, let $\tilde{H} = [\tilde{H}_1, \cdots, \tilde{H}_{N_{group}}]^T$ be the noisy measurement of $H$ with covariance $Q_H = \sigma_H^2 I_{N_{group} \times N_{group}}$. The probability density function (PDF) of the measurements given the unknowns ($u$ and $H$) satisfies

$$p(\tilde{d}, \tilde{H}; u, H) = p(\tilde{d}; u, H) \times p(\tilde{H}; H) = C\exp\left\{-\tfrac{1}{2}(\tilde{d} - d)^T Q_d^{-1} (\tilde{d} - d)\right\} \times \exp\left\{-\tfrac{1}{2}(\tilde{H} - H)^T Q_H^{-1} (\tilde{H} - H)\right\} \qquad (10)$$

Taking the logarithm of (10) and applying partial derivatives with respect to $[u^T, H^T]$ twice, we obtain

$$\mathrm{Fim}(u, H) = \begin{bmatrix} X & Y \\ Y^T & Z \end{bmatrix} \qquad (11)$$

where $X = G Q_d^{-1} G^T$, $Y = G Q_d^{-1} \bar{G}^T$ and $Z = Q_H^{-1} + \bar{G} Q_d^{-1} \bar{G}^T$. Applying the partitioned matrix inverse formula generates the CRB with IH errors:

$$\mathrm{CRB}(\bar{u}) = \mathrm{tr}\left\{\left(X - Y Z^{-1} Y^T\right)^{-1}\right\} \qquad (12)$$
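The CRB of Eq. (12) can be evaluated numerically once the range-difference model is available. The sketch below is one possible implementation under the stated Gaussian noise assumptions; it approximates the Jacobians $G$ and $\bar{G}$ by finite differences instead of the closed-form derivatives, and the user-supplied model function is an assumption of this sketch.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-3):
    """Finite-difference Jacobian of a vector function f at x (rows: outputs, cols: inputs)."""
    x = np.asarray(x, float)
    f0 = f(x)
    J = np.zeros((len(f0), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f0) / eps
    return J

def crb_with_ih_errors(d_of_u_H, u_bar, H, sigma_d, sigma_H):
    """CRB of Eq. (12): trace of the inverse Schur complement (X - Y Z^{-1} Y^T)^{-1}.

    d_of_u_H(u_bar, H) must return the noise-free range-difference vector.
    """
    G = numerical_jacobian(lambda v: d_of_u_H(v, H), u_bar).T        # G = d d^T / d u_bar
    G_bar = numerical_jacobian(lambda h: d_of_u_H(u_bar, h), H).T    # G_bar = d d^T / d H
    Qd_inv = np.eye(G.shape[1]) / sigma_d**2
    QH_inv = np.eye(len(H)) / sigma_H**2
    X = G @ Qd_inv @ G.T
    Y = G @ Qd_inv @ G_bar.T
    Z = QH_inv + G_bar @ Qd_inv @ G_bar.T
    return np.trace(np.linalg.inv(X - Y @ np.linalg.inv(Z) @ Y.T))
```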

5 Simulation Results

To demonstrate the robustness of the grouped sensor geometry against IH errors, the CRBs for the geometries in Figs. 2 and 3 are calculated via (12). In Figs. 2 and 3, $L$ represents the side length of the square sensor geometries. Specifically, in Fig. 3, $r$ represents the distance from a sensor to the center of its group. Note that the non-grouped and grouped sensor geometries used here have the same number of sensors.

Fig. 2. Illustration of non-grouped sensor geometry on the x-y plane when M = 16.

Fig. 3. Illustration of designed grouped sensor geometry on the x-y plane when Ngroup = 4 and M = 16.

For the simulation results in this section, we set $\sigma_\tau = 50$ ns, $R_0 = 6378$ km and $u = [0, 0, R_0]^T$. The distance from $u$ to the center of the whole sensor array is the same for both geometries and is set to 500 km. Additionally, we set $L = 200$ km, $r = 20$ km and $\sigma_H = 100$ km unless otherwise specified. We respectively vary $\sigma_H$, $L$ and $r$ to obtain the results. The CRB performance comparison between the non-grouped and grouped sensor geometries is shown in Figs. 4, 5 and 6.


Fig. 4. CRB performance comparison between non-grouped and grouped sensor geometry when σH changed

Fig. 5. CRB performance comparison between non-grouped and grouped sensor geometry when L changed

implies that the proposed grouped sensor geometry scheme is efficient in most cases, because the accuracy of the IH is usually worse than 10 km [12]. Figures 5 and 6 further show that the conclusion drawn from Fig. 4 still holds when the values of $L$ and $r$ change.

Fig. 6. CRB performance comparison between non-grouped and grouped sensor geometry when r changed

6 Conclusion

In this paper, the design of sensor geometry in sky-wave TDOA localization systems is studied. It is shown that the joint estimation of IHs and target location is an ill-conditioned problem. To address this problem, the grouped sensor geometry is proposed, with which the joint estimation problem becomes over-determined. Thus the localization performance is robust to IH errors when the measurement noises are reasonably small. The CRB performance comparison validates the superiority of the grouped sensor geometry scheme.

References

1. Li X, Deng ZD, Rauchenstein LT, Carlson T (2016) Contributed review: source-localization algorithms and applications using time of arrival and time difference of arrival measurements. Rev Sci Instrum 87:1–13
2. Carter GC (1981) Special issue on time delay estimation. IEEE Trans Acoust Speech Signal Process 29:461–623
3. Carter GC (1993) Coherence and time delay estimation: an applied tutorial for research, development, test, and evaluation engineers. IEEE Press
4. Torrieri DJ (1984) Statistical theory of passive location systems. IEEE Trans Aerosp Electron Syst 20:183–197
5. Spirito MA (2001) On the accuracy of cellular mobile station location estimation. IEEE Trans Veh Technol 50:674–685
6. Chan YT, Ho KC (1994) A simple and efficient estimator for hyperbolic location. IEEE Trans Signal Process 42:1905–1915
7. Dogandzic A, Riba J, Seco G, Swindlehurst AL (2005) Special section on positioning and navigation with applications to communications. IEEE Signal Process Mag 22:10–84
8. Chalise BK, Zhang YD, Amin MG, Himed B (2014) Target localization in a multi-static passive radar system through convex optimization. Signal Process 102:207–215
9. Yang B, Scheuing J (2005) Cramér-Rao bound and optimum sensor array for source localization from time differences of arrival. In: Proc IEEE ICASSP, vol 4, pp 961–964
10. Yang B, Scheuing J (2006) A theoretical analysis of 2D sensor arrays for TDOA based localization. In: Proc IEEE ICASSP, vol 4, pp 901–904
11. Yang B (2007) Different sensor placement strategies for TDOA based localization. In: Proc IEEE ICASSP, vol 2, pp 1093–1096
12. Bourgeois D, Morisseau C, Flecheux M (2006) Over-the-horizon radar target tracking using multi-quasi-parabolic ionospheric modelling. IET Radar Sonar Navig 153(5):409–416
13. Zhang TN, Mao XP, Zhao CL, Liu JX (2019) A novel grid selection method for sky-wave time difference of arrival localisation. IET Radar Sonar Navig 13(4):538–549
14. Norman RJ, Dyson PL (2006) HF radar backscatter inversion technique. Radio Sci 41:1–10
15. Ho KC, Yang L (2008) On the use of a calibration emitter for source localization in the presence of sensor position uncertainty. IEEE Trans Signal Process 56(12):5758–5772

A NOMA Power Allocation Method Based on Greedy Algorithm

Yin Lu1,2(&), Shuai Chen3, Kai Mao4, and Haowei Bian5

1 Jiangsu Key Laboratory of Wireless Communications, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
[email protected]
2 Engineering Research Center of Health Service System Based on Ubiquitous Wireless Networks, Nanjing University of Posts and Telecommunications, Ministry of Education, Nanjing 210003, China
3 Nanjing University of Posts and Telecommunications, Nanjing 210003, China
4 School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
5 School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, China

Abstract. Among existing Non-Orthogonal Multiple Access power allocation algorithms, the iterative water-filling algorithm is commonly used; it has good performance but high complexity. In order to reduce the complexity, this paper divides the power allocation problem of Non-Orthogonal Multiple Access into two steps. Firstly, the water-filling algorithm is used to complete the power allocation between sub-carriers; then a greedy algorithm is used to allocate power to the superimposed users within each carrier. Many elements of the candidate power allocation coefficient set that cannot be optimal solutions are deleted by using the sum of the users' power allocation coefficients and the product of their throughputs, which effectively reduces the complexity. The simulation results show that the proposed algorithm has slightly lower performance than the iterative water-filling algorithm but effectively reduces the complexity; its performance is better than other traditional algorithms, and a good compromise between system performance and complexity is achieved. Keywords: Non-Orthogonal Multiple Access · Power allocation · Iterative water-filling · Greedy algorithm

1 Introduction

The traditional orthogonal multiple access methods have limited ability to further improve system performance. In order to meet these challenges, research on new multiple access methods is urgent. Non-Orthogonal Multiple Access (NOMA) [1] was generated under this background. As a new access method, NOMA has become a core candidate technology for 5G [2]. Literature [3] shows that the performance of NOMA can be improved by more than 30% compared with OMA. The core idea of NOMA is to assign different power to


different users at the transmitting end, and at the receiving end, the successive interference cancellation technology (SIC) is generally used to detect and reconstruct according to the difference of signal power of different users. Power allocation has become the focus and hotspot of research [4–6]. The fixed power allocation and fractional power allocation algorithms are studied in [7, 8]. Literature [9] shows that assigning more power to users with larger channel gains can increase system capacity. Literature [10] defines the PF scheduling factor, and based on the scheduling factor, the power allocation scheme of the two-user paired NOMA system is studied. In [11], an iterative water-filling algorithm is studied, which takes into account all users superimposed on the carrier in the iterative process, and its performance is very good, but the complexity is too high. In order to reduce the number of iterations and ensure the performance of the system as much as possible, this paper proposes an improved algorithm. The algorithm divides the power allocation problem of NOMA into two sub-problems: inter-carrier and intra-carrier power allocation, and adopts different power allocation strategies. Iterative water-filling algorithm is used for inter-carrier power allocation. For the power allocation of the superimposed users in the carrier, considering that the greedy algorithm [12] has lower complexity, this paper applies the greedy idea to realize the power allocation of the superimposed users in each carrier. The structure of this paper is as follows: Sect. 2 introduces the system model, Sect. 3 introduces the NOMA power allocation algorithm proposed in this paper, Sect. 4 compares the performance of this algorithm with the iterative water-filling algorithm and other traditional NOMA power allocation algorithms. Section 5 summarizes the full text.

2 System Model

Consider a single-cell NOMA downlink: a base station located at the center of the cell communicates with multiple users. Assume that the total bandwidth of the system is $W$, the total number of users in the cell is $M$, the number of subcarriers is $T$, the number of users superimposed on carrier $g$ is $k_g$, and the system transmit power is limited with total transmit power $P_{tot}$. Without loss of generality, consider one of the subcarriers $g$: the base station allocates different powers to the superimposed users on carrier $g$ and transmits the superimposed signal, which can be expressed as

$$f_g = \sum_{i=1}^{k_g} \sqrt{p_{i,g}}\, x_{i,g} \qquad (1)$$

where $x_{i,g}$ represents the signal transmitted by user $i$ superimposed on carrier $g$, and $p_{i,g}$ represents the power allocated by carrier $g$ to superimposed user $i$. The signal received by user $m$ on carrier $g$ is


$$y_{m,g} = h_{m,g} f_g + w_{m,g} = h_{m,g}\left(\sqrt{p_{m,g}}\, x_{m,g} + \sum_{i=1,\, i \neq m}^{k_g} \sqrt{p_{i,g}}\, x_{i,g}\right) + w_{m,g} \qquad (2)$$

where $h_{m,g}$ and $w_{m,g}$ represent the channel gain and the noise from the base station to receiving user $m$, respectively. Assume that the receiver uses the widely adopted SIC technique. The receiving principle of user $m$ using the SIC receiver is shown in Fig. 1.

Fig. 1. Principle of user m using SIC reception (the signals of users $k_g$ down to $m+1$ are successively decoded and reconstructed before the signal of user $m$ is decoded, completing signal reception)

After SIC reception, the signal-to-interference-plus-noise ratio of user $m$ is

$$\mathrm{SINR}_m = \frac{p_{m,g}\, |h_{m,g}|^2}{\sum_{i=1}^{m-1} p_{i,g}\, |h_{m,g}|^2 + w_{m,g}} \qquad (3)$$

After SIC reception, the throughput of user $m$ on carrier $g$ can be expressed as

$$R_{m,g} = \frac{W}{T} \log_2\left(1 + \mathrm{SINR}_m\right) \qquad (4)$$

where W and T represent the total bandwidth of the system and the number of subcarriers, respectively. Further, the total throughput of the NOMA downlink can be obtained:

$$\sum_{g=1}^{T} \sum_{i=1}^{k_g} R_{i,g} \qquad (5)$$

It can be seen from Eq. (5) that different user pairing combinations and power allocations affect the throughput of the system. This paper does not consider user pairing, and focuses on the power allocation scheme. The power allocation problem is divided into two sub-problems: inter-carrier power allocation and intra-carrier superimposed user power allocation.
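Equations (3)–(5) translate directly into a small computation of the per-user and total throughput once a power split and channel gains are fixed. The sketch below assumes the SIC decoding order implied by Eq. (3) (user $m$ only sees residual interference from users $1, \ldots, m-1$); the numerical values in the usage example are illustrative only.

```python
import numpy as np

def user_throughputs(p, h2, noise, W, T):
    """Per-user throughput on one carrier after SIC, Eqs. (3)-(4).

    p     : powers of the superimposed users, ordered so that user m only sees
            residual interference from users 1..m-1 (as in Eq. (3))
    h2    : |h_{m,g}|^2 channel gains, noise : noise power w_{m,g} per user
    """
    p, h2, noise = map(np.asarray, (p, h2, noise))
    rates = []
    for m in range(len(p)):
        interference = np.sum(p[:m] * h2[m])           # sum_{i<m} p_i |h_m|^2
        sinr = p[m] * h2[m] / (interference + noise[m])
        rates.append(W / T * np.log2(1.0 + sinr))
    return np.array(rates)

# Illustrative two-user example on one subcarrier (values are assumptions)
rates = user_throughputs(p=[0.7, 0.3], h2=[0.2, 1.5], noise=[1e-3, 1e-3], W=10e6, T=64)
total = rates.sum()   # this carrier's contribution to the sum in Eq. (5)
```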

3 Power Allocation Algorithm

3.1 Inter-carrier Power Allocation

For the inter-carrier power allocation, this paper draws on the water-filling idea of [11]. In order to study the optimal solution, we obtain the objective function of the problem from Eq. (5):

$$\begin{aligned} \max_{p_{i,g}} \quad & \sum_{g=1}^{T} \sum_{i=1}^{k_g} R_{i,g} \\ \text{s.t.} \quad & C1: \; p_{i,g} \ge 0, \;\; \forall i, g \\ & C2: \; \sum_{g=1}^{T} \sum_{i=1}^{k_g} p_{i,g} \le P_{tot} \end{aligned} \qquad (6)$$

where constraint C1 guarantees that the power allocated to each user is non-negative, and C2 indicates that the total power of the system is limited. In [11], the iterative water-filling method is used to solve this problem, but all users in each carrier must be considered, so the complexity is high. In order to reduce the complexity, the water-filling method is used here for inter-carrier power allocation, and the greedy algorithm for intra-carrier user power allocation. To solve the optimization problem, rewrite Eq. (6) as

$$\begin{aligned} \max_{P_g} \quad & \sum_{g=1}^{T} \frac{W}{T} \log_2\left(1 + \frac{P_g h_g^2}{d_g}\right) \\ \text{s.t.} \quad & \sum_{g=1}^{T} P_g \le P_{tot} \end{aligned} \qquad (7)$$

The Lagrangian multiplier method is used to solve Eq. (7); an auxiliary multiplier $\lambda$ is introduced to construct the Lagrangian function

$$F = \sum_{g=1}^{T} \log_2\left(1 + \frac{P_g h_g^2}{d_g}\right) - \lambda\left(\sum_{g=1}^{T} P_g - P_{tot}\right) \qquad (8)$$


For Eq. (8), we take the partial derivative with respect to $P_g$ and set it to zero, obtaining

$$P_g = \frac{1}{\lambda \ln 2} - \frac{d_g}{h_g^2} \qquad (9)$$

Denoting the water level by $\beta = 1/(\lambda \ln 2)$, its initial value is $\beta_0 = \frac{1}{T}\left(P_{tot} + \sum_{g=1}^{T} \frac{d_g}{h_g^2}\right)$. Then iterate according to Eq. (9). If the power allocated to every carrier is non-negative during the iterative process, the iteration ends; otherwise, the water level is updated and the iteration continues. The update of the water level at each iteration is given by [12]:

$$\beta \leftarrow \beta + \mu\,\frac{1}{T_{on}}\left(P_{tot} - \sum_{g=1}^{T} P_g\right) \qquad (10)$$

where $\mu$ denotes the step size of each update, and $T_{on}$ denotes the number of remaining subcarriers that have not yet been allocated power.
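A minimal sketch of the inter-carrier water-filling iteration of Eqs. (9)–(10) is given below, treating $\beta$ as the water level as discussed above; the step size, iteration limit and clipping of negative powers are implementation assumptions rather than details from the paper.

```python
import numpy as np

def inter_carrier_waterfilling(h2, d, P_tot, mu=0.1, max_iter=1000):
    """Inter-carrier power allocation by the water-filling iteration of Eqs. (9)-(10).

    h2 : channel gains h_g^2, d : noise terms d_g, P_tot : total power budget.
    """
    h2, d = np.asarray(h2, float), np.asarray(d, float)
    T = len(h2)
    beta = (P_tot + np.sum(d / h2)) / T            # initial water level beta_0
    for _ in range(max_iter):
        P = beta - d / h2                          # Eq. (9): P_g = beta - d_g / h_g^2
        if np.all(P >= 0):                         # all carriers non-negative: stop
            break
        T_on = max(np.count_nonzero(P > 0), 1)     # subcarriers still above the water level
        P = np.clip(P, 0.0, None)
        beta += mu * (P_tot - P.sum()) / T_on      # water-level update, Eq. (10)
    return np.clip(beta - d / h2, 0.0, None)
```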

3.2 Intra-carrier Power Allocation

It is assumed that carrier $g$ has been allocated a total power $P_g$ by the algorithm of the previous section, and that the number of users superimposed on carrier $g$ is $k_g$. We sort the users on carrier $g$ in order of increasing power and assume $P_{1,g} \le P_{2,g} \le \cdots \le P_{k_g,g}$, where $P_{i,g}$ is the power finally allocated to user $i$ on the carrier; the power allocation coefficient corresponding to user $i$ is denoted $a_{i,g}$, so that $a_{1,g} \le a_{2,g} \le \cdots \le a_{k_g,g}$. Further, it can be obtained that

$$0 < a_{1,g} \le \frac{1}{k_g}, \qquad a_{m-1,g} \le a_{m,g} \le \frac{1 - \sum_{i=1}^{m-1} a_{i,g}}{k_g - m + 1} \qquad (11)$$
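To make the feasibility constraints of Eq. (11) concrete, the sketch below enumerates candidate coefficient vectors on a grid with a given step (denoted $\Delta$ in the following paragraph) and evaluates the geometric-mean objective discussed next; the pruning rule and the greedy selection described in the abstract are not reproduced here, so this only illustrates the search space.

```python
import numpy as np

def feasible_coefficient_vectors(k_g, step):
    """Enumerate vectors (a_1,...,a_kg) on a grid of spacing `step` that satisfy the
    ordering and upper-bound constraints of Eq. (11). Purely illustrative: the number
    of candidates grows quickly as the step decreases."""
    partial = [[]]
    for m in range(1, k_g + 1):
        extended = []
        for prefix in partial:
            used = sum(prefix)
            lo = prefix[-1] if prefix else step          # a_m >= a_{m-1}, a_1 > 0
            hi = (1.0 - used) / (k_g - m + 1)            # upper bound of Eq. (11)
            a = lo
            while a <= hi + 1e-12:
                extended.append(prefix + [a])
                a += step
        partial = extended
    return partial

def geometric_mean_throughput(rates):
    """Geometric mean of the superimposed users' throughputs."""
    rates = np.asarray(rates, float)
    return float(np.prod(rates) ** (1.0 / len(rates)))
```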

Assuming that the minimum power allocation coefficient interval is $\Delta$, all possible values of each power allocation coefficient can be calculated from Eq. (11); they are represented by the sets $A_1, A_2, \ldots, A_{k_g}$, where $A_i$ contains all possible values of the power allocation coefficient of user $i$. In order to simplify the calculation, we introduce the idea of the greedy algorithm, a locally optimal strategy that takes the best choice at the current step as its decision. Taking the geometric mean of the throughputs of the superimposed users on carrier $g$ as the objective function, the objective function is expressed as:

$$f\left(a_{1,g}, a_{2,g}, \ldots, a_{k_g,g}\right) = \left(\prod_{i=1}^{k_g} R_{i,g}\right)^{1/k_g}$$

$$\text{s.t.}\;\begin{cases} d_T = \dfrac{2}{M}\, d_R, \;\; d_R = \dfrac{M}{2}\, d_T \\[4pt] (M-1)\, d_T + (N-1)\, d_R = L \\ \operatorname{mod}(M, 2) = 0 \\ MN \ge U_{\min} \end{cases} \qquad (2.5)$$

The optimization problem formulated in (2.5) can be solved using available convex optimization tools, such as the CVX toolbox. In fact, the resulting value of $a$ may not be an integer multiple of the wavelength, so an approximate integer solution needs to be found.


3 Imaging Reconstruction

As is known, the range and azimuth information can be obtained by a MIMO array. The range information can be obtained by the fast Fourier transform (FFT), but the azimuth phase cannot be compensated uniformly, so the back projection (BP) algorithm is used in the azimuth dimension. Then the position of the target can be obtained in the Cartesian coordinate system. Suppose the UCA consists of $N$ antennas, and each antenna element is independently fed with the same signal but a different phase shift. The phase shift of the $n$th element for mode $a$ is

$$u_n = a\,\phi_n = 2\pi a n / N \qquad (3.1)$$

Fig. 2. Coordinate frame of electromagnetic (EM) vortex imaging (a point $P(r, \theta, \varphi)$ in the $X$-$Y$-$Z$ frame)

For a point $P(r, \theta, \varphi)$ in space, as shown in Fig. 2, the electric field intensity can be given by [9]

$$E(a) = j\,\frac{\mu_0\,\omega\, d\, N\, e^{ikr}}{4\pi r}\, e^{ia\varphi}\, i^{-a}\, J_a(ka\sin\theta) \qquad (3.2)$$

where $\mu_0$ is the magnetic permeability of the vacuum, $\omega$ is the angular frequency, and $J_a$ is the Bessel function of the first kind. Suppose the target consists of $M$ ideal scatterers; the echo is then given by

$$s(a, k) = j\,\frac{\mu_0\,\omega\, d\, N}{4\pi}\, e^{ja\pi}\sum_{m=1}^{M} \sigma_m\, \frac{e^{jkr_m}}{r_m}\, e^{ja\varphi_m}\, J_a(ka\sin\theta_m) \qquad (3.3)$$


From Eq. (3.3), $e^{ja\varphi_m}$ shows that $a$ and $\varphi$ form a dual (Fourier) pair, and $e^{jkr_m}$ likewise shows that $k$ and $r$ form a dual pair. So the information in the $\varphi$ and $r$ domains can be obtained by FFT. However, when the function $J_a$ is processed by FFT, symmetrical peaks appear on both sides near zero frequency, as shown in Fig. 3; therefore, the Hilbert transform is needed to eliminate this effect. The position of the target is then obtained in the spherical coordinate system. For a point $P(r, \theta, \varphi)$, the transformation from spherical to Cartesian coordinates is given by Eq. (3.4):

$$x = r\sin\theta\cos\varphi, \qquad y = r\cos\theta, \qquad z = r\sin\theta\sin\varphi \qquad (3.4)$$

From Eq. (3.4), the three-dimensional position of the target can be reconstructed.


Fig. 3. The FFT of Bessel function
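A minimal sketch of the dual-domain reconstruction described around Eqs. (3.3)–(3.4) is given below: FFTs along the mode and wavenumber axes concentrate a scatterer into a peak in the $(\varphi, r)$ plane, and Eq. (3.4) maps the spherical estimate to Cartesian coordinates. The Hilbert-transform step that suppresses the symmetric Bessel-function peaks is omitted, and the layout of the echo array is an assumption of this sketch.

```python
import numpy as np

def dual_domain_image(echo):
    """Coarse (phi, r)-domain image from the echo s(a, k) of Eq. (3.3).

    echo is a 2-D array indexed by [OAM mode a, wavenumber sample k]; because
    e^{j a phi} and e^{j k r} are Fourier pairs with phi and r, an FFT along
    each axis concentrates a point scatterer into a peak in the (phi, r) plane.
    """
    return np.abs(np.fft.fftshift(np.fft.fft2(echo)))

def spherical_to_cartesian(r, theta, phi):
    """Coordinate conversion of Eq. (3.4), using the paper's convention."""
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.cos(theta)
    z = r * np.sin(theta) * np.sin(phi)
    return x, y, z
```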

4 Simulation

Suppose four points $P_1(2.5, 0.48\pi, 0.2\pi)$, $P_2(2.62, 0.4\pi, 0.3\pi)$, $P_3(3.4, 0.49\pi, 0.4\pi)$ and $P_4(3.49, 0.42\pi, 0.5\pi)$ are in the space. The central carrier frequency is 10 GHz and the bandwidth is 4 GHz. The modes range from −20 to 20, the number of transmitters is 4, the number of receivers is 11, the radius of the UCA is $2\lambda$, and the length of the linear array is 3.12 m. Figure 4 shows the result of the traditional MIMO linear array, and Fig. 5 shows the result of the MIMO linear array based on vortex EM waves. From the results in Figs. 4 and 5, the height information can be calculated by Eq. (3.4), and the 3-D image of the target is then obtained as in Fig. 6.

Fig. 4. The result of traditional MIMO (azimuth (m) versus range (m))

Fig. 5. The result of vortex EM wave MIMO ($\varphi$ versus range (m))

Fig. 6. The three-dimensional image of the targets (height (m), azimuth (m) and range (m))


5 Conclusion

In this paper, a three-dimensional ISAR imaging method based on vortex EM waves is presented, together with a design principle for the radius of the uniform circular array (UCA). The simulation results show that a three-dimensional image of the target can be obtained by the proposed method. In future work, we will consider further optimization of the MIMO array model and single-snapshot 3-D imaging.

References

1. Tamburini F et al (2012) Encoding many channels on the same frequency through radio vorticity: first experimental test. New J Phys 14(3):033001
2. Mahmouli FE, Walker SD (2013) 4-Gbps uncompressed video transmission over a 60-GHz orbital angular momentum wireless channel. IEEE Wireless Commun Lett 2(2):223–226
3. Thidé B et al (2007) Utilization of photon orbital angular momentum in the low-frequency radio domain. Phys Rev Lett 99(8):087701
4. Chen X, Chen B, Guan J, Huang Y, He Y (2018) Space-range-Doppler focus-based low-observable moving target detection using frequency diverse array MIMO radar. IEEE Access 6:43892–43904
5. Gao JK, Deng B, Qin YL, Wang HQ, Li X (2018) Near-field 3D SAR imaging techniques using a scanning MIMO array. J Radars 7(6):676–684
6. Xu XJ, Liu YZ (2018) Three-dimensional interferometric MIMO radar imaging for target scattering diagnosis. J Radars 7(6):655–663
7. Wang J, Ding CB, Liang XD, Chen LY, Qi ZM (2018) Research outline of airborne MIMO-SAR system with same time-frequency coverage. J Radars 7(2):220–234
8. Mohammadi SM et al (2010) Orbital angular momentum in radio—a system study. IEEE Trans Antennas Propag 58(2):565–572
9. Guo GR, Hu WD, Du XY (2013) Electromagnetic vortex based radar target imaging. J Natl Univ Defense Technol 35(6):71–76
10. Yuan TZ, Wang HQ, Qin YL, Cheng YQ (2016) Electromagnetic vortex imaging using uniform concentric circular arrays. IEEE Antennas Wirel Propag Lett 15:1024–1027
11. Yuan TZ, Cheng YQ, Wang HQ, Qin YL (2016) Beam steering for electromagnetic vortex imaging using uniform circular arrays. IEEE Antennas Wirel Propag Lett 16:1–4
12. Liu K, Cheng YQ, Yang ZC, Wang HQ, Qin YL, Li X (2015) Orbital-angular-momentum-based electromagnetic vortex imaging. IEEE Antennas Wirel Propag Lett 14:711–714
13. Charvat GL et al (2010) An ultrawideband (UWB) switched-antenna-array radar imaging system. In: Proceedings of the IEEE international symposium on phased array systems and technology, p 550
14. Ender JHG, Klare J (2009) System architectures and algorithms for radar imaging by MIMO-SAR. In: Proceedings of the IEEE radar conference, pp 1–6
15. Liu K, Cheng YQ, Li X, Wang HQ, Qin YL, Jiang YW (2016) Study on the vortex-electromagnetic-wave-based radar imaging theory and method. IET Microw Antennas Propag 10:961–968

Energy Storage Techniques Applied in Smart Grid

Youjie Zhou1, Xudong Wang1, Xiangjing Mu1,2, Zhizhou Long3, Changbo Lu1(&), and Lijie Zhou1

1 Institute of Military New Energy Technology, Beijing 102300, China
[email protected]
2 Naval Logistics College, Tianjin 300450, China
3 Institute of Quartermaster Engineering & Technology, Beijing 10010, China

Abstract. Based on the contradictions and existing problems of the power system, this paper analyzes and introduces the concept and connotation of the future smart grid and, in combination with the development of the smart grid, studies several main types of energy storage technology. Finally, the development direction of smart grid energy storage technology is proposed. Keywords: Smart grid · Energy storage techniques · Battery energy storage system

1 Introduction

At present, under the background of energy structure adjustment and with the inherent requirements of the power system for safe, stable and efficient operation, the upgrading of existing power systems and the development and utilization of new energy sources are ever more urgent. This has prompted the development of China's power grid to enter a new stage: the smart grid. The future smart grid will be a complex of advanced technologies including information and communication technology, power electronics, energy storage technology and sensor measurement technology. Energy storage technology is the main support point that determines whether the smart grid can be built smoothly, and it will play a very important role in the construction of the smart grid [1].

2 Power System and Smart Grid

2.1 Power System Problem

The traditional power system follows the mode of electric energy production–transmission–use during operation. Therefore, the total power generation must be kept in constant balance with the total load plus the various losses at every moment; otherwise, it will cause deterioration of power quality and instability of frequency and voltage, and in severe cases even large-scale blackouts, posing a serious threat to the safe and stable operation of the power system. At present, the actual demand and



operation of China's power system are undergoing profound changes, and many contradictions and problems are gradually emerging, including:

(1) The installed capacity of the system does not meet the demand of the increasing peak load;
(2) The security and stability issues after the grid is disturbed are growing;
(3) The transmission capacity of the power grid is increasingly unable to meet the needs of users;
(4) Users are increasingly demanding in terms of power supply reliability and power quality;
(5) The market-oriented development of power companies has made the demand for energy management technology on the user side more urgent;
(6) The influence of government policy factors and environmental protection issues on the development of power systems is more significant.

2.2 Smart Grid

The smart grid is a very popular term and concept internationally for the future development trend of the power grid, and intelligentization is a new trend in the world's power development [2]; the development of the smart grid has become a worldwide consensus. The smart grid is based on an integrated high-speed two-way communication network [3]. Through advanced sensing and measurement technology, advanced equipment technology, advanced control methods and advanced decision-support system technology, the grid is made reliable, safe, economic, efficient, environmentally friendly and safe to use. Compared with the modern power grid, a significant feature of the smart grid is the high convergence of power flow, information flow and business flow. Its advanced nature and advantages are mainly reflected in:

(1) It has a strong grid infrastructure and technical support system, which improves the ability to withstand various types of external disturbances and attacks, and can adapt to large-scale access of clean energy and renewable energy.
(2) Technologies such as flexible AC/DC transmission, grid-plant coordination, power storage and distribution network automation are widely applied, making grid operation and control more flexible and economical, and allowing access for a large number of distributed power sources, microgrids and electric vehicle charging/discharging facilities.
(3) The comprehensive application of communication, information and modern management technologies greatly improves the utilization efficiency of power equipment, reduces power losses, and makes grid operation more economical and efficient.
(4) Real-time and non-real-time information is highly integrated, shared and utilized, a comprehensive, complete and detailed picture of the grid operating state is presented for operation management, and corresponding auxiliary decision support, control implementation schemes and response plans are provided.


3 Energy Storage Technology

3.1 The Significance of Energy Storage Technology

Energy storage technology stores electrical energy in a chemical or physical form and converts it back into electrical energy when needed. At present, energy storage has become an important part of the six links of "generation, transmission, transformation, distribution, use and storage" in power grid operation. Energy storage is key to improving the efficiency of power systems and to the strength of the smart grid; with its growing demand, the increasingly widespread application of energy storage technology will bring many major changes to the development of China's power system. The application of energy storage technology is of great significance for the development of China's power system and the construction of the smart grid.

First, it is used for peak shaving in the power system to ensure power quality. Applying energy storage is equivalent to adding a "storage" stage between the generation side and the load side of the power system, so that a certain degree of decoupling and buffering between generation and load is obtained. Therefore, the power quality, stability and operational reliability of the system can be significantly improved, and the operation of the power system can be well optimized. On the technical level, by controlling the power exchange device, an energy storage system can exchange fast, large-capacity active/reactive power with the main grid to maintain the basic balance between generation and load, so that the voltage and frequency of the power system are effectively guaranteed; this improves the reliability of power supply and power quality, reduces or avoids losses caused by grid reliability and power quality problems, and meets users' demands for high-specification power.

Second, it optimizes the utilization of renewable energy and promotes its development and application. The future development of smart grids in China requires improving energy efficiency, adjusting the energy structure, increasing alternative energy sources, and achieving sustainable energy development. Energy storage technology can adjust the output power of renewable energy generation and provide auxiliary services such as frequency regulation and fast power response to the power system, which opens up a feasible way for the large-scale application of renewable energy.

3.2 Energy Storage Technology

Energy storage technology has a wide range of applications in many fields, including power systems. In recent years, energy shortages and power gaps around the world have brought new development opportunities for energy storage technologies, which have attracted more and more attention, especially in the fields of renewable energy and distributed generation [4]. The main electric energy storage technologies are mechanical/physical energy storage (pumped hydro energy storage (PHES), compressed air energy storage (CAES), flywheel energy storage (FWES)),


electromagnetic energy storage (supercapacitors, superconducting magnetic energy storage (SMES), etc.), and electrochemical energy storage (including lead-acid batteries, lead-carbon batteries, flow batteries, lithium-ion batteries, etc.).

In the field of physical energy storage, pumped storage and compressed air energy storage are the two fastest-growing technologies. Pumped storage has the largest installed capacity of any energy storage technology in the world, accounting for 98% of the world's total energy storage capacity; Japan, China and the United States have the world's top three installed capacities. Pumped storage is suitable for energy storage above 100 MW, its single-unit scale has reached 300 MW, and it is the most mature energy storage technology. Traditional compressed air energy storage is in large-scale commercial operation in Germany (Huntorf, 321 MW) and the United States (McIntosh, 110 MW; Ohio, 9 × 300 MW, etc.). For new types of compressed air storage, four organizations in the world have megawatt-scale designs: General Compression (2 MW regenerative compressed air energy storage), SustainX (1.5 MW isothermal compressed air energy storage), UK Highview Power (megawatt-scale liquid air energy storage), and the Institute of Engineering Thermophysics, Chinese Academy of Sciences (1.5 MW supercritical compressed air energy storage). Flywheel energy storage has developed rapidly in recent years; according to statistics, as of 2016, the global flywheel energy storage application scale is around 50 MW.

Heat storage is another major category of energy storage. There are three main methods: sensible heat storage, phase-change heat storage and thermochemical heat storage, which differ in the state quantities and heat-exchange behavior of the storage medium during the storage and conversion processes. All kinds of energy can be converted into heat, stored in the form of heat, and converted into electrical energy output or used directly when needed. According to the temperature range of the stored heat, heat storage can be divided into cold storage, low-temperature heat storage and high-temperature heat storage. Cold storage is mainly used in the field of cooling, and the cold storage medium can be ice or cold water at 5–7 °C for air conditioning and refrigeration. Low-temperature heat storage is mainly used for heating, and the storage medium may be normal-pressure hot water, pressurized hot water or low-temperature phase-change materials. The application range of high-temperature heat storage is wider than that of cold storage and low-temperature heat storage. This technology can convert solar heat, biomass chemical energy (such as the chemical energy obtained by burning biomass) and electrical energy that needs to be stored (such as nighttime valley electricity and curtailed wind and solar power) into heat energy through electro-thermal conversion devices and store it in a heat storage medium. When release is required, the heat is transferred to the working medium of the power generation island by a steam generator or heat exchanger to raise its temperature and pressure; the high-temperature, high-pressure medium drives the steam turbine generator to produce electric energy, and high-temperature steam and waste heat can also be provided for heating and cooling according to load demand.

3.3 Chemical Battery Energy Storage

In the field of chemical energy storage, lead-acid batteries (including lead-carbon batteries; the principle is shown in Fig. 1) are so far the most mature chemical batteries, owing to their early technological development and low material cost. As of 2015, the world's installed lead-acid battery storage capacity had reached 120 MW, and China is the largest producer and user of lead-acid batteries. Lithium-ion batteries have become the most competitive chemical energy storage technology in the world. Since 2012, the installed capacity of lithium-ion batteries has increased greatly; at present they rank first in both number of projects and installed capacity, with the fastest growth rate, and the single-system scale has reached 64 MW. A 2014 Lux Research report states that lithium-ion batteries have become the dominant technology in the field of grid energy storage. There are many flow battery systems; the principle is shown in Fig. 2. The flow battery systems that have been demonstrated in operation include sodium polysulfide-bromine, zinc-bromine and all-vanadium flow energy storage systems, among which the vanadium redox flow battery is the most promising. Sumitomo's 15 MW/60 MWh all-vanadium flow energy storage demonstration power station, put into operation in Hokkaido, Japan in 2016, is the largest flow battery storage project currently in operation. Sodium-sulfur batteries are currently installed in more than 200 locations worldwide; Japan's NGK Corporation is the only organization that has industrialized sodium-sulfur batteries, and after the fire incident in an NGK sodium-sulfur battery system, the development of sodium-sulfur batteries has slowed. In established distributed generation and microgrid projects, supercapacitors are mainly used in combination with other energy storage batteries to handle high-current input/output; for example, the photovoltaic power generation system with 5 MW of energy storage established in Wakkanai, Japan, uses a composite of a 1.5 MW sodium-sulfur battery and a 1.5 MW supercapacitor.

Fig. 1. Lead carbon battery energy storage schematic


Fig. 2. Flow battery energy storage schematic

There is a common phenomenon in the battery packs of smart grid energy storage systems: the probability of battery problems during system development and manufacturing is large enough to threaten the safety of the entire system, and this greatly affects the research process of the smart grid. The main reasons are as follows:

(1) The current cost of energy storage batteries is high, the technology is immature, and defects in battery technology affect the life of the smart grid energy storage system; how to use the batteries reasonably and effectively is a key issue.
(2) An energy storage system battery pack consists of tens or even hundreds of single cells. Because each cell has different operating and initial conditions, differences arise between them and their performance becomes uneven, i.e., some cells are in good operating condition while others lag behind; the performance of the whole battery pack is then degraded by the lagging cells.
(3) Short service life is the bottleneck of battery research, and incorrect operation shortens the life of a battery or even scraps it.
(4) During the use of a battery pack, its capacity must be judged accurately, which requires detecting the SOC of the battery (the state of charge, i.e., the remaining battery power). The SOC is an important parameter of the battery state; it has many influencing factors, and how to estimate it accurately is a problem.


4 Summary

Different energy storage technologies have different advantages and disadvantages; according to their respective charging/discharging characteristics and scales, they can be applied at different nodes of the smart grid. The smart grid requires power quality that meets the needs of the 21st century, must adapt to all types of power sources and storage methods, support marketable transactions, optimize grid assets, improve operational efficiency, and enable the grid to be self-healing, interactive and secure. This requires that, when the power supply is insufficient for a short period, a power source can be started quickly: supercapacitors and flywheel energy storage respond quickly and can discharge at high current, so they can be the first choice for short-time power supply. After this short-term buffering, if the power supply is still insufficient, backup sources that can supply power for a long time, such as pumped storage and compressed air storage, can be started, so as to achieve uninterrupted power supply. Meanwhile, wind, solar and other new green energy generation require smooth grid connection, which calls for large-scale energy storage to ensure that the generated electricity can be delivered to the grid smoothly; flow batteries and sodium-sulfur batteries are then a good choice. With the right energy storage technologies, the smart grid can be "strong" and can truly realize the "intelligent" technical features of informatization, digitization, automation and interaction.

References

1. Song Y, Yang X (2009) Smart grid: the solution to challenges of power supply in the 21st century. Electr Power Technol Econ 21(6):1–8
2. Wang M (2010) Smart grid and intelligent energy network. Power Syst Technol 34(10):1–5
3. Chen S, Song S, Li L (2009) Overview of smart grid technology. Power Syst Technol 33(8):1–7
4. Cheng S, Wen J, Sun H (2005) Energy storage technology and its application in modern power systems. Electr Appl 24(4):18–19, 24, 35

A Robust Hough Transform-Based Track Initiation Method for Multiple Target Tracking in Dense Clutter

Qian Zhu, Panhe Hu, Jiameng Pan, and Qinglong Bao(&)

Science and Technology on Automatic Target Recognition Laboratory, National University of Defense Technology, Changsha 410072, China
[email protected], {hupanhe2009,cbpest}@163.com, [email protected]

Abstract. Existing multiple track initiation methods based on the Hough Transform have flaws, such as a huge computation load in dense clutter environments and performance degradation under measurement error. To improve on these defects, we propose a robust Hough Transform-based track initiation method for multiple target tracking in dense clutter. The proposed method is divided into three steps. The first step is to screen the data through a grid-based velocity test. The second step is to cluster the screened data into groups based on the eight-connectivity principle of the grid. The last step is to apply the Hough Transform to each group of data separately to achieve track initiation. The experiment results illustrate that the proposed algorithm outperforms traditional Hough transform-based track initiation algorithms. Keywords: Hough transform · Track initiation · Multiple track

1 Introduction

Track initiation is a key part of target tracking. According to the existing literature, research on track initiation falls into two groups: sequential processing algorithms and batch processing algorithms. Methods in the former group only perform well in weak clutter environments, but they have less computing cost. The latter group needs more scans to process, but it is suitable for dense clutter environments. Typical methods in the latter group are the standard Hough transform based method [1] and its modified versions. The Hough transform is a feature extraction technique used in image analysis, computer vision and digital image processing [2]. In radar data processing, tracks can be regarded as lines to be detected. Since the standard Hough transform has high computational complexity and a high occurrence probability of false tracks, various modified versions have been proposed to overcome its shortcomings. The modified Hough transform (MHT) [3] utilizes the movement of the target to complete track initiation with three consecutive radar scans, but it only works at high detection probability. Another classical modified version is the random Hough transform (RHT), which randomly selects two measurements and accumulates their intersection in the Hough parametric space. This avoids the calculation of


each measurement in the full parametric space. But it requires prior information on the number of tracks, and its accumulation threshold is worthy of further discussion. Reference [4] proposes the Real Time HT (RTHT) and Real Time MHT to reduce the computation load: compared to typical Hough transform based algorithms, it adds sliding windows over the radar scans and screens the data by velocity and acceleration limits. But its effect on the computation load is limited if the measurement error is large. To overcome this problem, we propose a robust Hough transform based track initiation method in this paper. Based on the RTHT, we make further modifications to the data screening step: to avoid the calculation of the Euclidean distance between point pairs, we introduce a grid-based method to realize the velocity limitation of the screening. The experiment results illustrate that the proposed algorithm outperforms traditional Hough transform based track initiation algorithms. The remaining part is organized as follows. Section 2 presents the theory of the Hough Transform (HT) for track initiation. Section 3 describes the Real Time Hough Transform (RTHT) based track initiation method. Section 4 presents the proposed method. Section 5 gives the experiment results and analysis. Section 6 concludes the paper.

2 The Theory of Hough Transform (HT) for Track Initiation

The Hough Transform is designed to detect lines. The major process of the Hough Transform is to project the data into the Hough parametric space, vote in the quantized parametric space and detect the peak location. The Hough parametric space consists of two parameters, $\rho$ and $\theta$: $\rho$ is the distance from the origin to the line in Euclidean space, and $\theta$ is the slope angle of the line in Euclidean space. Assume the coordinate of a point in Euclidean space is $(x, y)$. The Hough transform of the point is given by Eq. (1):

$$\rho = x \cos\theta + y \sin\theta \qquad (1)$$

Figure 1 shows an example sketch map of the Hough transform. The left part is the Euclidean space, while the right part is the Hough parametric space. Two points in Euclidean space are projected into two curves in the Hough parametric space, and the crossing point of the curves corresponds to the line through these two points in the Euclidean space. Thus, line detection can be achieved by voting in the Hough parametric space: the peak location of the voting result corresponds to the line covering the most points in the Euclidean space. For voting, the Hough parametric space is divided into discrete cells. The divisions of the parameters $\theta$ and $\rho$ are respectively given in Eqs. (2) and (3):

$$\Theta = \{\theta \mid \theta_{i+1} - \theta_i = \Delta\theta,\; 0 \le i \le N_\theta\} \qquad (2)$$

$$P = \{\rho \mid \rho_{i+1} - \rho_i = \Delta\rho,\; 0 \le i \le N_\rho\} \qquad (3)$$

where $\Delta\theta$ and $\Delta\rho$ are respectively the resolutions of $\theta$ and $\rho$, and $N_\theta$ and $N_\rho$ are respectively the total numbers of segments of $\theta$ and $\rho$.


For track initiation, the track of a target can be approximately regarded as linear over a short time period. So the issue of track initiation is transformed into the issue of line detection in radar data.
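A minimal voting implementation of Eq. (1) over the discretized space of Eqs. (2)–(3) might look as follows; the parameter resolutions and the $\theta$ range used here are assumptions for illustration.

```python
import numpy as np

def hough_vote(points, d_theta=np.pi / 180, d_rho=1.0, rho_max=None):
    """Standard Hough transform voting of Eq. (1) over a discretized (theta, rho) space."""
    points = np.asarray(points, float)
    if rho_max is None:
        rho_max = np.linalg.norm(points, axis=1).max()
    thetas = np.arange(-np.pi / 2, np.pi / 2, d_theta)
    rhos = np.arange(-rho_max, rho_max + d_rho, d_rho)
    acc = np.zeros((len(thetas), len(rhos)), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)        # Eq. (1) for every theta cell
        acc[np.arange(len(thetas)), np.digitize(rho, rhos) - 1] += 1
    return acc, thetas, rhos

# The peak of the accumulator corresponds to the line (candidate track) covering most points.
```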

Fig. 1. An example sketch map of the Hough transform (left: points $(x, y)$ in the Euclidean space; right: the corresponding curves in the $(\theta, \rho)$ Hough parametric space)

3 Real Time Hough Transform Based Track Initiation Method

The real time Hough transform based track initiation method includes two steps. The first step is to screen the measurements in the radar scans within the sliding window. The second step is to apply the Hough transform to the screened results and initiate tracks based on the accumulation in the Hough parametric space. Firstly, a sliding window of length four is chosen; the choice of the length is based on analyses of the radar's Receiver Operating Characteristics (ROC). Secondly, the measurements that satisfy the following conditions are screened:

• Velocity test. The estimated velocity should satisfy the following condition:

$$V_{\min} \le \frac{\|P_{i+1} - P_i\|}{t_s} \le V_{\max} \qquad (4)$$

where $V_{\min}$ and $V_{\max}$ are respectively the minimum and maximum admissible speeds for a moving target, $t_s$ is the scan period, and $\|\cdot\|$ is the modulus operator.

• Acceleration test. The estimated velocity change should satisfy:

$$\left\|\frac{P_{i+2} - P_{i+1}}{t_s} - \frac{P_{i+1} - P_i}{t_s}\right\| \le a_{\max}(t_s), \quad i = 1, 2, \ldots \qquad (5)$$

Thirdly, the Hough transform is applied to each screened measurement.
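The two screening rules of Eqs. (4)–(5) can be written directly as predicates; in this sketch the bound $a_{\max}(t_s)$ of Eq. (5) is interpreted as $a_{\max} \cdot t_s$, which is an assumption.

```python
import numpy as np

def velocity_test(p1, p2, v_min, v_max, ts):
    """Two-scan velocity test of Eq. (4)."""
    v = np.linalg.norm(np.asarray(p2) - np.asarray(p1)) / ts
    return v_min <= v <= v_max

def acceleration_test(p1, p2, p3, a_max, ts):
    """Three-scan acceleration test of Eq. (5); a_max(ts) taken here as a_max * ts."""
    v1 = (np.asarray(p2) - np.asarray(p1)) / ts
    v2 = (np.asarray(p3) - np.asarray(p2)) / ts
    return np.linalg.norm(v2 - v1) <= a_max * ts
```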


4 The Proposed Algorithm

In massive clutter conditions, calculating the modulus between point pairs in the RTHT consumes much time, so we explore a grid-based method to overcome it. Besides, when the measurement error is large, the acceleration test sometimes fails to screen out candidate data, so we omit the acceleration test to reduce the computation load. In the RTHT, the velocity test is applied to two consecutive scans; however, when the detection probability decreases, target information does not necessarily exist in every scan, so this velocity-test strategy is flawed and we expand the number of scans used for the velocity test. The screened data is then ready for the Hough Transform to detect tracks. If the Hough Transform is applied directly to the whole data set, the redundant data may result in false tracks; so, for multiple track initiation, it is necessary to cluster the target data based on location.

4.1 Grid-Based Velocity Test

To reduce the computation load, we apply a grid-based method to the velocity test. We divide the surveillance scope into grid cells whose size is set according to the measurement error of the surveillance system. The radar measurements are the range and azimuth of the target. If the maximum measurement errors of range and azimuth are $\Delta r$ and $\Delta a$ respectively, the grid spacings in range and azimuth are also set to $\Delta r$ and $\Delta a$ respectively. Figure 2 illustrates an example sketch map of the grid division; the Cartesian size of a grid cell becomes larger as the range increases. The next step is to mark the point data on the constructed grid. As the grid is uniform both in range and in azimuth, the marking process is simple. Assume the coordinates of a point are $(r, a)$. Equations (6) and (7) give the marking calculation:

$$n_r = \left\lfloor \frac{r}{\Delta r} \right\rfloor \qquad (6)$$

$$n_a = \left\lfloor \frac{a}{\Delta a} \right\rfloor \qquad (7)$$

where $\lfloor \cdot \rfloor$ denotes rounding down, $\Delta r$ and $\Delta a$ are respectively the grid spacings in the range and azimuth dimensions, and $n_r$ and $n_a$ are respectively the range and azimuth indexes of the point in the grid. Construct a zero matrix of the same size as the grid as the marked matrix, and mark a 1 at location $(n_r, n_a)$ for each point in the scan.
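The marking step of Eqs. (6)–(7) amounts to binning each (range, azimuth) measurement and setting the corresponding cell of a binary matrix, for example as in the following sketch (the grid dimensions are assumed to cover the surveillance scope).

```python
import numpy as np

def mark_scan(points_ra, dr, da, n_r_bins, n_a_bins):
    """Build the binary marked matrix A_n of one scan from (range, azimuth) measurements,
    using the index mapping of Eqs. (6)-(7)."""
    A = np.zeros((n_r_bins, n_a_bins), dtype=np.uint8)
    for r, a in points_ra:
        nr = int(r // dr)        # Eq. (6): floor(r / delta_r)
        na = int(a // da)        # Eq. (7): floor(a / delta_a)
        if 0 <= nr < n_r_bins and 0 <= na < n_a_bins:
            A[nr, na] = 1
    return A
```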

Fig. 2. An example sketch map of the grid division (range R (m) on the horizontal axis)

Assume the length of the sliding window for the velocity test is $2k+1$; in other words, $2k+1$ scans are used for each velocity test. The center scan is the object scan under test, and the rest are reference scans. Mark the data of each scan on the grid to obtain the corresponding marked matrix. Suppose the object scan number is $n$ and its marked matrix is $A_n$; the reference marked matrices are $A_{n-k}, \ldots, A_{n-1}, A_{n+1}, \ldots, A_{n+k}$. Compose them into a new mark-matrix $\Gamma_n$ as shown in (8):

$$\Gamma_n = \mathrm{Bm}\left(A_{n-k} + \cdots + A_{n-1} + A_{n+1} + \cdots + A_{n+k}\right) \qquad (8)$$

where $\mathrm{Bm}(\cdot)$ denotes binarization. To ensure that target points lie in the neighborhood of points from the reference scans, we perform morphological dilation on $\Gamma_n$ with a rectangular structural element $B$. The size of $B$ depends on a coarse estimate of the target's possible movement within the time period of the sliding window. Suppose $L_{Br}$ and $L_{Ba}$ are respectively the sizes of $B$ in range and in azimuth; they are calculated as

$$L_{Br} = \frac{V_{\max,r}\, t_s\, k}{\Delta r} \qquad (9)$$

$$L_{Ba} = \frac{V_{\max,a}\, t_s\, k}{\Delta a} \qquad (10)$$

where $V_{\max,r}$ and $V_{\max,a}$ are respectively the maximum probable velocities of the target in range and azimuth, and $t_s$ is the time period of each scan. The dilation result marks the neighborhood area of the points from the reference scans. $M_n$ in Eq. (11) is the dot product of the dilation result and the object mark-matrix $A_n$:

$$M_n = (\Gamma_n \oplus B) \odot A_n \qquad (11)$$

where  is the symbol of dilation.  is the symbol of the dot product. The matrix Mn stands for the final marked area of screened data for the object scan. To facilitate realization, the dilation of binary matrix can be expressed by binarization after convolution. So Eq. (11) can be rewritten as Mn ¼ BmðCn  BÞ  An

ð12Þ

where  is the symbol of convolution. The processing progress is illustrated in Fig. 3.
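To make the screening step concrete, the following sketch (an illustration only; the function name `velocity_test` and the dictionary of per-scan marked matrices are our own assumptions, not the paper's implementation) realizes Eqs. (8)-(12), with the dilation expressed as binarization after convolution as in Eq. (12):

```python
# Illustrative sketch of the grid-based velocity test, Eqs. (8)-(12).
# Assumes binary marked matrices already built with Eqs. (6)-(7).
import numpy as np
from scipy.signal import convolve2d

def velocity_test(marked, n, k, L_Br, L_Ba):
    """marked: dict scan index -> 2-D binary array over the (range, azimuth) grid."""
    # Eq. (8): sum the 2k reference scans around object scan n and binarize (Bm).
    ref_sum = sum(marked[m] for m in range(n - k, n + k + 1) if m != n)
    C_n = (ref_sum > 0).astype(np.uint8)
    # Rectangular structural element B of size L_Br x L_Ba (Eqs. (9)-(10)).
    B = np.ones((L_Br, L_Ba), dtype=np.uint8)
    # Eq. (12): dilation written as binarization after convolution.
    dilated = (convolve2d(C_n, B, mode="same") > 0).astype(np.uint8)
    # Eq. (11): element-wise product keeps only the points of scan n that fall
    # inside the neighborhood of points from the reference scans.
    return dilated * marked[n]   # M_n
```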


Fig. 3. The general view of the processing progress

4.2 Grid-Based Clustering Method

The screened data within the sliding window are marked on the matrices $M_{n-k}, \ldots, M_{n-1}, M_n, M_{n+1}, \ldots, M_{n+k}$. Here, the sliding window for track initiation has the same length as the sliding window for the velocity test. Compose the screened data into $P_n$ as shown in Eq. (13):

$$P_n = \mathrm{Bm}\left(M_{n-k} + \cdots + M_{n-1} + M_n + M_{n+1} + \cdots + M_{n+k}\right) \qquad (13)$$

$P_n$ is the marked matrix of all screened data within the sliding window. To cluster the screened data we again adopt a grid-based method. First, the previous grid is re-divided with a larger spacing; the new spacing matches the size of the rectangular structural element in Sect. 4.1, so the previous spacings Δr and Δa become $L_{Br} \cdot \Delta r$ and $L_{Ba} \cdot \Delta a$. $P_n$ is reintegrated into a new cluster marked matrix $D_n$ according to this new grid division. The data on $D_n$ are then clustered into groups using the eight-connectivity principle. The groups are arranged in descending order of the number of data they contain, and groups containing two or fewer data points are removed. The remaining groups are the object groups for the following processing, and their number is the probable minimum number of tracks in the surveillance scope.
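As a concrete illustration (not the authors' code; the function name, the use of SciPy's connected-component labelling, and the `min_points` parameter are our assumptions), the clustering step could be sketched as:

```python
# Illustrative sketch of the grid-based clustering step, Eq. (13) plus the
# coarser re-division of the grid and eight-connectivity grouping.
import numpy as np
from scipy.ndimage import label

def cluster_screened_data(M_window, L_Br, L_Ba, min_points=3):
    """M_window: list of the matrices M_{n-k}..M_{n+k}; returns the kept groups."""
    # Eq. (13): merge all screened scans of the window and binarize.
    P_n = (np.sum(M_window, axis=0) > 0).astype(np.uint8)
    rows, cols = np.nonzero(P_n)
    # Re-divide the grid with the larger spacing (L_Br * dr) x (L_Ba * da).
    D_n = np.zeros((P_n.shape[0] // L_Br + 1, P_n.shape[1] // L_Ba + 1), np.uint8)
    D_n[rows // L_Br, cols // L_Ba] = 1
    # Eight-connectivity labelling of the coarse cells.
    labels, n_groups = label(D_n, structure=np.ones((3, 3)))
    groups = []
    for g in range(1, n_groups + 1):
        in_group = labels[rows // L_Br, cols // L_Ba] == g
        if in_group.sum() >= min_points:        # drop groups with 2 or fewer points
            groups.append(list(zip(rows[in_group], cols[in_group])))
    # Descending order of the number of data points, as in the text.
    return sorted(groups, key=len, reverse=True)
```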

4.3 Hough Transform on Each Group Data

After data screening by the velocity test and data clustering, we obtain several groups of data for track initiation. We apply the Hough Transform to each group separately; the detailed process is based on the theory in Sect. 2. The only difference is the operation on the voting result. In the standard Hough Transform, track initiation is achieved only through the location of the maximum value in the voting space, but for multiple-target track initiation this may produce many redundant tracks near the true track. To overcome this flaw, we initiate tracks from the locations of the local maxima in the voting space (a small peak-picking sketch follows).
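A minimal peak-picking sketch (our own illustration; the window size and vote threshold are hypothetical parameters, not values from the paper):

```python
# Pick local maxima of the Hough voting matrix instead of the single global maximum.
import numpy as np
from scipy.ndimage import maximum_filter

def local_peaks(accumulator, size=5, min_votes=3):
    """Return (rho_index, theta_index) pairs of local maxima in the accumulator."""
    is_local_max = maximum_filter(accumulator, size=size) == accumulator
    peaks = np.argwhere(is_local_max & (accumulator >= min_votes))
    return [tuple(p) for p in peaks]
```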

5 Experiment Result

In this section, the proposed algorithm is tested on a simulated scenario. The computer configuration for the experiment is an Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz with 16.0 GB RAM, and all operations in this section run on Matlab R2018a. Simulation environment: three targets move in the surveillance scope, as shown in Fig. 4, and a large amount of clutter is added to the data.

Fig. 4. Simulated scenario

We apply the standard Hough Transform, the RT Hough Transform and the proposed algorithm respectively to the simulated data to achieve track initiation. The experiment results are shown below: Fig. 5a is the track initiation result of the standard Hough Transform, Fig. 5b is that of the RT Hough Transform, and Fig. 5c is that of the proposed algorithm.


Fig. 5. The track initiation results

The time consumption of the standard Hough Transform, the RT Hough Transform and the proposed algorithm is 0.95 s, 0.27 s and 0.49 s respectively. From the results, we can see that the proposed algorithm outperforms the traditional Hough-transform-based track initiation algorithms.

6 Conclusion

In this paper, we propose a robust Hough Transform-based track initiation method for multiple-target tracking in dense clutter. The method consists of three steps: the first step screens the data through a grid-based velocity test; the second step clusters the screened data into groups based on the eight-connectivity principle of the grid; and the last step applies the Hough Transform to the data of each group to achieve track initiation. The experiment results illustrate that the proposed algorithm outperforms traditional Hough-transform-based track initiation algorithms.


References

1. Duda RO, Hart PE (1972) Use of the Hough transformation to detect lines and curves in pictures. Commun ACM 15(1):11-15. https://doi.org/10.1145/361237.361242
2. Zhou X, Wang H, Cheng Y, Qin Y (2017) Radar coincidence imaging by exploiting the continuity of extended target. IET Radar Sonar Navig 11:60-69. https://doi.org/10.1049/iet-rsn.2015.0632
3. Leung H, Hu Z, Blanchette M (1996) Evaluation of multiple target track initiation techniques in real radar tracking environments. IEE Proc Radar Sonar Navig 143(4):246. https://doi.org/10.1049/ip-rsn:19960404
4. Benoudnine H, Meche A, Keche M, Ouamri A, Woolfson MS (2016) Real time Hough transform based track initiators in clutter. Inf Sci 337-338:82-92. https://doi.org/10.1016/j.ins.2015.12.021

Structure Design and Analysis of Space Omni-Directional Plasma Detector

Junfeng Wang(&), Tao Li, Hua Zhao, Qiongying Ren, Yi Zong, and Zhenyu Tang

Beijing Institute of Spacecraft Environment Engineering, Youyi Road 104, Haidian District, Beijing 100094, China, [email protected]

Abstract. This paper first introduces the necessity of developing the space omni-directional plasma detector and its detection targets, and then presents the physical, structural, electrical and mechanical constraints and design requirements under this background. In view of these constraints, and in order to ensure that the detector can realize space plasma detection and analysis, transfer its functions correctly and withstand the space environment simulation tests, the structure was designed and improved. By establishing a finite element mechanics model, acceptance-level mechanics simulations were carried out, including the modal analysis and frequency response of the detector, and the mechanical test data finally proved the rationality of the design. Finally, the application of composite materials to the space omni-directional plasma detector is proposed, which may become a direction for improving the design.

Keywords: Space omni-directional plasma detector · Structure · Design and analysis

1 Overview

The function of the space plasma probe is the receiving, processing, storage and transmission of signals, aiming at on-orbit monitoring of the orbital space environment in MEO (Medium Earth Orbit). For satellites in the outer Van Allen radiation belt, the orbit is slightly higher than that of GPS, and satellites in this region are seriously threatened by surface charging and discharging. A large number of observations have been made in this region abroad, mainly in the United States. For example, NASA launched the Van Allen Probes (originally named the Radiation Belt Storm Probes, RBSP) in 2012. In order to accurately observe the energy flux of particles (electrons, protons, nitrogen and oxygen) in the whole radiation belt (including the inner and outer belts), the energy ranges of the low-energy electron and ion detectors on that satellite reach 15 eV-50 keV and 1 eV-50 keV through special processing of the satellite body (static electricity, remanence, etc.). There are also other similar satellites such as THEMIS, POLAR and CLUSTER. Therefore, two omni-directional electron/ion detectors with an energy range of 10 eV-20 keV were finally selected, and the electrostatic analyzer was chosen as the detection scheme [1-3].

(This work was sponsored by the Space Multiparameter On-Orbit Perception and Evaluation Technology project of the Ministry of Science and Technology, No. 2015DFR80190.)

2 Constraints and Requirements of Design

2.1 Constraints of Physical Design

As mentioned above, the function of the space plasma probe is the receiving, processing, storage and transmission of signals. Among its parts, the probe structure of the detection section is the most complex. The probe is mainly composed of three parts: the electrostatic deflection system (deflection electrode), the energy analysis system (electrostatic analyzer) and the counting system (microchannel plate). From the perspective of the electron flight path, electrons pass successively through the electrostatic deflection system, the energy analysis system and the counting system. The deflection electrode and the electrostatic analyzer are required to work with static voltages of several kilovolts (Fig. 1).

Fig. 1. The structure of probe

2.2 Constraints of Electrical Design

As shown in Fig. 2, the potential of each part of the sensor is different, with potential differences of thousands of volts, so good electrical insulation or conduction needs to be ensured at the corresponding positions.

2.3 Constraints of Mechanical Design

The detector was mounted directly on the satellite, extending from the inside of the plate to the outside. The mounting surface was required to be smooth without protrusions, and the contact area should meet a flatness of 0.1 mm per 100 mm × 100 mm.

2.4 Design Criterion of Statics and Dynamics

a. Requirement for fundamental frequency: the lowest natural frequency > 140 Hz.
b. Requirement for strength: the structure is not destroyed under static overload and random vibration, and the safety margin > 1. The safety margin is defined as

$$M.S. = \frac{Sa}{Se} - 1 \qquad (2.1)$$

where M.S. is the safety margin, Sa is the permissive load or corresponding stress, and Se is the identification load or corresponding stress.

3 Design

Figure 2 shows the inner design of the detector. The design of the probe is rather complex, since it has to satisfy the physical, mechanical, electrical, thermal, static and dynamic constraints described above.

Fig. 2. The inner design of the detector


4 Simulation and Analysis by FEM

In view of the design improvements above, finite element simulation and a mechanical test are needed to verify their rationality.

4.1 Setting Up the Model

Figure 3 shows the finite element model. The nodes at the four mounting holes were given fixed constraints to eliminate all degrees of freedom.

Fig. 3. The model of FEM in Patran

4.2 Simulation Conditions

The test conditions of the random vibration are given in Table 1.

Table 1. Random vibration test conditions

Direction vertical to mounting surface          Direction parallel to mounting surface
Frequency (Hz)   Spectral density               Frequency (Hz)   Spectral density
10               0.05 g²/Hz                     10               0.035 g²/Hz
10-100           3 dB/oct                       10-100           3 dB/oct
100-400          0.2 g²/Hz                      100-400          0.2 g²/Hz
400-600          −6 dB/oct                      400-600          −6 dB/oct
600-800          0.15 g²/Hz                     600-800          0.12 g²/Hz
800-2000         −6 dB/oct                      800-2000         −6 dB/oct
2000             0.0284 g²/Hz                   2000             0.0178 g²/Hz


The main material composition is shown in Table 2.

Table 2. The main materials and their properties

Name                           Elastic modulus E (GPa)   Density ρ (g/cm³)   Poisson ratio
2A12-H112 (aluminium alloy)    70                        2.8                 0.30
Epoxy resin FR-4 (PCB)         14                        1.75                0.15

4.3 Modal Analysis and Frequency Response Analysis

The results of the modal analysis are shown in Table 3.

Table 3. The frequencies of orders 1-5 of the detector

Order            1     2     3     4     5
Frequency (Hz)   260   263   388   390   554

Fig. 4. The first order mode and frequency response stress nephogram


As shown in Fig. 4a, the first mode corresponds to vibration of the PCB at the bottom. Graphs (b), (c) and (d) are the stress-strain response nephograms for loading in the X, Y and Z directions respectively. The maximum stress at the response point is 2.86 MPa for X-direction loading, 3.12 MPa for Y and 3.84 MPa for Z. It can be seen that the fundamental frequency of the space omni-directional plasma detector is much higher than the required 100 Hz. The structural safety margin M.S. is calculated with Eq. (2.1); the ultimate strength of the aluminum alloy is σ = 275 MPa, while the maximum stress is only 3.84 MPa, so the structural safety margin is far greater than 1.
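Substituting the quoted values into Eq. (2.1), assuming (as the text implies) that Sa is the 275 MPa ultimate strength and Se the 3.84 MPa maximum response stress, gives

$$M.S. = \frac{Sa}{Se} - 1 = \frac{275\ \text{MPa}}{3.84\ \text{MPa}} - 1 \approx 70.6 > 1$$

so the strength criterion of Sect. 2.4 is met with a very large margin.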

5 Vibration Test

As shown in Fig. 5, the identification-level random vibration test was carried out on an electric servo vibration table in accordance with Table 1. In addition, telemetry, high-voltage scanning and other powered-on tests as well as electron-incidence tests were conducted before and after the vibration test, and no abnormalities were found. Therefore, the improved design passes the mechanical performance test.

Fig. 5. Vibration test and data of result

6 Conclusions

The structure of the space omni-directional plasma detector contains many complex components, especially within the constraints of the probe interior. Considering the layout, function, constraints and other factors, the structure was designed and improved in Sects. 2 and 3. In Sects. 4 and 5, through rigorous mechanical simulation and experimental verification, the mechanical test was successfully passed, proving that the detector can meet the complex functional requirements after selection and optimization. In the future, the proportion of aluminum alloy may be gradually reduced in favor of composite materials; in that case, the weight of the structure can be greatly reduced while still satisfying the mechanical strength constraints [4-6].


References

1. Wurz P, Balogh A, Coffey V, Dichter BK, Kasprzak WT, Lazarus AJ, Lennartsson W, McFadden JP (2007) Calibration techniques, in calibration of particle instruments in space physics. In: Wüest M, Evans DS, von Steiger R (eds) ISSI Scientific Report, vol 7. ISSI, Bern, Switzerland, pp 117-276
2. Rème H, Bosqued JM et al (1997) The Cluster Ion Spectrometry (CIS) experiment. Space Sci Rev 79:303-350
3. Su B, Kong L, Zhang A (2018) Design and simulation of light ion analyzer for SMILE satellite. J Astronaut 39(3):347-354
4. Yu MX, Zengyao DH (2006) Research evolution on the satellite-rocket mechanical environment analysis and test technology. J Astronaut 27(3):323-331
5. Li Z, Guo L, Zhang B, Qi F (2016) Study on mission application and key technologies of reusable spacecraft. Manned Spaceflight 22(5):570-575
6. Zhang Y (2014) Research of the mechanism of equivalence of sine sweep tests simulating environmental excitations. Master thesis, Harbin Institute of Technology, pp 23-25

Design and Implementation of GEO Battery Autonomous Management System for Lithium Battery with Balanced Control Function

Lijun Yang1(&), Bohan Chen2, Yan Du3, Liang Qiao1, and Jiaxiang Niu1

1 Beijing Institute of Spacecraft System Engineering, Beijing 10094, China, [email protected]
2 Institute of Manned Spacecraft System Engineering, China Academy of Space Technology, Beijing, China
3 Institute 706, Second Academy of China Aerospace Science and Industry Corporation, Beijing 100094, China

Abstract. To solve the problem of autonomous management of lithium batteries with balanced control in GEO satellites, this paper analyzes the requirements for the management of lithium batteries and presents the design scheme of a software system for their autonomous management, which includes charge and discharge management, mode conversion, autonomous balanced management, software self-protection against over-voltage, over-temperature and over-current charging, and bus overvoltage protection. The design and implementation of the system have been verified on a GEO satellite. The autonomous management software system proposed in this paper can serve as a reference for the autonomous management of lithium batteries with balanced control function on follow-up GEO satellites.

Keywords: Lithium batteries · Autonomous management · Balanced control function

1 Introduction

The existing power systems of GEO satellites in China are usually equipped with nickel-hydrogen batteries. With the increase of satellite load power, nickel-hydrogen batteries can no longer meet the weight requirements. After extensive argumentation, lithium-ion batteries are generally used to replace nickel-hydrogen batteries on satellites, which reduces the weight of the satellite system. Among batteries of various chemical compositions, lithium batteries have the advantages of high specific energy, high cell voltage, stable discharge voltage, low self-discharge rate, no memory effect and no pollution [1, 2]. However, due to limitations of the process technology, the characteristics of individual cells are not completely identical [3]. Lithium batteries with unbalanced performance are prone to overcharge, overvoltage and over-temperature when used in groups, which seriously affects the service life of the battery and may even cause safety hazards [4, 5]. The lithium battery autonomous management system designed and implemented in this paper realizes autonomous balanced management, self-protection against over-voltage, over-temperature and over-current charging, and bus overvoltage protection. The system independently manages the charging and discharging of the lithium battery and effectively overcomes the adverse effects of battery inconsistency on battery life [6, 7].

2 Requirements Analysis

The autonomous management system for lithium-ion batteries (hereinafter referred to as the autonomous system) is mainly responsible for the on-orbit management functions of the lithium-ion battery packs (autonomous charge and discharge management, autonomous balanced management, autonomous overcharge protection management, etc.) and, at the same time, for partial fault monitoring of the lithium batteries (battery voltage validity, channel switching, autonomous overvoltage monitoring, single-battery status monitoring, etc.). The software function requirements block diagram is shown in Fig. 1.

Fig. 1. Requirements block diagram


Specific requirements are as follows:

(1) Through ground commands, the lithium battery pack charging function can be started and terminated autonomously; the operating status (enabled or disabled) of each function module of the autonomous system can be controlled; the control thresholds, parameters and processing coefficients of the autonomous system can be modified; and the main/backup hardware modules and control parameters used by the power management software can be selected. The above status changes can be transmitted through telemetry.
(2) Automatically detect and report the working mode of the lithium battery packs; automatically detect the entering-shadow and leaving-shadow flags of the satellite during the shadow period, provide the working mode of the battery packs, and set the working mode of the charging adjustment module of the power controller.
(3) Complete the constant-current, voltage-limited charge and discharge control of the lithium battery; perform autonomous balanced management of the battery packs according to the cell voltages.
(4) Provide software over-voltage protection (Soft OVP), over-temperature protection (Soft OTP) and over-current protection (Soft OCP) for the lithium battery packs during charging.
(5) Complete autonomous channel switching, self-determination of the voltage validity of the lithium battery packs, and PCU bus overvoltage protection monitoring with independent recovery.

3 Design of Autonomous System

3.1 Architecture Design

The autonomous system runs on the central computer and realizes the power auto-management function by collecting telemetry parameters from the power controller and the battery packs. The TM/TC interface of the power controller has a main/backup configuration: the main interface, via remote unit A, and the backup interface, via remote unit B, provide telemetry parameters to the data management system through the DS channel and receive remote control commands through the memory loading command (ML) channel. The power distribution interface unit interfaces with remote unit C through the CSB bus, remote unit C interfaces with the central computer through the 1553B bus, and the power distribution interface unit realizes the collection of the battery cell voltages. The architecture is shown in Fig. 2.



Fig. 2. Architecture design

3.2 Autonomous Management Module

The autonomous system is divided into an autonomous charge and discharge management module, an autonomous balanced management module, a self-protection module for over-voltage, over-temperature and over-current, and an autonomous bus overvoltage protection module. The autonomous charge and discharge management module contains a power supply autonomous command sub-module, a battery pack unit voltage validity detection sub-module, a lithium battery charging management sub-module, a lithium battery discharge management sub-module, and a channel autonomous switching sub-module. The overcharge protection management module includes an autonomous voltage overcharge protection module, an autonomous temperature overcharge protection module, and an autonomous current overcharge protection module. The data flow between modules is shown in Fig. 3.



Fig. 3. Flowchart of autonomous management module

4 Implement the Application

4.1 Autonomous Charge and Discharge Management

The autonomous charge and discharge management can be divided into several working modes: pause mode, hold mode, supplementary charging mode (constant-current charging, south battery constant-current continuous charging, north battery constant-current continuous charging), discharge mode, and charging mode (constant-current charging, south battery current-decreasing continuous charging, north battery current-decreasing continuous charging). Figure 4 shows the workflow of the autonomous charge and discharge management of the lithium battery packs. In the figure, C1 is the discharge criterion of the lithium battery (entering-shadow criterion), C2 is the charging criterion (leaving-shadow criterion), C3 is the take-turns/continuous charging end criterion of the north battery, C4 is the take-turns/continuous charging end criterion of the south battery, C5 is the criterion for the battery packs automatically transferring to hold mode, C6 is the north or south supplementary charging criterion, C7 is the north battery pack supplementary charging end criterion, and C8 is the south battery pack supplementary charging end criterion.


Fig. 4. Autonomous charge and discharge workflow

Pause mode: the autonomous system enters pause mode by default at startup. During each eclipse season, after the daily charging of the north and south batteries is completed, the C3/C4 criterion is established and the system enters pause mode. On entering pause mode, the discharge capacity is reset to zero.

Hold mode: hold mode can be entered from pause mode, either autonomously (when the C5 criterion is established) or by ground remote command. When the C7/C8 criteria of the north and south batteries are established, the software also enters hold mode. After entering hold mode, when the long-illumination-period self-management function is enabled, the charge adjustment module is set to take-turns charging mode and the charging current is set to zero. In hold mode, if the C6 criterion is established, supplementary charging mode is entered.

Supplementary charging mode: when the long-illumination-period self-management function is enabled, the charge adjustment module is set to take-turns charging mode. In take-turns charging mode, if the C7 criterion of the north battery is established and the north battery has no over-temperature or over-voltage protection active, the module is set to south continuous charging mode; if the C8 criterion of the south battery is established and the south battery has no over-temperature or over-voltage protection active, the module is set to north continuous charging mode. If the C8 criterion is established in south continuous charging mode, or the C7 criterion in north continuous charging mode, the charging current is set to zero and the system enters hold mode.

Discharge mode: except in discharge mode, the C1 criterion is monitored, and when it is established the system enters discharge mode. The software then checks whether the temperature control threshold is the earth-shadow threshold; if it is still the illumination-period threshold, the software changes the battery pack temperature control threshold to the earth-shadow-period threshold. If the C2 criterion is established, the system automatically enters charging mode.

Charge mode: in each charging cycle the software calculates the cumulative discharge capacity CD of each battery pack and transmits it as a software telemetry parameter:

$$C_D = C_{D0} - \int_{T_0}^{T_0 + \Delta t} I_c \, dt \, / \, K \qquad (1)$$

where CD0 is the cumulative discharge capacity when the previous discharge mode was exited, including CD0-S and CD0-N, which respectively represent the cumulative discharge capacity when the south/north battery pack exits discharge mode; T0 is the charging start time; Δt is the charging cycle of each battery pack; IC-S (main/backup) and IC-N (main/backup) are the actual charging current values of the south/north battery packs; and K is the charge-discharge ratio of the battery pack, which can be independently set by remote command. During the charging process, once the accumulated discharge capacity reaches zero, the accumulation stops regardless of whether charging continues. The satellite uses constant-current, voltage-limited charging. During charging the discharge criterion C1 may appear (an abnormal state), in which case CD0 keeps the last value calculated during charging. The status of the DS channel, ML channel and charge adjustment module is monitored at all times; if an abnormality is detected and switching is enabled, the system switches autonomously. When the C3/C4 criterion is established, the system enters pause mode.
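A minimal numerical form of Eq. (1), accumulated once per charging telemetry cycle (the function name and argument names are our own; the stop-at-zero rule follows the text):

```python
# Illustrative per-cycle update of the accumulated discharge capacity CD, Eq. (1):
# each charging cycle removes (charging current * cycle length) / K from CD.
def update_discharge_capacity(cd, charge_current_a, cycle_s, k_ratio):
    """cd: remaining accumulated discharge capacity (A*s); returns the updated value."""
    if cd <= 0.0:
        return 0.0                  # accumulation stops once CD has reached zero
    return max(cd - charge_current_a * cycle_s / k_ratio, 0.0)
```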

4.2 Autonomous Balanced Management

The basic principle of the equalization strategy is to divert part of the current from cells with a high charging voltage so that the cells with a low charging voltage gradually catch up, achieving synchronization. In order to avoid adverse effects of equalization on battery performance, and considering that the cell voltage corresponds well to the cell state of charge, equalization is started when the cell voltage difference exceeds ΔV1, limiting the growth of cell dispersion, and stopped when it falls below ΔV2. For a group of lithium cells whose voltage detection is valid and which have not been rejected, equalization control is applied; the cell voltages are denoted Vi (i = 1 to the number of cells n). During the ground shadow period, proceed as follows:

Vi − Vmin > 0.5·ΔV1: the shunt resistance switch of the i-th battery string is turned on;
0 < Vi − Vmin < ΔV2: the shunt resistance switch of the i-th battery string is turned off;
ΔV2 ≤ Vi − Vmin ≤ 0.5·ΔV1: the shunt resistance switch of the i-th battery string maintains its original state;
Vi − Vmin ≤ 0: the shunt resistance switch of the i-th battery string maintains its original state.

During the long-illumination period, proceed as follows:

Vi − Vmin > ΔV1: the shunt resistance switch of the i-th battery string is turned on;
0 < Vi − Vmin < 0.5·ΔV1: the shunt resistance switch of the i-th battery string is turned off;
0.5·ΔV1 ≤ Vi − Vmin ≤ ΔV1: the shunt resistance switch of the i-th battery string maintains its original state;
Vi − Vmin ≤ 0: the shunt resistance switch of the i-th battery string maintains its original state.

When sending the corresponding equalization instruction, the current bus usage status must be checked, the corresponding instruction code organized and sent to remote unit C, and the type of equalization switch on/off instruction sent must be recorded. A code sketch of the shadow-period rules follows.
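A minimal sketch of the shadow-period rules above (the switch-state encoding and function name are assumptions; ΔV1 and ΔV2 are the thresholds defined in the text):

```python
# Illustrative ground-shadow-period equalization decision for one battery pack.
def shadow_period_balance(cell_voltages, switch_on, dv1, dv2):
    """Return updated shunt-resistor switch states (True = shunt on)."""
    v_min = min(cell_voltages)
    new_state = list(switch_on)
    for i, v in enumerate(cell_voltages):
        dv = v - v_min
        if dv > 0.5 * dv1:
            new_state[i] = True          # divert current from the high cell
        elif 0 < dv < dv2:
            new_state[i] = False         # stop equalizing this cell
        # dv2 <= dv <= 0.5*dv1, or dv <= 0: keep the original switch state
    return new_state
```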

4.3 Autonomous Overcharge Protection Management

During the charging process of the lithium battery, the overall pack voltage, battery temperature, charging current and discharge capacity are monitored, and the software system performs voltage, temperature and current charging protection on the lithium battery packs.

Over-Voltage Charging Protection. During high-current charging or supplementary charging of the lithium battery packs, if the pack voltage is greater than the set threshold, charging is stopped and the charging current is set to 0, so that charging of the pack is temporarily suspended. Recovery of charging is accomplished by changing the working mode of the charge adjustment module.

Over-Temperature Charging Protection. During charging, if the battery pack temperature (the median of the temperature measurements of the same pack) is greater than the temperature threshold, charging is stopped, the charging current is set to 0, and the charge adjustment module mode is switched according to a certain logic. Under-temperature protection follows the same principle as over-temperature protection.

Over-Current Charging Protection. During charging, the upper limit of the charging current of the lithium battery packs has a defined correspondence with the pack temperature (given according to the battery performance). When the charging current exceeds this upper limit, a command is sent to reduce the charging current, setting the pack charging current to the default value at that temperature. If the pack temperature is not within the threshold range, the over-temperature protection is activated with reference to this correspondence.
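A compact sketch of the three software protections (the thresholds and the temperature-to-current-limit table are hypothetical inputs; the real system also switches the charge adjustment module mode, which is omitted here):

```python
# Illustrative Soft OVP / OTP / OCP checks applied to one battery pack.
def charging_protection(pack_v, pack_temp_c, charge_current_a,
                        v_max, temp_max_c, current_limit_by_temp):
    """Return the commanded charging current; 0.0 means charging is stopped."""
    if pack_v > v_max:                 # Soft OVP: stop charging on over-voltage
        return 0.0
    if pack_temp_c > temp_max_c:       # Soft OTP: stop charging on over-temperature
        return 0.0
    limit = current_limit_by_temp(pack_temp_c)   # Soft OCP: temperature-dependent limit
    return min(charge_current_a, limit)          # reduce the current if it exceeds the limit
```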

4.4 Autonomous Overvoltage Protection Management

Under normal conditions, the overvoltage protection load is not in the on state; it is turned on automatically when a transient overvoltage appears on the bus. Under normal circumstances the probability of the load being turned on accidentally during on-orbit flight is very small, but once it is connected (normally or accidentally) a protective reset is required to avoid keeping the load on for a long time. The power management software therefore provides an "overvoltage protection load accidentally connected" monitor, which can be enabled or disabled by ground software control commands. When the software detects bus overvoltage protection (the overvoltage protection power tube turned on for three consecutive telemetry formats), the OVP load switch is considered to have been turned on accidentally and the reset commands are sent automatically (send sequence: A reset → B reset, with a command interval of X ms-Y ms); the software sends the bus overvoltage protection reset command through remote units A and B respectively.

5 Conclusion

After analyzing the requirements of the autonomous management system, this paper designs and implements an autonomous management system for lithium batteries with balanced control function. Its functions and performance have been fully verified in system tests. It is currently operating in orbit on a GEO satellite, has gone through both eclipse seasons and long-illumination seasons, and is in good condition on orbit. The design and implementation of this system can serve as a reference for the new generation of GEO satellites with lithium batteries.

References 1. Jinding Z (2011) Design and research of power lithium ion battery management system based on MSP430. Hunan University of Science and Technology, Hunan 2. Zhang H, Sun Y, Zhuang S (2007) Research on equalization circuit of lithium battery based on TL431. Appl Electr Technol 9:144–146 3. Liu T (2016) Design and implementation of a lithium battery management equipment for high orbit satellites. In: Power supply technology, p 9 4. Xie H (2019) Ground constant current and constant voltage charging technology of lithium batteries for high orbit satellites. In: Power supply technology, p 3 5. Cui B, Chen S, Li X, Zhu L (2017) Autonomous management design of high-orbit satellite lithium ion battery pack. In: Spacecraft Engineering, p 1 6. Yang S, Zhao C (2017) Design of a high power SAR satellite power system spacecraft engineering, p 3 7. Wang K, Xu W, Zhang T, Zhang W (2017) Power controller design based on communication satellite TDMA load. In: Power supply technology, p 7

Target Direction Finding in HFSWR Sea Clutter Based on FRFT

Shuai Shao1, Changjun Yu1, Aijun Liu1, Yulin Hu2, and Bo Li1(&)

1 Harbin Institute of Technology (Weihai), Weihai, China, [email protected]
2 Rhein Westfal TH Aachen University, Aachen, Germany

Abstract. In high frequency surface wave radar (HFSWR), the Doppler frequencies of vessel targets may overlap the first-order Doppler frequencies of the sea clutter interference, which makes it hard to discover the vessel targets and to estimate their direction of arrival (DOA). In this paper, a target direction finding method based on the fractional Fourier transform (FRFT) is introduced for uniformly accelerating/decelerating targets, which uses the difference in frequency characteristics between the targets and the first-order sea clutter. Before the target DOA estimation, the sea clutter is suppressed by the FRFT method. The effectiveness of the method is verified by simulations; the results demonstrate that our method can estimate the DOA of targets located in the first-order Doppler spectrum of the sea clutter.

Keywords: HFSWR · Sea clutter · FRFT · DOA

1 Introduction

HFSWR transmits vertically polarized electromagnetic waves that propagate along the sea surface, which gives it the ability to detect targets beyond the horizon. Its merits are all-weather operation, real-time processing and monitoring of an extensive region, making HFSWR an outstanding instrument for maritime target monitoring and control, and it plays an important role in monitoring the 200-nautical-mile exclusive economic zone. However, the echo signals received by the array antenna are affected by various kinds of radio interference, sea clutter and ionospheric clutter. Among these interferences, first-order sea clutter is one of the dominant sources; it arises from reflection off ocean waves whose wavelength is half that of the radar operating wave [1]. In particular, the Doppler shifts of moving ship targets can overlap the first-order sea clutter Doppler spectrum. This causes trouble for target direction finding in HFSWR, because the sea clutter can cover the target in the Doppler frequency domain and leads to a blind zone [2].

In order to improve the target detection and direction finding performance of HFSWR, many researchers have carried out extensive studies. To improve the resolution of the multiple signal classification (MUSIC) algorithm in a spatially colored noise environment, literature [3] introduced pre-whitening and post-processing into MUSIC. The problem of detecting ships within the first-order Bragg spectrum of HFSWR was addressed by a method that utilizes space-time adaptive processing [4]. A modified algorithm is proposed to detect ships near the Bragg peaks in HFSWR in literature [5]. Based on the space-time distribution of first-order sea clutter, a vessel detection technique for the broadened sea clutter spectrum is proposed in [6]. A time-frequency method for detecting small accelerating targets in sea clutter with HFSWR is designed in [7]. The performance of high-resolution methods for re-estimating the detected azimuth after decreasing the physical aperture of an HFSWR linear array is evaluated in literature [8]. Literature [9] studies the influence of sea clutter on the detection performance for two classes of vessels in HFSWR. A fast maneuvering-target detection algorithm for HFSWR based on time-frequency analysis is proposed, which combines constant false alarm rate (CFAR) detection and time-frequency analysis [10]. Multiplicative beamforming was introduced in HFSWR to improve azimuth resolution [11]. A spatial blind filtering approach based on oblique projection is proposed for vessel target detection in sea clutter in literature [2]. A novel approach for HFSWR target tracking based on the combination of detection and tracking is proposed, which realizes detection and tracking integration [12]. A dual-frequency HFSWR target detection method based on range-Doppler (RD) spectrum image fusion is proposed in literature [13]. A space-time joint estimation method is proposed for target detection with multiple targets and sea clutter [14]. Literature [15] proposed a method for filtering the first-order sea clutter components based on the combination of singular value decomposition (SVD) and FRFT.

In HFSWR it is a common phenomenon that the target Doppler frequency overlaps with the first-order sea clutter Doppler spectrum. This makes such targets difficult to detect, and their direction of arrival estimation is also a problem. In this paper, we propose a direction finding method based on FRFT for uniformly accelerating/decelerating targets in the first-order sea clutter. The proposed method utilizes the difference in frequency characteristics between the targets and the sea clutter: after HFSWR processing, the sea clutter appears as a single-frequency signal while a uniformly accelerating target appears as a linear frequency modulation signal, so the FRFT facilitates target signal extraction and the subsequent direction of arrival estimation. In this way, target direction finding in HFSWR sea clutter based on FRFT is realized.

The remaining sections of this paper are organized as follows. In Sect. 2, the signal model is introduced. In Sect. 3, the direction finding method based on FRFT is proposed. In Sect. 4, simulations are used to validate the proposed approach and the target DOA estimation is provided. Section 5 finally concludes the present work.

2 Target and Sea Clutter Signal Model

For the target detection and parameter estimation situation in HFSWR, a vessel can be regarded as a point target from one orientation, whereas the sea clutter comes from many directions determined by the spatial distribution of the sea waves, because the sea waves produce resonant reflection. The HFSWR receiving antenna array consists of M omnidirectional antenna elements with uniform spacing d. The y direction is perpendicular to the antenna array, and the first receiving element is treated as the direction reference. For a range bin, the data are saved both in the time domain and in the Doppler frequency domain. Since vessel detection is generally performed in the Doppler frequency domain after coherent integration, the Doppler frequency data for the range gate are used. The received array signal can therefore be written as

$$x(f) = \sum_{k=1}^{J} A\, s(f_k) + \sum_{k=1}^{K} B_k\, c(f_k) + n(f) \qquad (1)$$

where $s(f_k)$ is the spectrum of the vessel target when it contains J different Doppler frequency components, $c(f_k)$ is the sea clutter spectrum when it contains K different Doppler frequency components, the vessel target is submerged in the sea clutter (K > J), $n(f)$ is additive white Gaussian noise, and A and $B_k$ are the target and sea clutter steering vectors respectively, given by

$$A = a(\theta_0) = \begin{bmatrix} 1 & e^{j 2\pi d \sin\theta_0 f_0 / c} & \cdots & e^{j(M-1) 2\pi d \sin\theta_0 f_0 / c} \end{bmatrix}^T \qquad (2)$$

$$B_k = \begin{bmatrix} a_{1k}(\theta_{1k}) & \cdots & a_{L_k k}(\theta_{L_k k}) \end{bmatrix}^T \qquad (3)$$

where $\theta$ is the direction angle, $f_0$ is the HFSWR carrier frequency, c is the speed of light, the superscript T denotes the transpose operation, $L_k$ is the number of spatially received sea clutter components in the kth Doppler frequency, and every steering vector in $B_k$ has the same form as $a(\theta_0)$ except that the direction angles are random.

Sea clutter interference is the most common interference arising in HFSWR and can submerge targets whose Doppler frequencies are similar to its own. Normally the sea clutter interference is uncorrelated with the HFSWR emission signal and its duration is much longer than the pulse repetition interval (PRI) of the radar; therefore the range transformation performed by the matched filter spreads the sea clutter interference into most range cells. The Doppler frequency position of the sea clutter is mainly determined by the radar carrier frequency. Figure 1 shows the range-Doppler distribution of sea clutter interference in experimental HFSWR data; the Doppler components around 0.22 and −0.21 Hz are due to the sea clutter. Clearly, targets whose Doppler frequencies coincide with those of the sea clutter will be masked.
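For illustration, the steering vector of Eq. (2) and a toy Doppler-domain snapshot of Eq. (1) can be generated as follows (the element spacing and amplitudes are example values of our own, not parameters from the paper):

```python
# Build the uniform-line-array steering vector of Eq. (2) and a toy snapshot of Eq. (1).
import numpy as np

def steering_vector(theta_deg, m_elems, d, f0, c=3e8):
    """a(theta) for an M-element uniform line array with spacing d, Eq. (2)."""
    phase = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg)) * f0 / c
    return np.exp(1j * phase * np.arange(m_elems))

# Example: a target from 30 deg plus one clutter component from a random angle.
M, d, f0 = 8, 25.0, 6e6                       # assumed geometry (half-wavelength spacing)
x_f = (steering_vector(30.0, M, d, f0)
       + 2.0 * steering_vector(np.random.uniform(-60, 60), M, d, f0)
       + 0.1 * (np.random.randn(M) + 1j * np.random.randn(M)) / np.sqrt(2))
```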


Fig. 1. Range-doppler map of sea clutter interference affected experimental data

3 Direction Finding Based on FRFT

The FRFT represents the signal in an orthonormal basis of chirps, which makes it more suitable for nonstationary target signals than the conventional Fourier transform. The FRFT of order p of a function x(t) at an angle α is expressed as [15]

$$X_\alpha(t_p) = \int_{-\infty}^{\infty} x(t)\, K_\alpha(t, t_p)\, dt \qquad (4)$$

in which $K_\alpha(t, t_p)$ denotes the transformation kernel function and $t_p$ is the transform variable of t in the pth-order domain. For a chirp signal, the optimum order $p_{opt}$ takes the unified form

$$p_{opt} = \frac{2}{\pi} \operatorname{arccot}(k) \qquad (5)$$

On a range unit, the sea clutter takes the form of a single-frequency signal after the range transformation, while a uniformly accelerating target appears as a linear frequency modulation signal. The sea clutter is therefore suppressed in the fractional-order domain while the target energy is retained. The specific framework is shown in Fig. 2, and the flow of the algorithm is described below (a sketch of the pipeline follows the list).

Step 1: Obtain the time series of a given range gate after the range transformation.
Step 2: Apply the FRFT to the time series obtained in Step 1.
Step 3: Suppress the sea clutter in the fractional-order domain.
Step 4: Apply the inverse FRFT (IFRFT) to obtain the time series after clutter suppression.
Step 5: Repeat the first four steps for each channel, then perform DOA estimation on the array data.
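The pipeline can be sketched as below. A discrete FRFT is not part of NumPy/SciPy, so `frft` and `ifrft` are hypothetical stand-ins for whatever fractional Fourier implementation is available, and `doa_estimator` stands for any array DOA method such as MUSIC:

```python
# Sketch of the five-step flow: per-channel FRFT clutter suppression, then DOA.
import numpy as np

def suppress_clutter_one_channel(x, p_opt, frft, ifrft):
    """Steps 2-4: FRFT, keep only the dominant chirp peak, inverse FRFT."""
    X = frft(x, p_opt)                       # step 2: fractional-order domain
    keep = np.zeros_like(X)
    idx = np.argmax(np.abs(X))
    keep[idx] = X[idx]                       # step 3: retain only the maximum
    return ifrft(keep, p_opt)                # step 4: back to the time domain

def direction_finding(channels, p_opt, frft, ifrft, doa_estimator):
    """Step 5: clutter-suppress every channel, then estimate the DOA."""
    cleaned = np.vstack([suppress_clutter_one_channel(x, p_opt, frft, ifrft)
                         for x in channels])
    return doa_estimator(cleaned)
```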


Fig. 2. Framework direction finding based on FRFT

4 Simulation

In this part, the presented method is examined by simulations. The radar in the simulation consists of a receiver with an eight-element line array, the data length is 512 snapshots, and the working frequency is 6 MHz. Both the target and the sea clutter come from 30°. The frequency of the uniformly accelerating target varies from 0 to 1.2 Hz, and the corresponding frequency of the sea clutter is 0.25 Hz. Since HFSWR emits an interrupted continuous wave with linear frequency modulation, the frequencies are broadened by the range transformation, because the Fourier transform of the gate corresponds to a sinc function. The time-frequency distribution of the time series on one range gate is shown in Fig. 3. In the first 200 snapshots the target frequency and the sea clutter frequency are particularly close, so it is difficult to separate the target from the sea clutter and estimate the target DOA. The FRFT is applied to the time series of the range gate to obtain the fractional-order-domain signal shown in Fig. 4; the energy of the linear frequency modulation signal of the uniformly accelerating target is concentrated along one direction. Only the maximum value is retained, the other values are eliminated, and then the IFRFT is performed. This process completes the sea clutter suppression and prepares for the target DOA estimation. The time-frequency distribution of the time series after clutter suppression is shown in Fig. 5: the sea clutter on the range gate is suppressed, leaving only the target information. DOA estimation of the time series is then performed, for example with the MUSIC algorithm, to obtain the orientation of the target. The DOA estimation results are shown in Fig. 6; the target orientation is approximately 30°.


Fig. 3. The time-frequency distribution

Fig. 4. Fractional order domain signal of time series on a distance gate


Fig. 5. Fractional order domain signal

Fig. 6. The DOA estimation

5 Conclusion

The Doppler frequencies of ship targets may overlap the first-order sea clutter. In this paper, a target direction finding method based on the fractional Fourier transform (FRFT) was introduced for uniformly accelerating/decelerating targets, which utilizes the difference in frequency characteristics between the targets and the sea clutter. Before the target DOA estimation, the sea clutter is suppressed by the FRFT method. From the time-frequency distribution simulation results, it can be seen that the sea clutter can be suppressed, and on this basis the target DOA estimation is achieved, proving the effectiveness of the method. The multi-target case will be addressed in future work.


Acknowledgements. This work was supported in part by the National Key R&D Program of China under Grant No. 2017YFC1405202, in part by the National Natural Science Foundation of China under Grant No. 61571157 and Grant No. 61571159, in part by the Public Science and Technology Research Funds Projects of Ocean under Grant No. 201505002, in part by the Natural Science Foundation of Shandong Province under Grant No. ZR2018PF001.

References 1. Barrick D (1972) First-order theory and analysis of mf/hf/vhf scatter from the sea. IEEE Trans Antenn Propag 20(1):2–10 2. Wang Y, Mao X, Zhang J, Ji Y (2016) Ship target detection in sea clutter of HFSWR based on spatial blind filtering. In: International radar conference. IET 3. Xie J, Yuan Y, Liu Y (1998) Super-resolution processing for HF surface wave radar based on pre-whitened music. IEEE J Oceanic Eng 23(4):313–321 4. Xie J, Yuan Y, Liu Y (2001) Experimental analysis of sea clutter in shipborne HFSWR. IEE Proc Radar, Sonar Navig 148(2):67–71 5. Leong H, Helleur C, Rey M (2002) Ship detection and tracking using HF surface wave radar. Radar, IET 6. Xie J, Yuan Y, Liu Y (2002) Suppression of sea clutter with orthogonal weighting for target detection in shipborne hfswr. IEE Proc Radar, Sonar Navig 149(1):39–44 7. Yasotharan A, Thayaparan T (2006) Time-frequency method for detecting an accelerating target in sea clutter. IEEE Trans Aerosp Electron Syst 42(4):1289–1310 8. Wang J, Riddolls R, Ponsford A (2007) Music-enhanced CFAR for high frequency over-thehorizon radar. In: Radar Conference. IEEE 9. Leong H, Ponsford A (2008) The effects of sea clutter on the performance of HF surface wave radar in ship detection. In: 2008 IEEE radar conference, pp 1–6 10. Dakovic M, Thayaparan T, Stankovic L (2010) Time-frequency-based detection of fast manoeuvring targets. IET Signal Proc 4(3):1 11. Guinvarc HR, Gillard R, Uguen B, El-Khoury J (2012) Improving the azimuthal resolution of HFSWR with multiplicative beamforming. IEEE Geosci Remote Sens Lett 9(5):925 12. Ji Y, Zhang J, Wang Y, Chang G, Sun W (2017) Performance analysis of target detection with compact HFSWR. In: CIE international conference on radar. IEEE 13. Zhang J, Ji Y, Wang Y, Chu X (2016) Vessel target detection based on fusion range-doppler image for dual-frequency high-frequency surface wave radar. IET Radar Sonar Navig 10 (2):333–340 14. Li Q, Zhang W, Li M, Niu J, Jonathan W (2017) Automatic detection of ship targets based on wavelet transform for HF surface wavelet radar. IEEE Geosci Remote Sens Lett 14 (5):714–718 15. Chen Z, He C, Zhao C, Xie F (2017) Using SVD-FRFT filtering to suppress first-order sea clutter in HFSWR. IEEE Geosci Remote Sens Lett 14(7):1076–1080

Adaptive Non-uniform Clustering Routing Protocol Design in Wireless Sensor Networks

Qingtian Zeng(&), Tianyi Zhang, Geng Chen(&), and Ge Song(&)

College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao 266590, China, [email protected], {gengchen,songge}@sdust.edu.cn

Abstract. This paper proposes a clustering routing protocol for wireless sensor networks that combines non-uniform clustering and inter-cluster multi-hop routing, denoted the Adaptive Unequal Clustering Routing protocol (AUCR). In this protocol, a candidate cluster head uses its own energy, the surrounding node density and the average energy of the nodes within the cluster radius to calculate its cluster-head declaration time. After clustering, each cluster head reaches the sink node by forwarding control information, and the sink node generates the routing table through the artificial bee colony algorithm to complete the data transmission. Each cluster head dynamically adjusts its cluster-size parameter through the data transmission process and the information exchange with surrounding cluster heads, and adjusts the cluster size of the common nodes in its cluster by broadcasting. Simulation results show that the adaptive clustering protocol can quickly adapt to network conditions, reduce node energy consumption, enhance network balance, and extend the network life cycle.

Keywords: Wireless sensor network · Routing protocol · Non-uniform clustering · Energy consumption

1 Introduction

The wireless sensor network is composed of a large number of sensor nodes with communication capability. The nodes collect information and disseminate it by wireless communication, and they are widely used in industrial control, environmental monitoring, agricultural production, and defence and military fields. A sensor node usually carries a fixed battery that cannot be recharged, and it sends the collected data to the sink node or base station at a fixed period. Because the node energy cannot be replenished, it often becomes the key factor restricting the network lifetime. In order to save node energy and enhance the balance of the network load, clustering and data aggregation methods are often used. Data aggregation reduces the amount of data flowing into the network, thereby reducing the overall energy consumption. Clustering divides the wireless sensor nodes into groups: ordinary nodes transfer their data to the cluster head node, and the cluster head sends the data out after data fusion. However, the emergence of clusters actually increases the load imbalance of the network, since different nodes play different roles; some nodes die rapidly, resulting in shrinkage of the network coverage or even network failure. In this context, this paper proposes a clustering protocol to alleviate the network load imbalance and reduce network energy consumption.

The LEACH [1] protocol is one of the earliest clustering protocols. It uses randomly generated numbers to elect cluster heads, and the emergence of clusters reduces the overall energy consumption of the sensor network. However, the way LEACH elects cluster heads makes their geographical distribution uncertain, which makes the lifetime of the LEACH protocol equally uncertain. Therefore, the EECS [2] protocol uses timed broadcasting so that the network forms clusters with an approximately uniform distribution; common nodes choose their cluster head according to the distance to the cluster head node and the distance from the cluster head node to the sink node, so that cluster heads near the sink have more child nodes. In clustering routing protocols, however, cluster heads not only receive and fuse the data of the nodes in their cluster but also send the data to the sink node in a single hop, so cluster heads farther from the sink consume more energy and die faster. Both LEACH and EECS communicate with the sink in single-hop mode, which consumes excessive energy in large-scale wireless sensor networks, and multi-hop communication modes were therefore created. For example, the PEGASIS [3] protocol links nodes into chains and only a few nodes communicate with the sink in each round; it is the predecessor of multi-hop routing protocols. The multi-hop approach shortens the transmission distance of each node and reduces the energy consumption of peripheral nodes, but it makes the nodes near the sink bear greater forwarding pressure. To balance the network load, the concept of non-uniform clustering was proposed in UCS [4] and improved by the EEUC [5] protocol. EEUC combines non-uniform clustering with inter-cluster multi-hop routing; by using a non-uniform cluster radius, the number of child nodes in clusters near the base station is relatively small, thus balancing the energy consumption of inter-cluster data forwarding and of the cluster heads. In EEUC, however, the cluster radius is related only to the location of the node, while the load of a node is not determined by its location alone; moreover, the routing paths generated by EEUC have high communication complexity and energy consumption.

This paper proposes an efficient and balanced non-uniform clustering protocol, AUCR, which combines non-uniform clustering and inter-cluster multi-hop routing. Nodes compete for cluster heads by timed broadcast, and the routing paths for inter-cluster communication are calculated by the artificial bee colony (ABC) algorithm [6]. During routing communication, each cluster head dynamically adjusts its cluster radius by sensing the state of the surrounding cluster heads, and adjusts the cluster radius of the surrounding ordinary nodes by broadcasting [7].


2 System Model

2.1 Network Model and Assumptions

This protocol is established on the following assumptions:
(1) The network consists of one sink node and N common nodes.
(2) Ordinary nodes have the same status and the same initial cluster radius, and each can act as a cluster head or as an ordinary node.
(3) Ordinary nodes are randomly distributed in the monitoring area; their initial energy is identical and cannot be replenished.
(4) The communication power of a common node can be adjusted according to distance.
(5) Cluster heads use data fusion to reduce the amount of data transmitted.
(6) The sink node is fixed in the network and its energy can be replenished.
(7) Each node performs data acquisition tasks periodically, and data is always transmitted towards the sink node.

2.2 Energy Consumption Model of the Nodes

Node energy consumption consists of two parts: data receiving and data sending. Sending energy includes the transmitting circuit energy and the power amplifier energy. The power amplifier energy is determined by formula (1), and the receiving energy is mainly that of the receiving circuit [8, 9].

$$E_f = \begin{cases} \text{data} \cdot E_{fs} \cdot d^2, & d < d_0 \\ \text{data} \cdot E_{mp} \cdot d^4, & d \ge d_0 \end{cases} \qquad (1)$$

$$d_0 = \sqrt{\frac{E_{fs}}{E_{mp}}} \qquad (2)$$

$E_{fs}$ and $E_{mp}$ are the energy consumption coefficients of the power amplifier circuit under the two channel propagation models. The energy consumption of the receiving and transmitting circuits follows formula (3) [10]:

$$E_i = \text{data} \cdot E_{elec} \qquad (3)$$

$E_{elec}$ is the RF energy consumption coefficient. The transmitted data is divided into Control Messages and Data Messages, abbreviated as CM and DM. In addition, a cluster head consumes extra energy for data aggregation. An important advantage of clustered routing is that the cluster head compresses and packs the information collected from its member nodes, which reduces the total amount of data flowing into the network; this process is called data aggregation. To simplify the design, this paper assumes that a cluster head fuses the DMs received from its member nodes into a single DM, and the energy consumption of data aggregation is set to $E_{DA} = 5$ nJ/bit.
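As a concrete illustration of this radio energy model, the following Python sketch evaluates formulas (1)–(3) together with the aggregation cost, using the parameter values that appear later in Table 1; the function names and the example scenario are ours and are not part of the paper's MATLAB simulation.

```python
import math

# First-order radio model coefficients (values from Table 1)
E_ELEC = 50e-9       # J/bit, transmitter/receiver circuit energy
E_FS = 13e-12        # J/bit/m^2, free-space amplifier coefficient
E_MP = 0.0013e-12    # J/bit/m^4, multipath amplifier coefficient
E_DA = 5e-9          # J/bit, data-aggregation energy
D0 = math.sqrt(E_FS / E_MP)   # crossover distance of formula (2), here 100 m

def tx_energy(bits, d):
    """Energy to send `bits` over distance d: circuit term (3) plus amplifier term (1)."""
    amp = bits * (E_FS * d ** 2 if d < D0 else E_MP * d ** 4)
    return bits * E_ELEC + amp

def rx_energy(bits):
    """Energy to receive `bits`, formula (3)."""
    return bits * E_ELEC

def aggregation_energy(bits):
    """Energy a cluster head spends fusing `bits` of collected member data."""
    return bits * E_DA

# Example: a cluster head receives one 4000-bit DM from each of 5 members,
# fuses them into a single DM, and forwards it 120 m towards the sink.
members, dm = 5, 4000
round_energy = (members * rx_energy(dm)
                + aggregation_energy(members * dm)
                + tx_energy(dm, 120.0))
print(f"cluster-head energy per round: {round_energy * 1e3:.3f} mJ")
```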


3 The Proposed AUCR Protocol Design
The purpose of the AUCR protocol is to let each node automatically adjust its cluster radius according to its role, enhance the load balance of the network, and extend the network lifetime. Before the protocol starts, all nodes are randomly distributed in the target area with equal initial energy and equal cluster radius. The AUCR protocol flow is shown in Fig. 1.

Fig. 1. Flow chart of the AUCR protocol

3.1 Node Clustering

At the beginning of the clustering stage, each node broadcasts a CM packet within its cluster radius. The CM packet contains the node ID and the residual energy. After receiving the CMs from surrounding nodes, a node determines the time $T_i$ at which it declares itself a cluster head from its own energy $E_i$, the number of surrounding nodes $n$, the average energy $E_j$ of the surrounding nodes, and a time constant $T_h$. The value of $T_i$ is determined by formula (4):

$$T_i = T_h \cdot \frac{E_j}{n \cdot E_i} \qquad (4)$$

Clustering is used in sensor networks to reduce overall energy consumption because a cluster head can fuse and pack the data of its surrounding nodes, reducing the total data flowing into the network; when clusters contain few nodes, the data flow of the whole network increases accordingly. Therefore, letting nodes in high-density areas become cluster heads effectively reduces the overall energy consumption of the network. On the other hand, a larger number of surrounding nodes also increases the energy consumption of the cluster head itself, so both the energy of the candidate cluster head and the average energy of its surrounding nodes are considered. If the residual energy of a node is lower than the average energy of its surrounding nodes, its declaration time increases, that is, its probability of becoming a cluster head decreases. Consequently, nodes with a high density of surrounding nodes and high residual energy are more likely to become cluster heads. To balance the energy consumption among cluster heads, the cluster heads are re-elected every set number of rounds. The specific steps are as follows (a sketch of the election timer is given after the steps):

Step 1. At the beginning of the stage, each node broadcasts a CM within its own cluster radius; the message contains its node ID and remaining energy.
Step 2. Each node receives the CMs broadcast by other nodes and, by parsing the node ID and energy, adds each sender to its neighbour list.
Step 3. Each node calculates its declaration time by formula (4). If a node receives a cluster-head declaration from another node before its own timer expires, it withdraws from the election, becomes a member node of that cluster head, and goes to sleep.
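A minimal Python sketch of this election logic follows. It only illustrates formula (4) and the withdraw-on-earlier-declaration rule under simplifying assumptions (neighbour tables already filled from received CMs); the class and function names are our own.

```python
class Node:
    def __init__(self, node_id, energy, cluster_radius):
        self.id = node_id
        self.energy = energy
        self.radius = cluster_radius   # current cluster radius (adjusted later in Sect. 3.3)
        self.neighbors = {}            # neighbour id -> residual energy, filled from received CMs

    def declaration_time(self, t_h):
        """Formula (4): nodes with high residual energy and many neighbours declare earlier."""
        n = len(self.neighbors)
        if n == 0:
            return t_h                 # isolated node: fall back to the time constant
        avg_neighbor_energy = sum(self.neighbors.values()) / n
        return t_h * avg_neighbor_energy / (n * self.energy)

def elect_cluster_heads(nodes, t_h=10.0):
    """Nodes whose timers expire first become heads; a node that already heard a
    declaration from a neighbouring head withdraws and joins that head instead."""
    heads, membership = [], {}
    for node in sorted(nodes, key=lambda nd: nd.declaration_time(t_h)):
        covering = [h for h in heads if node.id in h.neighbors]
        if covering:
            membership[node.id] = covering[0].id   # withdraw and sleep as a member
        else:
            heads.append(node)
            membership[node.id] = node.id          # declares itself a cluster head
    return heads, membership
```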

3.2 Inter-cluster Multi-hop Routing

Inter-cluster routing is generated with the artificial bee colony (ABC) algorithm. Each cluster head sends a CM containing its circle number and its own ID at a transmission time $T_j$. After parsing the CM, only the surrounding cluster heads with a lower circle number forward the message, so the information is forwarded from high circle numbers towards low circle numbers. The time $T_j$ at which a cluster head sends its CM is related to its circle number: cluster heads in the outer circles send first, followed by those in the inner circles. A cluster head that forwards a CM adds its own ID and


circle number to the CM before forwarding, and it does not send a CM with itself as the starting point until the inter-cluster multi-hop routing stage ends. In this way, the total amount of data entering the network, and hence the network energy consumption, is reduced. After receiving the CMs sent by the nodes within the time $T_m$, the sink node computes the routing table with the ABC algorithm and then delivers it to each cluster head along the collection paths in reverse order, completing the inter-cluster communication routes. $T_j$ is determined by formula (5), where $K$ is a random number used to avoid collisions when cluster heads with the same circle number issue their CMs, $T_m$ is a time constant, and $R_i$ is the circle number of the cluster head:

$$T_j = K \cdot \frac{T_m}{R_i} \qquad (5)$$

After the route is formed, each transmitting node sends its DM to the next hop, and the next-hop node replies with a CM to acknowledge reception, until the data is delivered to the sink node. A node counts one call whenever it sends or forwards a message, and it records the number of calls for use in the subsequent cluster-radius adjustment stage. The flow of the artificial bee colony algorithm is shown in Fig. 2. The fitness function used by the ABC algorithm to evaluate a candidate routing path combines the path length and the average residual energy of the nodes on the path: the shorter the path and the larger the average energy of its nodes, the higher the fitness value (a sketch of such a fitness function follows).
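The paper states the two fitness criteria but not how they are weighted, so the short Python sketch below combines them with assumed weights purely for illustration; the function names and weight values are ours.

```python
import math

def path_length(path, positions):
    """Total Euclidean length of a route `path` (list of node ids)."""
    return sum(math.dist(positions[a], positions[b])
               for a, b in zip(path, path[1:]))

def route_fitness(path, positions, energies, w_len=0.5, w_energy=0.5):
    """Higher fitness for shorter routes whose nodes have more residual energy.
    The weighted combination is an assumption; the paper only names the two criteria."""
    length = path_length(path, positions)
    avg_energy = sum(energies[n] for n in path) / len(path)
    return w_len / (1.0 + length) + w_energy * avg_energy

# Example: compare two candidate cluster-head routes towards the sink (node 0)
positions = {0: (100, 100), 1: (40, 60), 2: (70, 80), 3: (20, 90)}
energies = {0: 1.0, 1: 0.20, 2: 0.15, 3: 0.22}
print(route_fitness([3, 1, 2, 0], positions, energies))
print(route_fitness([3, 2, 0], positions, energies))
```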

3.3 Cluster Radius Adjustment

During inter-cluster multi-hop routing, information is exchanged between nodes that have a communication relationship: a node receives both the DM from its previous-hop node and the acknowledgement CM from its next-hop node, and in this way it perceives its surroundings. A cluster head can thus obtain the cluster radius $R_j$, the number of calls $N_j$, the average energy difference $E_{ij}$, and the node density $\rho$ of the other cluster heads it communicates with, together with its own number of calls $N_i$. From formula (7) the cluster head computes a radius $R_{ij}$ for each such cluster head and their average value $\bar{R}_{ij}$, and from formula (9) it computes its new cluster radius; the buffer coefficient $k_{rad}$ in formula (9) prevents the radius from changing too quickly. After the calculation, each cluster head broadcasts to all nodes within its original cluster radius. A node receiving the broadcast computes the influence coefficient $r_a$ by formula (8), where $L$ is the distance between the node and the broadcasting cluster head, and updates its own cluster radius by formula (10). In this way, the cluster radius of a node close to a cluster head stays similar to that of the cluster head, while the radius of a node at the junction of several clusters is affected by the broadcasts of multiple cluster heads.


Fig. 2. Flow chart of ABC algorithm

$$k = 4368 \cdot N_j + 3611 \cdot \rho \qquad (6)$$

$$R_{ij} = \left( \frac{2100 \cdot N_i - 5 \times 10^{4} \cdot E_{ij} - 2100 \cdot N_j + 4368 \cdot N_i \cdot R_j^2 + 3611 \cdot \rho \cdot R_j^2}{k} \right)^{0.5} \qquad (7)$$

$$r_a = 1 - \frac{L}{R} \qquad (8)$$

$$R_i = (1 - k_{rad}) R_i + k_{rad} \cdot \bar{R}_{ij} \qquad (9)$$

$$r = (1 - k_{rad}) R_i + k_{rad} \cdot \left( (1 - r_a) r + r_a R_j \right) \qquad (10)$$

The specific steps of dynamic adjustment of cluster radius are shown in Fig. 3.

Fig. 3. Flow chart of cluster radius adjustment
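To make the two-level radius update concrete, the Python sketch below applies the head-side smoothing of formula (9) and the member-side update of formulas (8) and (10). In formula (10) both radius terms are taken here to be the broadcasting head's radius, which is an assumption, and all names are our own.

```python
def head_radius_update(r_i, r_ij_avg, k_rad=0.5):
    """Formula (9): blend the head's current radius with the average radius
    suggested by its neighbouring cluster heads, damped by k_rad."""
    return (1.0 - k_rad) * r_i + k_rad * r_ij_avg

def member_radius_update(r, r_head, dist_to_head, k_rad=0.5):
    """Formulas (8) and (10): the closer a member node is to the broadcasting
    cluster head (r_a near 1), the more its radius follows the head's radius.
    Both radius terms of (10) are taken to be r_head here (assumption)."""
    r_a = max(0.0, 1.0 - dist_to_head / r_head)            # formula (8)
    return (1.0 - k_rad) * r_head + k_rad * ((1.0 - r_a) * r + r_a * r_head)

# Example: a head smooths its radius, then one of its members follows suit.
new_head_radius = head_radius_update(r_i=22.5, r_ij_avg=30.0)
new_member_radius = member_radius_update(r=22.5, r_head=new_head_radius, dist_to_head=8.0)
print(new_head_radius, new_member_radius)
```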

4 Simulation Results
To verify the performance of the AUCR protocol, the LEACH, EEUC, and AUCR protocols are simulated in MATLAB with the parameters listed in Table 1.

Table 1. Network environment simulation parameters

Network coverage: 200 × 200 m²          d0: 100 m
Base station location: (100, 100)       T: 10
Node number: 400                        krad: 0.5
Initial energy: 0.25 J                  r0: 45
DM size: 4000 bit                       CM size: 200 bit
Eelec: 50 nJ/bit                        Efs: 13 pJ/bit/m²
Emp: 0.0013 pJ/bit/m⁴                   EDA: 5 nJ/bit

4.1 Cluster Radius Adjustment

At the beginning of the simulation, the initial cluster radius of each node is r0/2. After 50 rounds of data transmission and reception, the cluster radius of each node is shown in Fig. 4, where the abscissa is the node number and the ordinate is the corresponding cluster radius. The figure shows that each node quickly perceives its surrounding environment and adjusts its cluster radius from the perceived data. During network operation the heavily loaded nodes are usually the cluster heads close to the sink node, and these are few in number, so the nodes whose radius increases outnumber those whose radius decreases.

Fig. 4. Cluster radius (a). Radius increase and reduction of node proportion (b)

4.2 Life Cycle and Load Balancing

In this simulation, the time at which half of the nodes have died is taken as the network life cycle [11], and the load balance of the whole network is evaluated by the time at which the first node dies and by the variance of the nodes' residual energy [12]. The results obtained are as follows:


Fig. 5. Network lifetime

Figure 5a–c shows the node death curves of the EEUC, LEACH, and AUCR protocols, respectively. It can be seen that both the first-node death time and the half-node death time of the proposed protocol are later than those of EEUC and LEACH.

Fig. 6. Residual energy variance of node when the first node dies

Figure 6a–c shows the average remaining energy of the nodes and the variance of the nodes' residual energy at the moment the first dead node appears, for the EEUC, LEACH, and AUCR protocols, respectively; the abscissa is the number of rounds the network has run. It can be seen that the first dead node of the proposed protocol appears later than in the other two protocols, and that the energy variance of its remaining nodes is smaller, which shows that the load-balancing performance of the proposed protocol is better than that of LEACH and EEUC.


5 Conclusion
In this paper, an adaptive non-uniform clustering routing protocol, AUCR, is proposed. A timed-broadcast clustering method is used to reduce the CM overhead, and the sink node, rather than the cluster heads, generates the optimal routing paths from the directionally propagated CMs, which reduces the energy spent on route construction. During inter-cluster communication each cluster head adjusts its cluster radius by sensing the information of neighbouring cluster heads, and afterwards it adjusts the cluster radius of the nodes in its cluster by broadcasting, achieving non-uniform clustering of all nodes. The simulation results show that the AUCR protocol can effectively save the energy of individual nodes, balance the network load, and prolong the network life cycle.

Acknowledgements. This work is supported by the National Natural Science Foundation of China under Grants No. 61701284, 61472229 and 31671588, the Innovative Research Foundation of Qingdao, the Sci. & Tech. Development Fund of Shandong Province of China under Grants No. 2016ZDJS02A11, ZR2017BF015 and ZR2017MF027, the Taishan Scholar Climbing Program of Shandong Province, and the SDUST Research Fund under Grant No. 2015TDJH102.

References
1. Heinzelman WR, Chandrakasan A, Balakrishnan H (2006) Energy-efficient communication protocol for wireless microsensor networks. In: Proceedings of the 33rd annual Hawaii international conference on system sciences
2. Chen G, Li C, Ye W et al (2007) EECS: a clustering scheme for energy saving in wireless sensor networks. J Comput Sci Technol (02):170–179
3. Lindsey S, Raghavendra C, Sivalingam KM (2002) Data gathering algorithms in sensor networks using energy metrics. IEEE Trans Parallel Distrib Syst 9:924–935
4. Soro S, Heinzelman WB (2005) Prolonging the lifetime of wireless sensor networks via unequal clustering. In: Proceedings of the 19th IEEE international parallel and distributed processing symposium. IEEE Computer Society Press, Denver
5. Li C, Ye M, Chen G, Wu J (2005) An energy-efficient unequal clustering mechanism for wireless sensor networks. In: IEEE international conference on mobile ad hoc and sensor systems. IEEE
6. Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Technical report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department
7. Jiang C-J, Shi W-R, Tang X-L et al (2012) Non-uniform clustering routing protocol for wireless sensor networks with energy balance. J Softw 23(05):1222–1232
8. Yick J, Mukherjee B, Ghosal D (2008) Wireless sensor network survey. Comput Netw 52(12):2292–2330
9. Raghavendra CS, Sivalingam KM, Znati T (eds) (2006) Wireless sensor networks. Springer


10. Xiangning F, Yulin S (2007) Improvement on LEACH protocol of wireless sensor network. In: International conference on sensor technologies and applications (SENSORCOMM 2007). IEEE, pp 260–264
11. Ye W, Heidemann J, Estrin D (2004) Medium access control with coordinated adaptive sleeping for wireless sensor networks. IEEE/ACM Trans Netw 12(3):493–506
12. Li C et al (2005) An energy-efficient unequal clustering mechanism for wireless sensor networks. In: IEEE international conference on mobile ad hoc and sensor systems. IEEE

Comparative Analysis of Reflectivity from an Updated SC Dual Polarization Radar and a SA System in CINRAD Network
Yue Liu, Debin Su, Xue Tan, and Haijiang Wang
Chengdu University of Information Technology, Chengdu 610225, China; Key Laboratory of Atmospheric Sounding, China Meteorological Administration, Chengdu 610225, China; Chengdu Zhongdian Jinjiang Information Industry Co., Ltd., Chengdu 610051, China
[email protected]

Abstract. To assess the performance of the dual linear polarization SC radar and its consistency with CINRAD-SA detection data, the echo reflectivity data of the two radars are compared and analyzed. Using reflectivity data observed by the two radars at the same time during the same rainfall event, the data were interpolated onto a 3-D grid at the same height, and their intersecting areas were compared with a grid-point comparison method. The scatter characteristics and probability distributions of the two radars' reflectivity values in the intersecting regions were studied. The results show that the echo reflectivity characteristics of the two radars along the perpendicular bisector of their connecting line are in good agreement, but the echo values detected by the dual linear polarization SC radar are larger than those of the SA radar; below 25 dBZ the SC radar's detection values exceed those of the SA radar, and the SC radar's echo reflectivity data are less dispersed. The rainfall calculated from the dual linear polarization SC radar with the Z–R relation, the R(KDP) method, and the R(ZH, ZDR) method is in all cases higher than the values observed by the ground rain gauges, and the difference between the R(KDP) estimates and the rain gauge observations is relatively small. To improve quantitative precipitation estimation, further quality control of the polarization parameters and echo data is needed.

Keywords: Dual linear polarization SC radar · CINRAD-SA radar · Reflectivity factor · Rainfall · Comparison and analysis



1 Introduction
Weather radar, with its high spatial and temporal resolution and its timely and accurate remote sensing capability, has become a very effective tool for monitoring small and medium-scale severe weather and for short-term weather forecasting. As the most basic detection quantity of a weather radar, the reflectivity factor is the main data source for identifying rainfall type, judging the evolution of echoes, and making quantitative measurements of rainfall. China has been deploying a new generation of weather radar station


network (CINRAD, ChIna New generation doppler weather RADar) since 1998. The CINRAD network contains many radar types: for example, the S-band radars include SA, SB, and SC, and the C-band radars include CA, CB, CC, and CD. Because of different radar manufacturers and differences in system performance indexes, the measurement results of the individual radar systems differ even for the same observation target [1] (and C-band radars suffer a certain amount of attenuation). Reliable radar measurement data support networked radar observation, data fusion, quantitative measurement of rainfall, and the monitoring and early warning of medium and large scale severe weather. Therefore, this paper uses data from two S-band radars observing the same target at the same time and place to carry out a comparative analysis, study comparison methods, and examine the differences in their detection performance and characteristics.

2 Equipment Introduction
The SA radar data used in this study come from the Yichun radar station in Jiangxi Province; the station operates an S-band Doppler weather radar produced by Beijing Metstar Radar Co. The dual-linear polarization radar used in this paper is an S-band dual-linear polarization Doppler radar obtained by Chengdu Jinjiang Electronics (state-owned Factory 784) by upgrading the new-generation Doppler weather radar CINRAD-SC of the China Meteorological Administration with a dual-linear polarization capability; this radar station is located in Ji'an City, Jiangxi Province. The radar operates in a dual-transmitter, dual-receiver mode in which horizontally and vertically polarized waves are transmitted and received simultaneously. Besides intensity (REF), velocity (VEL), and spectral width (SW), it provides observation data such as differential reflectivity (ZDR), differential propagation phase shift (PhiDP), and co-polar correlation coefficient (RhoHV). Compared with a conventional Doppler weather radar, its ability to detect clouds and precipitation is significantly improved, which notably enhances the radar's ability to observe precipitation and identify the phase of precipitation particles. Table 1 lists partial observation parameters of the two radars.

Table 1. Partial observation parameters of two radars

Parameter                          Dual-linear polarization SC radar      CINRAD-SA
Antenna gain (dB)                  44.9                                   44
Detection range (km)               >150                                   >230
Pulse width (µs)                   1.57/4.7                               1.57/4.71
Range bin (m)                      300                                    1000
Pulse repetition frequency (Hz)    300–1300                               318–1304
Beam width (°)                     H 0.97, V 0.97                         0.99
Observation data                   Z, V, W, T, ρHV, ZDR, ΦDP, KDP         Z, V, W


The altitude of the Yichun SA radar is 135 m, that of the Ji'an dual-linear polarization SC radar is 64 m, and the straight-line distance between the two radars is 107 km. The positions of the radar stations are shown in Fig. 1.

3 Radar Data Processing and Comparative Analysis
On March 21, 2019, a large area of hail, thunderstorms, and strong convective weather occurred in Ji'an City; hail over such an extent is historically rare. Both radars used the same detection mode (VCP21) to observe the rainfall continuously. Figure 2 shows the CAPPI comparison of the two radars at an altitude of 3 km at 17:33 on March 21, 2019, with a 50 km range ring. The difference in altitude between the two radars is small (71 m), so the influence of station altitude on the CAPPI is ignored.

Fig. 1. Location distribution of two radars

Fig. 2. CAPPI comparison of the two radars at an altitude of 3 km at 17:33 on March 21, 2019: (a) CINRAD-SA; (b) dual linear polarization SC radar (the white triangles mark where the SC/SA radars are located)

The original data of a weather radar and most of its products are in polar coordinate format. To further compare the detection results of the two radars, linear interpolation is used in this study to convert the CAPPI data from polar coordinates onto a rectangular grid [2], after which the longitude and latitude corresponding to each grid point are calculated. The result, superimposed on a map projection, is shown in Fig. 3.
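As an illustration of this gridding step, the following Python sketch resamples a single-height CAPPI from a uniform (azimuth, range) polar grid onto a Cartesian grid with bilinear interpolation; it is a simple stand-in for the linear interpolation of [2], with our own function name and grid assumptions.

```python
import numpy as np

def cappi_polar_to_cartesian(cappi, range_step_m, grid_half_width_m=150e3, res_m=1000.0):
    """Resample a CAPPI stored as cappi[ray, range_bin] (uniform azimuth spacing,
    uniform range bins) onto a regular Cartesian grid centred on the radar."""
    n_az, n_rng = cappi.shape
    x = np.arange(-grid_half_width_m, grid_half_width_m + res_m, res_m)
    xx, yy = np.meshgrid(x, x)
    rr = np.hypot(xx, yy)
    az = np.degrees(np.arctan2(xx, yy)) % 360.0      # azimuth clockwise from north

    az_idx = az / (360.0 / n_az)                     # fractional ray index
    r_idx = rr / range_step_m                        # fractional range-bin index
    a0 = np.floor(az_idx).astype(int) % n_az
    a1 = (a0 + 1) % n_az
    r0 = np.clip(np.floor(r_idx).astype(int), 0, n_rng - 2)
    wa = az_idx - np.floor(az_idx)
    wr = np.clip(r_idx - r0, 0.0, 1.0)

    grid = ((1 - wa) * (1 - wr) * cappi[a0, r0] + wa * (1 - wr) * cappi[a1, r0]
            + (1 - wa) * wr * cappi[a0, r0 + 1] + wa * wr * cappi[a1, r0 + 1])
    grid[r_idx > n_rng - 1] = np.nan                 # beyond maximum range
    return x, grid

# Example with synthetic data: 360 rays x 500 bins of 300 m (the SC radar's bin size)
reflectivity = np.random.uniform(0, 55, size=(360, 500))
x, grid = cappi_polar_to_cartesian(reflectivity, range_step_m=300.0)
```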

Fig. 3. 3 km high reflectivity CAPPI of the SA radar (a, left) and the SC radar (b, right)

When comparing the observation data of two radars, spatio-temporal consistency would be guaranteed if the two radars were located at the same place, but this is difficult to achieve in practice. In this study, the perpendicular bisector of the line connecting the two radars is first constructed from their geographical locations, shown as the black dotted line in Fig. 3; every point on this bisector is equidistant from the two radars. The grid-point data on this line are selected as the first group of comparison data, and the data in a rectangular area having the perpendicular


bisector as its diagonal are selected as the second group of comparison data; the selected area is shown by the red dotted box in Fig. 3. Figures 4 and 5 show, respectively, the scatter distributions and the probability distributions of the two radars' reflectivity for the two groups of comparison data. To quantitatively evaluate the difference between the two radars' echo intensities, the mean deviation, standard deviation, and correlation coefficient of the echo intensities were calculated for both data sets [3]; the statistical results are listed in Table 2.

$$\mathrm{M.D.} = \frac{\sum \left| x - \bar{x} \right|}{n} \qquad (1)$$

$$\sigma(r) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - r\right)^2} \qquad (2)$$

$$r(X, Y) = \frac{\mathrm{Cov}(X, Y)}{\sqrt{\mathrm{Var}[X]\,\mathrm{Var}[Y]}} \qquad (3)$$
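The statistics of Table 2 can be reproduced along the lines of the following Python sketch, which applies formulas (1)–(3) to paired grid-point reflectivity samples; treating the mean deviation as the mean SC−SA difference is our assumption, and the synthetic data are only an example.

```python
import numpy as np

def comparison_statistics(z_sa, z_sc):
    """Mean deviation, per-radar standard deviation and correlation coefficient
    for paired grid-point reflectivity samples (dBZ), in the spirit of (1)-(3)."""
    z_sa, z_sc = np.asarray(z_sa, float), np.asarray(z_sc, float)
    mean_deviation = np.mean(z_sc - z_sa)         # assumed meaning of "mean deviation" in Table 2
    std_sa, std_sc = np.std(z_sa), np.std(z_sc)   # formula (2)
    corr = np.corrcoef(z_sa, z_sc)[0, 1]          # formula (3)
    return mean_deviation, std_sa, std_sc, corr

# Synthetic example: SC biased roughly 1 dBZ high relative to SA
rng = np.random.default_rng(0)
z_sa = rng.uniform(5.0, 45.0, 200)
z_sc = z_sa + rng.normal(1.2, 3.0, 200)
print(comparison_statistics(z_sa, z_sc))
```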

As can be seen from Figs. 4 and 5, the point-to-point CAPPI values of the two radars are concentrated on both sides of the line with slope 1. The reflectivity factor values of the SC radar at the perpendicular-bisector positions are generally greater than those of the SA radar, with an average deviation of 1.23 dBZ. In the selected area (Fig. 4b), when the SA radar detection value is less than 25 dBZ the correlation between the two radars' echo data is weak (correlation coefficient 0.44) and the average SC value is about 3 dBZ larger than the SA value; over the whole selected area, however, the average deviation between them is small (0.45 dBZ). The correlation coefficient reaches 0.8 for both comparison groups, indicating that the observations of the two radars are consistent. From the perspective of

Fig. 4. Scatter distribution of reflectivity values of the midperpendicular line (a) and selected area (b)


Fig. 5. Probability distributions for the perpendicular bisector (a) and the selected area (b)

Table 2. Statistical analysis of the two groups of comparison data

                              First set               Second set
Mean deviation (dBZ)          1.23                    0.45
Standard deviation (dBZ)      SA: 3.83, SC: 3.98      SA: 5.67, SC: 5.04
Correlation coefficient       0.8                     0.8

probability distribution (Fig. 5a, b), the trends of the two radars' CAPPI data curves are basically consistent. The standard deviation indicates the dispersion of the deviations: the larger the standard deviation, the less concentrated the distribution of the reflectivity factor differences. Near the perpendicular bisector of the connecting line the standard deviations of the two radars are close (SA: 3.83, SC: 3.98), indicating that the differences are evenly distributed, while in the selected area the standard deviation of the dual-linear polarization SC radar is smaller than that of the SA radar (SA: 5.67, SC: 5.04), i.e., the echo data detected by the SC radar are less dispersed than those of the SA radar.

4 Quantitative Precipitation Estimation and Analysis of the Dual-Line Polarization Radar
Using the data observed by the Ji'an dual-line polarization radar from 17:00 to 18:00 on March 21, 2019, precipitation estimators were built from the reflectivity factor ZH, the differential reflectivity ZDR, and the differential propagation phase shift rate KDP. The following three precipitation estimation methods were compared and analyzed.


Method one: $Z_h = A \cdot R^{b}$
Method two: $R = 1.42 \times 10^{-2} \cdot Z_h^{0.77} \cdot Z_{dr}^{-1.67}$
Method three: $R = 44.0 \cdot |K_{DP}|^{0.822} \cdot \mathrm{sign}(K_{DP})$

The coefficients in Method one vary with the type and nature of the precipitation; in this study A = 300 and b = 1.4. Method two uses the two variables ZH and ZDR; it is less sensitive to variations in the raindrop size distribution and can theoretically improve the precipitation estimate. KDP is also insensitive to variations in the raindrop size distribution, but the way it is computed has a strong influence on the precipitation estimate. In this study the composite reflectivity data are used to calculate the rainfall, and the latitude and longitude of each ground rain gauge are used to match the radar quantitative precipitation estimates to the gauges [4]. Points where the radar estimate has a value but the rain gauge does not, or where the gauge has a value but the radar estimate does not, are excluded, and the remaining estimates are compared with the one-hour measurements of the ground rain gauges. The results are shown in Fig. 6.
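For illustration, the three estimators can be evaluated as in the Python sketch below, which uses the A = 300 and b = 1.4 values given above; converting ZH and ZDR from dB to linear units before applying the relations is our assumption, as the paper does not state the units explicitly.

```python
import numpy as np

def rain_zr(z_dbz, a=300.0, b=1.4):
    """Method one: invert Zh = A * R**b, with Zh converted from dBZ to mm^6 m^-3."""
    z_linear = 10.0 ** (np.asarray(z_dbz) / 10.0)
    return (z_linear / a) ** (1.0 / b)          # rain rate in mm/h

def rain_zh_zdr(z_dbz, zdr_db):
    """Method two: R = 1.42e-2 * Zh^0.77 * Zdr^-1.67 (linear units assumed)."""
    z_linear = 10.0 ** (np.asarray(z_dbz) / 10.0)
    zdr_linear = 10.0 ** (np.asarray(zdr_db) / 10.0)
    return 1.42e-2 * z_linear ** 0.77 * zdr_linear ** -1.67

def rain_kdp(kdp_deg_km):
    """Method three: R = 44.0 * |KDP|^0.822 * sign(KDP)."""
    kdp = np.asarray(kdp_deg_km)
    return 44.0 * np.abs(kdp) ** 0.822 * np.sign(kdp)

# Example: one gate with 38 dBZ, ZDR = 1.2 dB, KDP = 0.8 deg/km
print(rain_zr(38.0), rain_zh_zdr(38.0, 1.2), rain_kdp(0.8))
```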

Fig. 6. Comparison of rainfall calculated by the dual polarization radar with the ground rain gauges: (a) Z–R relation; (b) R(ZH, ZDR); (c) R(KDP)

It can be seen from Fig. 6 that all three methods overestimate the rainfall. In terms of relative error, the third method overestimates by about 30%, and the second method performs worst. To further improve the accuracy of precipitation estimation, the rainfall echo data, ZDR, and KDP need quality control to remove the influence of ground clutter, non-precipitation echoes, and systematic measurement errors.


5 Conclusion
This paper focuses on a precipitation event in Ji'an, Jiangxi Province on March 21, 2019, and comparatively analyzes the echo reflectivity data of the dual-linear polarization SC radar and the CINRAD-SA radar. The results show that the reflectivity data observed by the two radars near the perpendicular bisector of their connecting line are basically consistent; between 15 dBZ and 25 dBZ the detection values of the dual-linear polarization SC radar are greater than those of the SA radar, and its data are less dispersed than those of the SA radar. Comparing the three precipitation measurement methods of the dual-polarization radar, the rainfall obtained by the R(KDP) method is closest to that of the ground rain gauges. To further improve the accuracy of precipitation estimation, quality control of the observation data is still needed. This paper is only a preliminary analysis based on the observations of a single precipitation process, and more comparative studies are needed.

References
1. Xiao Y (2007) A contrast analysis of synchronous observations from regional radar network. Acta Meteorol Sinica 65(6):919–927
2. Haibo Z, Jiusheng S, Shiru D (2014) Study of the Doppler radar data gridding. Meteorol Disaster Reduction Res
3. Jun L, Xingyou H, Yuqin H, Zhenhui W, Jinhu W (2015) Comparative analysis of X-band phased array antenna weather radar measurements. Plateau Meteorol
4. Han Q, Eubanks C, Jones WL, Kasparis T (2000) Comparison of TRMM precipitation radar with NEXRAD and in-situ rain gauges in central and south Florida. In: AeroSense. International Society for Optics and Photonics

Subcarrier Allocation-Based SWIPT for OFDM Cooperative Communication
Xueying Liu, Xin Liu, Bo Li, and Weidang Lu
School of Information and Communication Engineering, Dalian University of Technology, Dalian 116024, China; School of Information Science and Engineering, Harbin Institute of Technology Weihai, Weihai 264200, China; College of Information Engineering, Zhejiang University of Technology, Hangzhou 310014, China
[email protected], [email protected], [email protected]

Abstract. In a cooperative communication system, the relay node may consume considerable energy relaying the information of the source node, which reduces the energy stored for its own transmission. Recently, simultaneous wireless information and power transfer (SWIPT) has been proposed to collect radio frequency (RF) energy while decoding the received signal. In this paper, a SWIPT scheme based on joint relay and subcarrier allocation for OFDM cooperative communication is proposed, which maximizes the transmission performance of the relay node while guaranteeing the relaying performance by collecting the RF energy of the source signal. A joint optimization problem over the subcarrier and power allocations is formulated and solved with an iterative optimization algorithm. The simulation results show that the scheme can improve the spectrum efficiency of the system.

Keywords: SWIPT · Subcarrier allocation · Power allocation · Cooperative relay

1 Introduction
With the development of 5G, relay cooperative communication has been proposed to achieve high data rates [1]. Work on cooperative networks has mainly studied opportunistic relay selection schemes [2], and the influence of the relay distance on cooperative communication is analyzed in [3]. However, a cooperative relay consumes a large amount of energy because it must transmit both the source node's information and its own. In traditional wireless communications, much of the energy carried by radio waves is eventually dissipated as heat. In recent years, more and more attention has therefore been paid to simultaneous wireless information and power transfer


(SWIPT) [4]. SWIPT can provide continuous energy to energy-constrained networks by using the emitted radio-frequency signals for both information transmission and energy acquisition [5]. Traditional SWIPT schemes include power splitting (PS) and time switching (TS) [6]. In the PS scheme [7], the receiver requires a power splitter that divides the power of each subcarrier in a fixed proportion. In the TS scheme [8], information and energy are not transferred strictly simultaneously, because the signal is switched in the time domain; such an approach helps the overall operation of the system but does not completely solve the problem. The algorithm in [9] combines relay and subcarrier allocation to maximize the rate obtained by the destination node, so that the relay forwards the source's information without sacrifice; both AF and DF relays are considered, and a time-switching-based relaying protocol is proposed in [10]. However, the above relay and subcarrier cooperation algorithms do not consider the transmission of the relay's own information or its own energy consumption. Therefore, this paper proposes a model in which the relay forwards its own information while decoding and forwarding the source node's information. In the first phase, the relay collects energy while decoding the information; in the second phase, the relay forwards the source node's information using the energy collected in the first phase and, at the same time, transmits its own information.

2 System Model and Problem Formulation

2.1 System Model

The OFDM system consists of a source (S), a destination node (D1), a relay node (R), and a relay receiver (D2), as shown in Fig. 1, and operates in two time phases. In the first phase, the entire bandwidth of the link S → R is divided into K subcarriers, and the set of subcarriers is denoted K = {1, . . . , K}. P is the total transmit power of the source node, the channel power gain on subcarrier k is known at the transmitter as $h_k$, and $p_k$ is the power allocated to subcarrier k. At the relay, the signal received on each subcarrier is corrupted by noise $n_k$ with $n_k \sim \mathcal{CN}(0, \sigma_k^2)$. The subcarriers are divided into two groups $G_I$ and $G_E$, with $G_I, G_E \subset K$ and $G_I \cup G_E = K$: the subcarriers in $G_I$ are used for information decoding and those in $G_E$ for energy harvesting. In the second phase, some subcarriers are used to help the source node forward its information and the rest are used by the relay to transmit its own information; the energy used for forwarding consists of the energy Q collected in the first phase and the externally provided energy $\bar{Q}$. The subcarriers that carried information in the first phase continue to carry information, while the subcarriers that harvested energy in the first phase are assigned to the relay for its own transmission. Let $g_k$ denote the channel gain of subcarrier k on the link R → D1 and $p_{1k}$ the corresponding power. In the link


R → D2, the channel gain is denoted $g'_k$ and the power $p_{2k}$. These subcarriers are likewise corrupted by noise $n_k$ with $n_k \sim \mathcal{CN}(0, \sigma_k^2)$.

Fig. 1. System model.

2.2 Problem Formulation

In the first phase, the rate received by the relay node can be represented as

$$R_1 = \sum_{k \in G_I} \ln\left(1 + \frac{p_k h_k}{\sigma_k^2}\right) \qquad (1)$$

The energy received by the relay node can be represented as

$$Q = \xi \sum_{k \in G_E} \left(p_k h_k + \sigma_k^2\right) \qquad (2)$$

where the energy conversion efficiency is denoted by $\xi$. The rate obtained by the destination node from the relay can be represented as

$$R_2 = \frac{1}{2} \sum_{k' \in G_{D1}} \ln\left(1 + \frac{p_{1k'} g_{k'}}{\sigma_{k'}^2}\right) \qquad (3)$$

The rate at which the relay node transmits its own information can be expressed as

$$R_r = \frac{1}{2} \sum_{k' \in G_{D2}} \ln\left(1 + \frac{p_{2k'} g'_{k'}}{\sigma_{k'}^2}\right) \qquad (4)$$
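As a sanity check of these definitions, the following Python sketch evaluates R1, Q, R2 and Rr of formulas (1)–(4) for a toy subcarrier split and power allocation; the numbers are purely illustrative and are not the simulation settings of the paper.

```python
import numpy as np

def phase_rates_and_energy(p, h, sigma2, G_I, G_E, p1, p2, g1, g2, xi=1.0):
    """Evaluate R1, Q, R2 and Rr of formulas (1)-(4) for a given subcarrier split
    (G_I, G_E) and power allocation. Natural logarithms are used, as in the paper."""
    R1 = sum(np.log(1.0 + p[k] * h[k] / sigma2[k]) for k in G_I)          # (1)
    Q = xi * sum(p[k] * h[k] + sigma2[k] for k in G_E)                     # (2)
    R2 = 0.5 * sum(np.log(1.0 + p1[k] * g1[k] / sigma2[k]) for k in p1)    # (3)
    Rr = 0.5 * sum(np.log(1.0 + p2[k] * g2[k] / sigma2[k]) for k in p2)    # (4)
    return R1, Q, R2, Rr

# Toy example with K = 4 subcarriers: two decode information, two harvest energy
h = {0: 1e-5, 1: 2e-5, 2: 1.5e-5, 3: 0.8e-5}
g1 = g2 = {0: 1e-5, 1: 1e-5, 2: 1e-5, 3: 1e-5}
sigma2 = {k: 1e-7 for k in h}
p = {k: 0.25 for k in h}                      # first-phase source powers
print(phase_rates_and_energy(p, h, sigma2, G_I=[0, 1], G_E=[2, 3],
                             p1={0: 0.2, 1: 0.2}, p2={2: 0.1, 3: 0.1}, g1=g1, g2=g2))
```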

We study the subcarrier allocation G = {G_I, G_E} and the power allocations p_1 = {p_k} and p_2 = {p_{1k}, p_{2k}} over the two time phases, so that the relay node can transmit its own


information while meeting the requirement of forwarding the source node's information, that is,

$$\max_{G,\, p_k,\, p_{1k},\, p_{2k}} R_r \qquad (5a)$$

$$\text{s.t.} \quad \sum_{k \in G_E} \left((p_k + p_b) h_k + \sigma_k^2\right) \ge \sum_{k' \in G_{D1}} p_{1k'} + \sum_{k' \in G_{D2}} p_{2k'} \qquad (5b)$$

$$\sum_{k \in G_I} p_k + \sum_{k \in G_E} p_k \le P \qquad (5c)$$

$$\min(R_1, R_2) \ge R_T, \quad \text{i.e.} \quad \frac{1}{2} \sum_{k' \in G_{D1}} \ln\left(1 + \frac{p_{1k'} g_{k'}}{\sigma_{k'}^2}\right) \ge R_T \qquad (5d)$$

Optimal Solution

It can be seen that the above optimization problem is a non-convex optimization problem. If the number of subcarriers is sufficient, the Lagrange dual method can be used to figure out the above problem. The Lagrange dual function in (5a) is given by   p1k − p2k ) + β1 [R1 − RT ] L(G, p1 , p2 ) = β3 (Q + Q − k∈GD1

k∈GD2

+ β2 [R2 − RT ] + Rr + β4 (P −



pk −

k∈GI



pk )

(6)

k∈GE

Since the dual function is convex, we can optimize (6) using a subgradientbased method and ensure convergence. The following formulas represent subgradient.     1  p k hk 1  p1k gk − RT ; Δβ2 = − RT ln 1 + 2 ln 1 + Δβ1 = 2 σk 2 σk2 k∈GI k∈GD1    Δβ3 = ((pk + pb )hk + σk2 ) − p1k − p2k ; k∈GE

Δβ4 = P −



k∈GI

pk −



k∈GD1

k∈GD2

pk

k∈GE

(7) Letting Δβ = (Δβ1 , Δβ2 , Δβ3 , Δβ4 ), β is updated as β t+1 = β t + v t Δβ. The subgradient method can converge β ∗ to the optimal value as the step size vt decreases. Then the optimal subcarrier collection G∗ and power distribution p∗1 , the p∗1 can be obtained respectively through the following three steps. 3.1

OPTIMIZING p∗1 with GIVEN G

G is fixed, the partial derivatives of the power used to decode information pk , k ∈ GI and energy harvesting pk , k ∈ GE in (5a–5d) are given by ∂L(G, p1 , p2 ) β1 hk − β4 = ∂pk,k∈GI 2(σk2 + pk hk )

(8)

2422

X. Liu et al.

∂L(G, p1 , p2 ) = β3 hk − β4 ∂pk,k∈GE

(9)

∂L(G,p1 ,p2 ) 1 ,p2 ) By the Karush-Kuhn-Tucker conditions, ∂L(G,p = 0. ∂pk,k∈GI = 0 and ∂pk,k∈GE Therefore, the optimal power allocation for information decoding is expressed as

 pk ∗ =

β1 σ2 − k 2β4 hk

+ (10)

+

where () means that all negative values in the formula become zero, and the positive values do not change. Meanwhile, the optimal power distribution for energy collection is  pmax β3 hk > β4 (11) pk ∗ = pmin β3 hk ≤ β4 pmin represents the minimum power constraint of each subcarrier. And pmax represents the maximum. 3.2

Optimizing p∗2 with Given G

Partial derivatives of optimization variables p1k and p2k are given in (6). β2 gk ∂L(G, p1 , p2 ) gk  ∂L(G, p1 , p2 ) − β3 ; − β3 = = 2 2 ∂p1k ,k∈GD1 2(σk + p1k gk ) ∂p2k ,k∈GD2 2(σk + p2k gk ) (12) ∂L(G,p1 ,p2 ) ∂L(G,p1 ,p2 ) = 0 and = 0, so By the Karush-Kuhn-Tucker conditions, ∂p ∂p2k,k∈GD2 1k ,k∈GD1 we can get  p1k ∗ = 3.3

β2 σ 2 − k 2β3 gk

+

 ; p2k ∗ =

1 σ 2 − k 2β3 gk

+ (13)

Obtaining the Optimal G

Substituting (1–4)(10)(11) and (13) into (6), and rewrite Lagrangian type (6) as (14) by algebraic transformation.     K 1  1  p∗k hk p∗2k gk + + β1 ln 1 + 2 ln 1 + L(G) = 2 σk 2  σk2 k∈GE k=1 k ∈GD1     p∗1k gk 1  − RT − β1 RT + β2 ln 1 + 2 σk2 k∈GD2   K    ∗ ∗ p1k + p2k + β4 P − β4 p∗k − β3 

Fk∗

k∈GD1

k∈GD2

k=1

(14)

Subcarrier Allocation-Based SWIPT for OFDM

where Fk∗ = β3

 k∈GE

   1 p ∗ hk ((p∗k + pb )hk + σk2 ) − β1 ln 1 + k 2 2 σk

2423

(15)

k∈GE

We find that only the first part of formula (19), Fk∗ , relates to GE . Therefore, we only need to optimize the optimal subcarrier set GE .The optimal GE is  Fk∗ (16) G∗E = arg max GE

All k(k ∈ k) that makes

Fk∗

k∈GE

positive makes G∗E . We can get the optimal G∗I is G∗I = K − G∗E

(17)

Algorithm 1 Relay Combined Subcarrier Allocation Algorithm 1: 2: 3: 4: 5: 6: 7:

4

initialize positive or zero values {β1 , β2 , β3 } repeat Solve for the p∗k in (11). Solve for the p1k ∗ and p2k ∗ in (13). Solve for the G∗I and G∗E according to the (15) and (16). Update {β1 , β2 , β3 } defined in (7). until {β1 , β2 , β3 } converge.

Simulations Result

The implement of the devised cooperative communication scheme is verified by the simulation results of relay combined subcarrier assignment. We set the center   frequency to 1.7 GHz. Therefore, g(k) =

M ˜ M +1 g

+

1 ˆ(k) M +1 g

is the channel 2

modeling on subcarrier k. g˜ is LOS deterministic component, |˜ g | = −50 dB, gˆ(k) ∼ CN (0, −50 dB) represents Rayleigh fading component, and M is Rayleigh 2 coefficient of LOS and fading component power ratio. h(k) = |g(k)| represents channel power gain. Set M = 5, K = 16, ξ = 1. Figure 2 shows the trend of the sum of harvested energy and transmitted power at different target rates. We get less energy as RT increases. When the transmission power is fixed and the target rate increases, more power is needed to decode the information. In this way, less power is used for energy harvesting. Figure 3 shows the relationship between joint sub-carriers and power distribution. The relationship between power and subcarrier allocation ratio and total transmission power is shown in Fig. 4. We find that there will be more power for energy harvesting, as the emission power increases. This is because the power used to decode the information does not change at a fixed target transmission rate, so there is more power left for energy harvesting.

2424

X. Liu et al.

Fig. 2. The relationship between P and Q at different transmission rates.

Fig. 3. The distribution of GI and GE when RT = 3 bps/Hz.

5

Conclusions

In this paper, in the case of OFDM frequency-selective fading channel, combined with cooperative relay decode-and-forwarding technology and SWIPT technology, we proposed a wireless energy carrying the communication method based on OFDM collaboration. In this method, the source node first broadcasts the signal, and the relay node receives the signal. The relay section then uses part of the subcarrier to decode the information and another part to harvest energy. In the second phase, external energy and energy collected in the first phase are

Subcarrier Allocation-Based SWIPT for OFDM

2425

Fig. 4. Power/subcarrier allocation ratio versus sum transmit power when RT = 4 bps/Hz.

used to help the source node forward information to the destination node, meanwhile forward its own information. The simulation results show that the spectral efficiency and the energy efficiency are improved. Acknowledgements. This paper is supported by New Technology Research University Collaboration Project of the 54th Research Institute of CECT.

References 1. Deng J, Tirkkonen O, Freij-Hollanti R, Chen T, Nikaein N (2017) Resource allocation and interference management for opportunistic relaying in integrated mmwave/sub-6 GHz 5G networks. IEEE Commun Mag 55(6):94–101 2. Jain N, Dongariya A, Verma A (2017) Comparative study of different types of relay selection scheme for cooperative wireless communication. In: 2017 international conference on information. communication, instrumentation and control (ICICIC), pp 1–4 3. Sah N, Sandhu N (2017) Analysis of effect of distance of relay node in cooperative communication. In: 2017 international conference on energy. Communication, data analytics and soft computing (ICECDS), pp 2647–2649 4. Varshney LR (2008) Transporting information and energy simultaneously. In: 2008 IEEE international symposium on information theory, pp 1612–1616 5. Zhang R, Ho CK (2013) MIMO broadcasting for simultaneous wireless information and power transfer. IEEE Trans Wirel Commun 12(5):1989–2001 6. Nasir AA, Zhou X, Durrani S, Kennedy RA (2013) Relaying protocols for wireless energy harvesting and information processing. IEEE Trans Wirel Commun 12(7):3622–3636 7. Liu L, Zhang R, Chua K (2013) Wireless information and power transfer: a dynamic power splitting approach. IEEE Trans Commun 61(9):3990–4001

2426

X. Liu et al.

8. Liu L, Zhang R, Chua K (2013) Wireless information transfer with opportunistic energy harvesting. IEEE Trans Wirel Commun 12(1):288–300 9. Lu W, Gong Y, Liu X, Wu J, Peng H (2018) Collaborative energy and information transfer in green wireless sensor networks for smart cities. IEEE Trans Ind Inf 14(4):1585–1593 10. Nasir AA, Zhou X, Durrani S, Kennedy RA (2015) Wireless-powered relays in cooperative communications: time-switching relaying protocols and throughput analysis. IEEE Trans Commun 63(5):1607–1622

Power Control for Underlay Full-Duplex D2D Communications Based on D. C. Programming Zanyang Liang1 and Liang Han1,2(&) 1

2

College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin 300387, China [email protected] Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin 300387, China

Abstract. In recent years, with the massive increase in mobile smart devices, the demand for mobile services has also increased dramatically. In order to improve the network performance, full-duplex device-to-device (D2D) communications have drawn significant research interests. In this paper, a power control problem for underlay full-duplex D2D communications is formulated, which maximize the transmission rate of the full-duplex D2D link while fulfilling the minimum rate requirement of the cellular user. By using difference of convex (D. C.) programming, we can transform the problem into a convex optimization problem and find the optimal solution by using the iterative algorithm. Simulation results are given to prove the effectiveness of the algorithm. Keywords: D2D

 Underlay mode  Full-duplex  D. C. programming

1 Introduction
With the rapid development of mobile communications, frequency resources are becoming more and more scarce. Since it is difficult to obtain additional frequency resources, improving the spectrum efficiency is an urgent problem. Two promising technologies for the next generation mobile communication network (5G) are D2D communication and full-duplex communication, both of which have received widespread attention [1]. D2D communication allows direct communication between two terminals through frequency resource reuse and thereby improves the spectrum efficiency to some extent. Full-duplex communication allows a device to transmit and receive at the same time over the same frequency, and therefore increases the transmission rate while ensuring the timeliness of communication. The combination of D2D and full-duplex communication can further increase system capacity and spectrum efficiency. D2D communications can be divided into two modes: overlay mode and underlay mode. In overlay mode, cellular resources are dedicated to D2D users. In underlay mode, cellular and D2D communications share the same frequency resources, which improves spectrum efficiency but causes interference between D2D and cellular communication.



For underlay D2D communications, due to the reuse of frequency resources, the problem of co-channel interference is very serious [2]. Because the interference is related to the transmit power of D2D users, how to control the power becomes a key problem. In this paper, we investigate the power control for underlay full-duplex D2D communications by maximizing the sum-rate of the full-duplex D2D link on the premise of ensuring the minimum rate requirement of the cellular link. The rest of this paper is structured as follows. Section 2 introduce the full-duplex D2D communication model. Section 3 investigates the power allocation based on D. C. programming. Section 4 shows the simulation results. Section 5 concludes this paper.

2 System Model





Fig. 1. System model of underlay full-duplex D2D communications

As shown in Fig. 1, the system model considered in this paper is that a pair of fullduplex D2D users reuse the uplink of a cellular user under a base station control mechanism. We assume each D2D user has one receive antenna and one transmit antenna, which works on full-duplex mode. The full-duplex D2D users are represented as D1 and D2 respectively. For convenience, the symbols used are defined as follows [3]. • • • • • • •

• $H_{21}^{D}$: the channel gain from D2 to D1;
• $H_{12}^{D}$: the channel gain from D1 to D2;
• $P_i^{D}$: the transmission power of Di (i = 1, 2);
• $P^{C}$: the transmission power of the cellular user;
• $H_i^{CD}$: the interference channel gain between the cellular user and Di (i = 1, 2);
• $H_i^{D}$: the self-interference channel gain of Di (i = 1, 2);
• $\alpha$: the self-interference cancellation factor;



• H CB : The channel gain between cellular user and base station; • HiDB : The channel gain between Di (i = 1, 2) and base station; • N0 : The noise power. The signal-to-interference-plus-noise ratio (SINR) for the cellular user is: SINRC ¼

DB PD 1 H1

PC H CB DB þ PD 2 H2 þ N0

ð1Þ

According to Shannon’s theorem, the achievable transmission rate of the cellular user is: RC ¼ log2 ð1 þ SINRC Þ

ð2Þ

Similarly, the SINR for the full-duplex D2D users are given as: SINR1 ¼ SINR2 ¼

D PD 2 H21 D PC H1CD þ PD 1 aH1 þ N0

ð3Þ

D PD 1 H12 D þ P2 aH2D þ N0

ð4Þ

PC H2CD

The achievable sum-rate of the D2D user can be obtained as follows: RD ¼ log2 ð1 þ SINR1 Þ þ log2 ð1 þ SINR2 Þ

ð5Þ

3 Problem Formulation and Power Control Algorithm In this paper, we formulate the power control problem by maximizing the achievable sum-rate of the full-duplex D2D link while guaranteeing the minimum rate requirement of the cellular  link. D  Let P ¼ PC ; PD 1 ; P2 denote the matrix of power allocation, and our optimization problem can be expressed as: max RD ðPÞ P

s:t: RC  rC ; PC  PCmax ; D PD 1  Pmax ; D PD 2  Pmax :

ð6Þ

2430

Z. Liang and L. Han

where rC denotes the minimum rate requirement of the cellular link, PCmax denotes the maximum transmit power of cellular user, and PD max denotes the maximum transmit power of D2D user. The minimum rate constraint of the CU can be written as  log2 1 þ

 PC H CB  rc DB D DB PD 1 H1 þ P2 H2 þ N0

ð7Þ

From this constraint, we can get: PC 

  DB D DB ð2rc  1Þ PD 1 H1 þ P2 H2 þ N0 H CB

ð8Þ

Then, we get the feasible domain of transmission power of cellular user and D2D users [4]: 9 8 C D D D D D D = < ðP ; P1 ; P2 ÞjP1  Pmax ; P2  Pmax ;   R ¼ ð2rc  1Þ PD H DB þ PD H DB þ N0 1 1 2 2 :  PC  PCmax ; H CB

ð9Þ

Since the objective function is non-convex, we cannot directly find the optimal solution. However, we can convert it into the form of difference of the convex (D. C.) function [5]. In this way, we can rewrite the objective function as follows: max F ðPÞ  GðPÞ P

ð10Þ

where    C CD  D D D D D D D FðPÞ ¼ log2 PC H1CD þ PD 2 H21 þ P1 aH1 þ N0 þ log2 P H2 þ P1 H12 þ P2 aH2 þ N0    C CD  D D D GðPÞ ¼ log2 PC H1CD þ PD 1 aH1 þ N0 þ log2 P H2 þ P2 aH2 þ N0 According to [6], we can get a first-order approximation of G(P):  D  E GðPÞ  G PðkÞ þ rG PðkÞ ; P  PðkÞ

ð11Þ

Therefore, the objective function can be expressed as:  D  E max F ðPÞ  G PðkÞ þ rG PðkÞ ; P  PðkÞ P

ð12Þ

Power Control for Underlay Full-Duplex D2D Communications

2431

Since both F(P) and G(P) are concave functions, the objective function is converted to D. C. form. Then, we can use an iterative algorithm to calculate the objective function and find the optimal solution. Let P(0) be the initial value and P(k) be the result of k-th iteration. Let k = 0, initialize P(0), solve the convex optimization problem (12)   to obtain P , set k = k + 1, PðkÞ ¼ P , and calculate RD PðkÞ . When the condition

 ðkÞ   ðk1Þ 

RD P

 e ðe  0Þ meets, we get the optimal solution. Since G(P) is  RD P concave, we have:  D  E GðPÞ  G PðkÞ þ rG PðkÞ ; P  PðkÞ

ð13Þ

From this, we can get the convergence of the algorithm, which is proved as follows:   F Pðk þ 1Þ  G Pðk þ 1Þ   D  E  F Pðk þ 1Þ  G PðkÞ  rG PðkÞ ; P  PðkÞ    F PðkÞ  G PðkÞ

ð14Þ

Therefore, the optimal solution of the objective function will either increase or remain unchanged in the continuous iteration. Since the power has upper and lower limits, we can get the optimal solution after a finite number of iterations.

4 Simulation Results In this section, we present some simulation results on our proposed algorithm. In the simulation, we assume N0 = 0.1 lW, Pmax ¼ ½0; 7; 0:9; 0:9, e = 10−6. The channel gains are given as follows 2

H CB H ¼ 4 H1DB H2DB

H1CD D H21 aH2D

3 2 0:4598 0:0012 H2CD aH1D 5 ¼ 4 0:0002 0:4006 D 0:0209 0:0005 H12

3 0:2105 0:0080 5 0:4623

Using the formula of weighting and throughput and its constraints (6a) (6b) and (6c), we analyze the following results by using iterative optimization.

2432

Z. Liang and L. Han

Fig. 2. Weighted sum throughput

Fig. 3. The impact of required minimum data rate on the maximum throughput

Power Control for Underlay Full-Duplex D2D Communications

2433

Fig. 4. Individual data rate for required minimum data rate

Figure 2 shows that we get our optimization result after about 20 iterations, which shows that our proposed algorithm reduces the number of iterations and increases the algorithm speed. At the same time, we also found that when the initialization P(0) takes different values, the optimization results tend to be consistent, but only have a slight impact on the convergence rate. When the threshold rc of each link is same, Fig. 3 shows that by taking different thresholds, the total throughput of the system decreases as the threshold increases. Figure 4 shows the transmission rate of a cellular link and two links of a D2D pair in the system. The link rate from D1 to D2 is superior to the other, but the transmission rate of the cellular link decreases as the threshold increases.

5 Conclusion In this paper, we mainly considered the power allocation problem in full-duplex D2D wireless communication in underlay mode. The proposed method mainly solved the problem of maximizing the throughput of the entire system using D. C. programming. The simulation results showed that our optimization algorithm can get the optimal solution after quite a few iterations, and realize our power allocation issues.

2434

Z. Liang and L. Han

Acknowledgements. This work was supported by the National Natural Science Foundation of China (61701345), Natural Science Foundation of Tianjin (18JCZDJC31900), and Tianjin Education Commission Scientific Research Plan (2017KJ121).

References 1. Tehrani MN, Uysal M, Yanikomeroglu H (2014) Device-to-device communication in 5G cellular networks: Challenges, solutions, and future directions. IEEE Commun Mag 52(5):86– 92 2. Yates RD (1995) A framework for uplink power control in cellular radio systems. IEEE J Sel Areas Commun 13:1341–1347 3. Li Song, Ni Qiang, Sun Y, Min G (2017) Resource allocation for weighted sum-rate maximization in multi-user full-duplex device-to-device communications: approaches for perfect and statistical CSIs. IEEE J Mag 5:27229–27241 4. Feng D, Lu L, Yuan-Wu Y, Li GY, Feng G, Li S (2013) Device-to-device communications underlaying cellular networks. IEEE Trans Commun 61(8):3541–3551 5. Tuan HD, Apkarian P, Hosoe S, Tuy H (2000) D. C. optimization approach to robust controls: the feasibility problems. Int J Control 73:89–104 6. Cai J (2017) Research on massive MIMO precoding technology in heterogeneous networks. Beijing University of Posts and Telecommunications

Power Control for Underlay Full-Duplex D2D Communications Based on Max-Min Weighted Criterion Yingwei Zhang1 and Liang Han1,2(&) 1

2

College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin 300387, China [email protected] Tianjin Key Laboratory of Wireless Mobile Communications and Power Transmission, Tianjin Normal University, Tianjin 300387, China

Abstract. In recent years, device-to-device (D2D) communications and fullduplex communications have attracted great attention. In the underlay mode, D2D users reuse the same spectrum resource with the cellular user, and thus the problem of the co-frequency interference between full-duplex D2D users and cellular user is particularly prominent. This paper proposes a power control algorithm based on max-min weight criterion, which transforms the non-convex optimization problem into the convex optimization problem by using D.C. programming. The optimal result of power control through finite numbers of iterations can improve the transmission rate of the whole system. Keywords: Power control  Full duplex D2D Max-minimum weighted criterion

 Underlay mode 

1 Introduction

In recent years, the application of 5G technology has become a hot topic. However, with the development of technology and the increasing demand for bandwidth, spectrum resources are becoming more and more scarce. As one of the key technologies of 5G, D2D (device-to-device) communication can greatly alleviate the problem of low spectrum utilization. Within a certain area, D2D technology realizes direct communication between users under the control of the base station. It also avoids the information loss caused by relaying user traffic through the base station, improves the channel gain and spectrum resource efficiency, and thus improves the throughput. As another key technology of 5G, full-duplex technology enables users to send and receive messages simultaneously on the same spectrum resource, which greatly increases the rate of information transmission. Therefore, full-duplex D2D communication offers great help for the problem of spectrum scarcity. In recent years, there has been some research on the power control of full-duplex D2D communications. In [1], the optimal transmission power is obtained through the Lagrange duality method of convex programming. Reference [2] proposes a power control algorithm that coordinates co-channel interference to maximize the throughput of the link. These papers have contributed to the power control problem of full-duplex D2D communications, but problems of high computational complexity and many iterations remain. In this paper, we investigate the power control of underlay full-duplex D2D communication based on the max-min weighted criterion, which obtains the optimal power with few iterations and improves the throughput of the overall system while ensuring the users' quality of service.

2 System Model

In this paper, we assume that a pair of D2D users and a cellular user jointly reuse the uplink channel under the control of a base station. In this model, as shown in Fig. 1, the communication between the base station and the cellular user is half-duplex, while the D2D users communicate in full-duplex mode [3].


Fig. 1. System model of underlay full-duplex D2D communications

For convenience, we define the following symbols:

- $H_{21}^D$: channel gain from D2 to D1
- $H_{12}^D$: channel gain from D1 to D2
- $P_i^D$: transmission power of $D_i$
- $P^C$: transmission power of the cellular user
- $H_i^{CD}$: interference channel gain between the cellular user and $D_i$
- $H_i^D$: self-interference channel gain of $D_i$
- $\alpha$: self-interference attenuation factor
- $H^{CB}$: channel gain between the cellular user and the base station
- $H_i^{BD}$: channel gain between $D_i$ and the base station
- $N_0$: channel noise
- $\xi_i$: threshold


This paper deals with the communication between a cellular user CU1 and a pair of D2D users (D1, D2). Since both the cellular user and the full-duplex D2D users reuse the same uplink channel, the full-duplex D2D users are interfered by the cellular user and by themselves during communication. The signal to interference plus noise ratios (SINR) of D1 and D2 can be written as

$$\mathrm{SINR}_1(P) = \frac{P_2^D H_{21}^D}{P^C H_1^{CD} + P_1^D \alpha H_1^D + N_0} \qquad (1)$$

$$\mathrm{SINR}_2(P) = \frac{P_1^D H_{12}^D}{P^C H_2^{CD} + P_2^D \alpha H_2^D + N_0} \qquad (2)$$

From this, the transmission rates of D1, D2 and the D2D pair can be written as

$$r_1(P) = \log_2\big(1 + \mathrm{SINR}_1(P)\big) \qquad (3)$$

$$r_2(P) = \log_2\big(1 + \mathrm{SINR}_2(P)\big) \qquad (4)$$

$$R_1(P) = r_1(P) + r_2(P) \qquad (5)$$

When the base station receives the information from the cellular user, the D1 and D2 users cause certain interference to the base station. The SINR of the cellular user can be written as

$$\mathrm{SINR}_3(P) = \frac{P^C H^{CB}}{P_1^D H_1^{BD} + P_2^D H_2^{BD} + N_0} \qquad (6)$$

The transmission rate of the cellular user is

$$R_2(P) = r_3(P) = \log_2\big(1 + \mathrm{SINR}_3(P)\big) \qquad (7)$$
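As a quick illustration of Eqs. (1)-(7), the following sketch computes the three SINRs and the rates for given powers; the gain values and powers used here are placeholder assumptions, not the paper's simulation settings.

import numpy as np

# Hypothetical channel gains, powers and noise, mirroring the symbols of Eqs. (1)-(7).
def rates(P1D, P2D, PC, g, alpha, N0):
    sinr1 = P2D * g["H21"] / (PC * g["H1CD"] + P1D * alpha * g["H1D"] + N0)   # Eq. (1)
    sinr2 = P1D * g["H12"] / (PC * g["H2CD"] + P2D * alpha * g["H2D"] + N0)   # Eq. (2)
    sinr3 = PC * g["HCB"] / (P1D * g["H1BD"] + P2D * g["H2BD"] + N0)          # Eq. (6)
    r = np.log2(1.0 + np.array([sinr1, sinr2, sinr3]))                        # Eqs. (3), (4), (7)
    return r   # [r1, r2, r3]; R1 = r[0] + r[1], R2 = r[2] per Eqs. (5), (7)

gains = {"H21": 0.23, "H12": 0.42, "H1CD": 0.006, "H2CD": 0.047,
         "H1D": 1.0, "H2D": 1.0, "H1BD": 0.028, "H2BD": 0.035, "HCB": 0.51}
print(rates(P1D=0.5, P2D=0.5, PC=0.8, g=gains, alpha=0.01, N0=1e-4))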

3 Power Control

3.1 Problem Description

Cellular users and full-duplex D2D users share the same spectrum resources, which leads to mutual interference between the cellular user and the D2D users as well as self-interference in the full-duplex D2D communication system. To solve this problem, this paper proposes a fast iterative algorithm based on the max-min weighted criterion. Under the users' QoS (Quality of Service) requirements and power constraints, the method improves the performance of the worst link as far as possible, and hence the fairness and reliability of all links, so as to improve the throughput of the system. The problem is modeled as

$$\max_{P}\ \min_{i=1\ldots3} w_i r_i(P) \qquad (8)$$

$$\text{s.t.}\quad P^C \le P_{\max,3},\quad P_1^D \le P_{\max,2},\quad P_2^D \le P_{\max,1},\quad r_i(P) \ge \xi_i \qquad (9)$$

3.2 Algorithm Description

Here, we use a convex approximation to transform the non-convex optimization into a convex optimization problem and use D.C. programming to find the optimal solution. This not only converges to the optimal result quickly, but also reduces the error, improves the accuracy of the result, and lowers the computational complexity. Before the derivation, we first discuss the users' power constraints. The weight vector is defined as $w = [w_1^D, w_2^D, w^C]$ and the power control vector as $P = [P_2^D, P_1^D, P^C]$. According to (9), the power of the cellular user and the D2D users cannot exceed $P_{\max}$. Moreover, the QoS constraints $r_i(P) \ge \xi_i$ can be rewritten as the following linear constraints:

$$\text{D1:}\ H_{21}^D P_2^D + (1-2^{\xi_1})\big(\alpha H_1^D P_1^D + H_1^{CD} P^C + N_0\big) \ge 0 \qquad (10)$$

$$\text{D2:}\ H_{12}^D P_1^D + (1-2^{\xi_2})\big(\alpha H_2^D P_2^D + H_2^{CD} P^C + N_0\big) \ge 0 \qquad (11)$$

$$\text{CU1:}\ H^{CB} P^C + (1-2^{\xi_3})\big(H_1^{BD} P_1^D + H_2^{BD} P_2^D + N_0\big) \ge 0 \qquad (12)$$

Under the premise of satisfying the power constraints, formula (8) is expanded to obtain

$$\max_{P}\ X(P) - Y(P) \qquad (13)$$

where

$$X(P) = \min_{i=1\ldots3}\Big[w_i x_i(P) + \sum_{j\neq i} w_j y_j(P)\Big] \qquad (14)$$

$$Y(P) = \sum_{j=1}^{3} w_j y_j(P) \qquad (15)$$


When $i = 1, 2$:

$$x_i(P) = \log_2\big(P^C H_i^{CD} + P_i^D \alpha H_i^D + P_{3-i}^D H_{i(3-i)}^D + N_0\big) \qquad (16)$$

When $i = 3$:

$$x_3(P) = \log_2\big(P_1^D H_1^{BD} + P_2^D H_2^{BD} + P^C H^{CB} + N_0\big) \qquad (17)$$

When $j = 1, 2$:

$$y_j(P) = \log_2\big(P^C H_j^{CD} + P_j^D \alpha H_j^D + N_0\big) \qquad (18)$$

When $j = 3$:

$$y_3(P) = \log_2\big(P_1^D H_1^{BD} + P_2^D H_2^{BD} + N_0\big) \qquad (19)$$

According to Eqs. (14) and (15), $X(P)$ and $Y(P)$ are concave functions, so formula (8) is transformed into a D.C. programming problem that maximizes the difference between two concave functions. According to [4], $Y(P)$ can be approximated by $Y(P^{(t)}) + \langle \nabla Y(P^{(t)}), P - P^{(t)} \rangle$, where $P^{(t)}$ denotes the power in iteration $t$. Because $Y(P)$ is concave, $Y(P) \le Y(P^{(t)}) + \langle \nabla Y(P^{(t)}), P - P^{(t)} \rangle$. From this we can deduce that

$$X(P^{(t+1)}) - Y(P^{(t+1)}) \;\ge\; X(P^{(t+1)}) - Y(P^{(t)}) - \langle \nabla Y(P^{(t)}), P^{(t+1)} - P^{(t)} \rangle \;\ge\; X(P^{(t)}) - Y(P^{(t)}) \qquad (20)$$

From the above inequality, the result of each iteration is no worse than that of the previous iteration, and because of the constraints it converges to an optimal value. Therefore, formula (8) can be approximated by the convex optimization problem

$$\max_{P}\ \min_{i=1\ldots3} w_i r_i(P) \;=\; \max_{P}\ X(P) - Y(P^{(t)}) - \langle \nabla Y(P^{(t)}), P - P^{(t)} \rangle \qquad (21)$$

According to the above algorithm, we set $t = 0$ and assume an initial value $P^{(0)}$. First, we calculate $r_{\min}^{(0)} = \min_{i=1\ldots3} w_i r_i(P^{(0)})$. Then, under the constraint conditions, we substitute $P^{(t)}$ into formula (21) to obtain an optimal value $P'$, set $P^{(t+1)} = P'$ and $t = t + 1$, and calculate $r_{\min}^{(t)} = \min_{i=1\ldots3} w_i r_i(P^{(t)})$, repeating until $|r_{\min}^{(t)} - r_{\min}^{(t-1)}| \le \varepsilon$. In this way the optimal power control is found in a few iterations, which reduces the interference caused by the users' communication.
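The iterative procedure described above can be sketched as follows. This is a minimal illustration of the outer D.C.-style loop, assuming generic callables X and Y for Eqs. (14)-(15); the inner maximization is handed to a generic bound-constrained solver and the QoS constraints (10)-(12) are omitted for brevity, so it is a sketch rather than the authors' implementation.

import numpy as np
from scipy.optimize import minimize

def num_grad(f, P, h=1e-6):
    """Central-difference gradient of a scalar function f at P."""
    g = np.zeros_like(P)
    for k in range(len(P)):
        e = np.zeros_like(P); e[k] = h
        g[k] = (f(P + e) - f(P - e)) / (2 * h)
    return g

def dc_max_min(X, Y, P0, Pmax, eps=1e-6, max_iter=50):
    """Outer loop: maximize X(P) - Y(P) by linearizing Y around P^(t), cf. Eq. (21)."""
    P, prev = np.asarray(P0, dtype=float), -np.inf
    for _ in range(max_iter):
        Pt = P.copy()
        gY = num_grad(Y, Pt)
        surrogate = lambda Pn: -(X(Pn) - Y(Pt) - gY @ (Pn - Pt))   # concave surrogate
        P = minimize(surrogate, Pt, bounds=[(1e-9, pm) for pm in Pmax],
                     method="L-BFGS-B").x
        val = X(P) - Y(P)
        if abs(val - prev) <= eps:        # stopping rule |r^(t) - r^(t-1)| <= eps
            break
        prev = val
    return P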


4 Numerical Results

We define the maximum power vector as $P_{\max} = [0.8, 0.9, 1.0]$ mW and the initial power vector as $P^{(0)} = P_{\max}$. The weight vector is $w = [1/6, 1/3, 1/2]$. The channel noise is $N_0 = 0.1\,\mu\mathrm{W}$ and $\varepsilon = 10^{-6}$. The thresholds are $\xi_1 = \xi_2 = \xi_3 = 1$. For ease of reference, the channel gain matrix takes the following values:

$$H = \begin{bmatrix} H_{21}^D & \alpha H_2^D & H_2^{BD} \\ \alpha H_1^D & H_{12}^D & H_1^{BD} \\ H_1^{CD} & H_2^{CD} & H^{CB} \end{bmatrix} = \begin{bmatrix} 0.2284 & 0.0030 & 0.0276 \\ 0.0134 & 0.4220 & 0.0352 \\ 0.0062 & 0.0470 & 0.5146 \end{bmatrix}$$


Fig. 2. Convergence of the max-min transmission rate

As shown in Fig. 2, we set the initial value in three different cases. From the figure, the optimal power can be obtained within only a few iterations. When the initial power is $P^{(0)} = P_{\max}$, only 7 iterations are needed to obtain the optimal power control $P_{opt} = [0.8000, 0.1568, 0.1568]$ mW for the users. At this point, the minimum weighted transmission rate of all links is 0.9862 bps/Hz, the transmission rate of the cellular user is 1.9724 bps/Hz, the D1 user transmission rate is 5.9172 bps/Hz, and the D2 user transmission rate is 2.9586 bps/Hz, as shown in Fig. 3. Moreover, as shown in Fig. 4, the weighted transmission rate of each link gradually tends to the same value as the number of iterations increases. At the optimal solution, the weighted transmission rates of all links are equal, i.e., $w_1 r_1(P_{opt}) = w_2 r_2(P_{opt}) = w_3 r_3(P_{opt}) = 0.9862$ bps/Hz, which provides strong evidence that the fairness of the links is improved.



Fig. 3. Transmission rate of each link for different iterations

5 Conclusion

In this paper, we investigated the optimal power control that maximizes the fairness of the links among full-duplex D2D users and cellular users. Based on this method, the power control problem of multiple full-duplex D2D users and multiple cellular users on different channels can be studied further, which provides an effective approach to power control on different spectrum resources.


Fig. 4. Weighted transmission rate of each link for different iterations


Acknowledgements. This work was supported by the National Natural Science Foundation of China (61701345), Natural Science Foundation of Tianjin (18JCZDJC31900), and Tianjin Education Commission Scientific Research Plan (2017KJ121).

References 1. Zhu G, Liu T, Yang J (2017) Optimal power control scheme in 5G full-duplex D2D communication system. Comput Appl Res 34(12) 2. Zhao J, He Q, Qu H, Luan Z (2017) Full-duplex D2D communication power control in cellular network. Telecommun Sci 33(03):1–7 3. Li S, Ni Q, Sun Y et al (2017) Resource allocation for weighted sum-rate maximization in multi-user full-duplex device-to-device communications: approaches for perfect and statistical CSIs. IEEE Access 99:1–1 4. Kha HH, Tuan HD, Nguyen HH (2012) Fast global optimal power allocation in wireless networks by local DC programming. IEEE Trans Wireless Commun 11(2):510–515

Analysis on the Change of Dynamic Output Degree Distributions in the BP Decoding Process of LT Codes

Shuang Wu

College of Engineering, Xi'an International University, Xi'an 710077, China
[email protected]

Abstract. Although LT codes have attracted much attention, how to design degree distributions that give LT codes optimal decoding performance is still a hard issue that has not been solved well. Most practical degree distributions are based on ideal degree distributions, and ideal degree distributions were designed under asymptotic conditions, so they cannot give practical LT codes optimal decoding performance. Different from the asymptotic ensembles, under finite-length conditions the degree distributions in the decoding process are dynamic. In this paper, we analyze the BP decoding process of LT codes and then provide the change trends of the output degree distributions by quantifying the BP decoding process.

Keywords: LT codes · Rateless codes · BP decoding · Asymptotic analysis · Finite length analysis

1 Introduction

In the past two decades, LT codes have been one of the most important classes of erasure codes [1]. Different from error control codes, erasure codes cannot correct bit errors but can correct bit/packet erasures in erasure channels. LT codes have an important property known as ratelessness, which means that the code rate is not fixed [2,3]. For this reason, LT codes have the potential to provide higher transmission efficiency than other error control codes over variable channel ensembles. As the rate of LT codes is not fixed, such codes are not based on a generator matrix but on a degree distribution. In the past decade, nearly all researchers have used the degree distribution proposed by Shokrollahi [4] to study LT-based codes. Although such distributions provide fairly good decoding performance, they are designed under the principle of asymptotic conditions. In practice, the asymptotic conditions do not hold, so how to design degree distributions under finite-length conditions is also an attractive issue [5,6].

2 Preliminary and Definitions

To make the analysis clear, some definitions used in this paper are given in this section. As shown in Fig. 1, the BP decoding of an LT code is based on a bipartite graph. In the bipartite graph, an output symbol and the input symbols connected to it by edges are neighbors, and an output symbol with only one neighbor is termed released. In this paper, we divide the BP decoding process into a series of individual loops, each loop with two steps. In the first step, one finds a released output symbol, whose unique neighbor is named the recovered symbol. The edge between the released and recovered symbols is then eliminated, and the released symbol is moved out of the decoding process. In the second step, the recovered symbol is processed, which reduces the degrees of the un-released output symbols connected to it. After each loop, at least one released symbol and its corresponding recovered symbol are moved out of the decoding process; they are called the discarded and decoded symbols, respectively. At the beginning of the following loop, the un-discarded and un-decoded symbols can be considered as a new LT code.

Fig. 1. BP decoding process based on bipartite graph of LT codes
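The two-step loop described above can be sketched as a peeling decoder over a toy bipartite-graph representation; the data structures and the XOR-based symbol values below are assumptions for illustration only.

import random

def bp_peel(k, encoded):
    """Peeling decoder implementing the two-step loop described above.
    `encoded` is a list of (neighbor_set, xor_value) pairs over k binary inputs."""
    data = [None] * k
    out = [[set(nb), val] for nb, val in encoded]
    while True:
        # Step 1: find a released output symbol (exactly one remaining neighbor).
        rel = next((o for o in out if len(o[0]) == 1), None)
        if rel is None:
            break                      # no released symbol: the decoder stops here
        j = rel[0].pop()
        data[j] = rel[1]               # its unique neighbor is recovered
        out.remove(rel)                # the released symbol is discarded
        # Step 2: process the recovered symbol on the still-connected outputs.
        for o in out:
            if j in o[0]:
                o[0].remove(j)         # degree decreases by one
                o[1] ^= data[j]
    return data

# toy usage: encode k = 5 bits with random small degrees, then decode
random.seed(1)
bits = [random.randint(0, 1) for _ in range(5)]
enc = []
for _ in range(12):
    nb = set(random.sample(range(5), random.choice([1, 2, 3])))
    val = 0
    for i in nb:
        val ^= bits[i]
    enc.append((nb, val))
print(bp_peel(5, enc), bits)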

3 Analysis of BP Decoding Process for LT Codes

Next, the analysis is given as follows. Consider an LT code with $k$ input symbols whose output degree distribution is $\Omega(x) = \sum_d \Omega_d x^d$. With overhead $\gamma$, the total number of edges in the bipartite graph is $\gamma k \Omega'(1)$. Assuming the encoding process is strictly random, the degree distribution of the input symbols follows a binomial distribution, and the input degree distribution $\Lambda(x) = \sum_{\hat d} \Lambda_{\hat d} x^{\hat d}$ can be calculated by

$$\Lambda_{\hat d} = \binom{\gamma k \Omega'(1)}{\hat d} \left(\frac{1}{k}\right)^{\hat d} \left(\frac{k-1}{k}\right)^{\gamma k \Omega'(1) - \hat d}. \qquad (1)$$
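A minimal sketch of Eq. (1), assuming a toy output degree distribution supplied as a dictionary; the parameter values are illustrative only.

from math import comb

def input_degree_dist(k, gamma, omega):
    """Binomial input degree distribution of Eq. (1).
    `omega` maps output degree d -> probability Omega_d (assumed normalized)."""
    E = int(round(gamma * k * sum(d * p for d, p in omega.items())))   # gamma*k*Omega'(1) edges
    p = 1.0 / k
    return {dh: comb(E, dh) * p**dh * (1 - p)**(E - dh) for dh in range(E + 1)}

lam = input_degree_dist(k=100, gamma=1.1, omega={1: 0.1, 2: 0.5, 3: 0.4})
print(sum(lam.values()))   # ~1.0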

At the beginning of the first loop, the numbers of input and output symbols are $k^{(1)} = k(1 - P_{uncov})$ and $n^{(1)} = \gamma k$, respectively, where $P_{uncov}$ denotes the probability that an input symbol is not covered. As the un-selected input symbols cannot join the decoding process, the input degree distribution of the first loop $\Lambda^{(1)}(x)$ is computed by

$$\Lambda^{(1)}_{\hat d} = \frac{\Lambda_{\hat d}}{\sum_{\hat d \ge 1} \Lambda_{\hat d}}. \qquad (2)$$

Consider the moment at the beginning of the $i$th loop: there are $n^{(i)}$ un-discarded output symbols and $k^{(i)}$ input symbols connected with them, and the degree distributions of the input and output symbols are $\Lambda^{(i)}(x)$ and $\Omega^{(i)}(x)$, respectively. The probability that no released symbol can be found in this loop (which means the decoding process fails at this loop) is

$$P_{cor}^{(i)} = \big(1 - \Omega_1^{(i)}\big)^{n^{(i)}}. \qquad (3)$$

Thus the expected number of released symbols found in this loop is $1 - P_{cor}^{(i)}$. Assume a released symbol is found in this loop; its unique neighbor can be recovered, and we can calculate the expected degree of this recovered symbol. The expected degree of the recovered symbol in the $i$th loop is

$$\bar d^{(i)}_{rec} = \sum_{\hat d} \hat d\, \frac{k^{(i)} \Lambda^{(i)}_{\hat d}}{n^{(i)} \big(\Omega^{(i)}(x)\big)'\big|_{x=1}}. \qquad (4)$$

As the expected number of released symbols found equals the expected number of recovered symbols, for the $(i+1)$th loop the expected number of un-decoded input symbols is $k^{(i+1)} = k^{(i)} - 1 + P_{cor}^{(i)}$, and the degree distribution of these input symbols can be computed as follows. At the beginning of the $(i+1)$th loop, the input degree distribution $\Lambda^{(i+1)}(x)$ is given by

$$\Lambda^{(i+1)}_{\hat d} = \frac{k^{(i)} \Lambda^{(i)}_{\hat d} - \big(1 - P_{cor}^{(i)}\big)\, k^{(i)} \hat d\, \Lambda^{(i)}_{\hat d} \big/ E^{(i)}_{1st,\hat d}}{k^{(i+1)}}. \qquad (5)$$

Then we analyze the second step of the $i$th loop. At the beginning of this step, the found released output symbol is moved out of the decoding process, and the remaining number of edges in the bipartite graph is $E^{(i)}_{2nd} = E^{(i)}_{1st} - 1 + P_{cor}^{(i)}$, in which the number of edges connected with output symbols of each degree $d$ can be computed by

$$E^{(i)}_{2nd,d} = \begin{cases} n_d^{(i)} - 1 + P_{cor}^{(i)}, & d = 1 \\ d\, n_d^{(i)}, & d \ge 2 \end{cases} \qquad (6)$$

where $n_d^{(i)} = n^{(i)} \Omega_d^{(i)}$ is the expected number of output symbols with degree $d$ at the beginning of the $i$th loop. Let $E^{(i)}_{2nd,d\leftrightarrow rip}$ represent the expected number of edges between the recovered symbol and the output symbols with degree $d$, which is given by

$$E^{(i)}_{2nd,d\leftrightarrow rip} = E^{(i)}_{2nd,rip}\, E^{(i)}_{2nd,d} \big/ E^{(i)}_{2nd}. \qquad (7)$$

At the beginning of the $(i+1)$th loop, the expected number of output symbols with degree $d$ is

$$n_d^{(i+1)} = \begin{cases} n_1^{(i)} - 1 + P_{cor}^{(i)} - E^{(i)}_{2nd,1\leftrightarrow rip} + E^{(i)}_{2nd,2\leftrightarrow rip}, & d = 1 \\ n_d^{(i)} - E^{(i)}_{2nd,d\leftrightarrow rip} + E^{(i)}_{2nd,d+1\leftrightarrow rip}, & 1 < d < D \\ n_D^{(i)} - E^{(i)}_{2nd,D\leftrightarrow rip}, & d = D \end{cases} \qquad (8)$$


5 Analysis of Simulation Results of Inductive Overvoltage on Transmission Lines

Figure 2 shows the overvoltage waveform on the line at horizontal distances x = 100 m and 500 m from the start of the transmission line, taking R = r = 2 km, a 2.5/50 μs lightning current waveform, and a line impedance Z1 = 500 Ω at the head end [10].
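For orientation, the double-exponential lightning current family referred to above (see also Refs. [2, 14]) can be sketched as follows; the amplitude and time constants are round illustrative assumptions, not the fitted 2.5/50 μs parameters used in the paper.

import numpy as np

# i(t) = I0 * (exp(-a*t) - exp(-b*t)); I0, a, b are assumed round numbers for plotting only.
I0, a, b = 30e3, 1.4e4, 6.0e6          # A, 1/s, 1/s (assumptions)
t = np.linspace(0, 100e-6, 2001)       # 0 to 100 us
i = I0 * (np.exp(-a * t) - np.exp(-b * t))
print(f"peak ~ {i.max()/1e3:.1f} kA at t ~ {t[i.argmax()]*1e6:.2f} us")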

Fig. 2. x = 100 m and 500 m line induced overvoltage

The solid line is the overvoltage at 100 m and the dotted line is the overvoltage at 500 m. It can be seen from the figure that the induced overvoltage at 500 m is significantly lower than that at 100 m [11], and it gradually decays to zero as the lightning wave action time ends. Next, take a fixed value x = 100 m and change the distance r between the observation point and the lightning current channel. The values of r in the simulation are 500 m and 1 km, and the obtained waveforms are shown in Fig. 3.


Fig. 3. r = 500 m and 1 km line induced overvoltage

The solid line is the overvoltage at r = 500 m and the dotted line is the overvoltage at r = 1 km. Compared with Fig. 2, where r = 2 km, the distances of 500 m and 1 km are relatively close to the lightning channel, so the space electric field is larger [12, 13]; therefore, the overvoltage generated on the line is also greater than the amplitude at r = 2 km.

6 Conclusion

In this paper, an analytical solution of the lightning-induced overvoltage on transmission lines in the time domain is derived through a series of mathematical operations. Based on the theory of the lightning electromagnetic field and a large number of calculations, the incident field voltage is first calculated, and then the scattered field voltage is calculated from the structure of the solution of the wave equation. The induced overvoltage on the transmission line is the sum of the incident field voltage and the scattered field voltage. Finally, appropriate parameters are selected [14] and the line overvoltage is simulated; the obtained curve accords with the actual situation, verifying the correctness of the calculation method.

References 1. Liu L, Cui X (2013) research on lightning induction overvoltage calculation and flashover probability of electrified railway contact network. J North China Electr Power Univ 40 (2):10–16 2. Zhang Y, Liu F, Wang Y et al (2013) Improved double-exponential function lightning current waveform and its calculation of radiated electromagnetic field. Trans China Electrotech Soc 28(2):131–139 3. Yu Z, Zeng W, Wang S et al (2013) Simulation analysis of lightning induction overvoltage of distribution lines. High Voltage Eng 39(2):415–422 4. Jing H, Wang S (2017) Research on spatial electromagnetic field around lightning strikeback channel based on time domain difference method. Electromagn Arrester (5):65–70


5. Zhang H (2006) High voltage technology. China Electric Power Press, Beijing, p 127 6. Rubinstein M, Uman MA (1991) Transient electric and magnetic fields associated with establishing a finite electrostatic dipole, revisited. IEEE Trans EMC 33(4):312–320 7. Cooray V (1994) Calculation lightning-induced overvoltage in power lines: A comparison of two coupling models. IEEE Trans Electromagn Compat 36(3):179–182 8. Zhou B, Wang Y, Li L (2005) Mathematical physics equation. Publishing House of Electronics Industry, Beijing, p 223 9. Zhou Y, Li K (2018) Comparison of lightning arrester and lightning protection of lightning induced overvoltage on overhead lines. Electromagn Lightning Arrester 5:86–90 10. China Electric Power Research Institute (2014) Design specifications for overvoltage protection and insulation coordination of AC electrical equipment: GB/T50064-2014. China Planning Press, Beijing 11. Coelho VL, Raizer A, Paulino JOS (2010) Analysis of the lightning performance of overhead distribution lines. IEEE Trans Power Delivery 25(3):1706–1712 12. Hoidalen HK (2003) Calculation of Lightning-induced volt-ages in models including lossy ground effects. In: International conference on power systems transients (IPST 2003), pp 1–6 13. Gulyas A, Szedenik N (2009) 3D simulation of the lightning path using a mixed physicalprobabilistic model—the open source lightning model. J Electrostat 67(2):518–523 14. Heidler F, Cvetic JM, Stanic BV (1999) Calculation of lightning current parameters. IEEE Trans Power Delivery 14(2):399–404

Deep Learning Based Exploring Channel Reciprocity Method in FDD Systems

Jie Wang1, Guan Gui1, Rong Wang2, Yue Yin1, Hao Huang1, and Yu Wang1

1 College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
[email protected]
2 College of Electronic and Optical Engineering and College of Microelectronics, Nanjing University of Posts and Telecommunications, Nanjing 210003, China

Abstract. The major bottlenecks of frequency division duplex (FDD) systems are the overheads of downlink channel state information (Downlink-CSI) estimation and feedback. To address these problems, this paper proposes a convolutional long short-term memory network (CLSTM-net) scheme that reconstructs the Downlink-CSI directly from the uplink channel state information (Uplink-CSI). More specifically, the CLSTM-net extracts features from the Uplink-CSI by exploiting the temporal and spatial correlations between the Uplink-CSI and the Downlink-CSI, and then maps the features to the reconstruction of the Downlink-CSI. Experimental results verify the superiority of the proposed CLSTM-net based method.

Keywords: Frequency division duplex system · Channel prediction · Downlink · Uplink · Deep learning

1 Introduction

Accurate channel state information is vital for various wireless communication technologies, e.g., beamforming, precoding, and resource allocation, to support the quality of service [1]. It is easy to obtain in time division duplex (TDD) systems by inferring the downlink channel state information (Downlink-CSI) from the uplink channel state information (Uplink-CSI) based on channel reciprocity. However, this reciprocity property does not exactly hold in frequency division duplex (FDD) systems because of the different frequency bands used for the uplink and downlink channels [2]. The conventional way of obtaining Downlink-CSI in FDD systems is for receivers to estimate the Downlink-CSI and send it back to the transmitters [3]. This process causes heavy computation overheads at the receivers [4]. Existing schemes for overcoming the above bottlenecks are roughly divided into two categories. One is limited CSI feedback schemes [5-8]. These schemes mainly utilize the CSI's sparsity structure to obtain an acceptably accurate measurement of the Downlink-CSI from limited feedback. However, there exist two essential limits in compressive sensing (CS) based methods [6-8]. Firstly, the sparsity structure of the

CSI does not exist exactly in any specific domain. Secondly, the high complexity of existed CS-based algorithms still cause certain computational overheads at receivers. With the rise of artificial intelligence (AI), the applications of deep neural networks to wireless communications have been widely studied [9]. For example, a deep neural network based CSI feedback scheme, called CsiNet, is proposed in [5]. In detail, the CsiNet extracts inherent characteristics of the CSI to feedback and maps the characteristics to CSI at transmitters. Simulation results in [5] demonstrate that the CsiNet significantly improved reconstruction accuracy and performed multifold times faster than CS-based algorithms. Though the outperformance of the CsiNet, the computational burden of Downlink-CSI estimation at receivers still exists and it is not suitable for the scenario of ultra-low latency requirements. Another category utilizes the correlations between Uplink-CSI and Downlink-CSI [10–12]. In detail, methods in [10, 12] utilize the slow changing of the channel’s longterm statistical characteristics including direction of arrival, channel spatial-temporal correlations and channel covariance matrix in different frequency bands. In addition, [11] extracts and learns parameter values of frequency independent physical path from Uplink-CSI. After that these information are mapped to CSI at any other frequency bands. In general, the aforementioned schemes verify that there exists certain correlations between downlink and uplink in the FDD systems. However, these schemes mentioned above either are extremely high complexity or relies on mathematical models that may mismatch the practical channel model because of the time-varying and complex wireless communication environments. Motivated by powerful deep learning technologies [13] and the correlations between the Uplink-CSI and Downlink-CSI in the FDD networks, this paper proposes a deep learning based scheme to make the FDD systems to be able to get the DownlinkCSI by utilizing the information of Uplink-CSI directly. In detail, the issue of inferring the Downlink-CSI by utilizing the knowledge of the Uplink-CSI is treated as an issue of forecasting a spatiotemporal image. Then a convolutional long short-term memory network (CLSTM-net) is proposed to learn frequency independent environment information from the Uplink-CSI and transfer it to the Downlink-CSI. Simulation results evaluate the effectiveness of CLSTM-net based scheme. The rest of the present studies is structured as follows. System model and the existence of the correlations existed in Uplink-CSI and the Downlink-CSI are first introduced in Sect. 2. Section 3 details the proposed scheme and Sect. 4 evaluates the effectiveness of CLSTM-net based scheme through experiments. Section 5 concludes the paper.

2 System Model and the Existence of Uplink-CSI to Downlink-CSI Mapping

2.1 System Model

This paper considers a base station with a single transmit antenna and a user with a single receive antenna in an FDD system. Traditionally, the Downlink-CSI is estimated at the receiver and then fed back to the transmitter to serve the signal transmission. This communication process causes extremely high complexity and feedback overheads at the receiver, as shown in Fig. 1a. This paper removes these overheads by inferring the Downlink-CSI from the information of the Uplink-CSI directly. The communication process of the introduced scheme is shown in Fig. 1b.


Fig. 1. a The conventional signal transmission scheme in current FDD systems. b The introduced new signal transmission in the FDD systems

Consider a scenario in which an orthogonal frequency division multiplexing (OFDM) model with Ns subcarriers is adopted in the FDD system and the channel is time-varying. In this scenario, the CSI over Nt time slots is a complex matrix of size Ns × Nt. The proposed scheme treats the Uplink-CSI of each time slot as an image of size N_ℓ × N_w × 2, where Ns = N_ℓ N_w; the first channel holds the real values of the Uplink-CSI matrix and the second channel holds the imaginary values. The proposed CLSTM-net learns the inherent structure of Nt time slots (Nt frames) of Uplink-CSI and transforms them to Nt time slots of Downlink-CSI without any mathematical model or prior assumptions. The proposed scheme is detailed in Sect. 3.
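The reshaping step described above can be sketched as follows; the concrete sizes (36 subcarriers, 7 slots, 6 × 6 frames) are assumptions chosen to match the dataset described later, and the random matrix stands in for measured Uplink-CSI.

import numpy as np

Ns, Nt, N_l, N_w = 36, 7, 6, 6
H_ul = (np.random.randn(Ns, Nt) + 1j * np.random.randn(Ns, Nt)) / np.sqrt(2)

# one input sample: Nt frames of size N_l x N_w x 2 (real/imag channels)
frames = np.stack(
    [np.stack([H_ul[:, t].real.reshape(N_l, N_w),
               H_ul[:, t].imag.reshape(N_l, N_w)], axis=-1) for t in range(Nt)]
)
print(frames.shape)   # (Nt, N_l, N_w, 2)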

2.2 The Existence of Uplink-CSI to Downlink-CSI Mapping

It is intuitionistic that wireless channels in wireless communications are functions of propagating environments including scatters, locations that are independent of frequency band. Particularly, paper [11] designs a channel-to-path transform to extract frequency-invariant physical paths from one frequency band and then maps the paths to channels on another frequency band. Experiment results in [14] demonstrate that CSI magnitudes in uplink and CSI magnitudes in downlink exhibit strong correlations. Reference [15] denote that there exits angular reciprocity among Uplink-CSI and Downlink-CSI in FDD model based systems, i.e., angles of propagation for uplink and downlink channel are invariant on different frequency bands with small separation. The above all evaluate the existence of Uplink-CSI to Downlink-CSI mapping although it is hard to be described in mathematical model. Hence, a neural network is a good choice to approximate such a function by learning features between Uplink-CSI and Downlink-CSI without any prior knowledge.


3 Architecture and Principles of the Proposed CLSTM-Net

Considering the time-varying characteristics and the spatial correlations of the channels, this paper formulates inferring the Downlink-CSI from the Uplink-CSI as a spatiotemporal image prediction problem and proposes a CLSTM-net based scheme. The CLSTM-net is made up of several CLSTM layers; the powerful ability of the CLSTM layer to capture spatiotemporal correlations has been proved [16]. The distinguishing feature of a CLSTM layer is that it uses convolution operators in the state-to-state and input-to-state transitions. By stacking multiple CLSTM layers, the CLSTM-net becomes skilled at prediction in a complex time-varying scenario such as the Downlink-CSI prediction problem studied here. The detailed structure of the proposed CLSTM-net is described in Fig. 2 and the hyper-parameter setup is detailed below. The CLSTM-net consists of two modules: the first is a feature extraction module with five CLSTM layers, and the second is a prediction module made up of one 3D convolutional layer (3D-CL). Each CLSTM layer uses kernels of size 3 × 3 and generates 32 feature maps. The 3D-CL uses kernels of size 3 × 3 × 3 and generates 2 feature maps, which is the same size as the initial Downlink-CSI images. In general, the feature extraction module extracts spatial and temporal features from the inputs without any prior assumptions, and the features are then sent to the prediction module to map out the final reconstruction of the Downlink-CSI.


Fig. 2. The structure of our proposed CLSTM-net
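A minimal Keras sketch of the architecture in Fig. 2, under the stated layer sizes and initializers, is given below; the frame dimensions are assumptions, and the training callbacks and data pipeline are omitted, so this is an illustration rather than the authors' code.

import tensorflow as tf
from tensorflow.keras import layers, models

Nt, N_l, N_w = 7, 6, 6                      # assumed frame dimensions

def build_clstm_net():
    inp = layers.Input(shape=(Nt, N_l, N_w, 2))
    x = inp
    for _ in range(5):                      # feature extraction module
        x = layers.ConvLSTM2D(32, (3, 3), padding="same", activation="tanh",
                              return_sequences=True,
                              kernel_initializer="orthogonal",
                              recurrent_initializer="glorot_uniform")(x)
        x = layers.BatchNormalization()(x)
    out = layers.Conv3D(2, (3, 3, 3), padding="same", activation="linear")(x)   # prediction module
    model = models.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")          # MSE loss, Eq. (1)
    return model

build_clstm_net().summary()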

The Uplink-CSI samples should be preprocessed before entering the proposed CLSTM-net. To capture correlations between subcarriers, the channel vector of one time slot, of size Ns × 1, is reshaped into an image of size N_ℓ × N_w × 2, as mentioned in Sect. 2.1. To learn the temporal correlations in time-varying channels, Nt time-slot channel vectors (i.e., Nt frames of images) are treated as one input sample.


As shown in Fig. 2, the inputs of the CLSTM-net are Nt-frame images, where one image is the representation of one time-slot channel vector mentioned above, and the outputs are also Nt-frame images representing the Nt time slots of estimated Downlink-CSI vectors. In Fig. 2, the values Nt × N_ℓ × N_w × K represent the frames, length, width, and channels of one feature map, respectively. Every layer in the CLSTM-net uses zero padding, so the feature maps keep the same spatial size as the inputs. In the feature extraction module, each CLSTM layer uses "tanh" as the activation function and is followed by a batch normalization layer. The outputs of the feature extraction module are sent to the final 3D-CL with a linear activation function and processed to recover the Downlink-CSI. The CLSTM-net is trained end to end, and all parameters are updated by the Adam optimizer during training. The loss function of the CLSTM-net is the mean square error (MSE), defined as

$$L = \frac{1}{N} \sum_{i=1}^{N} \big\| \hat{H}_{DL} - H_{DL} \big\|_2^2 \qquad (1)$$

where $\|\cdot\|_2$ is the Euclidean norm, $N$ denotes the number of training samples in each batch, and $H_{DL}$ and $\hat{H}_{DL}$ are the true and predicted Downlink-CSI, respectively.

4 Experiments and Discussions

In this paper, an Extended Vehicular A (EVA) model based dataset is adopted, which includes 40,000 independent samples, of which 35,000 are for training and 5,000 for testing [17]. Each sample has size 72 × 14 (72 subcarriers over 14 time slots). In detail, the first 36 rows with the first 7 columns are defined as the uplink channel, and the second 36 rows with the second 7 columns are defined as the downlink channel. To train the CLSTM-net, the recurrent kernel parameters are initialized by the "glorot_uniform" method and the convolutional kernel parameters are initialized by the "orthogonal" method. In addition, the batch size is set to 35 and the number of epochs to 300. A dynamic learning rate is used by monitoring the validation loss: the learning rate is initialized to 0.001 and is decreased to one tenth of its value when the validation loss does not decrease for 40 epochs. The CLSTM-net is implemented in Python, and all training and testing are conducted on an NVIDIA GTX 1080 Ti. We compare the CLSTM-net with the CNN of [17] under two metrics. The first is the normalized mean squared error (NMSE),

$$\mathrm{NMSE} = E\!\left[ \frac{\| h_{DL} - \hat h_{DL} \|_2^2}{\| h_{DL} \|_2^2} \right] \qquad (2)$$

where $h_{DL}$ denotes the practical Downlink-CSI and $\hat h_{DL}$ is the reconstruction of the Downlink-CSI. The other is the correlation coefficient, defined as

$$s = E\!\left[ \frac{\big| h_{DL}^{H}\, \hat h_{DL} \big|}{\| h_{DL} \|_2 \, \| \hat h_{DL} \|_2} \right] \qquad (3)$$

If $\hat h_{DL}/\|\hat h_{DL}\|_2$ is used as a beamforming vector, the correlation coefficient can evaluate the performance of the beamforming vector. The corresponding experimental results are given in Table 1, with the best results in each column belonging to the CLSTM-net. Specifically, when computing the NMSE and s, the prediction results of the CLSTM-net and the convolutional neural network are reshaped back to the initial structure of the Downlink-CSI with size 36 × 7. The NMSE results in Table 1 reveal that the CLSTM-net significantly outperforms the CNN, with approximately 10 dB gain. The correlation coefficients s of both schemes exceed 0.9, which demonstrates the effectiveness of using deep learning to exploit channel reciprocity in FDD systems. Compared with the CNN, the CLSTM-net achieves a higher value of s, which indicates that the CLSTM-net can obtain higher beamforming gains.

Table 1. NMSE (dB) and s of prediction under different methods

Methods    | NMSE (dB) | s
CNN        | -16.6     | 0.9995
CLSTM-net  | -25.7     | 0.9999
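The two metrics can be computed as in the following sketch, assuming batches of complex CSI matrices of shape (batch, 36, 7); this mirrors Eqs. (2)-(3) but is not the authors' evaluation script.

import numpy as np

def nmse_db(h, h_hat):
    """NMSE in dB over a batch, Eq. (2)."""
    num = np.sum(np.abs(h - h_hat) ** 2, axis=(1, 2))
    den = np.sum(np.abs(h) ** 2, axis=(1, 2))
    return 10 * np.log10(np.mean(num / den))

def corr_coef(h, h_hat):
    """Mean correlation coefficient over a batch, Eq. (3)."""
    h, h_hat = h.reshape(len(h), -1), h_hat.reshape(len(h_hat), -1)
    num = np.abs(np.sum(np.conj(h) * h_hat, axis=1))
    den = np.linalg.norm(h, axis=1) * np.linalg.norm(h_hat, axis=1)
    return np.mean(num / den)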


Fig. 3. Original Downlink-CSI


Fig. 4. Comparative results between predicted Downlink-CSI and original Downlink-CSI. Upper: Comparison of the CNN; Under: Comparison of the CLSTM-net

The CLSTM-net outperforms the CNN because of its stronger ability to extract the temporal correlations in adjacent time slots. To explore this further, the outputs of the existing CNN and the proposed CLSTM-net were visualized as mesh plots, see Figs. 3 and 4. In detail, the X-axis represents the time slots (i.e., columns) of one CSI matrix, the Y-axis represents the subcarriers (i.e., rows), and the Z-axis represents the amplitude of one CSI sample. One sample of the actual Downlink-CSI is plotted in Fig. 3, and the comparison between the original Downlink-CSI and its prediction is plotted in Fig. 4. Figure 4 shows that the predicted results of the existing CNN have significant deviation at the edge subcarriers, which can be explained by the operating principle of convolutional layers. In contrast, the proposed CLSTM-net scheme has low NMSE at every subcarrier, because it learns the additional temporal correlations between the Uplink-CSI and the Downlink-CSI.

5 Conclusions

This paper introduced a new CLSTM-net based scheme that lets FDD systems obtain the Downlink-CSI from the information of the Uplink-CSI, without the additional overheads caused by channel estimation and feedback. Experimental results reveal that the proposed CLSTM-net outperforms the CNN because of its stronger ability to extract the temporal correlations in adjacent time slots.

References 1. Tse D, Viswanath P (2005) Fundamentals of wireless communication. Cambridge Universiery Press 2. Khalilsarai MB, Haghighatshoar S, Yi X, Caire G (2019) FDD massive MIMO via UL/DL channel covariance extrapolation and active channel sparsification. IEEE Trans Wirel Commun 18(1):121–135 3. Shen W, Dai L, Shim B, Mumtaz S, Wang Z (2015) Joint CSIT acquisition based on lowrank matrix completion for FDD massive MIMO systems. IEEE Commun Lett 19(12):2178– 2181 4. Shen W, Dai L, Gui G, Wang Z, Heath RW, Adachi F (2017) AoD-adaptive subspace codebook for channel feedback in FDD massive MIMO systems. In: IEEE international conference on communications, pp 1–5 5. Wen CK, Shih WT, Jin S (2018) Deep learning for massive MIMO CSI feedback. IEEE Wirel Commun Lett 7(5):748–751 6. Rao X, Lau VKN (2014) Distributed compressive CSIT estimation and feedback for FDD multi-user massive MIMO systems. IEEE Trans Signal Process 62(12):3261–3271 7. Gao Z, Dai L, Dai W, Wang Z (2016) Structured compressive sensing based spatio-temporal joint channel estimation for FDD massive MIMO. IEEE Trans Commun 64(2):601–617 8. Gao X, Dai L, Han S, Chih-Lin I, Wang X (2017) Reliable beamspace channel estimation for millimeter-wave massive MIMO systems with lens antenna array. IEEE Trans Commun 16 (9):6010–6021 9. Wang T, Wen C, Wang H, Gao F, Jiang T, Jin S (2017) Deep learning for wireless physical layer: opportunities and challenges. China Commun 14(11):92–111 10. Miretti L, Cavalcante RLG, Stanczak S (2018) FDD massive MIMO channel spatial covariance conversion using projection methods. In: IEEE international conference on acoustics, speech and signal processing, pp 3609–3613 11. Vasisht D, Kumar S, Rahul H, Katabi D (2016) Eliminating channel feedback in nextgeneration cellular networks. In: Special interest group on data communication, pp 398–411 12. Zhang X, Zhong L, Sabharwal A (2018) Directional training for FDD massive MIMO. IEEE Trans Wirel Commun 17(8):5183–5197 13. Lecun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436–444 14. Liu Z, Zhang L, Ding Z (2019) Exploiting bi-directional channel reciprocity in deep learning for low rate massive MIMO CSI feedback. IEEE Wirel Commun Lett 8(3):889–892


15. Ali A, González-Prelcic N, Heath RW (2018) Millimeter wave beam-selection using out-ofband spatial information. IEEE Trans Wirel Commun 17(2):1038–1052 16. Shi X, Chen Z, Wang H, Yeung D-Y (2015) Convolutional LSTM network : a machine learning approach for precipitation nowcasting. In: Advances in neural information processing systems, pp 802–810 17. Safari MS, Pourahmadi V (2018) Deep UL2DL: channel knowledge transfer from uplink to downlink. arXiv: 1812.07518v1, pp 1–24

Steering Machine Learning Mechanism Based on Big Data Integrated Cooperative Collision Avoidance for MASS

Chengzhuo Han1, Tingting Yang2, Siwen Wei1, Hailong Feng1, Jiupeng Wang3, and Genglin Zhang4

1 Navigation College, Dalian Maritime University, Dalian, China
{hzc dmu,redshield,wawtop}@163.com
2 School of Electrical Engineering & Intelligentization, Dongguan University of Technology, Dongguan, China
[email protected]
3 Luoyang Electronic Equipment Test Center, Luoyang, China
[email protected]
4 Dalian Chinacreative Technology Co. Ltd, Dalian, China
[email protected]

Abstract. Current collision avoidance is implemented based on unilateral static information rather than the bilateral movement information of the two vessels. In this paper, we utilize a Machine Learning (ML) Mechanism Integrated Vessel Network (MLMIVN) for cooperative collision avoidance between two vessels, especially for Maritime Automatic Surface Ships (MASS). The device onboard has big data analysis and edge computing capability, and the vessel network is based on Device-to-Device (D2D) communication. The safety and economy of the collision avoidance route can be improved by training on historical navigation data. First, we put forward the concept of cooperative collision avoidance that considers the motion state of each vessel, and a two-step-turn cooperative collision avoidance method is adopted. Then an improved genetic algorithm combined with the K-Means algorithm is used to train on the big data.

Keywords: Genetic algorithm · Cooperative preventing collisions · Two-step-turn method · Group evolution

1 Introduction

Over the years, rich marine data have been accumulated through satellite remote sensing observation, navigation observation, and the collection of various underwater equipment. The continuous production of observation and model data makes ocean data grow rapidly, which also raises the requirements on data processing. Traditional analysis of sea data relies on manual classification and identification, traditional statistical analysis, and ocean model simulation, which are often influenced by subjective factors and cannot accurately describe big data. Moreover, a large part of ocean data is unstructured or semi-structured, and the relationships between data items are complex or absent, which challenges traditional statistical analysis and marine model simulation. Machine learning, which is data driven, extracts useful information from big data and mines the possible relationships hidden in it; it can improve the efficiency and accuracy of big data processing and brings new opportunities for the intelligent analysis and mining of ocean big data [1]. Facing the increasing number of vessels and the demand for maritime communication, collision avoidance based on single-vessel information can no longer meet the demand; a cooperative mode based on multi-vessel information can better satisfy current maritime communication needs. The rules for vessel collision avoidance are given by the Convention on the International Regulations for Preventing Collisions at Sea (COLREGS) [2-4] of the International Maritime Organization (IMO) [5]. Since COLREGS were made for vessels operated by a crew, they cannot be fully applied to an automatic collision avoidance system. In this paper, we go beyond this bottleneck of COLREGS and study the cooperative collision avoidance algorithm between two or more vessels in the context of the internet of vessels. By considering the parameters of the two vessels simultaneously, the scheme can obtain a more economical route than the traditional method. At present, maritime communication technology is not yet mature and vessel networking is not implemented, so cooperative collision avoidance is a very open issue. We therefore utilize the Machine Learning (ML) mechanism integrated vessel network for cooperative collision avoidance between the two vessels. First, the vessel network is built [6] to obtain the required dynamic knowledge, e.g., the headings of the two vessels. The communication devices onboard have data mining (DM) and edge computing capability and exchange information between vessels based on Device-to-Device (D2D) technology. The remainder of this paper is organized as follows. The system model is given in Sect. 2 and the problem formulation in Sect. 3. The adopted algorithms are proposed in Sect. 4. Simulation results are given in Sect. 5, and the paper is concluded with future work in Sect. 6.

2 System Model

2.1 Vessel Network

We consider that the radar onboard monitors the surroundings all the time during sailing. When two vessels are about to collide, the devices onboard set up a network and initiate the collision avoidance mechanism.


Fig. 1. The framework of vessel network

D2D communication technology is utilized to transfer data between terminals directly instead of via the ship-shore network [7,8], and cognitive radio (CR) technology is used to connect the two vessels so that the spectrum at sea can be fully utilized. The motion information of the new path is transmitted back to the starting position. Figure 1 shows the framework of this dedicated collision avoidance vessel network.

2.2 Synergetic Avoidance Mechanism

The task of path planning is to find a set of feasible paths that meet the requirement of the two vessels collision avoidance in emergency. The relative motion of the two vessels is also affected by the marine environment, and the path planning of the dynamic environment is more complicated than that of the static environment. The system uses various sensors to obtain environmental information in order to improve operational accuracy. Simultaneously, the digital high resolution DEM, GPS, ARPA and other data information are integrated. During the task initialization period, the vessels’ relative position information was obtained from the vessel network.

3 Problem Formulation

For cooperative collision avoidance, an economical and safe route should be designed. Therefore, when calculating the fitness of an individual in the population, the following factors should be considered: safety, economy, and stability:

$$Value(p^*) = \min\big[f_1(p), f_2(p), f_3(p)\big] \qquad (1)$$

Here the target functions $f_1(p)$, $f_2(p)$, $f_3(p)$ represent the economy, safety, and smoothness of the path, and $Value(p^*)$ represents the feasibility of the route, taken as the minimum of them. A toy evaluation of these objectives is sketched after this list.

- Length of path: $f_1(p) = s_1 + s_2 + s_3$, where $s_1$ is the distance sailed before the first turn, $s_2$ is the distance from the first turn to the second turn, and $s_3$ is the remaining distance.
- Security: $f_2(p) = \max_{i=1}^{m} \{danger(t_i)\}$ with $m = \max_{i=1}^{n} \{times_i\}$, where $times_i$ denotes the time required for vessel $i$ to navigate the planned route.
- Path smoothness: $f_3(p) = \max_{i=1,2} \{turn(\alpha_i)\}$; we limit the two steering angles to minimize the loss of the vessel when turning.
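A toy evaluation of Eq. (1) and the three objectives is given below; the route fields and the danger helper are illustrative assumptions, not the paper's implementation.

def f1_length(p):                          # economy: total sailed distance
    return p["s1"] + p["s2"] + p["s3"]

def f2_security(p, danger):                # safety: worst danger over the voyage time
    return max(danger(t) for t in range(int(p["time"])))

def f3_smoothness(p):                      # stability: larger of the two turn angles
    return max(abs(p["alpha1"]), abs(p["alpha2"]))

def value(p, danger):
    return min(f1_length(p), f2_security(p, danger), f3_smoothness(p))   # Eq. (1)

route = {"s1": 2.0, "s2": 1.5, "s3": 3.0, "time": 10, "alpha1": 0.5, "alpha2": 0.3}
print(value(route, danger=lambda t: 0.1 * t))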

4 Problem Solutions

This paper uses the K-Means analysis method and a genetic algorithm (GA) [9] to solve the problem. First, a prior K is set to represent K clusters, and the sample points are aggregated into K classes by the K-Means method. The type of a vessel is matched according to its movement data, and the steering angle of that type of vessel is taken as input data for the next step. Then a GA is used to calculate the steering distances of the vessels. The steering distance problem is transformed into a multi-objective combinatorial optimization problem in which the steering distance and steering angle are optimized, and a candidate distance is considered a possible solution of the problem. To realize the evolutionary process of survival of the fittest, the task is coded into an initial population, and the fittest individuals are selected according to their adaptability to the environment [10]. To approximate the optimal solution, the genetic operations solve the evolutionary problem through an optimized search [11].


Fig. 2. The two-step-turn cooperative preventing collisions scheme

4.1 Population Initialization

In this paper, we adopt a two-step-turn cooperative collision avoidance method. At the second turn of vessel A, the sailing distance is $s_{a2}$, the velocity is adjusted to $v_{a2}$, and the direction points toward the destination; vessel B is in the same situation. The two-step-turn cooperative collision avoidance scheme is shown in Fig. 2. Each chromosome in the initial population is randomly generated, and a chromosome represents a set of collision avoidance paths. The structure is $\{(s_{a1}, v_{a1}, \alpha_{a1}), (s_{a2}, v_{a2}), (s_{b1}, v_{b1}, \alpha_{b1}), (s_{b2}, v_{b2})\}$. Here, $(s_{a1}, v_{a1}, \alpha_{a1})$ stands for the sailing distance, the sailing speed, and the first steering direction of vessel A; $(s_{a2}, v_{a2})$ stands for the sailing distance and speed of vessel A after the second turn; $(s_{b1}, v_{b1}, \alpha_{b1})$ stands for the sailing distance, speed, and first steering direction of vessel B; and $(s_{b2}, v_{b2})$ stands for the sailing distance and speed of vessel B after the second turn.
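Random initialization of such chromosomes can be sketched as follows; the value ranges are assumptions for illustration only.

import random

def random_chromosome(s_max=5.0, v_max=20.0, turn_max=60.0):
    """One chromosome {(s_a1, v_a1, alpha_a1), (s_a2, v_a2), (s_b1, v_b1, alpha_b1), (s_b2, v_b2)}."""
    dist = lambda: random.uniform(0.0, s_max)
    speed = lambda: random.uniform(1.0, v_max)
    angle = lambda: random.uniform(-turn_max, turn_max)
    return {
        "A1": (dist(), speed(), angle()),
        "A2": (dist(), speed()),
        "B1": (dist(), speed(), angle()),
        "B2": (dist(), speed()),
    }

population = [random_chromosome() for _ in range(50)]   # initial population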

4.2 Calculation of Collision Risk

As important parameters of collision risk for cooperative collision avoidance, the distance to closest point of approach (DCPA) [12] and the time to closest point of approach (TCPA) [13] of the two vessels are calculated accurately to assess the collision risk. When the number of vessels is $n \ge 1$, $u_{DCPA_i}$, $u_{TCPA_i}$, $u_{D_i}$, $u_{k_i}$ are the parameters of each vessel. The risk is expressed as

$$f(u_{DCPA_i}, u_{TCPA_i}, u_{D_i}, u_{k_i}) = a_{DCPA}\, u_{DCPA_i} + a_{TCPA}\, u_{TCPA_i} + a_{D}\, u_{D_i} + a_{k}\, u_{k_i} \qquad (2)$$

The velocity magnitude is

$$v = \sqrt{v_x^2 + v_y^2} \qquad (3)$$

The distance between the two vessels is

$$D = \sqrt{(x_t - x_0)^2 + (y_t - y_0)^2} \qquad (4)$$

$$u_{DCPA_i} = \begin{cases} 1, & DCPA_i \le d_1 \\ \dfrac{1}{2} - \dfrac{1}{2}\sin\!\left[\dfrac{\pi}{d_2 - d_1}\left(DCPA_i - \dfrac{d_1 + d_2}{2}\right)\right], & d_1 < DCPA_i \le d_2 \\ 0, & d_2 < DCPA_i \end{cases} \qquad (5)$$

$$u_{TCPA_i} = \begin{cases} 1, & |TCPA_i| \le t_1 \\ \left(\dfrac{t_2 - |TCPA_i|}{t_2 - t_1}\right)^2, & t_1 < |TCPA_i| \le t_2 \\ 0, & t_2 < |TCPA_i| \end{cases} \qquad (6)$$
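The following sketch computes DCPA/TCPA from relative motion (standard formulas assumed here, since the excerpt gives only the membership functions) and then applies the memberships of Eqs. (5)-(6); the thresholds are illustrative.

import math

def dcpa_tcpa(dx, dy, vx, vy):
    """DCPA/TCPA from relative position (dx, dy) and relative velocity (vx, vy)."""
    v2 = vx * vx + vy * vy
    tcpa = -(dx * vx + dy * vy) / v2 if v2 > 0 else 0.0
    dcpa = math.hypot(dx + vx * tcpa, dy + vy * tcpa)
    return dcpa, tcpa

def u_dcpa(dcpa, d1, d2):                                   # Eq. (5)
    if dcpa <= d1:
        return 1.0
    if dcpa <= d2:
        return 0.5 - 0.5 * math.sin(math.pi / (d2 - d1) * (dcpa - (d1 + d2) / 2))
    return 0.0

def u_tcpa(tcpa, t1, t2):                                   # Eq. (6)
    a = abs(tcpa)
    if a <= t1:
        return 1.0
    if a <= t2:
        return ((t2 - a) / (t2 - t1)) ** 2
    return 0.0

print(dcpa_tcpa(1000.0, 500.0, -5.0, -2.0))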

5 Simulation Results

5.1 The Efficiency of Improved Genetic Algorithms

In this section, we compare the genetic algorithm and the improved genetic algorithm, and show the advantages of the improved genetic algorithm in cooperative collision avoidance. The experimental data are randomly generated. First, the number of parameter propositional atoms is atom_num, the number of constraint clauses is sen_num, and a propositional atom appears with probability β. The GA and the improved GA (IMP GA) were used on 110 random instances. Here, uf n-m represents the problem space, where n is the number of vessels and m is the number of routes. The experimental results are shown in Table 1.

5.2 Simulation Collision

When there is collision risk, the path is planned through the vessel network using the cooperative collision avoidance algorithm. The original three-dimensional path is shown in Fig. 3 and the new three-dimensional route is shown in Fig. 4. From the simulation results, we find that the proposed collision avoidance algorithm successfully improves navigation safety using the information collected from the vessel network.

Table 1. Fitness and sorting

Examples of collections | Successes-GA | Successes-IMP GA
Uf248 249               | 59           | 79
Uf250 250               | 60           | 65
Uf248 251               | 59           | 81
Uf245 246               | 65           | 71
Uf251 249               | 62           | 81
Uf245 252               | 62           | 74


Fig. 3. The original route


Fig. 4. The new route

6 Conclusion

In this paper, we realized a collision avoidance vessel network that automatically collects information and has big data mining and edge computing capability. By analyzing this information, a solution for cooperative collision avoidance based on the ML scheme is proposed. A path calculation method in which the K-Means algorithm is combined with the improved GA is proposed to handle bilateral dynamic information. From the simulation results, we find that this method can significantly improve the economy and safety of the collision avoidance path.


Acknowledgements. This work was supported in part by Natural Science Foundation of China under Grant 61771086, Dalian Outstanding Young Science, Technology Talents Foundation, Natural Science Foundation of Liaoning Province under Grant 201602083 and key technologies of multi-layer heterogeneous network and resource optimization in sea area under Grant 83117938.

References 1. Cheng N, Lyu F, Chen J, Xu W, Zhou H, Zhang S, Shen XS (2018) Big data driven vehicular networks. IEEE Network 99:1–8 2. Johansen TA, Perez T (2016) Unmanned aerial surveillance system for hazard collision avoidance in autonomous shipping. In: Proceedings of the international conference on unmanned aircraft systems, pp 1056–1065 3. Jincan H, Maoyan F (2015) Based on ECDIS and AIS ship collision avoidance warning system research. In: Proceedings of the international conference on intelligent computation technology and automation, pp 242–245 4. Johansen TA, Perez T, Cristofaro A (2016) Ship collision avoidance and COLREGS compliance using simulation-based control behavior selection with predictive hazard assessment. IEEE Trans Intell Transp Syst 17(12):3407–3422 5. COLREGs-Convention on the International Regulations for Preventing Collisions at Sea. IEEE Transactions on Intelligent Transportation Systems, London 6. Li X, Shu W, Li M, Huang HY, Wu MY (2009) Performance evaluation of vehiclebased mobile sensor networks for traffic monitoring. IEEE Trans Veh Technol 58(4):1647–1653 7. Deyu Z, Ruyin S, Ju R, Yaoxue Z (January 2018) Delay-optimal proactive service framework for block-stream as a service. IEEE Wireless Commun Lett (Early Access ) pp 1–1 8. Zhang D, Chen Z, Awad MK, Zhang N, Zhou H, Shen XS (2016) Utility-optimal resource management and allocation algorithm for energy harvesting cognitive radio sensor networks. IEEE J Sel Areas Commun 34(12):3552–3565 9. Folino G, Pizzuti C, Spezzano G (2001) Parallel hybrid method for sat that couples genetic algorithms and local search. IEEE Trans Evol Comput 5(4):323–334 10. Abdul-Rahman O, Munetomo M, Akama K (2011) An improved binary-real coded genetic algorithm for real parameter optimization. In: Proceedings of the third World Congress on nature and biologically inspired computing, Salamanca, pp 149–156 11. Bajrami X, Dermaku A, Demaku N, Maloku S, Adem K, Kokaj A (2016) Genetic and fuzzy logic algorithms for robot path finding. In: Proceedings of the mediterranean conference on embedded computing, pp 195–199 12. Suarez B, Theunissen E (2015) Systematic specification of conflict geometries for comparison and evaluation of human-in-the-loop traffic avoidance functions. In: Proceedings of the IEEE/AIAA 34th digital avionics systems conference, pp 5A2– 1–5A2–13 13. Dunthorne J, Chen WH, Dunnett S (2014) Estimation of time to point of closest approach for collision avoidance and separation systems. In: Proceedings of the UKACC international conference on control, pp 646–651

A Weighted Fusion Method for UAV Hyperspectral Image Splicing

Yulei Wang1,2, Yao Shi1(&), Qingyu Zhu1, Di Wu1, Chunyan Yu1, Meiping Song1,2, and Anliang Liu1

1 Information and Technology College, Dalian Maritime University, Dalian 116026, China
[email protected]
2 State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, China

Abstract. In order to obtain a hyperspectral image with a wide field of view and high spatial resolution, image splicing has been widely studied. This paper proposes a weighted fusion algorithm to address the color inhomogeneity in the overlapping regions of two spliced images, which is caused by differences in light intensity or camera angle during scanning. Different from traditional direct average fusion methods, the proposed algorithm fuses the overlapping regions with adaptively changing weights. Simulation results show that the proposed weighted average fusion algorithm can effectively eliminate the seam between the two parts of the assembled image, making the transition of the fusion area more natural [1].

Keywords: UAV hyperspectral image · Image fusion · Image splicing · Uniform color

1 Introduction

Along with the development of UAV imaging technology, UAV images have been widely used in many fields such as military exploration, environmental detection, and disaster rescue, owing to their low cost, flexibility, and convenience. Benefiting from the development of imaging-spectrometer remote sensing technology, hyperspectral UAV sensors have been widely applied in recent years [2, 3]. However, due to the limited imaging angle and the need for high spatial resolution, a single UAV hyperspectral image often covers too small a region for further use such as detection, classification, and monitoring. As a result, splicing techniques for hyperspectral images are urgently needed to obtain a larger scene. However, because of factors such as the heterogeneity of optical lens imaging, atmospheric conditions, and illumination conditions, the color, brightness, and contrast of different regions of the same image will be inconsistent, which affects the interpretation of the image and other subsequent applications. Therefore, image homogenization, especially for the overlapping regions of spliced images, has drawn great interest as a way to obtain UAV hyperspectral images with consistent brightness, consistent color, and rich information [4].


Image homogenization, whose purpose is to eliminate uneven color in an image, yields an image with uniform color and balanced brightness [5]. In this paper, we use a weighted fusion algorithm instead of the traditional average fusion algorithm, which produces a gradual change within the overlapping region and makes the visual effect more uniform. Simulation results show that the proposed algorithm effectively removes the splicing trace in the image fusion process, making the transition of the fusion area more natural.

2 Image Fusion

2.1 Average Fusion

Image fusion refers to the use of computer graphics, digital image processing, and related technologies to extract useful information from multiple image data and synthesize a high-quality panoramic image, thereby improving the utilization of image information and the spatial and spectral resolution of the original images, which is convenient for visual observation. Image fusion is widely used in medical imaging, satellite remote sensing, weather forecasting, and military observation. In this section, a direct average fusion algorithm is introduced, which is commonly used to make the color of two spliced images uniform. The direct average fusion method is very simple and easy to implement: the pixel gray values in the overlapping region of the two images to be spliced are added and averaged, and the average is taken as the final gray value of the overlapping region. The equation is as follows:

$$f(x,y)=\begin{cases} f_1(x,y), & (x,y)\in f_1 \\[2pt] \dfrac{f_1(x,y)+f_2(x,y)}{2}, & (x,y)\in (f_1\cap f_2) \\[2pt] f_2(x,y), & (x,y)\in f_2 \end{cases} \qquad (1)$$

In formula (1), $f_1(x,y)$ and $f_2(x,y)$ denote the two images to be merged, and $f(x,y)$ denotes the merged image. It can be seen from formula (1) that the algorithm only performs a simple mean operation on the grayscale values of the overlapping-region pixels, without changing the other regions.

2.2 Weighted Average Fusion

Different from the direct average fusion method, the weighted average fusion method no longer simply applies a mean operation to the overlapping-region pixels, but sums two terms formed from adaptive weighting functions of the two images to be spliced, as shown below:

$$f(x,y)=\begin{cases} f_1(x,y), & (x,y)\in f_1 \\ w_1 f_1(x,y) + w_2 f_2(x,y), & (x,y)\in (f_1\cap f_2) \\ f_2(x,y), & (x,y)\in f_2 \end{cases} \qquad (2)$$


In formula (2), $w_1$ and $w_2$ represent the weights of the two images within the overlapping region, satisfying $w_1 + w_2 = 1$ and $0 < w_1, w_2 < 1$. Their values are related to the width of the overlapping area. Assuming that the width of the overlapping area is $d$, the position where the overlapping area begins is $e$, and the current pixel position is $j$, then

$$w_1 = [\,d - (j - e)\,]/d \qquad (3)$$

$$w_2 = 1 - w_1 \qquad (4)$$

As $w_1$ varies between 0 and 1, $w_2$ varies correspondingly between 1 and 0, so a smooth transition between $f_1(x,y)$ and $f_2(x,y)$ is achieved in the overlapping region. By selecting appropriate weights, the splicing transition becomes smooth and the splicing seam is effectively eliminated. In particular, when $w_1 = w_2 = 0.5$, the weighted average fusion method reduces to the direct average fusion method of Sect. 2.1.
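To make the weighting scheme concrete, the following is a minimal NumPy sketch of the blending in Eqs. (2)–(4) for a single band, assuming the two images are already registered so that the last `overlap` columns of the left image coincide with the first `overlap` columns of the right image; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def weighted_fuse(left, right, overlap):
    """Blend two registered single-band images whose last `overlap` columns
    of `left` coincide with the first `overlap` columns of `right`,
    using the linear ramp weights of Eqs. (3)-(4)."""
    h, w_left = left.shape
    j = np.arange(overlap)                      # pixel position inside the overlap
    w1 = (overlap - j) / overlap                # Eq. (3): ramps from 1 down toward 0
    w2 = 1.0 - w1                               # Eq. (4)
    blended = (w1[None, :] * left[:, w_left - overlap:]
               + w2[None, :] * right[:, :overlap])
    # non-overlapping parts are copied unchanged, as in Eq. (2)
    return np.hstack([left[:, :w_left - overlap], blended, right[:, overlap:]])
```

Setting both weights to 0.5 for every column recovers the direct average fusion of Sect. 2.1.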

3 Experimental Results and Analysis

The drone used in this article is a Dajiang drone, model Nano-Hyperspec, with a wavelength range of 400–1000 nm, 640 spatial channels, 270 spectral channels, and a spectral resolution of 6 nm with a 20 µm slit. The image acquisition has a flying height of 27.17 and a viewing angle of 94°. The two UAV hyperspectral images we obtained were 640 × 7525 pixels and 640 × 8285 pixels, respectively. The original images are shown in Fig. 1, where Fig. 1a is the left image and Fig. 1b the right image.

Fig. 1. Original two figures for splicing: (a) left, (b) right

Three algorithms are implemented with OpenCV: direct image splicing using the SIFT algorithm (denoted SIFT), the SIFT splicing algorithm with direct average fusion (denoted DAF-SIFT), and the SIFT splicing algorithm with weighted average fusion (denoted WAF-SIFT). The directly spliced image is shown in Fig. 2a; there is an obvious color difference between the left and right parts. The average fusion image is shown in Fig. 2b; although the fusion effect is slightly improved, a very obvious stitching seam remains. The weighted average fusion image is shown in Fig. 2c; after fusion, the color uniformity is improved and the stitching seam is effectively eliminated.
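The registration step shared by the three algorithms can be sketched with OpenCV as follows; the band chosen for matching, the ratio-test threshold, and the RANSAC reprojection threshold are illustrative choices, not the paper's exact settings.

```python
import cv2
import numpy as np

def sift_homography(left_gray, right_gray):
    """Estimate a homography mapping the right image into the left image's
    frame from SIFT keypoint matches (the SIFT splicing baseline)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(left_gray, None)
    kp2, des2 = sift.detectAndCompute(right_gray, None)
    matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# After warping the right image with cv2.warpPerspective, the overlap can be
# blended with the direct average of Sect. 2.1 (DAF-SIFT) or the weighted
# fusion of Sect. 2.2 (WAF-SIFT).
```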

Fig. 2. Image splicing results with three algorithms: (a) SIFT, (b) DAF-SIFT, (c) WAF-SIFT


4 Conclusion

The uniform processing of remote sensing images has broad application prospects and can effectively make images with different color and brightness uniform in color and brightness. In view of the shortcomings of the direct average fusion algorithm, an improved weighted fusion algorithm is proposed and tested on UAV images. The results show that the method gives the spliced image a uniform color, achieves the desired visual effect, and improves the practicability of the UAV stitched image. In addition, the test results also provide a reference for the splicing and fusion of other remote sensing images.

Acknowledgements. This work is supported by the National Nature Science Foundation of China (61801075, 61601077), the Fundamental Research Funds for the Central Universities (3132019218, 3132016331) and the Open Research Fund of the State Key Laboratory of Integrated Services Networks.

References
1. Xu C, Nie S (2016) Image splicing algorithm based on SURF and improved gradient method. Digital Technol Appl (12):133+136
2. Liu Y (2018) Research and implementation of UAV image mosaic technology. Anhui University, Anhui
3. Wang X, Dai H, Yu T, Xie D, Wu Y (2013) Research on UAV image mosaic and color uniform based on multi-resolution fusion. Mapping Bull (06):27–30
4. Sun W, Sun Y, Fu X, Song M (2014) Remote sensing image homogenization algorithm based on nonlinear MASK. Mapp Sci 39(09):130–134
5. Xue P (2017) The research of uniform color method based on remote sensing images. Xidian University, Xi'an

Hyperspectral Target Detection Based on Spectral Weighting

Di Wu1(&), Yulei Wang1,2, Yao Shi1, Qingyu Zhu1, and Anliang Liu1

1 Information and Technology College, Dalian Maritime University, Dalian 116026, China
[email protected]
2 State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, China

Abstract. Target detection has become an important research direction in hyperspectral imagery (HSI) processing. In this paper, motivated by the fact that different bands have different abilities to distinguish materials, a spectral weighting detection algorithm is proposed. Firstly, the relative distance between categories is used as a spectral separability criterion to estimate the discrimination ability of each band. Then, different bands are given different weighting coefficients. Finally, the RX and LPD algorithms are used to test the efficiency of the proposed spectral weighting method. The experimental results show that the detection algorithms based on spectral weighting perform better than the traditional RX and LPD algorithms.

Keywords: Hyperspectral imagery · Target detection · Spectral separability criterion · Spectral weighting

1 Introduction

Recently, the fast development of hyperspectral remote sensing imaging techniques has led to an increased interest in exploiting spectral imagery for classification and target detection. Hyperspectral imaging sensors can provide hundreds of narrow (normally about 10 nm wide) contiguous bands that typically span the visible, near-infrared, and mid-infrared portions of the spectrum (0.4–2.5 µm). This enables the construction of an essentially continuous radiance spectrum for every pixel in the scene. Thus, hyperspectral imagery can detect materials that cannot be detected in traditional remote sensing imagery [1]. Hyperspectral target detection algorithms can be developed using statistical, physical, or heuristic approaches [2]. The most famous detection algorithm is the RX detector [3]. Another approach was developed by Harsanyi [4] and is referred to as Low Probability Detection (LPD). The LPD algorithm uses the unity vector as its matched signal, while RX uses the processed pixel as the matched signal.



2 Spectral Weighting Method

2.1 Statistic Characteristics Analysis of Hyperspectral Data

Different bands have different abilities to distinguish materials. To analyze this characteristic of hyperspectral data, the AVIRIS data in Fig. 1 is used. This hyperspectral image is part of the Indiana remote sensing testing area taken in June 1992. We choose two kinds of materials, Corn-no till and Wood, and calculate the statistical properties of the two categories, for example the mean and variance of each band [5].

Fig. 1. Hyperspectral data from AVIRIS sensor and statistic characteristics analysis, with (a) true-color image, (b) real mark image, (c) band mean of Corn-no till and Wood, and (d) band variance of Corn-no till and Wood


Figure 1c, d shows the statistic characteristics of this hyperspectral data, in which the x axis represents the band index and the y axis the corresponding statistic (mean or variance). From Fig. 1c, d we can see that different bands provide different separability between the two categories. In bands 3–55, 64–73, 110–140 and 160–180 the data has stronger separability, while in the other bands the separability is weaker; in particular, in bands 75–100 the statistic characteristics are almost the same. That is to say, some bands have a stronger discrimination ability than others.

Fig. 2. The 9th band of hyperspectral data and corresponding distribution of targets: (a) 9th band of the data, (b) distribution of targets

2.2 Estimation of Weighting Coefficients Based on Spectral Separability Criterion

According to the Fisher principle, for classification it is better that the within-class scatter is small and the between-class scatter is large. This can be measured by the relative distance between categories. Assume that we have two categories X and Y, representing the target and the background, respectively. For band p, let the within-class distance be $S_w$ and the between-class distance be $S_b$; then

$$S_w = \frac{1}{2N_X}\sum_{i=1}^{N_X} d_p(x_i,\bar{x}) + \frac{1}{2N_Y}\sum_{j=1}^{N_Y} d_p(y_j,\bar{y}) \qquad (1)$$

$$S_b = \frac{1}{2N_X N_Y}\sum_{i=1}^{N_X}\sum_{j=1}^{N_Y} d_p(x_i, y_j) \qquad (2)$$

in which $x_i$ and $y_j$ denote the samples of X and Y, $\bar{x}$ and $\bar{y}$ the mean values of X and Y, and $N_X$ and $N_Y$ the sample counts of X and Y, respectively. The goal is to maximize $S_b$ and minimize $S_w$.


The relative distance between categories, J, can then be formed as

$$J(\cdot) = \frac{S_b}{S_w} \qquad (3)$$

The larger the value of J, the higher the separability between the two categories, so a larger weighting coefficient should be assigned to the corresponding band.
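The separability criterion of Eqs. (1)–(3) can be sketched per band as follows; the paper does not specify the distance $d_p$, so the absolute difference is used here purely as an illustrative choice.

```python
import numpy as np

def band_separability(target_band, background_band):
    """Relative distance J = S_b / S_w of Eqs. (1)-(3) for one band, given
    1-D arrays of target samples and background samples; the absolute
    difference stands in for the unspecified distance d_p."""
    x = np.asarray(target_band, float)
    y = np.asarray(background_band, float)
    nx, ny = len(x), len(y)
    s_w = (np.abs(x - x.mean()).sum() / (2 * nx)
           + np.abs(y - y.mean()).sum() / (2 * ny))              # Eq. (1)
    s_b = np.abs(x[:, None] - y[None, :]).sum() / (2 * nx * ny)  # Eq. (2)
    return s_b / (s_w + 1e-12)                                   # Eq. (3)
```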

2.3 Spectral Weighting Target Detection Algorithms

Different bands have different discrimination abilities, so if we detect targets in the original data space we cannot make good use of the spectral separability. As a result, we propose a target detection method based on spectral weighting (a code sketch of the weighting-plus-detection step is given after the procedure below):

• Convert the original hyperspectral data from 3-D form to 2-D form.
• Compute the weighting-coefficient evaluation function J of each band in the original data space according to (3).
• According to the value of J, assign a suitable weighting coefficient to each band.
• Apply target detection algorithms (the RX detector, LPD, or other detection algorithms) to the new data.
• Choose a suitable threshold and output the detection results.
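The following is a minimal sketch of the weighting and detection steps using the global RX statistic; the paper does not state the exact mapping from J to the weighting coefficients, so a simple normalization of J is used here as a placeholder.

```python
import numpy as np

def rx_scores(data):
    """Global RX anomaly score r(x) = (x - m)^T C^{-1} (x - m) for each row
    of a (pixels, bands) data matrix."""
    m = data.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(data, rowvar=False))
    d = data - m
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

def spectrally_weighted_rx(data, J):
    """SWRX sketch: scale each band by a weight derived from its separability
    J before running RX (w = J / sum(J) is an illustrative choice)."""
    w = J / J.sum()
    return rx_scores(data * w[None, :])
```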

3 Experimental Analysis

In this section, the RX detector and the LPD algorithm are used to demonstrate the efficiency of the spectral weighting algorithm. A real hyperspectral image collected by AVIRIS is used for the simulation experiments. Figure 2a is the 9th band of the original hyperspectral data, and Fig. 2b is the actual distribution of the 38 targets. Figure 3 shows the RX detection applied to the original hyperspectral imagery and the RX detection obtained after spectral weighting, denoted RX and SWRX (spectral weighting based RX), respectively. The 3-D plots and final segmentation of the RX detector are shown in Fig. 3a, b, and those of the SWRX detector in Fig. 3c, d. It is easy to see that the detection result obtained by the proposed SWRX detector greatly outperforms the conventional RX detector. The LPD algorithm is then applied in the same way; the results are shown in Fig. 4a–d. We reach the same conclusion as in the RX experiment: the detection result obtained by the proposed SWLPD greatly outperforms the conventional LPD algorithm.


From the results comparison in Figs. 3 and 4, the performance of the spectral-weighting-based target detection algorithms is better than that of detection carried out directly in the original data space. Because RX cannot efficiently process the massive amount of original hyperspectral data, the RX-based detection algorithms perform worse than the LPD-based ones.

Fig. 3. Detection results using conventional RX and SWRX: (a) 3-D plot of RX, (b) threshold segmentation of RX, (c) 3-D plot of SWRX, (d) threshold segmentation of SWRX


Fig. 4. Final results of LPD and SWLPD: (a) 3-D plot of LPD, (b) threshold segmentation of LPD, (c) 3-D plot of SWLPD, (d) threshold segmentation of SWLPD

4 Conclusion

In this paper, a spectral weighting method is proposed for the preprocessing of hyperspectral data. Since different bands have different separability, the proposed algorithm can improve the separability of the data. Through a spectral weighting coefficient evaluation function, a suitable weighting coefficient is assigned to each band before detection. The proposed algorithm is tested with the RX detector and the LPD algorithm. The experimental results and the ROC curves show that the spectral weighting algorithm is effective.

Acknowledgements. This work is supported by the National Nature Science Foundation of China (61801075), the Fundamental Research Funds for the Central Universities (3132019218).


References
1. Tong Q, Zhang B, Zheng L (2006) Hyperspectral remote sensing. High Education Press, pp 1–2, 129–135
2. Manolakis D, Shaw G (2002) Detection algorithms for hyperspectral imaging applications. IEEE Sig Process Mag 19:29–43
3. Reed IS, Yu X (1990) Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution. IEEE Trans Acoust Speech Signal Process 38(10):1760–1770
4. Harsanyi JC (1993) Detection and classification of subpixel spectral signatures in hyperspectral image sequences. Department of Electrical Engineering, University of Maryland Baltimore County, Baltimore
5. Gao H, Wan J, Xu Z, Qian L (2011) Semisupervised classification of hyperspectral image based on spectrally weighted TSVM. Sig Process 27(1)

A Framework for Analysis of Non-functional Properties of AADL Model Based on PNML

Cangzhou Yuan1, Hangyu He1, Panpan Zhan2(&), and Tao Chen1

1 School of Software, Beihang University, 100191 Beijing, China
{yuancz,hehangyux5,arnochen}@buaa.edu.cn
2 Beijing Institute of Spacecraft System Engineering, 100094 Beijing, China
[email protected]

Abstract. To analyze the various non-functional properties of the AADL (Architecture Analysis and Design Language) model, many model transformation processes transform different AADL elements into different Petri nets. Unifying these transformation processes into a single process can greatly help architects analyze multiple properties simultaneously. The difficulty is that the specific elements of specific Petri nets lead to different transformation rules in different transformation processes. Some studies transformed the AADL model into the Petri Net Markup Language (PNML), the interchange format for different kinds of Petri nets, to realize this unification, but only supported the transformation of part of the AADL architecture model elements. This paper proposes a framework for the analysis of non-functional properties of the AADL model, improving the unification work by supporting the transformation of more AADL elements to PNML. We establish transformation rules mapping elements of the AADL error model and behavior model to PNML. In addition, we transform AADL properties into tool-specific information in PNML in order to generate specific Petri nets.

Keywords: AADL · PNML · Non-functional property analysis

1 Introduction

Real-time embedded systems in the aerospace industry have high requirements for non-functional properties such as dependability, safety, and schedulability. The sooner the non-functional properties are verified and assessed, the sooner the errors and deficiencies in the system will be discovered, and the less rework will be needed in the later stages. AADL [1] can model the system hardware and software architecture, system fault conditions, system runtime behavior, and system non-functional properties at the design stage; by analyzing the AADL model, the system's non-functional properties can be assessed. However, AADL is a semi-formal model: a large amount of system description is implicit in the property descriptions, so it is difficult to quantify the non-functional properties of the system, and they cannot be formally verified directly. Therefore, model transformation is widely used in the non-functional property analysis process, by transforming the AADL model into a formal analysis model that can be quantitatively analyzed or formally verified.



Petri nets are widely used in the field of non-functional property analysis, such as reliability analysis, safety analysis, and timing property analysis. Many studies transform AADL design models into variants of Petri net models to assess or verify different non-functional properties. Rugina et al. [2] annotated dependability-related information, such as faults, failure modes, and error propagation, into AADL architecture models using the AADL Error Model Annex (EMV2) [3]. They also provided a set of transformation rules that automatically map AADL into a Generalized Stochastic Petri Net (GSPN) model for dependability analysis; the dependability is assessed by analyzing the GSPN model, and the model transformation tool ADAPT [4] was implemented according to these rules. Renault et al. [5] transformed the AADL model into a Time Petri Net (TPN) and analyzed the TPN with the TINA tool to verify timing properties such as schedulability. Liu et al. [6] modeled the Integrated Modular Avionics (IMA) system using the AADL core and the AADL ARINC653 Annex [3], then transformed the AADL model into a Colored Petri Net (CPN) model and used existing CPN analysis tools such as Wolfgang to verify the system's dynamic behavior. The application of various Petri net models has resulted in a variety of model transformation processes and corresponding tools, which makes them difficult to use. Because different types of Petri net models extend the original Petri net in different directions, each model transformation process needs to support the specific elements of a particular Petri net type. Moreover, these tools may have been created at different times, with different usage conditions and purposes, and may run on different platforms, making it difficult to integrate them seamlessly. If users want to analyze multiple non-functional properties of a system, they may need to use different model transformation tools on different platforms to transform AADL models containing different AADL elements, and in this process the AADL model may need to be modified to meet the usage conditions of the different tools.


The Petri Net Markup Language (PNML) [7] is an XML-based interchange format for different kinds of Petri nets and is currently standardized as ISO/IEC 15909. The transformation process between proprietary Petri nets and PNML is shown in Fig. 1. Based on the concepts of PNML, Reza et al. [8] transform the AADL model to PNML to unify the different transformation processes between AADL models and Petri net models. However, there are two inadequacies: (1) only the mapping of AADL architecture model elements to PNML elements is implemented; the transformation of AADL Error Annex elements and Behavior Annex elements, which are commonly used in dependability analysis, behavior verification, etc., is not implemented. (2) The PNML model obtained with the proposed method does not include information for specific Petri net types and specific Petri net analysis tools, so different types of Petri nets cannot be generated.

Fig. 1. PNML and proprietary petri nets transformation [9] (proprietary Petri net documents are created from Petri net meta-models, saved into PNML documents, loaded again, and validated against the PNML grammar)

2 Framework Overview

An overview of our framework, consisting of three main steps, is presented in Fig. 2.

1. Create the AADL design model. AADL models the system architecture in units of components. The embedded software is modeled by software components (thread, process, data, subprogram, etc.), and the embedded system platform is modeled by hardware components (processor, bus, memory, etc.). The interaction between software components is modeled by connections (port, access, parameter, etc.), and the interaction between software components and hardware components is modeled by bindings. It is also possible to model the operating modes of the embedded system with AADL modes and to model system information flow or control flow with AADL flows. The characteristics of each element are described by AADL properties.


In order to fully describe the non-functional properties of the system, AADL provides an annex extension mechanism. The AADL Error Model Annex can describe the fault condition of the system to establish an error model. The AADL Behavior Annex [10] can describe the behavior of the system runtime to establish a behavior model. These two models can more adequately model non-functional properties such as dependability, schedulability, and safety, and obtain more accurate verification and assessment results.

Fig. 2. Framework overview (error model, architecture model, and behavior model → unified model transformation → PNML core model plus PNML tool-specific information → generation of different Petri nets, e.g. TPN, CPN, and SPN, for analysis tools such as TINA, Wolfgang, and GreatSPN)

2. Transform the AADL model to PNML. PNML is an XML-based syntax for high-level Petri nets such as CPN, which is being designed as a standard interchange format for Petri net tools, allowing interoperability and exchangeability among different types of Petri nets. At present, the concept of PNML consists of two parts: one part is the PNML core model, which defines the elements included in all kinds of Petri net models, such as place, transition, arc, etc.; the other part defines the specific elements for specific Petri net tools. Compared with previous work, we support the transformation of the AADL error model and the AADL behavior model. We transform the entity elements of the AADL design model into elements of the PNML core model, and transform some of the properties of these entities into PNML tool-specific information. We discuss the specific transformation rules in Sect. 3.


3. Generate various types of Petri net models through PNML. Some of the Petri net models (such as CPN and TPN) shown in Fig. 2 are not included in the current PNML standard, but the analysis tools for the different Petri net types implement a PNML format for their specific Petri net; for example, TINA specifies the PNML format of the TPN that it can analyze. In the second step we transform some of the properties of the AADL model into PNML tool-specific information, so as to support the generation of specific types of Petri nets with different PNML formats.

3 Unified Model Transformation

The key to the transformation of the AADL model to PNML is to establish mapping rules from AADL elements to PNML elements. Transformation rules mapping elements of the AADL architecture model to PNML elements were proposed in [8], but they are not complete enough, and no mapping rules are provided for elements of the AADL error model or the AADL behavior model. This paper first supplements the mapping rules for some elements of the AADL architecture model, and then proposes transformation rules from the error model and the behavior model to PNML. Also, to realize the generation of different kinds of Petri nets from PNML, we select some important properties of entities in the AADL model and establish transformation rules mapping these properties to tool-specific information in PNML. The AADL architecture model is the core of the AADL design model and is the basis for error-propagation modeling in the error model and thread-runtime modeling in the behavior model. Table 1 shows the mapping rules for architecture model elements.

Table 1. Transformation rules of AADL architecture model elements
AADL architecture model element -> PNML element
Feature: port (event port/data port/event data port), access (data access/bus access/subprogram access), parameter -> Place
Component: subprogram/thread/process -> Transition
Component: processor/memory/device/bus -> Place
Connection: port connection, access connection, parameter connection -> Place-to-place transition via arcs
Modes: mode -> Place
Modes: mode transition -> Place-to-place transition via arcs
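As an illustration of how a row of Table 1 could be emitted, the following Python sketch builds a tiny PNML fragment for a hypothetical AADL thread "sender" with one out port: the port becomes a place, the thread becomes a transition, and the port connection becomes an arc. The element names follow the PNML core model; the namespace and net-type URIs and all identifiers are illustrative assumptions, not output of the tools discussed here.

```python
import xml.etree.ElementTree as ET

PNML_NS = "http://www.pnml.org/version-2009/grammar/pnml"    # assumed core-model namespace
NET_TYPE = "http://www.pnml.org/version-2009/grammar/ptnet"  # assumed place/transition net type

pnml = ET.Element("pnml", {"xmlns": PNML_NS})
net = ET.SubElement(pnml, "net", {"id": "aadl_net", "type": NET_TYPE})
page = ET.SubElement(net, "page", {"id": "page0"})

# Table 1: a feature (port) maps to a place, a thread maps to a transition,
# and the connection between them becomes an arc.
place = ET.SubElement(page, "place", {"id": "sender_outport"})
ET.SubElement(ET.SubElement(place, "name"), "text").text = "sender.outport"
ET.SubElement(page, "transition", {"id": "sender_thread"})
ET.SubElement(page, "arc", {"id": "a1", "source": "sender_thread", "target": "sender_outport"})

print(ET.tostring(pnml, encoding="unicode"))
```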


The AADL error model supports architecture error modeling at three levels of abstraction: 1. Error propagation: focus on the propagation of errors between components. 2. Component error behavior: focus on error modeling in a component, the effect of incoming error propagations, and conditions for outgoing propagations. Use an error behavior state machine to establish error model of a component. 3. Composite error behavior: Associate the error states of a component with the error states of its subcomponents.

Table 2. Transformation rules of AADL error model elements
Component error behavior: error state -> Place
Component error behavior: event -> Transition
Component error behavior: detection (error_detection) -> Transition
Component error behavior: AADL transition Src_State[Event]-Dest_State -> Place-to-place transition via arcs
Component error behavior: propagations (Out_Propagation_condition) -> Place-to-place transition via arcs
Error propagation between components: in propagation point -> Place
Error propagation between components: out propagation point -> Place
Error propagation between components: error propagation path -> Place-to-place transition via arcs
Composite error behavior: subcomponent error state -> Place
Composite error behavior: composite error state -> Place
Composite error behavior: composite error behavior -> Place-to-place transition via arcs

We give the mapping rules from AADL error model elements to PNML elements for the above three aspects, as shown in Table 2. The purpose of the AADL Behavior Annex is to make explicit the behavioral specification that the AADL core leaves implicit. The behavior model contains a state transition system that describes the internal behavior of a component with guards and actions, and it contains more runtime states, including both normal and error states, than the error behavior state machine of the error model. The specific mapping rules are shown in Table 3.

Table 3. Transformation rules of AADL behavior model elements
AADL behavior model element -> PNML element
Behavior state -> Place
Guard (dispatch_condition/execute_condition/external_condition/internal_condition) -> Transition
Action -> Transition
Transition -> Place-to-place transition via arcs

Different Petri net models extend the original Petri net model in different directions. Both TPN and SPN add a delay-time attribute to transitions: the TPN uses a fixed time interval to represent the delay, while the SPN uses a random time variable. Therefore, TPN is usually used to represent the AADL architecture model and behavior model, because these two models mostly describe the execution of threads in the system, and thread execution often has a fixed time interval in real-time embedded systems; SPN is usually used to represent the AADL error model, because the time delay of an error event in the error model follows some random distribution. CPN extends the original Petri net model by supporting multiple token types, and is mostly used to represent the different data types of data ports in the AADL architecture model. We selected some AADL elements from these common scenarios, transformed some of their properties into elements of the different Petri net types, and then established transformation rules for mapping these properties to tool-specific elements in PNML, as shown in Table 4.

Table 4. Transformation rules of AADL properties
AADL element -> AADL properties -> PNML tool-specific element
Periodic thread -> Compute_Execution_Time, Period -> Compute_Execution_Time._1; 2*Period − Compute_Execution_Time._1
Sporadic thread -> Compute_Execution_Time, Compute_Deadline -> Compute_Execution_Time._1; Compute_Deadline
Data port -> data identifier -> data identifier
Event in error model -> OccurrenceDistribution -> OccurrenceDistribution.DistributionFunction; OccurrenceDistribution.OccurrenceRate



4 Conclusion

This paper uses PNML as an intermediate medium to unify the transformation processes from the AADL model to various Petri net models, addressing the problem that different model transformation tools are difficult to integrate seamlessly because each must deal with the specific elements of a specific Petri net. Compared with existing work, we improve the transformation rules that map AADL architecture model elements to PNML and realize the transformation of the AADL error model and behavior model. In addition, we implement the transformation of important properties of AADL model elements to PNML, enabling the generation from PNML of specific Petri nets processed by different Petri net analysis tools. This framework, including the unified model transformation process, greatly facilitates the architect's analysis of the AADL model.

Acknowledgements. This paper is partly supported by the Preresearch of Civil Spacecraft Technology (No. B0204).


References
1. Architecture Analysis and Design Language (AADL) (2017). https://doi.org/10.4271/AS5506C
2. Rugina AE, Kanoun K, Kaâniche M (2007) A system dependability modeling framework using AADL and GSPNs. In: de Lemos R, Gacek C, Romanovsky A (eds) Architecting dependable systems IV, vol 4615. LNCS. Springer, Heidelberg, pp 14–38
3. SAE Architecture Analysis and Design Language (AADL) (2015) Annex volume 1: annex A: ARINC653 annex, annex C: code generation annex, annex E: error model annex. https://doi.org/10.4271/AS5506/1A
4. Rugina AE, Kanoun K, Kaâniche M (2008) The ADAPT tool: from AADL architectural models to stochastic petri nets through model transformation. In: 7th European dependable computing conference, pp 85–90
5. Renault X, Kordon F, Hugues J (2009) Adapting models to model checkers, a case study: analysing AADL using time or colored petri nets. In: 2009 IEEE/IFIP international symposium on rapid system prototyping, pp 26–33
6. Liu C, Gu T, Zhou Q, Wang S, Li Z (2016) Model transformation method from AADL2ECPN and its application in IMA. J Beijing Univ Aeronaut Astronaut
7. Billington J, Christensen S, van Hee K, Kindler E, Kummer O, Petrucci L, Post R, Stehno C, Weber M (2003) The petri net markup language: concepts, technology, and tools. In: van der Aalst WMP, Best E (eds) Applications and theory of petri nets 2003, vol 2679. LNCS. Springer, Heidelberg, pp 483–505
8. Reza H, Chatterjee A (2014) Mapping AADL to petri net tool-sets using PNML framework. J Softw Eng Appl 7:920–933
9. PNML Supporting Tool. http://www.pnml.org/tools.php
10. Architecture Analysis and Design Language (AADL) (2017) Annex D: behavior model annex. https://doi.org/10.4271/AS5506/3

A Golden Section Method for Univariate One-Dimensional Maximum Likelihood Parameter Estimation

Ruitao Liu(B) and Qiang Wang

National Engineering Lab for Mobile Network Technologies, Beijing University of Posts and Telecommunications, Beijing 100876, China
{ruitaoliu,wangq}@bupt.edu.cn

Abstract. Covariance estimation for dynamic system control models is applied both to estimator design and to controller performance monitoring. Many algorithms have been proposed to estimate the unknown noise covariance of dynamic systems, such as maximum likelihood estimation (MLE), Bayesian estimation, covariance matching, and correlation techniques. The MLE method, which maximizes the likelihood of the noise covariance matrix for a given observation sequence, has a large time overhead. This paper addresses this problem by proposing an MLE based on the golden section. Each iteration of the algorithm reduces the convergence interval to 0.618 times the previous one, so the length of the convergence interval decreases exponentially. Simulation results show that the proposed algorithm has a more stable and faster convergence than the gradient-based MLE in both linear and nonlinear examples.

Keywords: Maximum likelihood parameter estimation · MLE · Golden section · Gradient

1 Introduction

Noise covariance estimation plays an important role in dynamic system control. The accuracy of the covariance estimate directly affects the performance of the system, and it is important in many applications [1] such as Kalman filtering [2] and extended Kalman filtering [3]. Generally speaking, there has been a lot of work on state and measurement noise estimation. Methods for estimating the state and measurement noise covariance can be broadly classified into four categories: covariance matching [4], correlation techniques [5], Bayesian methods [6], and maximum likelihood methods [7]. The maximum likelihood method [7] can estimate the maximum likelihood of a parameter directly from the sequence. [8] derived first- and second-order conditions for the maximum likelihood estimator of the mean and covariance of a normal distribution. The advantage of the maximum likelihood approaches is that they can be applied even when measurements are available at irregular intervals.


Shumway [9] and Raghavan et al. [10] provide a framework to identify state space models, along with the state and measurement noise covariances, for linear systems using the EM algorithm in the presence of irregularly sampled data. The MLE problem is also solvable when Q → 0+, as mentioned in [7], but it encounters many difficulties if a gradient descent algorithm is used, for example the problem of step-size selection. The main work of this paper is to solve these problems: in the univariate one-dimensional case, a linear search method is used to make the MLE converge faster and more stably. The examples below also illustrate that the linear search does not converge more slowly than the gradient descent method, even for a nonlinear system. This paper is organized as follows. Section 2 introduces the measurement state model of the inertial sensor and the maximum likelihood estimation of the state space model parameters. Section 3 presents the limitations of the gradient-based MLE algorithm and the golden section MLE proposed in this paper. Finally, the gradient-based MLE algorithm of [7] is compared with the MLE algorithm based on one-dimensional search, and the convergence speed is analyzed in Sect. 4.

2 Maximum Likelihood Parameter Estimation

The state space model can be written as follows [7]:
$$x^{+} = Ax + w \qquad (1)$$
$$y = Cx + v \qquad (2)$$
$$\begin{pmatrix} w \\ v \end{pmatrix} \sim N\!\left(0,\; \begin{pmatrix} Q_w & 0 \\ 0 & R_v \end{pmatrix}\right) \qquad (3)$$
where $x, w \in \mathbb{R}^{n}$ and $y, v \in \mathbb{R}^{p}$; $w$ and $v$ are uncorrelated in time. The problem can be transformed into the maximum likelihood estimation of the covariance matrices $Q_w, R_v$ for the given sequence $y(0), \ldots, y(N-1)$ and system matrices $A, C$:
$$\max_{Q_w, R_v} \; \ln p_y\big(y(0) \ldots y(N-1) \mid Q_w, R_v\big) \quad \text{s.t. } Q_w, R_v \ge 0 \qquad (4)$$
For the sake of simplicity, we omit the simplification process here; for the specific derivation, refer to [7]. After that, we obtain the following distribution:
$$\mathbf{Y} = \begin{pmatrix} y(0) \\ y(1) \\ \vdots \\ y(N-1) \end{pmatrix} \sim N(0, \mathbf{P}), \qquad \mathbf{P} = \mathcal{O}\,\mathrm{diag}(Q_w, \ldots, Q_w)\,\mathcal{O}^{\top} + \mathrm{diag}(R_v, \ldots, R_v) \qquad (5)$$


The expression for P can be written as
$$\mathbf{P} = \sum_{i=1}^{N+K-1} \mathcal{O}_i Q_w \mathcal{O}_i^{\top} + \sum_{j=1}^{N} \mathcal{I}_j R_v \mathcal{I}_j^{\top} \qquad (6)$$
where $\mathcal{O}_i$ is the $i$th $pN \times n$ block column of $\mathcal{O}$ and $\mathcal{I}_i$ is the $i$th $pN \times p$ block column of $I_{Np}$. Finally, we write the maximum likelihood problem as
$$\min_{Q_w, R_v} \; \phi(Q_w, R_v) = \ln \det \mathbf{P} + \mathbf{Y}^{\top} \mathbf{P}^{-1} \mathbf{Y} \quad \text{s.t. } Q_w, R_v \ge 0 \qquad (7)$$
where P is the covariance matrix defined in Eq. (5). The relationship between the $\phi$ function and the likelihood function is
$$\phi(Q_w, R_v) = -2 \ln p_{\mathbf{Y}}(\mathbf{Y} \mid Q_w, R_v) \qquad (8)$$
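To make the objective of Eqs. (7)–(8) concrete, the following is a minimal sketch for the scalar case used in the later simulations, assuming C = 1, a known measurement variance R, and a zero initial state; the construction of P here is a direct scalar specialization, not the paper's block-matrix implementation.

```python
import numpy as np

def build_P(A, Q, R, N):
    """Covariance of Y = [y(0), ..., y(N-1)] for the scalar model
    x(k+1) = A*x(k) + w(k), y(k) = x(k) + v(k), with x(0) = 0,
    w ~ N(0, Q) and v ~ N(0, R)."""
    P = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            # contribution of the shared process noises w(0), ..., w(min(i, j) - 1)
            P[i, j] = Q * sum(A ** (i - 1 - k) * A ** (j - 1 - k)
                              for k in range(min(i, j)))
    return P + R * np.eye(N)

def phi(Q, Y, A, R):
    """phi(Q) = ln det P + Y' P^{-1} Y of Eq. (7), with R fixed (known)."""
    P = build_P(A, Q, R, len(Y))
    _, logdet = np.linalg.slogdet(P)
    return logdet + Y @ np.linalg.solve(P, Y)
```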

3 Gradient Algorithm and One-Dimensional Search Algorithm

As mentioned in the introduction, in the inertial sensor measurement scenario the measurement covariance matrix R can be obtained directly, so the above problem reduces to finding the parameter Q. In [7], the maximum likelihood estimate is obtained with a gradient descent method, but the gradient method does not behave well in this practical scenario. In this paper, a one-dimensional search algorithm is adopted instead, which guarantees that the convergence interval shrinks by a fixed ratio at each iteration and improves the stability and convergence speed of the algorithm.

3.1 MLE Based on Gradient Descent Method

From Eqs. (6) and (7), we can obtain dP and dφ [7]:
$$d\mathbf{P} = \sum_{i=1}^{N+K-1} \mathcal{O}_i (dQ_w) \mathcal{O}_i^{\top} + \sum_{i=1}^{N} \mathcal{I}_i (dR_v) \mathcal{I}_i^{\top} \qquad (9)$$
$$d\phi = \mathrm{tr}\Big((dQ_w) \sum_i \mathcal{O}_i^{\top} \mathbf{P}^{-1}(\mathbf{P} - \mathbf{Y}\mathbf{Y}^{\top})\mathbf{P}^{-1} \mathcal{O}_i\Big) + \mathrm{tr}\Big((dR_v) \sum_i \mathcal{I}_i^{\top} \mathbf{P}^{-1}(\mathbf{P} - \mathbf{Y}\mathbf{Y}^{\top})\mathbf{P}^{-1} \mathcal{I}_i\Big) \qquad (10)$$
Since the measurement covariance R of the inertial sensor can be obtained directly from the sensor parameters, Eq. (10) can be further simplified in this scenario to
$$d\phi = \mathrm{tr}\Big((dQ_w) \sum_i \mathcal{O}_i^{\top} \mathbf{P}^{-1}(\mathbf{P} - \mathbf{Y}\mathbf{Y}^{\top})\mathbf{P}^{-1} \mathcal{O}_i\Big) \qquad (11)$$


Although Eq. (11) yields the gradient of φ, any $Q_w > 0$ satisfying Eq. (12) is only a stationary point of the likelihood function. Obviously, this is not enough: we need to examine the concavity of φ on $\mathbb{R}^{+}$ and discuss the number of extreme points.
$$\sum_i \mathcal{O}_i^{\top} \mathbf{P}^{-1}(\mathbf{P} - \mathbf{Y}\mathbf{Y}^{\top})\mathbf{P}^{-1} \mathcal{O}_i = 0 \qquad (12)$$

From Eq. (7), the second-order differential of φ can be obtained as Eq. (13), from which the convexity of φ can be examined:
$$d^2\phi = \mathrm{tr}\big((d\mathbf{P})\mathbf{P}^{-1}(d\mathbf{P})\mathbf{P}^{-1}\big) - 2\,\mathrm{tr}\big((d\mathbf{P})\mathbf{P}^{-1}(d\mathbf{P})\mathbf{P}^{-1}(\mathbf{P} - \mathbf{Y}\mathbf{Y}^{\top})\mathbf{P}^{-1}\big) \qquad (13)$$

In fact, an interior minimum satisfies dφ = 0 and d²φ = 0, which means that a solution of Eq. (7) cannot be determined from the gradient alone; for a detailed analysis, refer to [7]. Especially when the minimizer is close to 0+, the defect of the gradient method becomes very obvious: the large difference between the gradient values on the two sides of the extreme point makes the convergence step size difficult to determine. If the step size is chosen mainly for the left side of the extreme point, convergence is very slow; if it is chosen mainly for the right side, the algorithm seriously overshoots the extreme point and may even fail to converge. We will see this in Sect. 4.

3.2 MLE Based on Linear Search

Since this scenario reduces to a one-dimensional parameter, a linear search method is well suited. This paper uses the golden section method, which guarantees that the length of the search interval is reduced to 0.618 times its previous length at every iteration. Let us discuss the application of the golden section method in this scenario. First, for a given interval $[a_0, b_0]$ containing the minimum point $x_{\min}$, two points $a_1, b_1 \in [a_0, b_0]$ are obtained by
$$a_1 = a_0 + 0.382\,(b_0 - a_0), \qquad b_1 = a_0 + 0.618\,(b_0 - a_0) \qquad (14)$$
If $\phi(a_1) < \phi(b_1)$, then $x_{\min} \in [a_0, b_1]$; otherwise $x_{\min} \in [a_1, b_0]$. In this way the interval containing the minimum point is updated; a detailed proof can be found in [11]. The shrinkage ratio is $\rho \approx 0.382$, so the interval length is compressed by the factor $1-\rho \approx 0.618$ at each iteration. After N iterations the length becomes $(1-\rho)^N \approx 0.618^N$ of the original [12], and the interval is steadily reduced according to this ratio.
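A minimal sketch of this interval update is given below for a generic scalar objective (for example the φ(Q) of Eq. (7)); the tolerance and iteration cap are illustrative choices.

```python
def golden_section_minimize(phi, a, b, tol=1e-9, max_iter=100):
    """Golden-section search for a minimizer of phi on [a, b], using the
    update of Eq. (14); each step reuses one interior point and shrinks
    the bracket to about 0.618 of its previous length."""
    a1 = a + 0.382 * (b - a)
    b1 = a + 0.618 * (b - a)
    f_a1, f_b1 = phi(a1), phi(b1)
    for _ in range(max_iter):
        if b - a < tol:
            break
        if f_a1 < f_b1:                 # minimizer lies in [a, b1]
            b, b1, f_b1 = b1, a1, f_a1  # old a1 becomes the new b1
            a1 = a + 0.382 * (b - a)
            f_a1 = phi(a1)
        else:                           # minimizer lies in [a1, b]
            a, a1, f_a1 = a1, b1, f_b1  # old b1 becomes the new a1
            b1 = a + 0.618 * (b - a)
            f_b1 = phi(b1)
    return 0.5 * (a + b)
```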


As with gradient descent, the minimum point to which the golden section method converges is not necessarily the global minimum, but the algorithm is guaranteed to converge to the minimum when the points inside the interval satisfy d²φ > 0. The gradient descent algorithm mentioned in the previous subsection fails when the minimizer approaches 0+; this does not affect the golden section method. On the contrary, this is an advantage of the method: it converges to the minimum point even when the extreme point lies at the boundary of the interval. The gradient method [12], by contrast, has an unstable convergence speed and no steadily shrinking search interval comparable to the golden-section-based algorithm proposed in this paper; moreover, the gradient-based algorithm converges very slowly near the minimum [12].

4 Simulation Results

In this section, simulation results are provided to demonstrate the performance of our proposed algorithm. We use two one-dimensional examples to compare the gradient-based MLE and the golden-section-based MLE.

4.1 Linear Dynamic System Model

Consider first an example of a linear dynamic system model. The state transition equation is
$$x^{+} = 0.0002\,x + w \qquad (15)$$
where Q is the covariance of the process noise, which the brute-force algorithm places in $Q \in (1.3 \times 10^{-5},\, 1.5 \times 10^{-5})$. The observation equation is
$$y = x + v, \qquad v \sim N\big(0, \sqrt{0.00416521}\big) \qquad (16)$$
From Eqs. (15) and (16) we get A = 0.0002 and C = 1. From Fig. 1 we can clearly see that the convergence rate of the MLE based on the golden section is significantly faster than that of the gradient-based MLE. Figure 2 shows the convergence of Q over the iterations: after 25 iterations the MLE based on the golden section has converged to the interval (0.000013, 0.000015) determined by the brute-force algorithm, whereas the gradient-descent-based algorithm has no iterate in this interval even after 80 iterations. Table 1 compares the Q-convergence of the gradient-descent-based MLE and the golden-section-based MLE after 20 and 50 iterations, respectively.


It can be seen that after 50 iterations the MLE based on the golden section has converged to the Q-value interval determined by the brute-force algorithm, whereas the MLE based on gradient descent has not. Moreover, the time cost of the two algorithms is similar for the same number of iterations.

Fig. 1. φ with iteration times

Fig. 2. Q with iteration times

A Golden Section Method for Univariate

2577

Table 1. Convergence speed comparison
(a) Linear simulation parameters, N = 20
Gradient: Q = 0.186788 × 10^-5, φ = −521.3001, 20 iterations, 203 ms
Golden section: Q = 11.6493 × 10^-5, φ = −522.030, 20 iterations, 237 ms
(b) Linear simulation parameters, N = 50
Gradient: Q = 0.279423 × 10^-5, φ = −524.218, 50 iterations, 673 ms
Golden section: Q = 1.41113 × 10^-5, φ = −529.299, 50 iterations, 401 ms
(c) Nonlinear simulation parameters, N = 20
Gradient: Q = 0.633519 × 10^-5, φ = −469.762, 20 iterations, 271 ms
Golden section: Q = 11.6493 × 10^-5, φ = −517.448, 20 iterations, 251 ms
(d) Nonlinear simulation parameters, N = 50
Gradient: Q = 0.8195422 × 10^-5, φ = −486.203, 50 iterations, 430 ms
Golden section: Q = 7.561484 × 10^-5, φ = −521.623, 50 iterations, 429 ms

4.2 Nonlinear Dynamic System Model

The state transition equation is
$$x^{+} = 3\sin(0.0002\pi x) + w, \qquad w \sim N\big(0, \sqrt{Q}\big) \qquad (17)$$
where Q is the covariance of the process noise, which the brute-force algorithm places in $Q \in (7 \times 10^{-5},\, 8 \times 10^{-5})$. The observation equation is
$$y = x + v, \qquad v \sim N\big(0, \sqrt{0.00416521}\big) \qquad (18)$$

This simulation example uses a nonlinear system. Since the model above applies only to linear systems, the first step is to linearize the state transition equation (as in the EKF). Here 100 sequence points are taken as y, and the period of the state transition equation is T = 1000 > 100. Considering that sin(x) is equivalent to x as x → 0, we obtain A = 0.0006π and C = 1. Figure 3 shows the convergence behavior of the golden-section-based MLE and the gradient-based MLE. It can be observed that at the first iteration the MLE based on gradient descent converges quickly, but afterwards its speed is clearly slower than that of the golden-section-based MLE, and after 80 iterations it still has not converged to the latter's level.


Fig. 3. φ with iteration times

Fig. 4. Q with iteration times

Figure 4 shows the convergence of Q over the iterations. It can be seen that after 25 iterations the MLE based on the golden section has converged to the interval (0.00007, 0.00008) determined by the brute-force algorithm, whereas the gradient-descent-based algorithm has no iterate in this interval even after 80 iterations. From Table 1 we can draw a conclusion similar to that of the previous subsection: for the same number of iterations there is no significant difference in the time overhead of the two algorithms.


The MLE based on the golden section achieves a smaller φ for the same number of iterations and enters the convergence interval within 50 iterations, whereas the gradient-descent-based algorithm does not converge to this interval. Finally, comparing Figs. 3 and 4 with Figs. 1 and 2, there are some differences between the φ and Q curves in the linear and nonlinear cases, but no significant difference in the convergence characteristics of the algorithm. From Table 1, there is no significant difference in the time overhead of the two algorithms for the same number of iterations in the two simulations. The golden-section-based MLE enters the convergence interval obtained by the brute-force algorithm after 25 iterations, while the gradient-based algorithm fails to enter this interval within 80 iterations.

5 Conclusions

In this paper, we focus on the univariate one-dimensional dynamic system noise covariance estimation problem. To obtain better estimation performance, the MLE method is adopted to maximize the likelihood function, and an MLE based on the golden section algorithm is proposed to deal with the formulated problem. Simulation results show that, compared with the MLE based on gradient descent, our proposed algorithm has a more stable and faster convergence speed in both linear and nonlinear systems.

Acknowledgements. This work was supported in part by the National Nature Science Foundation of China under Grants 61631005 and 61325006, and in part by the 111 Project of China B16006.

References
1. Bavdekar V, Deshpande A, Patwardhan S (2011) Identification of process and measurement noise covariance for state and parameter estimation using extended Kalman filter. J Process Control 21(4):585–601
2. Harvey AC (1991) Forecasting, structural time series models and the Kalman filter. Cambridge Books, Cambridge
3. Lin JS, Zhang Y (1994) Nonlinear structural identification using extended Kalman filter. Comput Struct 52(4):757–764
4. Myers K, Tapley B (1976) Adaptive sequential estimation with unknown noise statistics. IEEE Trans Autom Control 21(4):520–523
5. Odelson BJ, Lutz A, Rawlings JB (2006) The autocovariance least-squares method for estimating covariances: application to model-based control of chemical reactors. IEEE Trans Control Syst Technol 14(3):532–540
6. Hilborn CG, Lainiotis DG (1969) Optimal estimation in the presence of unknown parameters. In: Symposium on adaptive processes. IEEE, New York
7. Zagrobelny MA, Rawlings JB (2015) Identifying the uncertainty structure using maximum likelihood estimation. In: American control conference. IEEE, New York
8. Magnus JR (1978) Maximum likelihood estimation of the GLS model with unknown parameters in the disturbance covariance model. J Econ 7(3):281–312
9. Lund R (2009) Time series analysis and its applications: with R examples
10. Raghavan H, Tangirala AK, Gopaluni RB et al (2006) Identification of chemical processes with irregular output sampling. Control Eng Pract 14(5):467–480
11. Shumway RH (1982) An approach to time series smoothing and forecasting using the EM algorithm. J Time Ser Anal 3(4):253–264
12. Qing A (2010) An introduction to optimization

Network Service Analysis Based on Feature Selection Using Improved Linear Mixed Model

Chen Lu1, Dong Liang1(B), Dongxu Wang1, and Yilin Zhao2

1 Key Laboratory of Universal Wireless Communication, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
{2013213110,liangdong,adong1994}@bupt.edu.cn
2 Research Institute of Forestry, Chinese Academy of Forestry, Beijing 100091, China
[email protected]

Abstract. Artificial Intelligence (AI), which is designed to analyze huge amounts of data, has been introduced into wireless network analysis for assistance. Because the amount of data in this field is extremely massive, feature selection is a critical process. Compared with correlation-based feature selection techniques, causal-inference-based Linear Mixed Models (LMMs) can identify features with direct and fixed effects resulting from causal variables, whereas Correlation-based Feature Selection (CFS) does not give interpretable results and lacks justification. In this paper, an improved LMM is proposed for feature selection and used to analyze the performance of a wireless network. We introduce the L2 regularizer into the parameter estimation process of an LMM to regularize the standard model. Then we use the results of the network analysis to construct a user-perception prediction model and use the improved LMM algorithm to select features and perform prediction. The experimental results show that our proposed feature selection model outperforms other methods with respect to interpretability and prediction accuracy.

Keywords: Network service · User perception · Feature selection · Improved linear mixed model

1 Introduction

Researchers have started studying how to link network quality with the quality of experience (QoE) of users [1–3]. Concepts from data mining and artificial intelligence have entered this field as appropriate solutions [4–7]. However, the use of such popular algorithms often requires an important procedure, namely feature selection. As mentioned before, the heterogeneity of the network makes the network environment extremely complicated, which pushes the amount of data created in the network to a huge volume. Thus, feature selection becomes increasingly important for this topic.


Traditional approaches mainly utilize correlations between independent variables and labels: the features are selected via ranking and statistical techniques, the selected feature subset is evaluated for quality, and the approach does not involve a machine learning (ML) algorithm [8]. However, these correlation-based techniques may yield a confounded relationship between variables owing to the effect of a confounder. More specifically, these solutions only find superficial correlations in the data but do not identify the actual causal relationships [9]. For example, in our scenario, we could identify a strong correlation between the feature "user monthly average traffic" and QoE; however, this would probably be because the cell in that area is not operational, which implies that the user could not often use data traffic. The inoperability of the cell is the cause of the low QoE, rather than the feature "user monthly average traffic". Thus, for our objective, causal-inference-based feature selection is more suitable and interpretable than selection based on correlation. This manuscript is organized as follows. In Sect. 2, we summarize the related work on causal-inference-based feature selection and network service quality evaluation. In Sect. 3, we present our proposed feature selection method based on improved causal inference. In Sect. 4, we describe heterogeneous network parameter data obtained from a Chinese operator. In Sect. 5, the experiment demonstrating the performance of our proposed method compared with other feature selection approaches is presented. In Sect. 6, we present our results with a brief discussion of how causal-inference-based feature selection is superior to and more interpretable than other approaches and how this feature selection influences the prediction of user perception. Finally, we conclude this paper in Sect. 7 by discussing future research directions.

2 Background and Related Work

2.1 Causal Inference

Causal inference has been applied in many research and industrial fields such as e-commerce, network optimization, philosophy, statistics, and healthcare [10]. The advantage of causal inference is that it aims not only to predict failures but also to identify the reasons for such failures while simultaneously adjusting for confounders. This is critical for evaluating QoE and the related network quality: predicting QoE is vital, but we also need to identify the causal variables that influence QoE and adjust them. The authors of [11] proposed a general approach based on Shannon entropy to accurately infer causal relationships between pairs of discrete variables. Additionally, the authors of [12] introduced a novel causal inference based decision tree, named the causal decision tree (CDT). Their experiments showed that the CDT can identify meaningful causal relationships and that the CDT algorithm is scalable.

2.2 Linear Mixed Model

LMMs [13] combine random effects and fixed effects. Fixed effects are caused by causal variables, as mentioned above, whereas random effects originate from confounders, which influence both the target and the independent variables. The concept of the LMM is depicted in Fig. 1. An LMM can be written in the following form:

y = Xβ + Zμ + ε    (1)

where y represents the observed target, X is the set of causal (explanatory) variables, and Z is the design matrix of the random effect caused by confounders. β and μ represent the parameter vectors of the fixed effect and the random effect, respectively, and ε denotes the observation noise.

Fig. 1. LMM idea

Under the assumption that μ ∼ N(0, σ_g² I), Eq. (1) implies the following Gaussian model:

y ∼ N(Xβ, σ_g² K + σ_e² I)    (2)

where K = ZZ^T. Equation (2) is the standard expression of the LMM.
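To make the roles of the fixed and random effects concrete, the following minimal NumPy sketch simulates data from Eqs. (1)–(2); the dimensions and variance values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and variances (not taken from the paper).
n, p, q = 200, 5, 8          # samples, causal features, confounder columns
sigma_g, sigma_e = 1.0, 0.5  # random-effect and noise standard deviations

X = rng.normal(size=(n, p))              # causal / explanatory variables (fixed effects)
Z = rng.normal(size=(n, q))              # confounder design matrix (random effects)
beta = rng.normal(size=p)                # fixed-effect coefficients
mu = rng.normal(scale=sigma_g, size=q)   # random-effect coefficients, mu ~ N(0, sigma_g^2 I)
eps = rng.normal(scale=sigma_e, size=n)  # observation noise

# Eq. (1): y = X beta + Z mu + eps
y = X @ beta + Z @ mu + eps

# Eq. (2): marginally, y ~ N(X beta, sigma_g^2 K + sigma_e^2 I) with K = Z Z^T
K = Z @ Z.T
cov_y = sigma_g**2 * K + sigma_e**2 * np.eye(n)
```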

2.3 Parameter Estimation

Several approaches exist for estimating the parameters of an LMM. The most frequently used, state-of-the-art algorithms are maximum likelihood estimation (MLE) and restricted maximum likelihood (REML). The log-likelihood of the LMM is

l_MLE(β, σ_g, δ) = −(1/2) [ m log(2πσ_g²) + log|K + δI| + (y − Xβ)^T (K + δI)^{-1} (y − Xβ) / σ_g² ]    (3)

where δ = σ_e²/σ_g². Accounting for the degrees of freedom lost when estimating the fixed effect extends Eq. (3) into the REML criterion

l_REML(σ_g, δ) = l_MLE(β, σ_g, δ) + (1/2) [ d log(2πσ_g²) + log|X^T X| − log|X^T (K + δI)^{-1} X| ]    (4)

where d denotes the number of fixed-effect covariates.
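As a sanity check on Eq. (3), the short NumPy function below evaluates the MLE log-likelihood directly from K, δ, and the data; it is a straightforward transcription of the formula, not the authors' code.

```python
import numpy as np

def lmm_loglik(y, X, beta, K, sigma_g2, delta):
    """Evaluate the log-likelihood of Eq. (3): y ~ N(X beta, sigma_g^2 (K + delta I))."""
    m = y.shape[0]
    V = K + delta * np.eye(m)
    r = y - X @ beta
    _, logdet = np.linalg.slogdet(V)             # log|K + delta I|
    quad = r @ np.linalg.solve(V, r) / sigma_g2  # (y - Xb)^T (K + dI)^{-1} (y - Xb) / sigma_g^2
    return -0.5 * (m * np.log(2 * np.pi * sigma_g2) + logdet + quad)
```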

For this non-convex optimization problem, the factored spectrally transformed LMM (FaST-LMM) provides a more effective solution than a grid search. Writing the spectral decomposition of K as K = USU^T, Eq. (3) can be rewritten as

l_MLE(β, σ_g, δ) = −(1/2) [ m log(2πσ_g²) + log|S + δI| + (U^T y − U^T Xβ)^T (S + δI)^{-1} (U^T y − U^T Xβ) / σ_g² ]    (5)

The parameters β and σ_g² can be obtained by setting the corresponding derivatives to zero. Substituting them back into Eq. (5), δ can then be found via a one-dimensional search using Brent's method. The parameter of most importance, namely the causal effect β, can be expressed as

β = (X_U^T (S + δI)^{-1} X_U)^{-1} X_U^T (S + δI)^{-1} y_U    (6)

where X_U = U^T X and y_U = U^T y.
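A minimal sketch of the FaST-LMM-style estimate in Eq. (6) follows, assuming the spectral decomposition of K is computed with numpy.linalg.eigh; the variable names are ours, not the authors'.

```python
import numpy as np

def fast_lmm_beta(y, X, K, delta):
    """Closed-form fixed-effect estimate of Eq. (6) for a given delta."""
    S, U = np.linalg.eigh(K)          # spectral decomposition K = U diag(S) U^T
    Xu, yu = U.T @ X, U.T @ y         # rotated data: X_U = U^T X, y_U = U^T y
    d = 1.0 / (S + delta)             # (S + delta I)^{-1} is diagonal
    A = (Xu * d[:, None]).T @ Xu      # X_U^T (S + delta I)^{-1} X_U
    b = (Xu * d[:, None]).T @ yu      # X_U^T (S + delta I)^{-1} y_U
    return np.linalg.solve(A, b)

# delta itself can be found with a one-dimensional search (Brent's method), e.g. via
# scipy.optimize.minimize_scalar applied to the profiled likelihood of Eq. (5).
```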

3 Feature Selection Method Based on Improved LMM Algorithm

Building on the standard LMM algorithm, we now introduce regularized LMMs. The underlying idea is that multivariate linear regression can be regularized via L1 and L2 penalties. In this paper, we propose an L2 regularized LMM, obtained by introducing L2 regularization, a prominent regularizer, into the parameter estimation of the standard LMM. As previously mentioned, the standard fixed effect coefficient is obtained from

β = (X_U^T (S + δI)^{-1} X_U)^{-1} X_U^T (S + δI)^{-1} y_U    (7)

Based on this expression, we apply L2 regularization to the parameter estimation. Our proposed method, the LMM based on L2 regularization, adds the penalty term λ‖β‖² to the objective and obtains the corresponding coefficients as

β = (X_U^T (S + δI)^{-1} X_U + λI)^{-1} X_U^T (S + δI)^{-1} y_U    (8)

In Eq. (8), λ is a prior that controls the strength of the regularizer, in the manner of ridge regression. An LMM with L1 regularization, named the sparse LMM, has been presented before. Our method differs from the sparse LMM and may exhibit better variable selection performance; in addition, the L2 regularizer helps select variables more stably.
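The following sketch shows one way to implement the L2-regularized estimate described above; the ridge-style closed form with λI added to the normal-equations matrix is our reading of Eq. (8), and λ is a user-chosen hyperparameter.

```python
import numpy as np

def l2_lmm_beta(y, X, K, delta, lam):
    """L2-regularized fixed-effect estimate (ridge-style reading of Eq. (8))."""
    S, U = np.linalg.eigh(K)
    Xu, yu = U.T @ X, U.T @ y
    d = 1.0 / (S + delta)
    # Adding lam * I to the normal-equations matrix corresponds to the penalty lam * ||beta||^2.
    A = (Xu * d[:, None]).T @ Xu + lam * np.eye(X.shape[1])
    b = (Xu * d[:, None]).T @ yu
    return np.linalg.solve(A, b)
```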

4 Dataset

In this section, we describe the data used in the experiment, including a basic description and the data pre-processing. The data comprise user profiles and the corresponding wireless network parameters. The target feature is the customer QoE, ranging from 1 to 10. Each sample represents a user, and there are 7800 samples in total. The static user data contain 13 features, such as gender, age, user level, average mobile Internet fee, and SMS usage. The network parameters cover two types of network standards, reflecting the fact that users are served by a wireless network comprising different network standards. Appropriate data processing is required before constructing the model. Categorical features must be converted into numerical values, and singular values in certain samples and features are eliminated. Missing values are handled in two cases. Features in which more than 98% of the values are zero are removed directly, as they contain little usable information. For features with partially missing data, an appropriate imputation algorithm is used to fill in the gaps.
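A minimal pandas sketch of the preprocessing steps described above is given below; the column selection rule, the mean-imputation choice, and the file name are illustrative assumptions, since the paper does not name its features or its imputation algorithm.

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    # Convert categorical features (e.g. gender, user level) into numerical dummy columns.
    cat_cols = [c for c in df.columns if df[c].dtype == "object"]
    df = pd.get_dummies(df, columns=cat_cols)

    # Drop features in which more than 98% of the values are zero.
    zero_ratio = (df == 0).mean()
    df = df.loc[:, zero_ratio <= 0.98]

    # Fill remaining missing values; mean imputation is only a placeholder choice here.
    return df.fillna(df.mean())

# Hypothetical usage: clean = preprocess(pd.read_csv("qoe_samples.csv"))
```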

5 Experiment

To validate the effectiveness of our proposed feature selection method, we conduct an experiment to test feature selection and its effect on prediction performance. The data used in this study come from China Mobile and were strictly encrypted; they do not involve any personal privacy issues.

5.1 Feature Selection and QoE Prediction

In this experiment, the target feature is overall user satisfaction. We compare our proposed L2 regularized LMM with other feature selection approaches, namely Pearson correlation based feature selection, the standard LMM, the lasso LMM, and the sparse LMM.

The first step is to identify the independent variables and the target for the experiment. As previously mentioned, the target feature is overall user satisfaction. The LMM separates the independent variables into fixed effects resulting from causal variables and random effects caused by confounders. In this experiment, we define all the static user profiles as random effects, because we want to predict QoE based on the network QoS and therefore need to eliminate the statistical discrepancies caused by varying user characteristics. Correspondingly, the network QoS parameters are defined as fixed effects, i.e., as causal variables.

The second step is to apply the different techniques to feature selection in order to choose the features that each model determines to be correlated or causally related. The features selected by the different techniques are presented in Fig. 2.

The third step is to use the features selected by the different approaches to predict QoE with the corresponding models, namely linear regression, the LMM, the sparse LMM, and our proposed method. We use the mean squared error (MSE) and R² as the prediction performance criteria. The experimental results are shown in Table 1.

Table 1. Prediction performances of different models

Method             MSE      R²
Correlation based  0.14178  0.24213
LMM                0.13981  0.49669
Sparse LMM         0.14034  0.49287
Our method         0.13972  0.49890

Fig. 2. Features selected through different techniques. The figure lists the ten features chosen by each method (Pearson correlation, standard LMM, sparse LMM, and our method); the selected features are mainly cell-level LLC-layer uplink/downlink and total traffic statistics (GPRS/EGPRS, Mbit), TD-system packet-domain service traffic, GSM and TCH congestion rates, handover and connection success rates, PDCH bearer efficiency, cell drop rate, and voice channel call rates.

5.2 Prediction of Different QoE Targets

Building on the first experiment, we conduct further experiments to examine the efficiency of the proposed method. In this part, the independent variables are unchanged while the set of target features is expanded. We collect 7 additional QoE parameters from the users: Y1, differences from expectations; Y2, differences from perfection; Y3, net promoter score; Y4, overall network perception; Y5, perception of network coverage; Y6, perception of Internet access quality; and Y7, perception of call quality. All these parameters are collected via user interviews at the same time and in the same batch. Each target, together with the independent variables, forms a new data set, so 8 data sets are used in total, including the one from the former experiment. We apply the five models constructed as described above to these 8 data sets and compare their prediction performance via MSE and R². The experimental results are presented in Figs. 3 and 4.
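For reference, the two criteria can be computed as below; this is a generic sketch of the standard formulas, not the authors' evaluation code, and y_true / y_pred are placeholders.

```python
import numpy as np

def mse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean((y_true - y_pred) ** 2)

def r2(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```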

6 Discussion

6.1 Feature Selection Effect and Prediction Performance

Fig. 3. Prediction performance comparison between different techniques based on MSE

Fig. 4. Prediction performance comparison between different techniques based on R²

In the above experiments, we compare the effectiveness of our proposed approach with that of the other methods with respect to feature selection. In the first experiment, overall user perception is the target, and the experimental results are presented in Table 1. The three LMM-based methods exhibit similar values of R², whereas the correlation-based method does not. This indicates that, when applied to real industrial data, correlation-based feature selection is inferior to LMM-based techniques, which can be attributed to the fact that correlation captures statistical association rather than the real causal factors; as a result, false causal signals may be identified while the true ones are missed. The correlation-based method yields a smaller R² and a larger MSE in general. More specifically, the MSE comparison in Fig. 3 shows that the other three techniques follow a similar tendency, except for the blue line. Users may be annoyed when posed with several questions simultaneously during the interview; because of the customers' unstable emotions, the collected data may not reflect the real QoE, which explains the instability observed in the data. Among the three LMM-based methods, our proposed L2 regularized LMM generally outperforms the other two and has evident advantages with respect to Y2 and Y8. This could be attributed to the difference between the L1 and L2 regularizers; the superiority of our method likely stems from the characteristics of the data used in the experiment and the properties of the L2 regularizer.

7 Conclusion

In this study, we focused on three main tasks. (1) We proposed an improved LMM for causal inference based feature selection by introducing an L2 regularizer into the parameter estimation procedure. (2) We reviewed three other existing feature selection methods: the standard LMM, the sparse LMM, and Pearson coefficient based selection. (3) We applied these four methods to feature selection and wireless network QoE prediction experiments using real industrial data. The experimental results show that the features selected by our method yield higher prediction performance than those of the other three models in terms of MSE and R². Additionally, the experiment with different target features demonstrates that our model selects features more stably than the other solutions. In conclusion, the mathematical and experimental analyses support the superiority of our proposed method in terms of feature selection stability, interpretability, and prediction performance.

References
1. Fiedler M, Hossfeld T, Tran-Gia P (2010) A generic quantitative relationship between quality of experience and quality of service. IEEE Netw 24(2):36–41
2. Chen Y, Wu K, Zhang Q (2015) From QoS to QoE: a tutorial on video quality assessment. IEEE Commun Surv Tutorials 17(2):1126–1165
3. Su Z, Xu Q, Qi Q (2016) Big data in mobile social networks: a QoE-oriented framework. IEEE Netw 30(1):52–57
4. Ben Youssef Y, Afif M, Ksantini R, Tabbane S (2018) A novel online QoE prediction model based on multiclass incremental support vector machine. In: 2018 IEEE 32nd international conference on advanced information networking and applications (AINA), Krakow, pp 334–341
5. Casas P et al (2017) Predicting QoE in cellular networks using machine learning and in-smartphone measurements. In: 2017 Ninth international conference on quality of multimedia experience (QoMEX), Erfurt
6. Ben Youssef Y, Afif M, Ksantini R, Tabbane S (2018) A novel QoE model based on boosting support vector regression. In: 2018 IEEE wireless communications and networking conference (WCNC), Barcelona
7. Mushtaq MS, Augustin B, Mellouk A (2012) Empirical study based on machine learning approach to assess the QoS/QoE correlation. In: 2012 17th European conference on networks and optical communications, Vilanova i la Geltru
8. Visalakshi S, Radha V (2014) A literature review of feature selection techniques and applications: review of feature selection in data mining. In: 2014 IEEE international conference on computational intelligence and computing research, Coimbatore


9. Yadav P, Kurup U, Shah M (2017) Structured causal inference for rare events: an industrial application to analyze heating-cooling device failure. In: 2017 16th IEEE international conference on machine learning and applications (ICMLA), Cancun
10. Zhang Q, Dong C, Cui Y, Yang Z (2014) Dynamic uncertain causality graph for knowledge representation and probabilistic reasoning: statistics base, matrix, and application. IEEE Trans Neural Netw Learn Syst 25(4):645–663
11. Budhathoki K, Vreeken J (2018) Accurate causal inference on discrete data. In: 2018 IEEE international conference on data mining (ICDM), Singapore
12. Li J, Ma S, Le T, Liu L, Liu J (2017) Causal decision trees. IEEE Trans Knowl Data Eng 29(2):257–271
13. Wang H, Yang J (2016) Multiple confounders correction with regularized linear mixed effect models, with application in biological processes. In: 2016 IEEE international conference on bioinformatics and biomedicine (BIBM), Shenzhen

SFSSD: Shallow Feature Fusion Single Shot Multibox Detector Dafeng Wang(B) , Bo Zhang, Yang Cao, and Mingyu Lu Information Science and Technology College, Dalian Maritime University, 116026 Dalian, China [email protected]

Abstract. The main contribution of this paper is an approach for introducing more context to improve the accuracy of the traditional SSD (Single Shot Multibox Detector), which is one of the top object detection algorithms in terms of both accuracy and speed. We augment SSD with a multi-level feature fusion method at shallow layers that introduces contextual information to improve accuracy, especially for the detection of small objects, and call the resulting system SFSSD, for Shallow Feature Fusion Single Shot Multibox Detector. In the feature fusion module, features from different layers with different scales are concatenated together, followed by several down-sampling blocks that generate a new feature pyramid, which is fed to multibox detectors to predict the final detection results. On the Pascal VOC2007 test set, with models trained on the VOC2007 and VOC2012 training sets, the proposed network achieved 75.4 mAP (mean average precision) with a 300 × 300 input and 79.7 mAP with a 512 × 512 input. Our SFSSD shows state-of-the-art mAP, better than that of the conventional SSD, Fast R-CNN, Faster R-CNN, ION and MR-CNN.

Keywords: Small object detection · Multi-level feature fusion · Single-shot · Feature pyramid

1 Introduction

Recognizing objects at vastly different scales has always been a focus and a challenge in the field of computer vision. Reliably detecting small objects is particularly difficult because of their limited resolution and weak semantic information in images. To achieve better accuracy, many advanced object detection algorithms have been proposed. To detect multi-scale objects in an image, most earlier detectors rely on hand-crafted features and image pyramids (see Fig. 1a) [1]. To a certain extent, these approaches handle the problem of different scales. However, the hand-engineered features [2] have recently been replaced with features extracted by convolutional neural networks. The detection systems in [3–5] adopt the top-most feature map, which has a fixed receptive field, to predict candidate bounding boxes over several scales and aspect ratios (see Fig. 1b). The FPN [6] leverages the pyramidal shape of the feature hierarchy while creating a feature pyramid with strong semantics at all scales via a top-down pathway and lateral connections (see Fig. 1c). The networks of SSD [7] and MS-CNN [8] combine predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes (Fig. 1d); SSD eliminates the region proposal network and encapsulates all computation in a single network. An enhanced SSD called FSSD, with a novel and lightweight feature fusion module, has been designed to improve performance (Fig. 1e) [9] and shows clear accuracy improvements over the conventional SSD. In summary, all of these detectors perform well on multi-scale object detection. However, they seldom concentrate on reusing the shallow layers, whose small receptive fields and dense feature maps are beneficial for detecting small objects; the low-level features are more accurate for object localization but lack semantic information.

In this paper, we adopt the scheme of Fig. 1f to predict objects of different scales. The design combines low-resolution, semantically strong features with high-resolution, semantically weak features at shallow layers via a top-down pathway. In the feature fusion module, features from different layers with different scales are concatenated together, followed by several down-sampling blocks that generate a new feature pyramid, which is fed to multibox detectors to predict the final detection results. We call our method the Shallow Feature Fusion Single Shot Multibox Detector (SFSSD). The base network remains VGG16, as used by the conventional SSD. The proposed SFSSD introduces contextual information at shallow layers to improve accuracy, especially for the detection of small objects. Our main contributions are summarized as follows:

1. We design a novel way of combining feature maps from different levels and generating a feature pyramid that fully utilizes the features. The new fused features are rich in semantic information and have relatively high resolutions.
2. We delete the top-most layer of the SSD network, which has been shown to have no effect on accuracy. This improves the detection accuracy with a slight speed improvement.

2 Related Work

The object detector based on the top-most feature map. The majority of object detection approaches, including OverFeat, SPPnet, R-CNN, Fast R-CNN, Faster R-CNN and YOLO, adopt the top-most layer of a very deep convolutional neural network (ConvNet) to make predictions at different scales. OverFeat applies a CNN as a feature extractor in a sliding window over the top-most feature map to generate bounding boxes and decomposes detection into classification and localization. R-CNN and Fast R-CNN use region proposals generated by selective search to obtain candidate boxes and then extract CNN features from them for classification (via an SVM in R-CNN). Faster R-CNN introduces the region proposal network (RPN) to replace selective search; the RPN generates the candidate bounding boxes (anchor boxes) and filters out background regions at the same time. YOLO divides the input image into several grids and performs classification and localization on each grid cell. YOLO9000, an improved version of YOLO, removes the fully connected layers and adopts anchor boxes like the RPN.

Fig. 1. a An image pyramid is used to build a feature pyramid. b The top-most feature map is used to make predictions. c A top-down architecture with lateral connections [10]. d The feature hierarchy computed by a ConvNet is reused to detect objects. e Features from different layers with different scales are used to generate a feature pyramid for detection. f Our proposed feature fusion method with a top-down pathway

The object detector based on multi-scale feature maps. Several approaches have been proposed to exploit multi-scale feature maps efficiently. SSD, an efficient one-stage object detector, adds a series of progressively smaller convolutional layers to generate pyramidal feature maps and sets the anchor sizes according to the receptive field size of each layer to detect objects of different scales; this method balances speed and accuracy. The MS-CNN consists of a proposal sub-network and a detection sub-network. In the proposal sub-network, multiple output layers are used for detection without combining features or scores, and a deconvolution layer is explored to upsample the feature maps and enrich the semantic information.

The object detector based on the combination of multi-scale feature maps. A number of recent approaches concatenate multiple layers to improve detection performance. ParseNet [11] and ION [12] concatenate features from multiple layers before predicting the result. FCN, U-Net, and Stacked Hourglass networks also use skip connections to combine low-level and high-level feature maps and fully utilize the semantic information. FPN introduces a top-down structure and lateral connections that exploit different layers to improve performance. Inspired by the approaches above, we design a novel network called SFSSD that improves detection performance, especially for small objects. It fuses only the feature maps of shallow layers: the low-level feature maps are concatenated to generate a feature pyramid that enriches the semantic information, which is beneficial for classifying small objects, while the remaining SSD feature maps (not including Conv9_2) are still used to predict relatively large objects.

3 SFSSD

Fig. 2. SSD framework


Fig. 3. SFSSD framework

3.1 The Novel SFSSD Architecture

Many approaches have been proposed to exploit pyramidal features to improve the accuracy of object detection. The most popular feature fusion method is FPN (Fig. 1c), which first introduces lateral connections that associate low-level feature maps across semantic levels and resolutions; however, this method only fuses feature maps from neighboring levels. A more recent feature fusion method is FSSD (Fig. 1e), which fuses a few selected layers in an appropriate way and generates the feature pyramid from the fused features. However, this design improves accuracy at the cost of deleting the top layers, which are useful for detecting large objects. Considering the factors above, we propose a lightweight and efficient feature fusion module. The design is not only beneficial for small object detection but also improves accuracy with little additional time cost. There are two popular ways to merge different feature layers: concatenation and element-wise summation. We prefer concatenation, because element-wise summation requires the features to be converted to the same number of channels, which limits the flexibility of feature fusion. The conventional SSD (Fig. 2) adopts VGG16, the most widely used backbone for object detection, as its base network. It chooses Conv4_3, FC7 and the newly added layers Conv6_2, Conv7_2, Conv8_2 and Conv9_2 for the detection task, with feature map sizes of 38 × 38, 19 × 19, 10 × 10, 5 × 5, 3 × 3 and 1 × 1, respectively.
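The difference between the two fusion strategies discussed above can be illustrated with a small PyTorch snippet; the tensor shapes are illustrative only.

```python
import torch

a = torch.randn(1, 512, 38, 38)   # e.g. a conv4_3-like feature map (shapes are illustrative)
b = torch.randn(1, 256, 38, 38)   # a feature map with a different channel count

fused = torch.cat([a, b], dim=1)  # concatenation works directly and yields 768 channels
# a + b                           # element-wise summation would fail here: the maps must first
#                                 # be projected to the same channel count, e.g. with a 1x1 conv
```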


The overall framework of SFSSD for a 300 × 300 input is shown in Fig. 3. The shallow layers (Conv4_3, FC7, Conv6_2) have relatively small receptive fields, which is beneficial for the detection of small objects. We combine the high-level layers, which carry rich semantic information useful for classification, with the low-level, high-resolution layers, which are good for localization, to predict small objects.

3.2 Feature Fusion Module

As shown in Table 1, we choose Conv4_3, FC7 and Conv6_2 to generate the feature pyramid for detecting small objects. The top-most layers (Conv8_2, Conv9_2) are also utilized to classify and locate large objects. We take the size of Conv4_3 as the fundamental feature map size: the Conv4_3 feature maps are passed through a 2 × 2 max-pooling operation while keeping the 38 × 38 resolution, and bilinear interpolation is used to bring the feature maps smaller than 38 × 38 up to the same size as Conv4_3. In this way, all fused maps share the same spatial dimensions for computation. The resulting network is depicted in Fig. 3.

Table 1. Effects of using multiple output layers

Prediction source layers from:                      mAP (use boundary boxes?)   #Boxes
conv4_3  FC7  conv6_2  conv7_2  conv8_2  conv9_2      Yes      No
  √       √      √        √        √        √         74.3     63.4            8732
  √       √      √        √        √                  74.6     63.1            8764
  √       √      √        √                           73.8     68.4            8942
  √       √      √                                    70.7     69.2            9864
  √       √                                           64.2     64.4            9025
  √                                                   62.4     64              8664
Bold indicates mean average precision

4 Experiments

To compare our SFSSD with the conventional SSD fairly, our base network is set to VGG16, as adopted by the conventional SSD. We conduct our experiments on PASCAL VOC 2007 and 2012. A predicted bounding box is counted as correct if its intersection over union (IoU) with the ground truth is higher than 0.4. We use mAP (mean average precision) as the metric for evaluating detection performance. All of our experiments are based on the Caffe version of the SSD implementation.
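For clarity, the IoU criterion used to decide whether a predicted box is correct can be computed as follows; the (x1, y1, x2, y2) box format is an assumption, not taken from the paper.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A predicted box is counted as correct here when iou(pred, gt) > 0.4.
```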

Table 2. PASCAL VOC2007 test detection results (per-class AP over the 20 VOC classes and overall mAP for Fast R-CNN [3], Faster R-CNN [4], ION [12], MR-CNN, SSD300, SSD512, SFSSD300 and SFSSD512). The two SFSSD models have the same settings except the input size (300 × 300 and 512 × 512); the larger input size gives better results. "07": VOC2007 trainval; "19 = 12 + 07": union of VOC2007 and VOC2012 trainval. SFSSD300 reaches 75.4 mAP and SFSSD512 reaches 79.7 mAP.

4.1 Results on PASCAL VOC

The fusion part of our network is implemented as follows. VGG16 is used as the base network. 1 × 1 convolutional layers project all of the source feature maps to the same 256 channels; the FC7 and Conv6_2 feature maps are then bilinearly interpolated to the same 38 × 38 resolution as Conv4_3. The resized feature maps are concatenated together, and batch normalization is applied to normalize the feature values. Finally, the fused features are used to generate the pyramid features.

Results on PASCAL VOC2007. We adopt VOC2007 trainval and VOC2012 trainval as the training data for our model, i.e., the same data as SSD. We train SFSSD300 on an NVIDIA 1080 GPU with a batch size of 8 for 100k iterations. The initial learning rate is set to 0.005 and is adjusted at 4k and 6k iterations, and we otherwise follow the same training strategy as the traditional SSD: the weight decay is set to 0.0005, the momentum to 0.9, and the network is initialized from a pre-trained VGG16 model. Our results on the VOC2007 test set are shown in Table 2. Our SFSSD300 achieves 75.4% mAP, an improvement of 1.1 points over the conventional SSD300.
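A minimal PyTorch sketch of the fusion step described above (1 × 1 convolutions to 256 channels, bilinear interpolation to 38 × 38, concatenation, and batch normalization) is given below; the channel counts of the source maps and the module structure are assumptions based on the SSD300 layout, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowFusion(nn.Module):
    """Fuse conv4_3, fc7 and conv6_2 feature maps into a single 38 x 38 tensor."""

    def __init__(self, in_channels=(512, 1024, 512)):  # SSD300-style channel counts (assumed)
        super().__init__()
        self.reduce = nn.ModuleList(nn.Conv2d(c, 256, kernel_size=1) for c in in_channels)
        self.bn = nn.BatchNorm2d(256 * len(in_channels))

    def forward(self, feats):               # feats: [conv4_3, fc7, conv6_2]
        size = feats[0].shape[-2:]          # 38 x 38, the conv4_3 resolution
        resized = [
            F.interpolate(conv(f), size=size, mode="bilinear", align_corners=False)
            for conv, f in zip(self.reduce, feats)
        ]
        # Concatenate and batch-normalize; the result feeds the new feature pyramid.
        return self.bn(torch.cat(resized, dim=1))
```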

5 Conclusion and Future Work

In this paper, we present an improved SSD, called SFSSD, with a lightweight feature fusion module. The experimental results show that the feature maps of different levels can be fully utilized by concatenating them, and a new feature pyramid is then generated from the fused map using several convolutional layers with stride 2. Experiments on PASCAL VOC 2007 show that our SFSSD has much better accuracy and efficiency in small object detection than the traditional SSD and is superior to several other recent detectors. In the future, there are three ways to enhance the proposed SFSSD. First, we can replace VGG16 with deeper backbones such as ResNet and DenseNet for better performance. Second, we can design a new network that further exploits the feature maps generated by the shallow layers to improve small object detection. Finally, we can utilize dilated convolutions to generate feature maps of different scales at each layer with more anchor sizes.

Acknowledgements. This work is supported by the National Natural Science Foundation of China (Grant No. 61702073).

References
1. Adelson EH, Anderson CH, Bergen JR et al (1984) Pyramid methods in image processing. RCA Eng 29(6):33–41
2. Dalal N, Triggs B (2005) Histograms of oriented gradients for human detection. In: International conference on computer vision and pattern recognition (CVPR'05), vol 1. IEEE Computer Society
3. Girshick R (2015) Fast R-CNN. In: Proceedings of the IEEE international conference on computer vision, pp 1440–1448
4. Ren S, He K, Girshick R et al (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In: Advances in neural information processing systems, pp 91–99
5. He K et al (2015) Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans Pattern Anal Mach Intell 37(9):1904–1916
6. Lin TY, Dollár P, Girshick R et al (2017) Feature pyramid networks for object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2117–2125
7. Liu W et al (2016) SSD: single shot multibox detector. In: European conference on computer vision. Springer, Cham
8. Cai Z et al (2016) A unified multi-scale deep convolutional neural network for fast object detection. In: European conference on computer vision. Springer, Cham
9. Li Z, Zhou F (2017) FSSD: feature fusion single shot multibox detector. arXiv preprint arXiv:1712.00960
10. Pinheiro PO, Lin TY, Collobert R et al (2016) Learning to refine object segments. In: European conference on computer vision. Springer, Cham, pp 75–91
11. Liu W, Rabinovich A, Berg AC (2015) ParseNet: looking wider to see better. arXiv preprint arXiv:1506.04579
12. Bell S, Lawrence Zitnick C, Bala K et al (2016) Inside-outside net: detecting objects in context with skip pooling and recurrent neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2874–2883
13. Yang T, Zhang X, Li Z, Zhang W, Sun J (2018) MetaAnchor: learning to detect objects with customized anchors. In: Advances in neural information processing systems, pp 320–330
14. Badrinarayanan V, Kendall A, Cipolla R (2017) SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell 39(12):2481–2495

Beamforming Based on Energy State Feedback for Simultaneous Wireless Information and Power Transmission Chunfeng Wang(&) and Naijin Liu Qian Xuesen Laboratory of Space Technology, CAST, Beijing, China jessen_wa