Advances in Intelligent Systems, Computer Science and Digital Economics IV (Lecture Notes on Data Engineering and Communications Technologies, 158) [1st ed. 2023]. ISBN-10: 3031244745; ISBN-13: 9783031244742

This book comprises high-quality peer-reviewed research papers presented at the 4th International Symposium on Computer Science, Digital Economy, and Intelligent Systems (CSDEIS2022).


English · 1002 pages [993] · 2023


Table of contents:
Preface
Organization
Conference Organizers and Supporters
Contents
Advances in Computer Science and Their Technological Applications
Bar-Code Recognition Based on Machine Vision
1 Introduction
2 Bar-Code Recognition Technology
2.1 Bar-Code
2.2 Bar-Code Recognition Technology
3 Bar-Code Recognition Based on Halcon
3.1 Bar-Code Recognition Steps Based on Halcon
3.2 Bar-Code Recognition Operator Algorithm Based on Halcon
4 Experimental Platform and Experimental Results
4.1 Experiment Platform
4.2 Experimental Results
5 Conclusion
References
Some Problems and Solutions for Non-absorbent Wall Climbing Robot
1 Introduction
2 Motion and Power Problems and Solutions
2.1 Sufficient Positive Pressure
2.2 Avoid Wheel Overhang
3 Safety and Risk Avoidance Problems and Solutions
3.1 Fall Prevention
3.2 Obstacle Avoidance
4 Control and Operation Problems and Solutions
4.1 Flexible Crawling
4.2 Stopping
4.3 Convenient Operation
5 Conclusion
References
SOC Estimation of Lithium Battery Based on BP Neural Network with Forgetting Factor
1 Introduction
2 SOC Estimation Algorithm Based on Circuit Model
2.1 SOC Estimation Based on Ampere Hour Method
2.2 SOC Estimation Based on EKF Algorithm
3 SOC Estimation of BP Neural Network
3.1 Traditional BP Neural Network
3.2 BP Neural Network with Forgetting Factor
4 SOC Estimation Experiment
4.1 Experimental Test Platform
4.2 Parameter Identification of Thevenin Model
4.3 SOC-OCV Data Acquisition and Processing
4.4 SOC Estimation Experiment Under DST Condition
5 Conclusion
References
Design of Train Circuit Parallel Monitoring System Based on CTCS
1 Introduction
2 The Function of Parallel Monitoring System Design and Analysis of Existing Problems
2.1 Concept of Parallel Monitoring
2.2 Existing Problems of Track Circuit System
3 Information Collection of Channel Circuit Parallel Monitoring System
3.1 Camera Layout for Parallel Monitoring of Track Circuit
3.2 Design of Subsection Installation Scheme of Track Circuit Monitoring Camera
4 Data Processing of Track Circuit Parallel Monitoring System
4.1 Processing of Images Collected by Track Circuit Parallel Monitoring Camera
4.2 Analysis of Data Exchange Mode of Existing Track Circuit
4.3 Comparative Study of Parallel Monitoring Data Output
5 Conclusion
References
Information Spaces and Efficient Information Accumulation in Calibration Problems
1 Introduction
2 Calibration Problem for Linear Experiment
2.1 Linear Estimation with Inaccurate Information About the Measurement Model
2.2 Calibration Measurements
2.3 Canonical Calibration Information
2.4 Measurement Model Information
3 Improving Estimation Accuracy Through Multiple Measurements
3.1 Multiple Measurements of the Object of Study
3.2 Asymptotic Behavior of the Estimation Accuracy and Balance of Error Contributions Between Calibration and Repeated Measurements
3.3 Canonical Information for Repeated Measurements
3.4 Accumulation of Canonical Information of Two Kinds in the Calibration Problem with Repeated Measurements
3.5 Distributed Accumulation of Two Types of Information Within the MapReduce Model
4 Conclusion
References
The Novel Multi Source Method for the Randomness Extraction
1 Introduction
2 Literature Review
3 Randomness Extractors
4 Deterministic Extractors
5 Seeded Extractors
6 Novel Randomness Extractor
7 Conclusion
References
Post-quantum Scheme with the Novel Random Number Generator with the Corresponding Certification Method
1 Introduction
2 Literature Review
3 Improvements
4 Quantum Random Number Generators
4.1 Optical Quantum Random Number Generators
4.2 Time of Arrival Quantum Random Number Generators
4.3 Photon Counting Generators of Quantum Random Numbers
4.4 Attenuated Pulse Quantum Random Number Generators
4.5 Self-testing for the Quantum Random Number Generators
4.6 Device-Independent QRNGs
4.7 Other Forms of Quantum Certification
5 Methodology
5.1 Novel Hybrid QRNG
5.2 Novel Semi Self-testing Method
6 New Scheme
7 Results and Security
8 Conclusion and Future Plans
References
Prediction of UWB Positioning Coordinates with or Without Interference Based on SVM
1 Introduction
2 Problem Modeling
2.1 Support Vector Machine
2.2 Evaluating Indicator
3 Methods and Analysis
3.1 Data Preprocessing
3.2 Build Classification Model
4 Simulation
4.1 ROC Curve of SVM Algorithm
4.2 Error Comparison Between Models
5 Summary
References
Analysis and Comparison of Routing and Switching Processes in Campus Area Networks Using Cisco Packet Tracer
1 Introduction
2 State-of-the-Art
3 Modeling and Results
3.1 Comparison of Time Indicators When Using EIGRP and OSPF Routing Protocols – Scenario A
3.2 Time Performance When Using Layer 2 Switching Technology – Scenario B
4 Comparison and Discussion
5 Summary and Conclusion
References
A Parallel Algorithm for the Detection of Eye Disease
1 Introduction
2 Related Works
3 Materials and Methods
3.1 Description of the Proposed Algorithm
3.2 Defining the Parallelization Step
3.3 Data Review and Analysis
3.4 Application of the Proposed Approach
3.5 Computational Complexity of the Algorithm. Theoretical Evaluations
4 Research Results
5 Conclusions
References
Systems Theory, Mechanics with Servoconstraints, Artificial Intelligence
1 Introduction
2 Systems and Systemforming Factor
3 Newton-Galilean Mechanics, Lagrangian and Other Mechanics
4 Mechanics with Servoconstraints, Control, Artificial Intelligence
5 Summary and Conclusion
References
A New Approach to Search Engine Optimization Based on the Synthesis of a Virtual Promotion Map
1 Introduction and Related Works
2 Problem Statement
3 Proposed Method
4 Proposed Algorithm
5 Experiment
6 Conclusion
References
Software Reliability Models: A Brief Review and Some Concerns
1 Introduction
2 Types of SRGMs
2.1 Data Domain Models
2.2 Time Domain Models
3 Literature Review
4 Open Issues and Future Directions
4.1 Lack of Universally Accepted Model
4.2 Lack of Standard Practices
4.3 Not Applicable to Open Source Software
4.4 Problem in the Definition of Software Reliability
5 Conclusion
References
An Enhanced Session Based Login Authentication and Access Control Scheme Using Client File
1 Introduction
2 Related Works
3 Methodology
3.1 AES Cryptographic Algorithm
3.2 SHA-256 Hash Function
4 Enhanced Session Based Login Authentication and Access Control Scheme
4.1 Login Phase
4.2 Access Control Phase
5 Analysis of the Enhanced Session Based Login Authentication and Access Control Scheme
5.1 User Privacy
5.2 Unauthorized Access Control
5.3 Defense Against Impersonation Attack
6 Conclusion
References
Electric Meters Monitoring System for Residential Buildings
1 Introduction
2 Literature Review and Problem Statement
3 The Aim and Objectives of the Study
4 Technologies and Research Methods
5 Discussion of Experimental Results
6 Summary and Conclusion
References
Implementation of Blockchain Technology for Secure Image Sharing Using Double Layer Steganography
1 Introduction
2 Proposed Model
3 Methodology
3.1 Base Image
3.2 Gray Scaling
3.3 Binarize
3.4 Resize
3.5 LSB Substitution
3.6 Embedding
3.7 Extraction
4 Results and Discussion
5 Conclusion
References
Nature-Inspired DMU Selection and Evaluation in Data Envelopment Analysis
1 Introduction
2 Literature Review
3 The Proposed Method and Evaluation
4 Conclusion
References
Hybrid Convolution Neural Network with Transfer Learning Approach for Agro-Crop Leaf Disease Identification
1 Introduction
2 Related Work
3 Identification Process of Crop Diseases and Recognition Model
4 Data Processing
4.1 Dataset
4.2 Dataset Processing
5 Transfer Learning Network
5.1 VGG-16 Architecture
5.2 ResNet-50
5.3 CNN Model Concatenation Algorithm
6 Experiment and Analysis
7 Experimental Result
8 Conclusion
References
Illumination Invariant Based Face Descriptor
1 Introduction
2 Related Works
3 Description of Descriptors
3.1 MRELBP-NI
3.2 ELBP
3.3 RBP
4 Experiments
4.1 Dataset Description
4.2 Feature Size Particulars
4.3 Accuracy Generation
4.4 Accuracy Comparison Against Literature Methods
5 Discussions
6 Conclusion with Future Prospect
References
Classification of Chest X-Ray Images for COVID-19 Positive Patients Using Transfer Learning
1 Introduction
2 Literature Survey
3 Methodology
3.1 Dataset Download
3.2 Image Preprocessing
3.3 Model Loading and Secondary Data Set Creation
3.4 Training Simple Machine Learning Classifiers
4 Results and Discussions
5 Conclusion and Future Enhancement
References
Arabic Sentiment Classification on Twitter Using Deep Learning Techniques
1 Introduction
2 Related Work
3 The Proposed Methodology
3.1 The Collected Dataset Consolidation
3.2 Data Preprocessing
3.3 Arabic Tweets Embedding Representation
3.4 Different Deep Learning Models
3.5 Evaluation
4 Experimental Results
5 Conclusion and Future Work
References
Combination Probability in Finite State Machine Model for Intelligent Agent of Educational Game “I Love Maratua”
1 Introduction
2 Literature Review
3 Methodology
3.1 Multimedia Development Cycle
3.2 Concept and Design of Finite State Automata Model
4 Result and Discussion
4.1 Assembly Randomization on Object Position
4.2 Assembly FSM with Probability
4.3 Beta Testing
5 Conclusion
References
Multi-threaded Parallelization of Automatic Immunohistochemical Image Segmentation
1 Introduction
2 Literature Review
3 Materials and Methods
3.1 Immunohistochemical Images
3.2 Layered-Parallel Algorithm Form of Automated Selection of Segmentation Algorithm
4 Computer Experiments
5 Summary and Conclusion
6 Related Works and Discussion
References
Advances in Digital Economics and Methodological Approaches
Digital Finance and Corporate Social Responsibility—Empirical Evidence from China
1 Introduction
2 Literature Review and Theoretical Hypothesis
2.1 Literature Review
2.2 Theoretical Analysis and Hypothesis
3 Research Design
3.1 Data Sources
3.2 Model
4 Empirical Analysis
4.1 Descriptive Statistics
4.2 Regression Analysis
4.3 Robustness Test and Endogeneity Test
4.4 Endogeneity Test
5 Further Analysis
5.1 Nature of Property Rights
5.2 Market Environment
6 Conclusions
References
Establishing the Optimal Market Price for a Product Using a Neuro-Fuzzy Inference System
1 Introduction
1.1 Pricing Models
1.2 Hybrid Networks for Solving Pricing Problems
2 Materials and Methods
2.1 Main Aspects of Pricing
2.2 Fuzzy Logic Toolbox Features for Building Neuro-fuzzy Models
3 Fuzzy Output System
3.1 Source Data and Structure of the Fuzzy Inference System
3.2 Hybrid Network Training
3.3 Testing and Verification of the Hybrid Network
4 Summary and Conclusion
References
Exploring the Application of Intelligent Logistics Technology in Pharmaceutical Cold Chain Logistics
1 Introduction
2 Overview of Pharmaceutical Cold Chain Logistics
2.1 Concept of Cold Chain Logistics
2.2 Characteristics of Pharmaceutical Cold Chain Logistics
3 Journals Reviewed
3.1 Research Status Abroad
3.2 Domestic Research Status
3.3 Literature Review
4 Characteristics of Pharmaceutical Cold Chain Logistics Under the Background of Intelligent Logistics
4.1 Pharmaceutical Cold Chain Market Maintains “High Growth”
4.2 Intelligent Driving Pharmaceutical Cold Chain Industry Reform
4.3 Innovative Application of Pharmaceutical Cold Chain Supply Chain
4.4 Pharmaceutical Cold Chain Logistics Technology Continues to Emerge
5 Application of Intelligent Logistics Technology in Pharmaceutical Cold Chain Logistics
5.1 Application of Internet of Things Technology
5.2 Application of Blockchain Technology
5.3 Application of Artificial Intelligence Algorithm
5.4 Model Establishment
5.5 Intelligent Algorithm Solution
6 Summary and Conclusion
References
Quality Evaluation on Agri-Fresh Food Emergency Logistics Service
1 Introduction
2 Literature Review
2.1 Industrial Convergence
2.2 LSQ (Logistics Service Quality) Evaluation
3 Theoretical Framework and Data Collection
3.1 Construction of Evaluation Index System in Industrial Convergence
3.2 Data Collection and Research
4 Empirical Application and Numerical Experiments
4.1 Index Weight Determination of Agri-Fresh Food LSQ Evaluation
4.2 Fuzzy Comprehensive Evaluation Calculation Process
5 Case Analysis, Result, and Discussion
5.1 Analysis of Fuzzy Comprehensive Evaluation Results
5.2 Suggestions on Improving Agri-Fresh Food LSQ in Industrial Convergence
6 Conclusions and Prospects
References
Research on Financing Efficiency of Different Financing Methods in AI Industry Based on DEA Model
1 Introduction
2 Research Review
3 Construction of Financing Efficiency Model and Variable Selection
3.1 Model Construction
3.2 Variable Selection
4 Empirical Analysis
4.1 Debt Financing
4.2 Equity Financing
4.3 Endogenous Financing
5 Summary and Suggestions
References
The Model of the Novel One Windows Secure Clinic Management Systems
1 Introduction
2 Literature Review
3 Market Research
4 Methodology
5 Experiment Description
6 Results
7 Conclusion
References
Research on the Construction Scheme of Wuhan Emergency Logistics System Under the Background of Public Health Emergencies
1 Introduction
2 Wuhan Emergency Logistics System Construction Foundation
2.1 The Emergency Plan System is Gradually Improved
2.2 Continuous Improvement of Emergency Organization and Command System
2.3 Further Optimization of Emergency Logistics Facility System
2.4 The Main Body of Logistics Enterprises Has Strong Strength
2.5 Strong Emergency Logistics Support for Epidemic Prevention and Control
3 Problems in the Construction of Wuhan Emergency Logistics System
3.1 The Coordinated Operation of Emergency Logistics Command System is not Smooth Enough
3.2 The Support Capacity of Emergency Material Reserve, Transportation and Transit System is not Strong Enough
3.3 The Informatization Construction of Emergency Logistics Lags Behind
4 Thoughts on the Construction of Wuhan Emergency Logistics System
4.1 General Idea
4.2 Development Objectives
5 Main Tasks of Wuhan Emergency Logistics System Construction
5.1 Establish Three Emergency Logistics Systems
5.2 Construction of Emergency Logistics Command and Coordination Information Platform
6 Conclusion
References
Macroeconomic Determinants of Economic Development and Growth in Ukraine: Logistic Regression Analysis
1 Introduction
2 Logistic Regression as a Machine Learning Method of Classification
3 Economic Development and Economic Growth of Ukraine: A Logistic Regression Approach
4 Summary and Conclusion
References
Key Interest Rate as a Central Bank's Tool of the Monetary Policy Influence on Inflation: The Case of Ukraine
1 Introduction
2 Autoregressive Methods as an Instrument of Macroeconomic Modelling
2.1 Basic Econometric Models of Central Banks
2.2 Autoregressive Econometric Models in Macroeconomics
3 Prediction of the Key Policy Rate of Ukraine with Autoregressive Models
3.1 Pre-model Analysis of Ukrainian GDP and Macroeconomic Indicators
3.2 Autoregressive Models of GDP of Ukraine, Their Quality Assessment and Forecast Based on ARIMA and VAR Models
3.3 Forecasting the Key Policy Rate of Ukraine Based on the Taylor Rule
4 Summary and Conclusion
References
Digital Transformation in Ukraine During Wartime: Challenges and Prospects
1 Introduction
2 Literature Review
3 Data and Methodology
4 Results and Discussion
4.1 The State of the Digitization Sector in the Pre-War Period
4.2 The Main Problems of the Digitization Sector During The War
5 Summary and Conclusion
References
Complex Network Analysis and Stability Assessment of Fresh Agricultural Products (FAPs) Supply Chain
1 Introduction
1.1 FAPs Supply Chain
1.2 Supply Chain Stability
1.3 Complex Network
2 An Assessment Method Based on Complex Network
2.1 Network of Factors Affecting FAPs Supply Chain Stability
2.2 FAPs Supply Chain Stability Assessment Methods
3 Practical Implementation
3.1 Influence Factor
3.2 Influence Factor Network
3.3 Influence Factor Assessment Results
4 Summary and Conclusion
References
Risk Assessment of Wuhan Frozen Food Supply Chain Based on AHP-FCE Method
1 Introduction
2 Literature Review
3 Risk Identification in the Frozen Food Supply Chain
3.1 Risk Factors for Frozen Food Supply Chain
3.2 Establish a Risk Index System for Frozen Food Supply Chain
4 Risk Assessment Model Based on AHP-FCE
4.1 Index Weight Construction Based on AHP
4.2 Risk Assessment and Analysis of Results
5 Conclusion
References
Forecasting of COVID-19 Dynamics by Agent-Based Model
1 Introduction
2 Materials and Methods
3 Results
4 Conclusions
References
The Intellectual Structure of Sustainable Leadership Studies: Bibliometric Analysis
1 Introduction
2 Literature Review
3 Research Design
3.1 Search Criteria and Data Extraction
4 Results and Discussion
5 Conclusion
References
A Crypto-Stego Distributed Data Hiding Model for Data Protection in a Single Cloud Environment
1 Introduction
1.1 Cloud Computing
1.2 Steganography
1.3 Crypto-Steganography
1.4 Distributed Steganography
1.5 Selected Works
2 Methodology
2.1 Existing Models
2.2 Weaknesses of the Existing Models
2.3 Proposed Model
2.4 Structure of the Proposed Model
3 Results
3.1 The Encryption Workflow
3.2 Secret Message Embedding Workflow
4 Evaluation
4.1 Security Analysis of the Proposed Model
5 Conclusion
References
Land Market Balance Computation Within the Digital Transformation
1 Introduction
2 Materials and Methods
3 Results
4 Simulation Case
5 Summary and Conclusion
References
The Impact of Environmental Social Responsibility Concept on Sustainable Development in the Context of Big Data
1 Introduction
2 Literature Review
3 Materials and Methods
4 Results
5 Summary and Conclusion
References
CyberSCADA Network Security Analysis Model for Intrusion Detection Systems in the Smart Grid
1 Introduction
1.1 CyberSCADA Systems
1.2 CyberSCADA Attack Models
1.3 Intrusion Detection Systems
1.4 Machine Learning Models
2 Methods and Datasets
2.1 Smart Grid Testbed and Datasets
2.2 Modelled Scenarios
2.3 Model Evaluation Parameters and Metrics
2.4 Tools and Experimental Setup
2.5 Model Selection and Prediction
2.6 Model Cross Validation
3 Discussion of Results
4 Conclusion
References
MHESIDM: Design of a Multimodal High-Efficiency Stock Prediction Model for Identification of Intra-day Movements
1 Introduction
2 Literature Review and Background
3 Methodology Describing Design of the Proposed Stock Prediction Model
3.1 Evaluation of Technical Indicators and ARIMA Model
3.2 Design of the Novel Sentiment Analysis Layer
3.3 Design of the Fusion Layer for Final Stock Value Prediction
4 Result Analysis and Comparisons
4.1 Experimental Process
4.2 Results
5 Conclusions
References
Kohonen Maps for Clustering Fictitious Human Capital
1 Introduction
2 Related Works
3 Materials and Methods
3.1 Data Mining Algorithms
3.2 Definition of Indicators
3.3 Data Collection
3.4 Clustering, Using Neural Networks
3.5 A Neural Network Creation
4 Results and Discussion
5 Summary and Conclusion
References
Advances in Intelligent Systems and Intellectual Approaches
Development of a Method for the Intelligent Interface Used in the Synthesis of Instrumental Software Systems
1 Introduction
2 The Implemented Works
3 Problem Statement for the ISS Synthesis
4 Model of the Synthesis Process
4.1 About the Intelligent Interface
5 Building a Knowledge Base
6 Development of a Mathematical Model for an Intelligent Interface
7 Experiments
8 Summary and Conclusion
References
Evaluation of Shoreline Utilization of Inland River Ports
1 Introduction
2 Evaluation Index System of Inland Port Shoreline Utilization
2.1 Evaluation Objectives
2.2 Evaluation Principles
2.3 Construction of Evaluation Index System
3 Evaluation Model for Utilization of Inland Port Shoreline
3.1 Research on Evaluation Criteria
3.2 Choice of Evaluation Method
3.3 Evaluation Model Construction
4 Demonstration of the Evaluation of Inland Port Shoreline Utilization
4.1 Evaluation Object Selection
4.2 Basic Data Processing
4.3 Shoreline Utilization Evaluation
4.4 Evaluation Conclusion
5 Summary and Conclusion
References
Build a Path of Integrated and Intelligent Low-Carbon Waterway Transportation System
1 Introduction
2 New Situation and Requirements Faced by the Development of Waterway Transportation in China
3 Promote the Integrated Development of Waterway Transportation
3.1 Drive the Overall Integration of Waterway Transportation and Major National Strategies
3.2 Promote the Overall Integration of Water Transportation and Other Transportation Modes
3.3 Promote the Overall Integration of Water Transportation and Processing and Manufacturing Industry
3.4 Encourage the Overall Integration of Ports and the Modern Service Industry
4 Accelerate the High-Quality Development of Water Transportation
4.1 Accelerate the Upgrading of Water Transportation Intelligence
4.2 Accelerate the Development of Green Water Transportation
4.3 Accelerate the Promotion of Water Transportation Carbon Reduction
4.4 Accelerate the Development of Water Transportation Safety
4.5 Improve China’s International Water Transport Governance Capacity
5 Summary and Outlook
References
Optimize the Layout and Improve the Toughness of Waterway Transportation and Logistics Facilities
1 Introduction
2 Current Situation and Basis of Waterway Transport Logistics Development in China
2.1 Continuous Improvement in the Layout of Waterway Transport Facilities
2.2 World-Leading Waterway Transport Facilities and Transport Scale
2.3 Positive Progress in Integrated Waterway Transport Development
2.4 Rapid Start of High-Quality Development of Waterway Transportation
2.5 Comprehensive Strength of Large Port and Shipping Enterprises Continues to Improve
2.6 Port and Shipping Enterprises Accelerate the Pace of Going Global
2.7 Active Participation in the Global Maritime Governance System
3 New Situation and Requirements Faced by the Development of Waterway Transportation in China
3.1 Water Transport Demand Forecast
3.2 New Situation and Requirements
4 Ways to Improve the Toughness of Waterway Transportation and Logistics Facilities System
4.1 Take Advantage of the Main Hub of the Port and the Main Skeleton of the National High-Grade Channel
4.2 Supplement the Short Board of Inland Waterway and Jointly Build the Main Framework of the National Comprehensive Three-Dimensional Transportation Network
4.3 Strengthen the Function of Port Hub and Help the Construction of Multi-level Integrated National Comprehensive Transportation Hub
4.4 Optimize the Layout of Overseas Ports and Improve the International Maritime Logistics Network
5 Summary and Outlook
References
Study and Implementation of Biped Robot Soccer Based on Machine Vision
1 Introduction
2 Method of Biped Robot Kicking Ball Based on Machine Vision
2.1 Establishment of Small Ball Data Set
2.2 Training of Small Ball Target Recognition Model Based on YOLOv4
2.3 Biped Robot Leg Motion Control
2.4 Process of Biped Robot Kicking Ball
3 Kicking Experiment of Robot Soccer Based on Machine Vision
3.1 Target Ball Recognition Experiment
3.2 Robot Motion Stability Test Experiment
3.3 System Operation Speed and Stability Test Experiment
4 Conclusion
References
Character Recognition System Based on Deep Learning Networks
1 Introduction
2 System Composition
2.1 Hardware Composition
2.2 Software Composition
3 Identification Process
3.1 Image Acquisition
3.2 Image Preprocessing
3.3 Noise Reduction
3.4 Text Positioning and Outline Processing
3.5 Character Segmentation
3.6 Normalization
4 Identification Algorithm
4.1 KNN
4.2 SVM
4.3 ANN
5 Comparison of Test Results
6 Conclusion
References
Application of Artificial Intelligence Technology in Route Planning of Logistics Highway Transportation
1 Introduction
2 Application of Artificial Intelligence in Route Planning of Logistics Highway Transportation
2.1 Application of Artificial Intelligence and Logistics
2.2 Application of Artificial Intelligence in Route Planning of Logistics Highway Transportation
3 Logistics Highway Transportation Route Planning System Based on Artificial Intelligence
3.1 System Objectives and Key Information
3.2 System Architecture Design
3.3 System Module Design
4 Conclusion
References
Traffic Flow Characteristics of Speed Limited Roads Based on Cellular Automata NaSch Traffic Flow Model
1 Introduction
2 The Establishment of the Road Section Model with Speed Limit Zone
3 Analysis of Simulation Results
3.1 Influence of Speed Limit Zone Length La on Traffic Flow
3.2 The Effect of the Maximum Speed vmax2 Allowed in the Speed Limit Zone on the Traffic Flow
4 Conclusion
References
Practice Research on Zero External Discharge Management of Biochemical Wastewater from a Steel Plant
1 Introduction
1.1 Research Purpose and Significance
1.2 Main Research Contents
2 Literature Review and Method
2.1 Literature Review
2.2 Method
3 Analysis of Biochemical Wastewater Discharge Status in a Steel Plant
3.1 Introduction of a Steel Plant
3.2 Analysis of Biochemical Wastewater Discharge Status in a Steel Plant
4 Practical Study on Zero External Discharge of Biochemical Wastewater from a Steel Plant
4.1 Method of Achieving Zero External Discharge of Biochemical Wastewater
4.2 Biochemical Wastewater Zero Discharge Production Practice
4.3 Application of 5G+ Industrial Internet in Zero External Wastewater Discharge
5 Benefits of Zero Wastewater Discharge Research
6 Conclusion
References
Key Information Extraction Method Study for Road Traffic Accidents via Integration of Rules and SkipGram-BERT
1 Introduction
2 Preprocess of Road Traffic Key Information Extraction
2.1 Construction of Key Information Index System of Road Traffic
2.2 Sentence Segmentation and Word Segmentation
3 Information Extraction Model
3.1 The Theory Summary of Model
3.2 SkipGram-BERT Model
3.3 Build Information Extraction Model of Integration Rules and SkipGram-BERT
4 Example Verification
4.1 Road Traffic Accident Report
4.2 Model Training
4.3 Result Analysis
5 Conclusion
References
Pre-design Productivity Improving by Decisions Making Based on an Advanced Morphological Approach
1 Introduction
2 Morphological Analysis
3 Advanced Morphological Approach in the Choice of GTE Solutions
4 Conclusions
References
Condition Monitoring Method for Bridge Crane Equipment Based on BIM Technology and Bayesian Theory
1 Introduction
2 VSI Bayesian C-control Chart
2.1 C-control Chart Based on Posterior Predictive Distribution
2.2 VSI c-control Chart Based on the Posterior Predictive Distribution
3 Economic Statistical Control Chart
3.1 Parameter Definition and Model Assumptions
3.2 Quality Cycle Time
3.3 Lost Profit Per Hour
4 An Illustrative Example
5 Sensitivity Analysis
6 Conclusion
References
Combining OCR Methods to Improve Handwritten Text Recognition with Low System Technical Requirements
1 Introduction
2 Methods of Text Detection and Recognition
2.1 Convolutional Neural Networks as an Instrument for Feature Detection
2.2 A Recurrent Neural Network as the Best Way to Work with Text
3 Short Dataset Analysis
4 Results of Research
5 Summary
References
Extraction of Structural Elements of the Text Using Pragmatic Features for the Nomenclature of Cases Verification
1 Introduction
2 Materials and Methods
2.1 Data Structure
2.2 Pragmatic Features
2.3 Verification Scheme
3 Results
4 Discussion
5 Summary and Conclusion
References
Review on the Positioning Error Causes Analysis and Methods to Improve Positioning Accuracy of Parallel Robot
1 Introduction
2 Analysis of the Causes of Positioning Errors in Parallel Robots
2.1 Classification of Positioning Errors in Parallel Robots
2.2 Error Sources of Parallel Robots
3 Ways to Improve the Accuracy of Parallel Robots
3.1 Error Prevention Method
3.2 Parallel Robot Calibration Technology
4 Summary and Conclusion
References
Problems and Prospects for Minority Languages in the Age of Industry 4.0
1 Introduction
2 Sociolinguistic Aspects of Languages
3 Current State and Level of Use of World Languages
4 Principles of Multi-cultural Security in Azerbaijan
5 Challenges and Prospects for Language Protection in Cyberspace
6 Languages of National Minorities in the Context of Industry 4.0
7 Conclusion
References
The Method of Analyzing the Level of Foreign Language Knowledge of Higher Education Students Based on Machine Learning
1 Introduction
2 Overview of Forecasting Methods and Problem Statement
3 Analysis of Model Input Data
4 Development of the Object Model
5 Conclusions
References
Phishing Website Detection with and Without Proper Feature Selection Techniques: Machine Learning Approach
1 Introduction
2 Proposed Methodology
2.1 Dataset Information
2.2 Train-Test Dataset Split Ratios
2.3 Proper Feature Selection Techniques
2.4 Model Evaluation Metrics
2.5 Implementation Tool
3 Result and Discussion
3.1 Comparison Among Train-Test Dataset Split Ratios
3.2 Comparative Performance Analysis Before and After Applying Proper Feature Selection Techniques
3.3 Performance Comparison Against Related Research Works
4 Summary and Conclusion
References
An Efficient Classification Techniques for Brain Tumor Using Features Extraction and Statistic Methods, with Machine Learning Algorithms
1 Introduction
2 Literature Review
3 Proposed Methodology
3.1 Pre-processing
3.2 Feature Extraction
3.3 Feature Selection
3.4 Machine Learning Algorithms
4 Results and Discussion with Comparative Analysis
4.1 Implementation Setup
4.2 Result of Classifier Algorithm
4.3 Result of Evolutionary Algorithms
4.4 Comparative Analysis
5 Conclusions and Future Directions
References
Advances in Educational Approaches
Teaching Dilemma and Solution of Mathematics Courses in Applied Undergraduate Universities Under the Background of Professional Certification
1 Introduction
2 Difficulties and Problems in the Teaching of Mathematics Courses
2.1 Problems in Mathematics Teaching
2.2 Traceability of Problems in Mathematics Teaching
3 Ways to Solve the Dilemma
3.1 Based on the Goal Orientation, Reconstruct the Mathematics Curriculum Content System Based on the Engineering Education Professional Certification
3.2 Build a Diversified Teaching Platform with Students as the Center
3.3 Establish a New Assessment and Evaluation Mode under the Guidance of the Concept of Continuous Improvement
4 Practice and Analysis of Mathematics Curriculum Reform
4.1 Design and Implementation of Teaching Reform Plan
4.2 Effect Analysis of Mathematics Teaching Reform
5 Conclusion
References
Teaching Strategies of Chinese Characters as a Foreign Language: A Corpus-Based Analysis
Abstract
1 Introduction
2 Research Methodology
2.1 Corpus Construction
2.2 Sampling
2.3 Statistical Analysis
3 Error Analysis of Chinese Character Writing
3.1 Stroke Error
3.1.1 Missing Strokes
3.1.2 Redundant Strokes
3.1.3 Confusion of Similar Strokes
3.1.4 Confusion Between Single Strokes and Compound Strokes
3.1.5 Misplaced Strokes
3.1.6 Wrong Stroke Direction
3.1.7 Wrong Stroke Combination
3.2 Components Error
3.2.1 Missing Components
3.2.2 Redundant Components
3.2.3 Confusion of Similar Components
3.3 Structure Error
4 Chinese Character Teaching Strategies
4.1 Chinese Character Stroke Teaching Strategies
4.2 Chinese Character Component Teaching Strategies
4.3 Chinese Character Structure Teaching Strategies
5 Conclusion
Acknowledgment
References
History, Hotspot and Evolution of University Governance Research—Visual Analysis Based on CSSCI Documents Collected by CNKI
1 Introduction
1.1 Basic Ideas of the Research
1.2 Literature Review
2 Research Methods and Number of Achievements on “University Governance”
2.1 Data Source
2.2 Research Methods and Tools
2.3 Quantity and Trend of Paper Achievements
3 Research Hotspots and Themes of University Governance in China
3.1 Research Hotspots of University Governance
3.2 Research Topics of University Governance
4 The Course and Trend of University Governance Research
4.1 Exploration and Development Stage of University Governance Research (1998–2007)
4.2 Deepening Development Stage of University Governance Research (2008–2017)
4.3 Connotative Development Stage of University Governance Research (2018 to Present)
5 Conclusion
References
Scientific Evaluation and Effectiveness Improvement of Talent Introduction in Universities in the New Era Based on AHP
1 Introduction
2 Introduction to Analytic Hierarchy Process
3 Evaluation Index System of High-Quality Introduced Talents in the New Era
4 Effective Management of High-Quality Talent Introduction
4.1 Top Level Design for High-Quality Talent Introduction
4.2 Organization and Implementation of High-Quality Talent Introduction
4.3 Inspection Feedback of High-Quality Talent Introduction
4.4 Evaluation and Incentive of High-Quality Talent Introduction
5 Conclusion
References
Sports Media Talent Training Based on PBL in the Context of New Liberal Arts
1 Introduction
2 Sports Media Talent Training in the Context of New Liberal Arts
2.1 Quality Requirements of Sports Media Talents
2.2 Construction of Sports Media Talent Training System under the New Liberal Arts Environment
3 PBL and Its Role in the Training of Sports Media Talents
3.1 Problem-Based Learning and Project-Based Learning
3.2 Connotation and Steps of Project-Based Learning
3.3 Core Elements of PBL
4 Practice of Sports Media Talent Training and Teaching Reform
4.1 “Trilogy” Project Teaching Method
4.2 PBT Teaching Mode of “Student Development as the Center”
4.3 Continuous Promotion of Smart Teaching Projects
5 Conclusion
References
Research on the New Mode of Integrating Higher Vocational Aesthetic and Ideological and Political Education in the New Era
1 Introduction
2 The Idea of Innovative Practice
2.1 Adhere to Integrity and Innovation
2.2 Ideological and Political Course Teaching and Practice
3 The Goal and the Basis of the Innovation Practice
4 The Specific Implementation Link and Path of Innovative Practice
4.1 Innovative Practice Teaching Analysis
4.2 Innovate the Practical Teaching Strategy
4.3 Innovative Practice Teaching Implementation Process
4.4 Research and Extension
5 The Specific Implementation Link and Path of Innovative Practice
5.1 Happy to Live Ideological and Political Class, Improve the Effectiveness of Education
5.2 Develop Key Abilities and Shape a Perfect Personality
5.3 Highlight the Logical Thinking
5.4 Focus on Practice and Society
6 Conclusion
References
Training Path of International Talents in Smart Manufacturing Under the Background of Integration of Industry and Education
1 Introduction
2 Cultivation and Demand of International Talents in Intelligent Manufacturing
2.1 Analysis of Talent Supply and Demand of Intelligent Manufacturing
2.2 Urgency of Intelligent Manufacturing Talent Training Output
2.3 Importance of Intelligent Manufacturing Talent Training
2.4 Strategy of Intelligent Manufacturing Talent Training
3 New Requirements for International Talent Training of Intelligent Manufacturing
3.1 Construction of Talent Training System of China-ASEAN Intelligent Manufacturing Industry Cluster
3.2 Formation of School-Enterprise Characteristic Projects and the Concept of Co-Education of Industry and Education
3.3 Training and Output of Talents
4 International Talent Training and Construction Path of Intelligent Manufacturing
4.1 Innovate the Talent Training Mode and Build a “Four Double” Education System
4.2 Build an Internship and Training Base, and Establish a Training Base and Teaching Cases of “Integration of Industry and Education, Dual-Use of Schools and Enterprises”
4.3 Build a Teaching Resource Base, and Reform and Practice the Application Curriculum System Based on the Cultivation of Applied and Innovative Abilities
4.4 Improve the Quality of Professional Construction and Establish the Curriculum Standard of “1 + X” Vocational Skill Level Certificate in the Field of Intelligent Manufacturing
5 Conclusion
References
Quality Evaluation of University Maritime Education Based on Entropy Method—Taking Wuhan University of Technology as an Example
1 Introduction
2 Questionnaire Method
3 Research Method
3.1 Entropy Weight Method
3.2 Correlation Analysis
4 Results and Analysis
4.1 Do not Understand Options
4.2 Satisfaction Ranking
4.3 Correlation Analysis
5 Conclusion
References
Function of Cultural Construction on Service Quality of University Hospitals and Evaluation of Satisfaction
1 Introduction
2 Functions and Missions of University Hospitals
2.1 Basic Functions of the Hospital
2.2 Functional Orientation of University Hospitals
2.3 Work Tasks of University Hospitals in the Epidemic Era
3 Contents and Significance of Hospital Culture Construction
3.1 Meaning of Hospital Culture
3.2 Contents of Hospital Culture
3.3 The Essence of Hospital Culture
3.4 The Role of Hospital Culture Construction
4 Service Quality Evaluation of WHUT Hospital Driven by Cultural Construction
4.1 WHUT Hospital Management Philosophy
4.2 Satisfaction of School Hospital Services
4.3 Survey and Evaluation of Service and Satisfaction of WHUT Hospital
5 Conclusion
References
Construction of Experimental Teaching System for Mechanical Majors Under the OBE Concept
1 Introduction
2 The Current Situation of Mechanical Experimental Teaching
3 Theoretical Framework of Mechanical Professional Experimental Teaching System Based on OBE
4 Specific Construction of Experimental Teaching System for Mechanical Professions
4.1 Objective System
4.2 Curriculum System
4.3 Quality Assurance System
5 Conclusion
References
New Practice for University Innovation and Entrepreneurship Education Based on the “432” Model: Taking the Open Innovation Laboratory at WHUT as an Example
1 Introduction
2 Theories About “432” Model
2.1 Theory of Four Dimensions
2.2 Theory of Three Stages
2.3 Theory of “Two Carriers”
3 Practical Applications
3.1 Four Dimensions’ Application in Our Research
3.2 Three Stages’ Application in College Students
3.3 Two Carriers Unite the Synergy of Dual Innovation and Promote the Double Innovation Leap
4 Conclusion
References
A Study on the Demand Orientation and Satisfaction Strategies for Vocational Training of Child Welfare Workers in China
1 Introduction
1.1 Research Background
1.2 Literature Review
1.3 Research Methods
2 Vocational Training Needs of Child Welfare Workers and Their Satisfaction
2.1 Demand Orientation and Satisfaction of Training Content
2.2 Demand Orientation and Satisfaction of Training Organization
2.3 Training Effect
3 Difficulties in Meeting the Professional Training Needs of Child Welfare Workers
3.1 Rapid Turnover of Personnel and Loss of Training Results
3.2 Ignoring Needs Assessment and Rigid Training Content
3.3 Training Organization Lacks Overall Planning
3.4 The Child Welfare Workers Lost Their Voices in the Training Feedback
4 Improvement Strategies for Meeting the Vocational Training Needs of Child Welfare Workers
4.1 Cultivate and Attract Talents and Strengthen Team Building
4.2 Conduct Training Needs Assessment
4.3 Improve Training Courses and Play a Supervisory Role
4.4 Training Effect Is Linked to Incentive Mechanism
5 Conclusion
References
The Study of Graduates’ Workplace Emotion and Performance Under the Background of Industry and Education Integration
1 Introduction
2 Literature Review
2.1 A Review of the Literature on Workplace Emotion
2.2 A Review of the Literature on Emotional Intelligence
3 Hypothesis
3.1 Effect of New Employees’ Negative Emotions on Their Performance
3.2 The Impact of New Employees’ Emotional Intelligence on Their Performance
3.3 The Moderating Role of Call Center Employees’ Emotional Intelligence
4 Research Method
4.1 Design of Questionnaire
4.2 Data Collection
4.3 Statistical Analysis
5 Data Analysis
5.1 Correlation Analysis
5.2 Multiple Regression Analysis
5.3 Moderating Effect of Emotion Intelligence
6 Discussion
6.1 Conclusion
6.2 Theoretical Significance
6.3 Practical Significance
References
Quality Evaluation of Graduates in Applied Technology Universities Based on Fuzzy AHP
1 Introduction
1.1 University of Applied Technology
1.2 Fuzzy AHP
2 Applied Technology Universities Students' Quality Evaluation Model
2.1 Applied Universities Students’ Quality Evaluation Process
2.2 Quality Evaluation Criteria for Graduates in Applied Universities
2.3 Modeling of Student Evaluation Index Based on Fuzzy AHP
3 Case Study
3.1 W University
3.2 Quality Evaluation of Graduates in W University
3.3 Student Quality Level
4 Conclusion
References
Digital Technologies in the Educational Process and the Effectiveness of Their Use
1 Introduction
2 Literature Review
3 Description of Digital Educational Technologies
4 Evaluation of Effectiveness
4.1 Assessment Procedures
4.2 Implementation
4.3 Analysis of the Survey Results
5 Summary and Conclusion
References
Educational FinTech: Promoting Stakeholder Confidence Through Automatic Incidence Resolution
1 Introduction
1.1 Related Works
2 Methodology
2.1 The Proposed System
3 Results
3.1 Design
3.2 Implementation
4 Conclusion
References
Developing Educational Content for Distance Learning Purposes Using Mobile Technologies and Optimized Filmmaking Models
1 Introduction
2 Background Study
2.1 Working Hypothesis
3 Practical Part
3.1 Preparatory Part
3.2 Conducting Process
3.3 Analysis and Results
4 Summary and Conclusion
References
Methods of Analytical Processing of Digital Data in Educational Management
1 Introduction
1.1 Problem Statement
2 Presentation of the Research Main Material
3 National Status of Educational Data Analytics
4 Innovative Methods of Analytical Analysis of Educational Data
5 Research Implementation and Research Perspective
6 Discussion and Conclusion
References
Author Index

Lecture Notes on Data Engineering and Communications Technologies 158

Zhengbing Hu · Yong Wang · Matthew He, Editors

Advances in Intelligent Systems, Computer Science and Digital Economics IV

Lecture Notes on Data Engineering and Communications Technologies Volume 158

Series Editor: Fatos Xhafa, Technical University of Catalonia, Barcelona, Spain

The aim of the book series is to present cutting-edge engineering approaches to data technologies and communications. It will publish the latest advances on the engineering task of building and deploying distributed, scalable and reliable data infrastructures and communication systems. The series will have a prominent applied focus on data technologies and communications, with the aim of promoting the bridging from fundamental research on data science and networking to data engineering and communications that lead to industry products, business knowledge and standardisation. Indexed by SCOPUS, INSPEC, EI Compendex. All books published in the series are submitted for consideration in Web of Science.

More information about this series at https://link.springer.com/bookseries/15362


Editors

Zhengbing Hu
International Center of Informatics and Computer Science, Faculty of Applied Mathematics, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Kyiv, Ukraine

Yong Wang
School of Management, Wuhan University of Science and Technology, Wuhan, China

Matthew He
Halmos College of Arts and Sciences, Nova Southeastern University, Fort Lauderdale, FL, USA

ISSN 2367-4512   ISSN 2367-4520 (electronic)
Lecture Notes on Data Engineering and Communications Technologies
ISBN 978-3-031-24474-2   ISBN 978-3-031-24475-9 (eBook)
https://doi.org/10.1007/978-3-031-24475-9

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

This book comprises high-quality peer-reviewed research papers presented at the 4th International Symposium on Computer Science, Digital Economy, and Intelligent Systems (CSDEIS2022), held in Wuhan, China, from November 11 to 13, 2022, and organized jointly by the Wuhan University of Technology, Hubei University of Technology, Wuhan University of Science and Technology, the Polish Operational and Systems Society, and the International Center of Informatics and Computer Science (ICICS).

The topics discussed in the book include state-of-the-art papers in the field of computer science and its technological applications; intelligent systems and intellectual methods; and digital economics and educational approaches. It is an excellent source of references for researchers, engineers, management practitioners, graduate students, and undergraduate students interested in computer science and its applications in engineering and management.

The development of artificial intelligence systems and their applications in various fields plays an important role in interdisciplinary studies and practical problem-solving, and the advancement of the intelligent system disciplines is one of the most urgent tasks of modern science and technology. Simultaneously, with the rapid advances in computer technologies, quantum computing, and digital communications, the lives and professional activities of people are changing throughout the world. In particular, with the formation of the concept of the “digital economy”, these changes have led to profound transformations in economic and financial activities.

Among all submissions to the conference, this book includes the best contributions selected by the program committee. We are grateful to Springer-Verlag and to Fatos Xhafa, the editor responsible for the series “Lecture Notes on Data Engineering and Communications Technologies”, for their great assistance and support in publishing the conference proceedings.

Zhengbing Hu
Yong Wang
Matthew He

Organization

Conference Organizers and Supporters

Wuhan University of Technology, China
Hubei University of Technology, China
Wuhan University of Science and Technology, China
Huazhong University of Science and Technology, China
Polish Operational and Systems Society, Poland
International Center of Informatics and Computer Science, Ukraine
International Research Association of Modern Education and Computer Science, Hong Kong


Contents

Advances in Computer Science and Their Technological Applications Bar-Code Recognition Based on Machine Vision . . . . . . . . . . . . . . . . . . Hui Jing, Hai-ping Luo, Tuo Zhou, and Dong-yuan Ge

3

Some Problems and Solutions for Non-absorbent Wall Climbing Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wugang Li, Jiaxu Mo, Fengmei Chen, Lanxiang Wei, and Zhenpeng Qin

14

SOC Estimation of Lithium Battery Based on BP Neural Network with Forgetting Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shiling Huang and Meiyan Li

25

Design of Train Circuit Parallel Monitoring System Based on CTCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jianqiu Chen, Yanzhi Pang, and Hao Zhang

41

Information Spaces and Efficient Information Accumulation in Calibration Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Peter Golubtsov

53

The Novel Multi Source Method for the Randomness Extraction . . . . . Maksim Iavich and Tamari Kuchukhidze

63

Post-quantum Scheme with the Novel Random Number Generator with the Corresponding Certification Method . . . . . . . . . . . . . . . . . . . . Maksim Iavich

76

Prediction of UWB Positioning Coordinates with or Without Interference Based on SVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hua Yang, Haikuan Yang, Junxiong Wang, Dang Lin, and Kang Zhou

89

Analysis and Comparison of Routing and Switching Processes in Campus Area Networks Using Cisco Packet Tracer . . . . . . . . . . . . . . . . 100 Kvitoslava Obelovska, Ivan Kozak, and Yaromyr Snaichuk

ix

x

Contents

A Parallel Algorithm for the Detection of Eye Disease . . . . . . . . . . . . . . 111 Lesia Mochurad and Rostyslav Panto Systems Theory, Mechanics with Servoconstraints, Artificial Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 G. K. Tolokonnikov A New Approach to Search Engine Optimization Based on the Synthesis of a Virtual Promotion Map . . . . . . . . . . . . . . . . . . . . . . . . . . 136 Sergey Orekhov Software Reliability Models: A Brief Review and Some Concerns . . . . . 152 Md. Asraful Haque An Enhanced Session Based Login Authentication and Access Control Scheme Using Client File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 Bello A. Buhari, Afolayan A. Obiniyi, Sahalu B. Junaidu, and Armand F. Donfack Kana Electric Meters Monitoring System for Residential Buildings . . . . . . . . 173 Fedorova Nataliia, Havrylko Yevgen, Kovalchuk Artem, Smakovskiy Denys, and Husyeva Iryna Implementation of Blockchain Technology for Secure Image Sharing Using Double Layer Steganography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186 Lalitha Kandasamy and Aparna Ajay Nature-Inspired DMU Selection and Evaluation in Data Envelopment Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196 Seyed Muhammad Hossein Mousavi Hybrid Convolution Neural Network with Transfer Learning Approach for Agro-Crop Leaf Disease Identification . . . . . . . . . . . . . . . 209 Md Shamiul Islam, Ummya Habiba, Md Abu Baten, Nazrul Amin, Imrus Salehin, and Tasmia Tahmida Jidney Illumination Invariant Based Face Descriptor . . . . . . . . . . . . . . . . . . . . 218 Shekhar Karanwal Classification of Chest X-Ray Images for COVID-19 Positive Patients Using Transfer Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227 N. Manju, V. N. Manjunath Aradhya, S. Malapriya, and N. Shruthi Arabic Sentiment Classification on Twitter Using Deep Learning Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236 Donia Gamal, Marco Alfonse, Salud María Jiménez-Zafra, and Mostafa Aref Combination Probability in Finite State Machine Model for Intelligent Agent of Educational Game “I LOve Maratua” . . . . . . . . . . . . . . . . . . . 252 Reza Andrea, Amelia Yusnita, Jundro Daud, and Aulia Khoirunnita

Contents

xi

Multi-threaded Parallelization of Automatic Immunohistochemical Image Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266 Oleh Berezsky, Oleh Pitsun, Grygory Melnyk, Vasyl Koval, and Yuriy Batko Advances in Digital Economics and Methodological Approaches Digital Finance and Corporate Social Responsibility—Empirical Evidence from China . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279 Zichao Han, Zhihong Zeng, Youtang Zhang, Liu Yang, Feifei Yuan, Quanfang Xiao, and Xiaochen Sun Establishing the Optimal Market Price for a Product Using a NeuroFuzzy Inference System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292 Nataliya Mutovkina Exploring the Application of Intelligent Logistics Technology in Pharmaceutical Cold Chain Logistics . . . . . . . . . . . . . . . . . . . . . . . . . . . 302 MeiE Xie and LiChen Qiao Quality Evaluation on Agri-Fresh Food Emergency Logistics Service . . . Yong Wang, Qian Lu, Qing Liu, Cornel Mihai Nicolescu, and Yiluo Sun

313

Research on Financing Efficiency of Different Financing Methods in AI Industry Based on DEA Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330 Yaqiong Pan and Zhengyi Lu The Model of the Novel One Windows Secure Clinic Management Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339 Maksim Iavich and Lasha Sharvadze Research on the Construction Scheme of Wuhan Emergency Logistics System Under the Background of Public Health Emergencies . . . . . . . . 349 Miao He, Jiaxiang Yu, and Zijun Kuang Macroeconomic Determinants of Economic Development and Growth in Ukraine: Logistic Regression Analysis . . . . . . . . . . . . . . . . . . . . . . . . 358 Larysa Zomchak and Iryna Starchevska Key Interest Rate as a Central Banks Tool of the Monetary Policy Influence on Inflation: The Case of Ukraine . . . . . . . . . . . . . . . . . . . . . 369 Larysa Zomchak and Anastasia Lapinkova Digital Transformation in Ukraine During Wartime: Challenges and Prospects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380 Maryna Nehrey, Inna Kostenko, and Yuriy Kravchenko Complex Network Analysis and Stability Assessment of Fresh Agricultural Products (FAPs) Supply Chain . . . . . . . . . . . . . . . . . . . . . 392 Jianhua Chen and Ting Yin

xii

Contents

Risk Assessment of Wuhan Frozen Food Supply Chain Based on AHP-FCE Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407 Chen Xiaomeng, Huang Huaye, and Wang Zhangqiong Forecasting of COVID-19 Dynamics by Agent-Based Model . . . . . . . . . 420 Dmytro Chumachenko The Intellectual Structure of Sustainable Leadership Studies: Bibliometric Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430 Viktoriya Kharchuk and Ihor Oleksiv A Crypto-Stego Distributed Data Hiding Model for Data Protection in a Single Cloud Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443 Samuel O. Acheme, Wilson Nwankwo, David Acheme, and Chukwuemeka P. Nwankwo Land Market Balance Computation Within the Digital Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461 Nataliia Klymenko, Maryna Nehrey, and Vira Ohorodnyk The Impact of Environmental Social Responsibility Concept on Sustainable Development in the Context of Big Data . . . . . . . . . . . . . . . 472 Vira Ohorodnyk, Olha Nimko, and Maryna Nehrey CyberSCADA Network Security Analysis Model for Intrusion Detection Systems in the Smart Grid . . . . . . . . . . . . . . . . . . . . . . . . . . . 481 John E. Efiong, Bodunde O. Akinyemi, Emmanuel A. Olajubu, Ganiyu A. Aderounmu, and Jules Degila MHESIDM: Design of a Multimodal High-Efficiency Stock Prediction Model for Identification of Intra-day Movements . . . . . . . . . . . . . . . . . . 500 Mausami Sawarkar Dagwar and Sachin S. Agrawal Kohonen Maps for Clustering Fictitious Human Capital . . . . . . . . . . . . 519 Stefania Vyshnevska, Vasyl Pryymak, Oksana Hynda, and Hasko Roman Advances in Intelligent Systems and Intellectual Approaches Development of a Method for the Intelligent Interface Used in the Synthesis of Instrumental Software Systems . . . . . . . . . . . . . . . . . . . . . . 531 Shafagat Mahmudova Evaluation of Shoreline Utilization of Inland River Ports . . . . . . . . . . . 546 Xiaoqing Zhang, Hongyu Wu, Xunran Yu, and Yihua Shen Build a Path of Integrated and Intelligent Low-Carbon Waterway Transportation System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558 Changjian Liu, Yanbin Geng, Hongyu Wu, Ziwen Yuan, Biao Ge, Yijun Li, Rui Wang, Xing Xu, and Zhixin Geng


Optimize the Layout and Improve the Toughness of Waterway Transportation and Logistics Facilities . . . 573
Changjian Liu, Tianhang Gao, Li Huang, Hongyu Wu, Xunran Yu, Hanbing Sun, Shanshan Bi, Xiaoqing Zhang, and Zhixin Geng

Study and Implementation of Biped Robot Soccer Based on Machine Vision . . . 591
Xiaozhe Yang, Jin Lv, and Huiting Lu

Character Recognition System Based on Deep Learning Networks . . . 606
Zhongwen Jin, Shuangyi Liang, Tuo Zhou, and Dongyuan Ge

Application of Artificial Intelligence Technology in Route Planning of Logistics Highway Transportation . . . 617
Zhong Zheng, Wanxian He, and Jinming Chen

Traffic Flow Characteristics of Speed Limited Roads Based on Cellular Automata NaSch Traffic Flow Model . . . 629
Lanxiang Wei, Wugang Li, Hongguang Liang, and Fanglei Luo

Practice Research on Zero External Discharge Management of Biochemical Wastewater from a Steel Plant . . . 639
Xudong Deng, Feng Zhou, and Yong Wang

Key Information Extraction Method Study for Road Traffic Accidents via Integration of Rules and SkipGram-BERT . . . 658
Cuicui Li, Jixiu Zhang, Baidan Li, and Zhiyuan Xu

Pre-design Productivity Improving by Decisions Making Based on an Advanced Morphological Approach . . . 673
Dmitry Rakov

Condition Monitoring Method for Bridge Crane Equipment Based on BIM Technology and Bayesian Theory . . . 682
Lei Tang, Zhong Tian, Shu Wu, and Yufei He

Combining OCR Methods to Improve Handwritten Text Recognition with Low System Technical Requirements . . . 693
Volodymyr Semkovych and Volodymyr Shymanskyi

Extraction of Structural Elements of the Text Using Pragmatic Features for the Nomenclature of Cases Verification . . . 703
Myroslav Havryliuk, Iryna Dumyn, and Olena Vovk

Review on the Positioning Error Causes Analysis and Methods to Improve Positioning Accuracy of Parallel Robot . . . 712
Liangliang Zhu, Sergey S. Gavriushin, and Jingzhong Zheng

Problems and Prospects for Minority Languages in the Age of Industry 4.0 . . . 722
Afruz Gurbanova


The Method of Analyzing the Level of Foreign Language Knowledge of Higher Education Students Based on Machine Learning . . . 735
Oleksii Kozachko, Serhii Zhukov, Tetyana Vuzh, and Oksana Kovtun

Phishing Website Detection with and Without Proper Feature Selection Techniques: Machine Learning Approach . . . 745
Kibreab Adane and Berhanu Beyene

An Efficient Classification Techniques for Brain Tumor Using Features Extraction and Statistic Methods, with Machine Learning Algorithms . . . 757
Shah Hussain Badshah, Farhatullah, Gul Zaman khan, Muhammad Abul Hassan, Hazrat Junaid, Muhammad Sohail, Muhammad Awais Mahbob, Izaz Ahamad, and Nadeem Ullah

Advances in Educational Approaches

Teaching Dilemma and Solution of Mathematics Courses in Applied Undergraduate Universities Under the Background of Professional Certification . . . 779
Haoliang Zhu and Yu Huang

Teaching Strategies of Chinese Characters as a Foreign Language: A Corpus-Based Analysis . . . 790
Ling Tao and Haifeng Yang

History, Hotspot and Evolution of University Governance Research—Visual Analysis Based on CSSCI Documents Collected by CNKI . . . 801
Zhiyu Cui

Scientific Evaluation and Effectiveness Improvement of Talent Introduction in Universities in the New Era Based on AHP . . . 813
Fen Li and Pingxin Tu

Sports Media Talent Training Based on PBL in the Context of New Liberal Arts . . . 825
Ziye Wang, Bin Hao, and Wei Lu

Research on the New Mode of Integrating Higher Vocational Aesthetic and Ideological and Political Education in the New Era . . . 835
Lili Li, Feifei Hu, and Lijuan Zhao

Training Path of International Talents in Smart Manufacturing Under the Background of Integration of Industry and Education . . . 845
Huajian Xin, Quan Yang, Chaolu Zhong, and Tong Xie

Quality Evaluation of University Maritime Education Based on Entropy Method—Taking Wuhan University of Technology as an Example . . . 857
Yang Xiang, Tiankui Wang, Jiulong Zhang, and Qingying Zhang


Function of Cultural Construction on Service Quality of University Hospitals and Evaluation of Satisfaction . . . 866
Tingting Wu

Construction of Experimental Teaching System for Mechanical Majors Under the OBE Concept . . . 880
Mengya Zhang, Zhiping Liu, Mengjian Wang, and Yun Chen

New Practice for University Innovation and Entrepreneurship Education Based-on "432" Model-Taking the Open Innovation Laboratory at WHUT as an Example . . . 891
Zaowei Dai, Shi Pu, Ye Yao, Yue Tao, Wanchen Zeng, and Qiang Qiu

A Study on the Demand Orientation and Satisfaction Strategies for Vocational Training of Child Welfare Workers in China . . . 903
Yanping Yu and Guochen Dong

The Study of Graduates' Workplace Emotion and Performance Under the Background of Industry and Education Integration . . . 914
Ping Liu, Ziyue Xiong, and Yi Zhang

Quality Evaluation of Graduates in Applied Technology Universities Based on Fuzzy AHP . . . 925
Chen Chen

Digital Technologies in the Educational Process and the Effectiveness of Their Use . . . 937
Nataliya Mutovkina

Educational FinTech: Promoting Stakeholder Confidence Through Automatic Incidence Resolution . . . 947
Wilson Nwankwo, Paschal U. Chinedu, Aliu Daniel, Saliu Mohammed Shaba, Momoh Omuya Muyideen, Chukwuemeka P. Nwankwo, Wilfred Adigwe, Duke Oghoriodo, and Francis Uwadia

Developing Educational Content for Distance Learning Purposes Using Mobile Technologies and Optimized Filmmaking Models . . . 964
Aidiye Aidarbekov, Gulden Murzabekova, Aitzhan Abdyrov, Zhuldyz Tashkenbayeva, and Alnur Shalkar

Methods of Analytical Processing of Digital Data in Educational Management . . . 974
Nadiia Pasieka, Yulia Romanyshyn, Svitlana Chupakhina, Uliana Ketsyk-Zinchenko, Maria Ivanchuk, and Roman Dmytriv

Author Index . . . 985

Advances in Computer Science and Their Technological Applications

Bar-Code Recognition Based on Machine Vision

Hui Jing1, Hai-ping Luo1, Tuo Zhou2, and Dong-yuan Ge1(B)

1 School of Mechanical and Automotive Engineering, Guangxi University of Science and Technology, Liuzhou 545616, China
[email protected]
2 Library, Guangxi University of Science and Technology, Liuzhou 545006, China

Abstract. In order to enhance the speed, accuracy and robustness of identifying static barcodes, this paper presents an efficient, high-precision barcode recognition technology based on machine vision. The paper analyzes the principle of barcode recognition technology and the coding rules, omits the complicated traditional barcode recognition process, and focuses on enhancing the barcode image. The image is first converted to grayscale and then enhanced with Halcon's emphasize operator. After debugging the enhancement operator, the MaskWidth of the emphasize operator is finally set to 100, MaskHeight to 3, and Factor to 2; under the same conditions the recognition rate is higher and the system is more robust to static pictures. This system can be applied to book management in libraries and to warehouse management, and provides a faster, more accurate and more robust identification technology for barcode detection.

Keywords: Machine vision · Bar-code · Halcon

1 Introduction

With the rapid development of science and technology, the country is gradually moving toward digital modernization. As research results in computer vision and deep learning emerge in large numbers in the big data environment, the application of bar codes is becoming more and more extensive [1–3]. Many data management institutions have adopted bar code technology, for example in library management, retail, and now warehouse management, logistics and tracking. In the field of industrial manufacturing in the United States and European countries, the application of bar code technology has become quite common. Compared with these countries, there is still a big gap in the application of bar codes in China; both the scope of promotion and the fields of application need to be further improved. The two-dimensional code introduced later has the advantage of storing a large amount of diversified data and is widely used. Because bar code recognition technology is not yet mature enough and there are still many problems, Halcon is introduced here to provide a bar code identification technology with high precision and stability.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Hu et al. (Eds.): CSDEIS 2022, LNDECT 158, pp. 3–13, 2023. https://doi.org/10.1007/978-3-031-24475-9_1


2 Bar-Code Recognition Technology

2.1 Bar-Code

Bar-codes first appeared on Wrigley chewing gum. A bar-code is a combination of black bars and blanks arranged according to certain rules, with different widths and gaps. When bar codes were first invented, very few numbers could be represented and therefore very little information could be transmitted, but with continuous improvement many coding rules have been formulated. According to the combination of coding rules, bar codes can represent different numbers, and these numbers each play their own role and carry various information, such as country, manufacturer and product. At the same time, many bar code types have been produced [4], such as the EAN-13 type, used as an international commodity code; the Codabar type, mostly used in medical institutions, library management, etc.; and the ITF-14 type, mostly used as a logistics standard symbol. There are dozens of bar code types, which will not all be described in this article. With the development of the times, the two-dimensional code has also quietly emerged. The two-dimensional code is also called the two-dimensional bar code, and the most common one is the QR Code, where QR stands for Quick Response. It has been a very popular code on mobile devices in recent years; it can store more information and represent more data types than the traditional one-dimensional bar code. Figure 1 below shows an EAN-13 barcode and its encoding rules, and Fig. 2 shows a two-dimensional barcode.

Fig. 1. EAN-13 bar code

Fig. 2. 2-dimensional bar code


2.2 Bar-Code Recognition Technology

2.2.1 Bar-Code Recognition Process

Early identification used a single scanner that could emit light and receive the reflected light, an edge locator to identify black bars or blanks, and a decoder to output the final result. Such simple identification can only be used for early, simple bar codes; after bar codes were later improved they became more complicated and can no longer be detected in such a simple way [5]. As bar codes became more complex, the development of identification technology was also vigorously promoted, and various identification devices were produced, such as flatbed laser scanners and handheld CCD scanners. The identification process is usually completed by three systems: a scanning system, a signal shaping system and a decoding system. Figure 3 shows the basic process of barcode recognition.

Fig. 3. Basic process of bar code identification

2.2.2 Bar-Code Recognition Principle

A one-dimensional bar code consists of black and white vertical bars of different widths, which is why it is called a bar code. First of all, each digit on the bar code is encoded by seven binary bits, and all bar codes follow specific encoding rules, which are not the common ASCII codes [6]. Table 1 below shows the EAN-13 encoding rules. A black bar represents binary 1 and a white bar represents 0, and every 0.33 mm of black or white bar width represents one basic binary bit. The principle of identifying the bar code is therefore to measure the widths of the black and white bars, convert them into binary, and finally output the ISBN code according to the encoding rules. A two-dimensional barcode is a black-and-white graphic in which specific geometric figures are distributed on a plane (in two dimensions) according to a certain rule to record data symbol information; its encoding cleverly uses the computer's internal logic of "0" and "1", representing textual and numerical information as a bit stream of geometric shapes corresponding to binary values.

Table 1. EAN-13 code rule

Character         | Left data character, odd parity (group A) | Left data character, even parity (group B) | Right data character (group C)
0                 | 0001101 | 0100111 | 1110010
1                 | 0011001 | 0110011 | 1100110
2                 | 0010011 | 0011011 | 1101100
3                 | 0111101 | 0100001 | 1000010
4                 | 0100011 | 0011101 | 1011100
5                 | 0110001 | 0111001 | 1001110
6                 | 0101111 | 0000101 | 1010000
7                 | 0111011 | 0010001 | 1000100
8                 | 0110111 | 0001001 | 1001000
9                 | 0001011 | 0010111 | 1110100
Starter           | 101
Middle delimiter  | 01010
Terminator        | 101
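To make the code sets in Table 1 concrete, a minimal Python lookup that maps a 7-module pattern back to its digit and code set; this is only a toy illustration of the decoding principle described above, not the Halcon decoder used later in the paper:

```python
# Left-odd (A), left-even (B) and right (C) code sets from Table 1.
EAN13_A = ["0001101", "0011001", "0010011", "0111101", "0100011",
           "0110001", "0101111", "0111011", "0110111", "0001011"]
EAN13_B = ["0100111", "0110011", "0011011", "0100001", "0011101",
           "0111001", "0000101", "0010001", "0001001", "0010111"]
EAN13_C = ["1110010", "1100110", "1101100", "1000010", "1011100",
           "1001110", "1010000", "1000100", "1001000", "1110100"]

def decode_module_pattern(bits: str):
    """Return (digit, code_set) for a 7-bit module pattern, or None if invalid."""
    for name, table in (("A", EAN13_A), ("B", EAN13_B), ("C", EAN13_C)):
        if bits in table:
            return table.index(bits), name
    return None

# Example: "0111011" is the left-odd (group A) encoding of the digit 7.
print(decode_module_pattern("0111011"))   # -> (7, 'A')
```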

3 Bar-Code Recognition Based on Halcon 3.1 Bar-Code Recognition Steps Based on Halcon The general image recognition process for bar codes is shown in Fig. 4 below. First, the model is initialized; then bar code recognition is performed to identify the bar code type and the area where the bar code is located; the computer then processes the identified bar code; and finally clears the model.

Fig. 4. General steps for barcode identification

In this paper, in order to speed up bar-code recognition, a pre-processing stage is added in the middle to improve the accuracy and recognition speed of the algorithm. The specific process is as follows:

Step 1. Use grayscale images, and choose gray for the camera color space
Step 2. Enhance the image with the emphasize algorithm


The two pre-processing algorithms in the middle are both intended to enhance the recognizability of the image. In step 1, grayscale images are used so that the image occupies less space, and the grayscale of the source image is adjusted by establishing a grayscale map so as to enhance the image [7]; in step 2, the emphasize algorithm is applied directly and its parameters are adjusted to increase the barcode contrast, again to enhance the image. In this paper, the specific steps for identifying bar-codes based on Halcon are as follows:

Step 1. Turn on the externally connected camera
Step 2. Create the 1D bar-code model
Step 3. Refresh the image window
Step 4. Get pictures
Step 5. Pre-process and enhance the images
Step 6. Read the barcode information
Step 7. Print the read information on the picture
Step 8. Turn off the camera and end

3.2 Bar-Code Recognition Operator Algorithm Based on Halcon

Table 2 below shows the bar-code recognition operators based on Halcon used in this paper. This paper mainly enhances the recognizability of the image and the accuracy of the recognition in the preprocessing, using the emphasize operator and adjusting its MaskWidth, MaskHeight and Factor to obtain the best recognition effect.

Table 2. Operators used by the algorithm

Processing algorithm | Halcon operator | Function description
Turn on the camera | open_framegrabber | Turn on the camera
One-dimensional code preprocessing and pre-recognition | create_bar_code_model; set_bar_code_param; dev_close_window; dev_open_window; grab_image; emphasize; set_bar_code_param; find_bar_code | Create the 1D bar-code model; set bar-code graphics preferences; close the image window; open the image window; grab an image; enhance the image; set the parameters for decoding; read bar-codes
QR code pre-recognition | set_data_code_2d_param; find_data_code_2d | Set the parameters for decoding; read 2D codes
Display the result and turn off the camera | disp_message; clear_bar_code_model; close_framegrabber | Display text information; clear the barcode model; turn off the camera
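The paper's pipeline is built from the Halcon operators listed in Table 2. As a rough, library-agnostic analogue of the same grab, grayscale, enhance and decode flow (not the authors' Halcon code; OpenCV and pyzbar are used here purely for illustration):

```python
import cv2
from pyzbar.pyzbar import decode

def read_barcodes(frame):
    """Grayscale, enhance and decode all bar codes found in one camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)            # step: grayscale image
    enhanced = cv2.convertScaleAbs(gray, alpha=1.5, beta=0)   # step: simple contrast boost
    return [(r.type, r.data.decode("utf-8")) for r in decode(enhanced)]

cap = cv2.VideoCapture(0)          # open an externally connected camera
ok, frame = cap.read()             # grab one image
if ok:
    print(read_barcodes(frame))    # print the decoded type and content
cap.release()                      # turn off the camera
```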


4 Experimental Platform and Experimental Results

4.1 Experiment Platform

This experiment uses a fixed industrial camera, model XM500; the photosensitive device is a 1/2.5-in. color CMOS sensor, i.e. the diagonal length of the photosensitive plate is about 10.16 mm, which is enough to meet the normal shooting requirements.

Fig. 5. Camera model and parameters

Fig. 6. Actual operation diagram

The resolution is 5.0 megapixels, that is, 5 million pixels, which is enough to meet the normal shooting requirements. Figure 5 below shows the camera model and camera parameters, and Fig. 6 is the actual working diagram.

4.2 Experimental Results

An external camera is connected based on the Halcon 2019 software, and pictures are captured synchronously. When setting up Image Acquisition, the color part is set directly to gray. The Halcon software converts the 256-color bitmap into a grayscale image, and the grayscale processing of the point-processing method provides the precondition for the threshold transformation of the digital image. To convert a 256-color bitmap into a grayscale image, the gray value corresponding to each color must first be calculated. The corresponding relationship between grayscale and RGB color is as follows:

gray = 0.299R + 0.587G + 0.114B    (1)
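As a minimal illustration of the weighted conversion in Eq. (1), a hypothetical NumPy sketch (not the Halcon-internal implementation):

```python
import numpy as np

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to grayscale using Eq. (1)."""
    # Weighted sum of the R, G and B channels; result kept as 8-bit gray levels.
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.clip(np.round(gray), 0, 255).astype(np.uint8)
```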


In this way, the 256-color palette can easily be converted into a grayscale palette according to the above formula. In order to reduce the influence of lighting conditions and improve the grayscale contrast of the bar-code image, this paper normalizes the grayscale of the bar-code image. For the bar-code image I, I(i, j) represents the gray value of the pixel in row i and column j [8]. The mean and variance of the bar-code image I are calculated as follows:

$$M(I) = \frac{1}{N^2}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} I(i,j) \quad (2)$$

$$VAR(I) = \frac{1}{N^2}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \big(I(i,j) - M(I)\big)^2 \quad (3)$$

Then the grayscale normalization formula of image I is as follows:

$$G(i,j) = \begin{cases} M_0 + \sqrt{\dfrac{VAR_0\,\big(I(i,j)-M\big)^2}{VAR}}, & \text{if } I(i,j) > M \\[2ex] M_0 - \sqrt{\dfrac{VAR_0\,\big(I(i,j)-M\big)^2}{VAR}}, & \text{otherwise} \end{cases} \quad (4)$$

where M and VAR denote M(I) and VAR(I), and M0 and VAR0 are the desired mean and variance of the normalized image.
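A minimal NumPy sketch of the normalization in Eqs. (2)–(4); the target mean M0 and variance VAR0 below are illustrative values chosen for the sketch, not taken from the paper:

```python
import numpy as np

def normalize_gray(I: np.ndarray, M0: float = 128.0, VAR0: float = 2000.0) -> np.ndarray:
    """Gray-level normalization of image I according to Eqs. (2)-(4)."""
    I = I.astype(np.float64)
    M = I.mean()                      # Eq. (2): mean gray level
    VAR = ((I - M) ** 2).mean()       # Eq. (3): gray-level variance
    delta = np.sqrt(VAR0 * (I - M) ** 2 / VAR)
    G = np.where(I > M, M0 + delta, M0 - delta)   # Eq. (4)
    return np.clip(G, 0, 255).astype(np.uint8)
```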

This setting reduces the time required to convert a color image to a grayscale image with an operator, and the resulting grayscale image reduces both the algorithm running time and the image size. Figures 7 and 8 show the grayscale images of the one-dimensional code and the two-dimensional code, respectively.

Fig. 7. Grayscale image of 1D bar-code

Fig. 8. Grayscale image of 2D bar-code

After the image is collected, it enters the preprocessing stage. In this paper, some tedious steps such as edge detection and image transformation are omitted, because as


long as the image is properly enhanced, some noise can be removed so that bar-code detection can be performed directly, which reduces some intermediate operations and the algorithm time. Therefore this paper uses the image-enhancing emphasize operator in the preprocessing. The operator first applies mean_image (a mean filter) with width MaskWidth and height MaskHeight. The enhanced gray value (res) is then calculated from the obtained mean gray value (mean) and the original gray value (orig) as follows:

res = round((orig − mean) * Factor) + orig    (5)

This operator enhances the high-frequency regions (edges and corners) of the image, making the image look sharper. The best parameters were found by debugging under the same conditions, finally setting MaskWidth to 100, MaskHeight to 3, and Factor to 2. The enhanced images are shown in Fig. 9 and Fig. 10 below.
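The enhancement of Eq. (5) can be sketched as a mean filter followed by the sharpening formula. A minimal NumPy/SciPy approximation (not Halcon's exact implementation or border handling), using the parameters chosen above (MaskWidth = 100, MaskHeight = 3, Factor = 2):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def emphasize(img: np.ndarray, mask_width: int = 100, mask_height: int = 3,
              factor: float = 2.0) -> np.ndarray:
    """Contrast enhancement per Eq. (5): res = round((orig - mean) * Factor) + orig."""
    orig = img.astype(np.float64)
    # Mean filter over a mask_height x mask_width window (the mean_image step).
    mean = uniform_filter(orig, size=(mask_height, mask_width))
    res = np.round((orig - mean) * factor) + orig
    return np.clip(res, 0, 255).astype(np.uint8)
```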

Fig. 9. 1D barcode enhanced image

Fig. 10. 2D barcode enhanced image

Figures 11 and 12 below show the bar-code areas identified by Halcon. The one-dimensional code is covered by a red area, and the two-dimensional code is framed by a green wireframe. The find_bar_code operator finds the bar-code area in the image and, using the set parameters, covers the bar-code area with red. In order to increase the robustness of the program, the bar-code type is set to auto, although this affects the recognition speed to a certain extent. The same is true for the find_data_code_2d operator. Next, the computer continues to perform bar-code detection on the selected area, specifically to identify the width of the black and white


bar-code in the red area, convert it into a binary number, and then refer to the encoding rules of each barcode for conversion.

Fig. 11. 1D code identification area

Fig. 12. 2D code identification area

The one-dimensional code in this article is an EAN-13 bar-code, ISBN 9787572218194, and the two-dimensional code is a QR code whose link is http://mp.xdfsjj.com/qr.html?crcode=120DKNC80D8. As shown in Figs. 13 and 14 below, the final recognition results of the one-dimensional code and the two-dimensional code are consistent with the actual contents.

Fig. 13. 1D code recognition result


Fig. 14. 2D code recognition result

5 Conclusion

This article omits the complicated traditional barcode recognition process and focuses on enhancing the barcode image. After debugging the enhancement operator, the MaskWidth of the emphasize operator is finally set to 100, the MaskHeight to 3, and the Factor to 2; under the same conditions the recognition rate is higher, and the robustness of the system to static pictures is relatively high. Updating bar-code recognition technology is of great significance to the management field. This bar-code recognition technology can be combined with the actual management situation and connected to a database, so as to be integrated into each intelligent management system module, bringing greater convenience to warehouse management. This system provides a good identification and detection technology for library management and warehouse management.

Acknowledgements. This work was supported by the Innovation Project of Guangxi Graduate Education, grant number GKYC202206, and the National Natural Science Foundation of China, grant number 51765007.

References

1. Ge, D.-Y., Yao, X.-F., Xiang, W.-J., et al.: Calibration on camera's intrinsic parameters based on orthogonal learning neural network and vanishing points. IEEE Sens. J. 20(20), 11856–11863 (2020)
2. Zhu, M., Ge, D.: Image quality assessment based on deep learning with FPGA implementation. Signal Process. Image Commun. 83, 115780 (2020)
3. Jin, J.: Study on two-dimensional code recognition algorithm in non-uniform illumination based on digital images processing technology. J. Phys. Conf. Ser. 1345(6), 062040 (2019)
4. Ye, H.: Application and development of library barcodes. Guangdong Sericulture 51(07), 36 (2017)
5. Li, S., Wang, Z., Yang, J., Zhu, S., Quan, H.: High-speed online recognition of 1D and 2D barcodes based on machine vision. Comput. Integr. Manuf. Syst. 26(04), 910–919 (2020)
6. Zhou, Q., Wu, L.: Application of barcode recognition and internet of things technology in mobile smart warehousing system. Electron. Technol. Softw. Eng. 06, 119–120 (2020)


7. Zhang, H.: Face detection technology based on shape features. J. Yellow River Water Conserv. Vocat. Tech. Coll. 29(02), 44–47 (2017)
8. He, B., et al.: Visual C++ Digital Image Processing. People's Posts and Telecommunications Press, Beijing (2001)

Some Problems and Solutions for Non-absorbent Wall Climbing Robot

Wugang Li, Jiaxu Mo, Fengmei Chen, Lanxiang Wei, and Zhenpeng Qin(B)

School of Intelligent Manufacturing, Nanning University, Nanning 530200, China
[email protected]

Abstract. At present, wall climbing robots are divided into non-adsorption wall climbing robots and adsorption wall climbing robots. Non-adsorption wall climbing robots differ from adsorption ones and have their own outstanding features and advantages, being special, comprehensive and complex in structure and technology. In order to allow robot designers to better design non-adsorption wall climbing robots, the key problems of non-adsorption robot research and design are highlighted. In this paper, problems and solutions in three areas are proposed through the combination of mechanical design and circuitry, hardware and software: motion and dynamics, safety and risk avoidance, and control and operation, specifically covering seven problems and corresponding solutions: sufficient positive pressure, avoiding wheel overhang, fall prevention, obstacle avoidance, flexible crawling, stopping, and easy operation. The solutions to these problems have theoretical significance and practical application value, and can provide a theoretical basis and application reference for robot researchers and designers.

Keywords: Non-adsorption · Wall climbing · Robot · Circuit

1 Introduction Wall-climbing robots have many practical uses, so people are constantly researching wall-climbing robots. As early as in the 1970s and 1980s, the Japanese Institute of Applied Research developed a wheeled magnetic adsorption wall-climbing robot [1]. This robot works by attaching permanent magnets [2] to the surface of ferrous structural objects to replace people’s inspection and maintenance work. Stanford University in the United States developed a wall-climbing robot that imitates a gecko, which is for the Stickybot series as a representative [3]. This is a kind of mechanical limbs of the wall-climbing robot by imitating the motion form of living creatures and making a gecko-like fleece, and the adsorption force generated by the fleece of the feet increases due to the contact surface of the feet on the wall, and the purpose of adsorbing the wall is achieved. The ALICIA series of wheeled negative pressure adsorption wallclimbing robot were developed by D. Longo et al. at the University of Catania, Italy [4– 7]. The tracked wall climbing robot from North China Electric Power University in China [8]. The pneumatic wall climbing robot invented by Beijing University of Technology © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Hu et al. (Eds.): CSDEIS 2022, LNDECT 158, pp. 14–24, 2023. https://doi.org/10.1007/978-3-031-24475-9_2


mainly accomplishes five basic movements of the robot, such as forward, backward, left, right, and stop, by alternating four suction cups in horizontal and vertical directions [9]. Chongqing University has researched electrostatic adsorption wall-climbing robot [10]. One common feature of the above wall-climbing robot is that they manage to make the wall climbing robot able to adsorb the wall surface, some of them by bionic adhesion, some by magnetic suction, some by electrostatic adsorption, and more by making the negative pressure between the chassis of the wall climbing robot and the wall surface to achieve, which has become the mainstream in the last decade. The shortcomings of such adsorption wall-climbing robot are: the general requirement for smooth walls, slow crawling speed, easy to leak air, loud, difficult to avoid obstacles, etc. The only nonabsorbent wall-climbing robot seen so far are the VertiGo wall climbing robot from the United States and the positive pressure bridge health inspection robot from Chongqing University [11], which has more flexible motion. Non-absorbent wall-climbing robot should be the direction of future development. Non-adsorption type, i.e. positive pressure type, is to make the wall crawling robot obtain positive pressure on the wall by counter-thrust to the air, and in this way, the walking wheels of the wall crawling robot have enough friction with the wall. The advantages of this approach are: it can adapt to rough walls, fast crawling speed, flexible steering, easy obstacle avoidance, low noise, etc. In this paper, several key problems are identified and specific technical solutions to solve them are proposed from nonadsorption general design considerations.

2 Motion and Power Problems and Solutions 2.1 Sufficient Positive Pressure Adsorptive wall-climbing robot relies on high-speed rotating blades to extract the air between their chassis and the wall surface to form a negative pressure [12], so that the wall climbing robot closes to the wall surface. In contrast, non-absorbent wall-climbing robot mainly rely on the high-speed rotating blades to generate outward thrust on the air, thereby obtaining a reaction force from the air against it and pressing it positively against the wall. This kind of wall climbing robot does not need to close the chassis like the adsorption wall climbing robot, theoretically the more open the better. To make the wall climbing robot get enough positive pressure on the wall, on the one hand, is to increase the positive pressure generator output power to increase the reaction force of the air on the robot, for this reason, it is best to use brushless motor, and pay attention to the direction of rotation of the paddle should be outward to produce thrust; on the other hand, to make more air can be pushed outward by the rotating blade, we should fully open the space between the chassis of the wall climbing robot and the wall.. Brushless motor together with the propeller and the corresponding controller is called positive pressure generator. Figure 1 is a diagram of the composition of the positive pressure adsorption space. When the positive pressure generator is running, the brushless motor will push the air through the propeller to produce a reaction force body, so that the wall climbing robot will be subjected to this reaction force and will cling to the wall.


Fig. 1. Composition of positive pressure adsorption space

Fig. 2. Force diagram of wall climbing robot

Figure 2 shows the force diagram of the wall-climbing robot. When the wall-climbing robot is stationary on a vertical wall, it is subject to gravity, denoted by G; at the same time, there must be an upward static friction force Ff between the wheels of the wall-climbing robot and the wall with a magnitude equal to the gravity G, so that the wall-climbing robot is balanced in the vertical direction. The static friction force Ff relies on the positive pressure FN with which the positive pressure generator presses the wall-climbing robot against the building wall. Let µ be the coefficient of static friction between the bottom wheels of the wall-climbing robot and the wall when the wheels are not rotating; then:

Ff = µFN = G    (1)

Let FP be the thrust of the propeller on the air. According to Eq. (1),

FP = FN = G/µ    (2)

In general µ < 1, so FP > G. When the wall climbing robot is walking along a vertical wall, the wheels cannot slip, i.e., the wheels roll on the wall without slipping, which gives the same force requirement as when the wall climbing robot is stationary.
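A small numerical illustration of Eqs. (1)–(2); the mass and friction coefficient below are hypothetical example values, not measurements from the paper:

```python
g = 9.81          # gravitational acceleration, m/s^2
mass = 2.0        # assumed robot mass, kg (hypothetical)
mu = 0.6          # assumed static friction coefficient between wheel and wall (hypothetical)

G = mass * g      # gravity force on the robot
F_N = G / mu      # required positive pressure from Eq. (1): mu * F_N = G
F_P = F_N         # required propeller thrust from Eq. (2)
print(f"Required thrust F_P = {F_P:.1f} N (> G = {G:.1f} N since mu < 1)")
```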


2.2 Avoid Wheel Overhang

Previously, wall climbing robots have used four-wheeled structures or multi-wheeled foot structures [13]. Their disadvantage is that the problem of wheel overhang inevitably occurs during operation, i.e., one wheel separates from the wall. This causes smoothness problems on the one hand and handling problems on the other. The solution is to change the four-wheeled structure of the wall-climbing robot to a three-wheeled structure and install lightweight rubber wheels, so that the robot can walk freely on the wall surface and overcome bulges or cracks in the wall, avoiding the problem of wheel overhang. Figures 3 and 4 show the schematic layouts of the four-wheel and three-wheel versions of the wall-climbing robot, respectively. The front two wheels are both driving and guiding wheels.

Fig. 3. Schematic diagram of four-wheel structure

Fig. 4. Schematic diagram of the three-wheel structure

3 Safety and Risk Avoidance Problems and Solutions

3.1 Fall Prevention

If a sudden fall occurs during operation, the wall climbing robot may be broken. The solution is, first, to avoid falling; second, to keep the wall climbing robot basically undamaged when it does fall to the ground; and third, to reduce the speed of falling. To make the wall climbing robot unlikely to fall, it is necessary to ensure quality when assembling the robot, including the quality of the electrical circuits and the mechanical quality, and to ensure a sufficient power supply during use, to prevent falling accidents caused by running out of energy. In addition, so that a fall causes as little damage as possible, the overall frame should be made of lighter, stronger and firmer material; the internal control system, sensors and other parts should be reasonably installed and planned; the components should be well fixed; and the drive wheel motors and the motor of the positive pressure generator should be firmly fixed to ensure the normal operation of the positive pressure generator.


Once a fall occurs, the wall climbing robot should be able to act automatically to reduce the speed of the fall. There are three cases after the wall climbing robot leaves the wall: in the first the chassis remains basically parallel to the wall, in the second the chassis faces up, and in the third the chassis faces down. As long as the positive pressure generator is still working and the propeller is still generating thrust against the outside air, even with reduced thrust, the first two cases are not bad; on the contrary, the thrust is helpful. In the case where the chassis is basically parallel to the wall, the outward thrust will bring the chassis closer to the wall, so that the wheels (which are not rotating at this time, as discussed later) maintain a certain friction (or sliding friction) with the wall and the wall-climbing robot does not slide down quickly. In the case where the chassis is facing upward, the propeller actually acts to reduce the speed of the fall, because the thrust generated by the propeller is downward. In the case of a downward-facing chassis, the propeller's thrust is upward, which speeds up the fall. This case therefore requires a special circuit design, so that the propeller rotates in the opposite direction and its thrust reverses, which greatly reduces the fall speed. An example of this special circuit design is shown in Fig. 5 and Fig. 6. Figure 5 shows the conduction state of the circuit when the wall-climbing robot is climbing normally. When the chassis of the wall-climbing robot is facing downward, the metal ball falls into the position shown in Fig. 6, the current is reversed, the motor of the positive pressure generator reverses, and the propeller changes from pushing to pulling upward, which greatly reduces the speed of the falling wall-climbing robot.

Fig. 5. The state of normal conduction of the circuit

Fig. 6. The state of the circuit conduction during the fall


3.2 Obstacle Avoidance

The next problem is how to evade the different types of obstacles or walls encountered by the wall climbing robot during its movement. To overcome this problem, an obstacle avoidance module can be designed. The module emits infrared light, which is reflected back when it encounters an obstacle: the closer the obstacle, the stronger the reflected infrared light, and the further the obstacle, the weaker the reflected light. The infrared transmitter tube keeps emitting infrared light; the closer the obstacle is to the vehicle, the stronger the infrared light received by the infrared receiver tube, and the smaller the voltage on the infrared receiver tube becomes [14]. Therefore, using the obstacle avoidance module, the wall climbing robot can realize an obstacle avoidance function: it can stop in time for large obstacles and then go around them, and can go around small obstacles directly. The controller core is designed to meet the control requirements of the wall climbing robot, and its operation is relatively simple. After comprehensive consideration, the 51-series microcontroller is chosen as the control module of the wall climbing robot. The circuit design of the control module is shown in Fig. 7.


Fig. 7. Controller circuit

The STC89C52 is used as the control chip in this design. The external clock oscillator is connected to the XTAL1 and XTAL2 pins of the microcontroller; a reset circuit is connected to the RST port so that the program can be restarted with a keystroke, which effectively avoids the situation where the program cannot be stopped due to execution errors; the power supply is obtained by connecting +5 V and GND to pins 40 and 20; the drive signals for the motor driver module are delivered through pins 1, 2, 3, 4, 5 and 6; pins 10 and 11 are the Bluetooth module input pins; and pin 13 is used as the switch of the control function, thus realizing the design of the whole wall-climbing robot control circuit. The circuit design of the IR obstacle avoidance module is shown in Fig. 8.

Fig. 8. Infrared obstacle avoidance circuit diagram

The OUT terminal is connected to microcontroller pin P37, and the GND terminal and VCC terminal are connected to the GND and VCC of the microcontroller respectively.

4 Control and Operation Problems and Solutions

4.1 Flexible Crawling

The wall-climbing robot has to ensure steering flexibility as well as crawling capability during operation [15]. The following solution can be obtained by partially referring to the intelligent cart motion control approach [16]; see the motor drive module circuit design in Fig. 9.


Fig. 9. Motor drive circuit

1) To control the motors forward and reverse, the microcontroller outputs a high level to the enable terminals ENA/ENB and then outputs high and low levels to IN1 and IN2, thereby controlling forward and reverse rotation. To realize the steering function, a left turn is achieved by reversing the left motor while driving the right motor forward, i.e., left motor IN1 = 0, IN2 = 1 and right motor IN3 = 1, IN4 = 0; similarly, a right turn corresponds to left motor IN1 = 1, IN2 = 0 and right motor IN3 = 0, IN4 = 1.


2) By setting the microcontroller to output a PWM (pulse width modulation) signal and changing the pulse width to control the output voltage, speed regulation of the motor is realized, thus improving the climbing ability of the wall-climbing robot.

4.2 Stopping

By stopping, we mean that the wall climbing robot stops without moving while maintaining the original positive pressure but without providing walking power. Solution: design the motor control program so that a received or sent control signal is processed by the microcontroller and then output to the motor driver module for execution. When the pin receives a low-level signal, the motor stops rotating; when the pin receives a high-level signal, the motor rotates normally. This solves the stopping problem of the wall climbing robot. The motor drive circuit design is shown in Fig. 10.


Fig. 10. Motor drive circuit

Pins IN1 to IN4 control the forward and reverse rotation of the motors by changing the output signal pulses. The drive motor ports of the wall climbing robot are as follows: the OUT1 and OUT2 pins serve as the positive and negative interface of the left motor, while the OUT3 and OUT4 pins serve as the positive and negative interface of the right motor. The truth table of the logical functions of the drive module is shown in Table 1.

Table 1. L298N logic function truth table

Left side motor                      | Right side motor
IN1 | IN2 | ENA | Motor trend        | IN3 | IN4 | ENB | Motor trend
0   | x   | x   | Stop               | 0   | x   | x   | Stop
1   | 0   | 1   | Positive rotation  | 1   | 0   | 1   | Positive rotation
0   | 1   | 1   | Reversal           | 0   | 1   | 1   | Reversal

The logic function truth table shows that the motor is controlled forward and reverse by changing the output signal to set the high level to 1 and the low level to 0. Therefore,


when the pin is low, the motor stops rotating; when the pin is high, the motor rotates normally.
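A minimal Python sketch of the direction logic implied by Table 1 and the steering description in Sect. 4.1; the pin names follow the table, while the mapping from commands to actual GPIO outputs is hypothetical:

```python
# (IN1, IN2, ENA, IN3, IN4, ENB) levels for each motion command,
# following the L298N truth table: forward = both motors in positive rotation,
# left turn = left motor reversed, right turn = right motor reversed.
COMMANDS = {
    "stop":    (0, 0, 0, 0, 0, 0),
    "forward": (1, 0, 1, 1, 0, 1),
    "left":    (0, 1, 1, 1, 0, 1),
    "right":   (1, 0, 1, 0, 1, 1),
}

def drive(command: str):
    """Return the pin levels the microcontroller would output for a command."""
    return dict(zip(("IN1", "IN2", "ENA", "IN3", "IN4", "ENB"), COMMANDS[command]))

print(drive("left"))   # left motor reversed, right motor forward
```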

Fig. 11. Worm gear transmission mechanism

In addition, a worm gear transmission mechanism is used on the wheel axle. As shown in Fig. 11. Because the spiral angle of the worm is generally very small, less than the friction angle, it can be self-locking, so the wheel will not turn without providing power. This will achieve the purpose of stopping. 4.3 Convenient Operation The operational problems of the wall-climbing robot during operation can be solved by the following solutions. 1) The operation of the wall-climbing robot is realized by sending a control signal through Bluetooth, which is received by the Bluetooth receiver and then output to the microcontroller for processing; the control is realized as follows: when the “forward” button is pressed for a long time, the wall-climbing robot can be controlled to move forward; when the button is released, the wall-climbing robot will be immediately powered off and in a static state. 2) The function control of the light chasing movement of the wall climbing robot is designed in such a way that the module is able to sense the light is greater than the threshold value, then output high level, and when the light is less than the threshold value, then output low level. When the high level is received, the wall-climbing robot moves according to the route planned by the design; when the low level is received, the wall-climbing robot is forbidden to move. 3) The black tracing function control of the wall climbing robot, the module is designed to detect the black route using reflective infrared sensor. The infrared emitting diode continuously emits infrared rays, due to the black absorbing light, when the emitted infrared rays are not reflected back or the intensity of the reflected back is not large enough, the infrared receiving tube is always off, at this time the output of the module is high level and the indicating diode is always off; when the detected object appears in the detection range, the infrared rays are reflected back and the intensity is large enough, the infrared receiving tube is saturated. At this time, the output of the module is low and the indication diode is lit [14]. It can follow the black track movement, which is convenient for operation.


Figure 12 shows the Bluetooth module circuit.

Fig. 12. Bluetooth module circuit diagram

The VCC terminal and GND terminal are connected to the power terminal and ground terminal of the microcontroller respectively. The TXD terminal is the data output interface and is used to connect to the RXD terminal of the microcontroller; secondly, the RXD terminal is the data receiving interface to connect to the TXD terminal of the microcontroller. The circuit design of the light tracing module is shown in Fig. 13.


Fig. 13. Circuit diagram of light tracing module

The OUT terminal is connected to microcontroller pin P20, and the GND terminal and VCC terminal are connected to the GND and VCC of the microcontroller respectively. The circuit design of the IR tracing module is shown in Fig. 14.

Fig. 14. Infrared tracing circuit diagram

Pin 1 is the positive terminal of the power supply, pin 2 is connected to the negative terminal of the power supply, and pins 3 and 4 are connected to P34 and P35 of the microcontroller, respectively, as the switch signal output.

5 Conclusion This paper identifies three major key issues, including motion and dynamics, safety and risk avoidance, and control and operation, from general design considerations for


non-adsorptive robots, specifically including: sufficient positive pressure, avoidance of wheel overhang, fall prevention, obstacle avoidance, flexible crawling, stopping, and easy operation, and proposes targeted solutions. The development and large-scale application of robots is a characteristic of the intelligent era. The contents covered in this paper are the core and key problems of robot design, which are summarized, strengthened and enhanced from the problems of previous robot studies, and will provide useful references for robot research designers and theoretical guidance with practical value for robot learners.

References

1. Xiao, L., Tong, S.-Z., Ding, Q.-M., Wu, J.-S.: Status and development of wall climbing robot. Autom. Expo (01), 85–86+88 (2005)
2. Tang, D., Long, Z., Yuan, B., et al.: Analysis of force and power consumption of permanent magnet adsorption wheeled wall climbing robot. Mech. Sci. Technol. (4), 499–506 (2019)
3. Huang, Z.F.: Research on electrostatic adsorption mechanism for wall-oriented mobile robots. Harbin Institute of Technology, Heilongjiang (2010)
4. Zhu, C.: Intelligent window cleaning robot control system design. Zhejiang University (2015)
5. Muscato, G., Trovato, G.: Motion control of a pneumatic climbing robot by means of a fuzzy processor. In: Proceedings of 1st International Symposium Climbing and Walking Robots (CLAWAR 1998) (1998)
6. La Rosa, G., Messina, M., Muscato, G., Sinatra, R.: A low cost lightweight climbing robot for the inspection of vertical surfaces. Mechatronics 1, 71–96 (2002)
7. Longo, D., Muscato, G.: A modular approach for the design of the Alicia3 climbing robot for industrial inspection. Ind. Robot Int. J. 31(2), 148–158 (2004)
8. Xu, Z.: Design and research of magnetic adsorption wall climbing robot. North China University of Electric Power (2018)
9. Mei, Y., Peng, G., Fan, W.: Pneumatic wall climbing robot. Hydraul. Drive Seal (1), 19–21 (2002)
10. Xie, L.: Design and experiment of electrostatic adsorption wall climbing robot system. Chongqing University (2016)
11. Fan, C.: Research and design of a bridge pier tower health inspection robot system. Chongqing Jiaotong University (2020)
12. Ren, C.Q.: Design and research of negative pressure adsorption type wall climbing robot. North China Electric Power University (2018)
13. Lan, G.: Research on multi-wheeled foot wall corner climbing robot. Southwest University (2019)
14. Zhu, S., Li, Q.: The design of STC89C52-based trajectory avoidance intelligent cart. Light Ind. Sci. Technol. 34(03), 65–66 (2018)
15. Jiang, C., Wu, J., Zhu, Y., Zhu, Y.: Research on the motion mode of wall climbing robot. Autom. Appl. (01), 140–141 (2018)
16. Weiyi, Y., Ying, G., Luan Zhejiang, F., Yuxiang, L.H.: Intelligent toy car design based on STC98C52. Electron. Des. Eng. 24(10), 97–99 (2016)

SOC Estimation of Lithium Battery Based on BP Neural Network with Forgetting Factor

Shiling Huang1 and Meiyan Li2(B)

1 Intelligent Manufacturing College, Nanning University, Nanning 530000, China
2 Global Software College, Nanning University, Nanning 530000, China
[email protected]

Abstract. Accurate estimation of the state of charge (SOC) of the lithium battery powering a new energy vehicle is very important for improving the dynamic performance and energy utilization efficiency of the battery. A lithium battery has time-varying characteristics during use, and the accuracy of SOC prediction using a fixed-parameter, offline static network model will decline. In order to accurately estimate the SOC of a power lithium battery, the parameters of the prediction model must be able to self-learn online and change with the operation of the battery system. After analyzing the currently popular SOC estimation algorithms based on circuit models, this paper introduces a BP neural network model to estimate the SOC of the lithium battery. In order to realize the parameter self-learning ability, a time-varying forgetting factor is added to the cost function. A lithium battery test platform was built to measure the SOC-OCV relationship under different charge/discharge rates and temperatures. SOC is estimated by the ampere-hour integral method, the extended Kalman filter, and the improved BP neural network. Experiments show that the improved BP neural network method has faster tracking ability and higher estimation accuracy for fast time-varying parameters, and is a very practical real-time SOC prediction method.

Keywords: Lithium battery · SOC · BP neural network · Forgetting factor

1 Introduction SOC estimation is one of the core functions of battery management system (BMS). The accuracy of its value affects battery charge and discharge management, balance management, energy management, safety and fault diagnosis. The SOC value of lithium battery cannot be measured directly, but can only be estimated by the external parameters of the battery (terminal voltage, current, temperature, etc.). SOC is affected by factors such as charge discharge ratio, temperature, self-discharge, aging, etc., which makes the power battery show a high degree of nonlinearity in the process of use, which makes it difficult to accurately estimate SOC [1]. At present, the research methods of SOC estimation mainly include experimental method, model method and data-driven method. The methods based on experiments include discharge method, conductivity method, AC impedance method, open circuit © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Hu et al. (Eds.): CSDEIS 2022, LNDECT 158, pp. 25–40, 2023. https://doi.org/10.1007/978-3-031-24475-9_3


voltage method and ampere hour integration method; Model driven methods include Kalman filter, particle filter and so on; Data driven methods include neural networks, support vector machines (SVM) polynomial regression, etc. Discharge method and open circuit voltage method need to discharge or stand for a long time. Conductivity method and AC impedance method have great requirements on measurement accuracy and are greatly affected by temperature. Therefore, they cannot be applied to on-line detection. The model-based SOC estimation method is represented by Kalman filter (KF), which has strong anti-interference ability and is suitable for the SOC prediction of electric vehicles in complex environments. Literature [2] and literature [3] respectively use extended Kalman filter (EKF) algorithm and unscented Kalman filter (UKF) algorithm in the field of SOC estimation. The results show that the estimation method using EKF has achieved high SOC prediction accuracy. For the estimation of battery SOC, EKF has high robustness, but EKF depends on accurate battery model. Different battery models (PNGV model, RC model, etc.) need to adopt reasonable methods for parameter identification [4–6]. The data-driven SOC estimation method is used to nonlinear fit the input and output of the system with a large number of experimental data. The neural network has strong nonlinear system fitting ability and can be applied to the field of dynamic system SOC estimation of lithium battery. Literature [7] estimates the SOC of power battery by building neural network, and the estimation error is within 5% [7, 8]. Literature [9] mainly studies the error control of SOC prediction of lithium battery using BP neural network, and verifies the effectiveness of this method. Based on the above analysis, the SOC estimation model using BP neural network has high estimation accuracy [4]. However, after data training, the neuron weight coefficient in the traditional neural network model is fixed, which cannot adapt to the time-varying characteristics of battery applications. To sum up, the methods mentioned above have their own advantages and disadvantages. Based on the traditional BP neural network, this paper improves the cost function to make the BP neural network model parameters have the ability of online learning and updating. It can provide reference in improving the speed and accuracy of SOC estimation and other battery management.

2 SOC Estimation Algorithm Based on Circuit Model

2.1 SOC Estimation Based on Ampere Hour Method

SOC refers to the ratio of the remaining capacity of the battery to the total available capacity after it has been used or left unused for a long time, and is usually expressed as a percentage. At present, the ampere hour integral method (Ah) is the most widely used in practical applications. The SOC estimation method based on the ampere hour integral method is shown in formula (1):

$$SOC_{k+1} = SOC_k + \frac{\eta}{C_0}\int_0^{T} i\,dt \quad (1)$$
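As a quick illustration of Eq. (1) (the symbols are defined in the following paragraph), a minimal discrete-time Coulomb-counting sketch; the sampling period and capacity values are placeholders, not the paper's test data:

```python
def soc_ah_update(soc: float, current_a: float, dt_s: float,
                  capacity_as: float, eta: float = 1.0) -> float:
    """One ampere-hour integration step: SOC_{k+1} = SOC_k + eta * I * dt / C0."""
    return soc + eta * current_a * dt_s / capacity_as

soc = 0.80                      # assumed initial SOC (placeholder)
capacity_as = 2.5 * 3600        # e.g. a 2.5 Ah cell expressed in ampere-seconds (placeholder)
for _ in range(600):            # 600 samples of 1 s at a 1 A discharge (current negative)
    soc = soc_ah_update(soc, current_a=-1.0, dt_s=1.0, capacity_as=capacity_as)
print(f"SOC after a 10 min discharge: {soc:.3f}")
```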

Here SOC_{k+1} is the current estimated value, SOC_k is the estimated value at the previous time, T is the current sampling period, i is the current (positive for charging and negative for discharging), C_0 is the rated capacity of the battery, and η is the charge and discharge efficiency. Analysis of formula (1) shows that the SOC estimation of the ampere hour integral method has the following problems: (1) the initial value cannot be obtained directly and needs to be measured experimentally; (2) the errors of current detection accuracy and sampling speed accumulate and grow in the integration; (3) the charge and discharge efficiency η of the battery can only be provided by establishing model formulas through a large number of experiments.

2.2 SOC Estimation Based on EKF Algorithm

The SOC estimation methods developed under ideal laboratory conditions are difficult to apply to the actual SOC estimation of lithium batteries. In the process of constantly improving the mathematical model of the lithium battery, researchers proposed to apply the state estimation techniques of modern control theory to the state-space description of the lithium battery, so as to form an effective estimation of the unmeasurable state variable SOC. At present, experts and scholars at home and abroad have made many research achievements in the equivalent circuit modeling of lithium batteries. In the equivalent circuit method based on electrical parameters, the battery is treated as a two-port network, and the battery characteristics are simulated by electrical components such as a power supply, resistances and capacitances. According to the differences in electrical components, the classical equivalent circuit models include the Rint model [10], the Thevenin model [11], the PNGV model [11], the DP model [12] and the GNL model [13]. Next, the SOC estimation steps based on the EKF algorithm and the Thevenin model are shown.

(1) Select the circuit model

The Thevenin equivalent circuit model of the lithium battery is shown in Fig. 1.

Fig. 1. Thevenin equivalent circuit diagram

The Thevenin equivalent circuit is composed of the following parts: 1) an ideal voltage source, representing the open circuit voltage UOC (which varies with SOC); 2) the ohmic internal resistance R0 and the polarization internal resistance R1; 3) the polarization capacitance C1, which reflects the transient response of the battery [14].


Let U1 denote the voltage across the parallel combination of R1 and C1, I the current through R0, and Ut the terminal voltage. The external characteristic equations of the first-order RC model can then be written as:

U_1 = I R_1 \left(1 - \exp\left(-\frac{t}{\tau_1}\right)\right), \quad U_t = U_{OC} - I R_0 - U_1    (2)

In formula (2), τ_1 = R_1 · C_1.

(2) Offline identification

The battery capacity QC is calibrated from the capacity test results, and the model parameters are calibrated from the HPPC test results. The battery quantities UOC, R0, R1 and C1 are obtained under different SOC and temperature conditions, yielding the table lookup function [OCV, R0, R1, C1] = f(SOC, T), where OCV is the battery voltage in the open circuit state [15].

(3) SOC estimation with EKF algorithm

For the equivalent circuit equations, the state equation (3) and the observation equation (4) are established:

SOC_k = SOC_{k-1} - \eta I_k \Delta t / C(T), \quad U_{p,k} = U_{p,k-1} \exp\left(-\frac{\Delta t}{\tau(SOC,T)}\right) + I_{k-1} R_1(SOC,T)\left(1 - \exp\left(-\frac{\Delta t}{\tau(SOC,T)}\right)\right)    (3)

Define the state variables as x_k = [SOC_k, U_{p,k}]^T, with

U_{L,k} = U_{OC}(SOC_k, T) - I_k R_0(SOC, T) - U_{p,k}    (4)

Finally, the state equation and observation equation of the EKF algorithm are obtained, as shown in formula (5):

\begin{cases}
\begin{bmatrix} U_{p,k+1} \\ SOC_{k+1} \end{bmatrix} =
\begin{bmatrix} \exp(-\Delta t/\tau) & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} U_{p,k} \\ SOC_{k} \end{bmatrix} +
\begin{bmatrix} R_1 \left(1 - \exp(-\Delta t/\tau)\right) \\ -\eta \Delta t / C(T) \end{bmatrix} I_k + w_k \\
U_k = h(x_k, u_k) + v_k = U_{OC} - \begin{bmatrix} 1 & 0 \end{bmatrix} x_k - I_k R_0 + v_k
\end{cases}    (5)
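To make the recursion in Eqs. (3)–(5) concrete, the following is a minimal Python/NumPy sketch of one EKF predict/update step for the state x = [U_p, SOC]. The Thevenin parameter values, noise covariances and capacity are illustrative placeholders (not the identified values reported later), the sign convention follows Eqs. (3)–(5) (discharge current positive), and the OCV curve anticipates the polynomial of Eq. (13) in Sect. 4.3:

```python
import numpy as np

# Illustrative placeholders only (not the identified parameters of Table 1)
R0, R1, C1 = 0.05, 0.04, 2000.0      # ohm, ohm, farad
Q_cap = 2.2 * 3600                    # rated capacity in ampere-seconds (2200 mAh)
eta, dt = 1.0, 1.0                    # charge/discharge efficiency, sampling period [s]
tau = R1 * C1

def ocv(soc):
    """Open-circuit voltage from the fitted SOC-OCV polynomial, cf. Eq. (13)."""
    return -3.836*soc**4 + 10.328*soc**3 - 9.671*soc**2 + 4.1621*soc + 3.1901

def ekf_step(x, P, I_k, U_meas, Q=np.diag([1e-6, 1e-7]), R=1e-3):
    """One EKF predict/update cycle for the state x = [U_p, SOC], Eq. (5)."""
    a = np.exp(-dt / tau)
    A = np.array([[a, 0.0],
                  [0.0, 1.0]])
    B = np.array([R1 * (1.0 - a),
                  -eta * dt / Q_cap])
    # Predict step
    x_pred = A @ x + B * I_k
    P_pred = A @ P @ A.T + Q
    # Observation U_L = OCV(SOC) - U_p - I*R0; linearize OCV around predicted SOC
    soc = x_pred[1]
    dOCV = -4*3.836*soc**3 + 3*10.328*soc**2 - 2*9.671*soc + 4.1621
    H = np.array([-1.0, dOCV])
    y_pred = ocv(soc) - x_pred[0] - I_k * R0
    # Update step
    S = H @ P_pred @ H + R
    K = P_pred @ H / S
    x_new = x_pred + K * (U_meas - y_pred)
    P_new = (np.eye(2) - np.outer(K, H)) @ P_pred
    return x_new, P_new
```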

3 SOC Estimation of BP Neural Network

3.1 Traditional BP Neural Network

The structure of the BP neural network is shown in Fig. 2; it is composed of an input layer, a hidden layer and an output layer [16].


The input signal x is propagated forward according to the weight of each neuron, and during network training the error is propagated backward to modify the neuron weights. Lithium battery SOC prediction is an input-output mapping problem of a nonlinear system [17], which can be solved well with a three-layer network structure. The design steps of the three-layer BP neural network for SOC prediction are given below.

(1) Design of input layer and output layer

The number of neurons in the input layer is related to the number of parameters monitored in the battery system. SOC cannot be measured directly, but battery voltage, current, temperature, etc. can be measured, and there is a nonlinear mapping between these parameters and SOC [18]. Therefore, the number of input layer neurons is 3. The number of neurons in the output layer equals the number of system output parameters, so the number of output layer neurons is 1. The structure of the BP neural network is shown in Fig. 2, in which the number of hidden layer nodes is schematic only and is not the actual number used in this experimental model.

Fig. 2. Structure diagram of BP neural network

In Fig. 2, X represents the system input variables, corresponding to the temperature, voltage, current and other parameters of the battery system; Y represents the SOC value output by the system; w_{ij} is the weight from the input layer to the hidden layer, w_{jk} is the weight from the hidden layer to the output layer, and θ is the neuron threshold. The input of each neuron is composed of weights and a threshold. The input Q_j^1 of the jth neuron in the hidden layer can be expressed as Eq. (6):

Q_j^1 = \sum_{i=1}^{m} (x_i w_{ij} + \theta_j)    (6)

In formula (6), m is the number of neurons in the input layer, j indexes the jth neuron in the hidden layer, w_{ij} is the connection weight between the ith input node and the jth hidden neuron, x_i is the ith input of the input layer, and θ_j is the threshold of the jth hidden layer neuron.


The output S_j^1 of the jth neuron in the hidden layer is shown in Eq. (7):

S_j^1 = f(Q_j^1)    (7)

In Eq. (7), f is the activation function of the hidden layer. Similarly, let y_k be the output of the kth neuron of the output layer; the final output of the neural network is shown in formula (8):

y_k = g\left(\sum_{j=1}^{m} (S_j^1 w_{jk} + \theta_k)\right)    (8)

In Eq. (8), g is the activation function of the output layer, θ_k is the threshold of the kth output neuron, and w_{jk} is the connection weight between the jth hidden neuron and the kth output neuron. The activation function of a neural network is a nonlinear function and is the key to the mapping between the neural network and the nonlinear system. The activation functions f and g used in this paper are sigmoid functions.

(2) Hidden layer design

From the network structure in Fig. 2, the numbers of neurons in the input and output layers are essentially fixed, while there is no rigorous method for choosing the number of hidden layer neurons. However, the number of hidden neurons affects the accuracy and computation speed of the whole system. Let h be the number of hidden neurons and o and p the numbers of neurons in the input and output layers; then the empirical formula (9) gives

h = \sqrt{o + p} + b    (9)

In formula (9), b is a constant in [1, 10]. The numbers of neurons in the input, output and hidden layers of the three-layer BP neural network designed in this paper are 3, 1 and 12, respectively.

(3) Design of traditional BP neural network algorithm

The design steps of the BP neural network algorithm are as follows:

1) Forward propagation of sample data. Normally distributed random numbers are used to initialize the weights and thresholds of the network, and the inputs and outputs of the hidden layer and output layer are computed forward according to Eqs. (7) and (8).

2) Error back propagation. The least squares method is used to compute the error between the network output and the actual value.


The training objective of the neural network is to make the cost function error meet the requirements; the cost function is shown in formula (10):

C(e) = \frac{1}{2} \sum_{k=1}^{N} (y_k - t_k)^2    (10)

In Eq. (10), y_k represents the output value of the model, t_k is the supervision (target) data, k indexes the model output, and N is the total number of training samples. The gradient descent method is used to minimize C(e), that is, each weight is adjusted along the negative gradient for every training sample.

3) Neural network algorithm pseudocode
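The pseudocode referred to above can be summarized by the following Python sketch of the forward pass (Eqs. (6)–(8)) and the gradient-descent update of the cost in Eq. (10). The 3-12-1 layer sizes match the design of this section, while the learning rate, initialization scale and data arrays are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, T, n_hidden=12, lr=0.2, epochs=800, seed=0):
    """Train a 3-layer BP network (inputs: voltage, current, temperature; output: SOC)."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], 1
    W1 = rng.normal(0, 0.1, (n_in, n_hidden));  b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(epochs):
        # forward pass, Eqs. (6)-(8)
        S1 = sigmoid(X @ W1 + b1)          # hidden-layer outputs
        Y  = sigmoid(S1 @ W2 + b2)         # network output
        # back-propagated gradients of the squared-error cost, Eq. (10)
        dY  = (Y - T) * Y * (1 - Y)
        dS1 = (dY @ W2.T) * S1 * (1 - S1)
        W2 -= lr * S1.T @ dY / len(X);  b2 -= lr * dY.mean(axis=0)
        W1 -= lr * X.T @ dS1 / len(X);  b1 -= lr * dS1.mean(axis=0)
    return W1, b1, W2, b2
```

Here X is an N × 3 array of normalized inputs and T an N × 1 array of reference SOC targets scaled to (0, 1), matching the sigmoid output range.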

3.2 BP Neural Network with Forgetting Factor

The traditional BP neural network model is an off-line model with fixed weights, while the lithium battery system is time-varying and changes as it is used. Therefore, the traditional neural network cannot adjust its weight parameters according to new data. A forgetting factor is introduced into the cost function to improve the traditional neural network. The improved algorithm steps are as follows.


(1) Forgetting factor least squares method

Forgetting factor recursive least squares introduces a forgetting factor λ to adjust the weights of new and old data; λ is close to 1, generally 0.95 ≤ λ ≤ 1.00.

(2) Specific algorithm steps of the BP neural network with forgetting factor:

Step 1: network initialization. Considering the large differences between different data series, the input data are scaled uniformly into (−1, 1). The weights and thresholds are assigned random initial values.
Step 2: input samples. Input sample x_i (i = 1, 2, …, p); y(n) is the expected output of the nth sample.
Step 3: calculate the actual output. Compute in the forward direction according to Eqs. (6) and (7).
Step 4: introduce the forgetting factor and calculate the weighted sum of squared errors C(e) between the predicted values and the supervised values:

C(e) = \frac{1}{2} \sum_{i=1}^{N} \lambda_i (y_i - t_i)^2    (11)

In Eq. (11), λ_i is the forgetting factor of the ith sample; in this paper λ_i is determined by exponential forgetting, as shown in formula (12):

\lambda_i = \frac{1 - \mu}{1 - \mu^L} \mu^{L - i}    (12)

In Eq. (12), L is the fixed number of recent mini-batch samples and μ is the weight coefficient.
Step 5: use the gradient descent method to calculate the backward weight correction δ.
Step 6: set n = n + 1 and go to Step 2. Training stops when the sum of squared errors C(e) meets the predetermined requirement or the number of training iterations reaches the predetermined upper limit.
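A minimal sketch of Step 4: the exponential forgetting weights of Eq. (12) and the weighted squared-error cost of Eq. (11). The values of L and μ are illustrative; note that the weights sum to 1 and the most recent sample (i = L) carries the largest weight:

```python
import numpy as np

def forgetting_weights(L=50, mu=0.97):
    """Exponential forgetting factors lambda_i for the last L samples, Eq. (12)."""
    i = np.arange(1, L + 1)
    return (1 - mu) / (1 - mu**L) * mu**(L - i)   # weights sum to 1, newest weighted most

def weighted_cost(y, t, lam):
    """Forgetting-factor cost C(e) of Eq. (11) over a mini-batch."""
    return 0.5 * np.sum(lam * (np.asarray(y) - np.asarray(t)) ** 2)
```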

4 SOC Estimation Experiment

In order to determine the parameters of the Thevenin model and obtain training and test data for the neural network model, a battery test bench is designed. The purpose of identification is to estimate the model structure and unknown parameters according to a criterion and the measurement information of the known system [19]. The experimental data are used to train the model, and SOC estimation experiments using the ampere hour method, the EKF algorithm, the traditional BP neural network and the BP neural network with forgetting factor are compared.


4.1 Experimental Test Platform

The battery used in the experiment is an 18650 ternary lithium battery with a rated capacity of 2200 mAh, a charging cut-off voltage of 4.2 V, a discharging cut-off voltage of 2.75 V, a nominal voltage of 3.7 V, and a maximum continuous discharge current of 10 A. The experimental test platform is shown in Fig. 3; it mainly includes the GN-CD30V10A lithium battery charge/discharge test equipment, a thermostatic control box, the lithium battery, and a host computer running MATLAB 2021a. The key equipment is the GN-CD30V10A, which can charge and discharge the battery module with a maximum voltage of 30 V and a maximum current of 10 A, and can measure voltage, current, temperature and other main parameters in real time. The host computer runs the GN-CD30 software, which can program the experimental procedure and process the collected data in real time.

Fig. 3. Lithium battery experimental test platform

4.2 Parameter Identification of Thevenin Model

The Hybrid Pulse Power Characteristic (HPPC) test [20] reflects the pulse charging and discharging performance of power batteries. To obtain the data for identifying the Thevenin model parameters, HPPC test procedures were carried out on the lithium battery module at SOC points from 1.0 to 0.1 in steps of 0.1 (constant current C/3 discharge segments), with 2 h of rest at each step so that the battery can reach electrochemical and thermal equilibrium before the next discharge. During the test the ambient temperature is set to room temperature (25 °C). The terminal voltage response of the lithium battery in the HPPC test is shown in Fig. 4.


Fig. 4. Complex response in HPPC test terminal voltage

(1) SOC estimation experiment based on Thevenin model

Complete the parameter identification of the Thevenin model according to Eqs. (1) and (2); the results are shown in Table 1.

Table 1. Parameters of Thevenin model

SOC    R0/Ω     R1/Ω     C1/kF
0.9    2.251    0.047    209.02
0.8    2.250    0.043    216.56
0.7    2.252    0.042    239.13
0.6    2.253    0.044    246.54
0.5    2.257    0.043    234.66
0.4    2.261    0.042    235.82
0.3    2.265    0.046    216.20
0.2    2.266    0.047    211.39
0.1    2.261    0.059    169.23
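The identified values in Table 1 can be used directly as the table lookup function [OCV, R0, R1, C1] = f(SOC, T) described in Sect. 2.2, here at the single test temperature only. The following is a minimal sketch using linear interpolation; in practice the lookup would also interpolate over temperature, and the OCV column would come from the SOC-OCV test of Sect. 4.3:

```python
import numpy as np

# Identified Thevenin parameters versus SOC at the test temperature (Table 1)
soc_grid = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1])
R0_grid  = np.array([2.251, 2.250, 2.252, 2.253, 2.257, 2.261, 2.265, 2.266, 2.261])
R1_grid  = np.array([0.047, 0.043, 0.042, 0.044, 0.043, 0.042, 0.046, 0.047, 0.059])
C1_grid  = np.array([209.02, 216.56, 239.13, 246.54, 234.66, 235.82, 216.20, 211.39, 169.23])

def thevenin_params(soc):
    """Linear table lookup of (R0, R1, C1) versus SOC; np.interp requires ascending x."""
    order = np.argsort(soc_grid)
    return (np.interp(soc, soc_grid[order], R0_grid[order]),
            np.interp(soc, soc_grid[order], R1_grid[order]),
            np.interp(soc, soc_grid[order], C1_grid[order]))
```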

4.3 SOC-OCV Data Acquisition and Processing

4.3.1 SOC-OCV Data Acquisition

The self-made experimental platform is used to collect the experimental data of the lithium battery. The platform can carry out charge and discharge experiments at different rates and can collect voltage, current and other information in real time.


The incubator provides different experimental temperatures for the battery. In this paper, the discharge data of the battery at different temperatures and different discharge rates are collected. Figure 5 shows the SOC-OCV curves obtained by discharging the battery at different rates at 20 °C. Experimental process (taking the 0.1C discharge curve as an example): first control the incubator temperature at 20 °C, charge the battery at constant current to a voltage of 4.2 V, and then charge at constant voltage until the current falls below 20 mA, at which point the battery is considered full. Then start the discharge experiment at a rate of 0.1C. Let the battery rest for 1 h at the end of each discharge step, and repeat this cycle for 10 constant current discharges at equal intervals. The obtained data are fitted with a quartic polynomial to obtain the SOC-Uocv curve shown in Fig. 5. For a discharge rate of 0.1C at 20 °C, the relationship between SOC and Uocv is shown in formula (13):

y = -3.836x^4 + 10.328x^3 - 9.671x^2 + 4.1621x + 3.1901    (13)

where y is Uocv , while x is SOC.
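The quartic fit of Eq. (13) can be reproduced with a standard polynomial least-squares routine. In this sketch the soc and ocv arrays are placeholders generated from the published polynomial itself; in the actual experiment they would hold the ten rest-point voltages measured after each 0.1C discharge step:

```python
import numpy as np

# Placeholder data; in practice these are the measured rest-point (SOC, OCV) pairs
soc = np.linspace(1.0, 0.1, 10)
ocv = -3.836*soc**4 + 10.328*soc**3 - 9.671*soc**2 + 4.1621*soc + 3.1901

coeffs = np.polyfit(soc, ocv, deg=4)      # quartic fit, cf. Eq. (13)
u_ocv_at_half = np.polyval(coeffs, 0.5)   # OCV predicted at SOC = 0.5
```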


Fig. 5. SOC-Uocv curve under different discharge rates

4.3.2 Neural Network Experimental Data Processing

The input data of the neural network include voltage, current and temperature. These parameters have different physical meanings and dimensions, and feeding them in directly would cause problems such as large result errors and numerical overflow, so the input data must be normalized. The commonly used normalization method is shown in formula (14):

x' = \frac{x - x_{min}}{x_{max} - x_{min}}    (14)


In Eq. (14), x_{max} is the maximum value of the input data, x_{min} is the minimum value, and x' is the normalized output value. According to Eq. (14), the normalized values lie in the range [0, 1].

4.4 SOC Estimation Experiment Under DST Condition

(1) DST working condition design

The Dynamic Stress Test (DST) is a battery test condition defined by USABC as a simplification of the Federal Urban Driving Schedule (FUDS) [20, 21].

Fig. 6. Current variation diagram of a DST working condition

Fig. 7. Voltage variation diagram of a DST working condition

A DST working condition consists of three states: battery charging, discharging and rest. The test uses different time and current distributions to dynamically simulate actual battery use, and one DST working condition lasts 360 s. Figures 6 and 7 show the battery current and voltage waveforms under one DST working condition, respectively. There is no time interval between consecutive working conditions. The battery is fully charged according to the standard procedure and left standing for 1 h, and the DST charge/discharge experiment is then repeated until the battery voltage reaches the cut-off voltage. Figure 8 shows the charging and discharging current waveform of the battery over the complete DST test (charging positive, discharging negative), and Fig. 9 shows the corresponding battery terminal voltage waveform.


Fig. 8. Charging and discharging current of battery

Fig. 9. Battery terminal voltage data diagram

(2) SOC Estimation Experiment under DST Condition

According to formulas (1) and (5), SOC is estimated with the ampere hour integral method and the EKF algorithm, respectively, and it is also estimated with the improved BP neural network designed in Sect. 3.2. In the neural network SOC estimation, the current, voltage and temperature of the battery form the input X, and the output variable is the SOC value (percentage) of the battery. The neural network is initialized with a learning rate η = 0.2 and 800 iterations, and the number of hidden layer neurons is finally set to 12 by comparing the output errors of different structures in simulation experiments.
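For reference, roughly the same configuration (3 inputs, 12 sigmoid hidden neurons, learning rate 0.2, 800 iterations) can be expressed with an off-the-shelf library. This sketch uses scikit-learn's MLPRegressor purely as a stand-in; it does not reproduce the forgetting-factor cost of Sect. 3.2, and the data arrays are placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: normalized [voltage, current, temperature] rows; y: reference SOC in percent
X = np.random.rand(1000, 3)               # placeholder training inputs
y = 100 * np.random.rand(1000)            # placeholder reference SOC

model = MLPRegressor(hidden_layer_sizes=(12,), activation="logistic",
                     solver="sgd", learning_rate_init=0.2, max_iter=800)
model.fit(X, y)
soc_est = model.predict(X[:5])            # estimated SOC for the first few samples
```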

Fig. 10. Comparison of estimated values of different algorithms

The experimental results of SOC estimation are shown in Figs. 10 and 11, which compare the estimated and true SOC curves under the three algorithms. During the DST test, the experimental platform monitors whether the battery voltage has fallen below the cut-off voltage and stops the experiment accordingly.


Fig. 11. Partial enlarged comparison of estimated values

From the SOC estimation results it can be seen that the dynamic performance of the ampere hour integral method (SOC_AH) is the worst and its result is unstable. The EKF algorithm (SOC_EKF) tracks the actual SOC better, but its tracking ability deteriorates as the battery system continues to be used. Figure 11 shows that the EKF algorithm outperforms the ampere hour integration method, and that the BP algorithm with the improved cost function performs better still, exhibiting small error and fast estimation and tracking speed over the whole DST test. Figure 12 shows the error between the estimated and true SOC for the three algorithms.

Fig. 12. Comparison of estimation error values of different algorithms

Comparing the error waveforms in Fig. 12: the maximum error of the ampere hour integration method exceeds 15%; under large discharge currents its SOC estimation accuracy is poor and its tracking speed is slow. The SOC estimation error of the EKF algorithm is less than 5%, which meets the requirements of most applications and is better than the ampere hour integration method. The BP neural network model converges after about 300 training iterations, and the experiment shows that a neural network driven by a large amount of data can better fit the nonlinear dynamic system.


The SOC estimation error of the improved neural network algorithm is less than 2%. To sum up, the neural network algorithm has clear advantages in practical SOC estimation, both in accuracy and in computation speed.
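The error comparison summarized above (maximum error above 15% for Ah counting, below 5% for EKF, below 2% for the improved network) corresponds to simple metrics that can be computed as follows; the array names are illustrative:

```python
import numpy as np

def soc_errors(soc_est, soc_true):
    """Maximum absolute error and RMSE between estimated and reference SOC (in %)."""
    err = np.asarray(soc_est) - np.asarray(soc_true)
    return np.max(np.abs(err)), np.sqrt(np.mean(err ** 2))
```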

5 Conclusion

At present, many equivalent circuit methods are used in SOC estimation. This paper reviews the research status of SOC estimation and analyzes the principles of the ampere hour integral method and the Kalman filter method. In view of the problems of the equivalent circuit model and the experimental method, and considering the time-varying characteristics of the lithium battery system, a neural network is selected to fit the SOC of the nonlinear battery dynamic system. A battery test platform is designed, and by collecting parameters such as voltage, current and temperature, the SOC-OCV functional relationship under different temperatures and charge/discharge rates is established. On the host computer, MATLAB is used for data processing and SOC estimation. The battery SOC is estimated by the ampere hour integral method, the EKF algorithm and the BP neural network, and the estimates are compared with the actual values. To address the mismatch between the offline parameter model and the time-varying lithium battery system, and to improve the estimation accuracy and convergence speed of battery SOC, a forgetting factor is introduced into the least squares cost function of the BP neural network. A DST working condition experiment is designed, and the SOC estimation results of the three algorithms (ampere hour integral method, EKF, neural network) under the DST condition are analyzed. The experimental results show that the BP neural network algorithm with forgetting factor has higher accuracy and faster tracking than the other two algorithms, can be applied in practice, and meets the accuracy requirements of most battery SOC estimation applications.

Acknowledgment. This project is supported by the Nanning University professor cultivation project “Research on Power Lithium Battery Pack Management System Technology and SOC Estimation Strategy” (2018jsgc17) and the Guangxi University Young and Middle-Aged Teachers' Scientific Research Basic Ability Improvement Project (2021ky1811).

References

1. Si, K.: Power battery and management system of electric vehicle core technology (III) - battery management system and its key technology. Transp. World (Transp. Veh.) (09), 40–44 (2012). (in Chinese)
2. Li, J., Wei, M., Li, Z., et al.: State of charge estimation of Li-ion battery based on adaptive extended Kalman filter. Energy Storage Sci. Technol. 9(4), 1147–1152 (2020)
3. Wei, M., Li, J., Li, Z., et al.: SOC estimation of Li-ion batteries based on Gaussian process regression and UKF. Energy Storage Sci. Technol. 9(4), 1206–1213 (2020)


4. Zhang, Y., Wu, H., Ye, C.: SOC estimation of lithium battery based on AUKF-BP neural network. Energy Storage Sci. Technol. 10(01), 237–241 (2021). (in Chinese)
5. Maduranga, M.W.P., Nandasena, D.: Mobile-based skin disease diagnosis system using convolutional neural networks (CNN). Int. J. Image Graph. Signal Process. (IJIGSP) 14(3), 47–57 (2022)
6. Zhou, D.: A new hybrid grey neural network based on grey verhulst model and BP neural network for time series forecasting. Int. J. Inf. Technol. Comput. Sci. (IJITCS) 5(10), 114–120 (2013)
7. Gang, Z., Haosai, S., Shuzhen, L.: SOC estimation of power battery based on BP neural network. Power Technol. 40(04), 818–819 (2016). (in Chinese)
8. Zhou, D.: Optimization modeling for GM(1,1) model based on BP neural network. Int. J. Eng. Manuf. (IJEM) 4(1), 24–30 (2012)
9. Xueping, Y., Zhengjiang, W., Chaoyu, J., et al.: State of charge of lithium-ion battery based on BP neural network. Mater. Rep. 33(S2), 53–55 (2019)
10. Jianlin, L., Heng, X.: Review on modeling of lithium-ion battery. Energy Storage Sci. Technol. 11(2), 697–703 (2022)
11. Jie, Y., et al.: A review of lithium-ion battery model research. Energy Storage Sci. Technol. 8(1), 58–64 (2019)
12. Wu, X., Zhang, X.: Parameter identification of second-order RC equivalent circuit model for lithium battery. J. Nanjing Univ. (Nat. Sci.) 56(5), 754–761 (2020)
13. Jianlin, L., Zhonghao, L., Yaxin, L., Haitao, L.: Development status in modeling of the lithium battery energy storage system and preliminary exploration of its data-driven modeling. Pet. New Energy 33(4), 75–81 (2021)
14. Kallimani, R., Gulannavar, S.: A detailed study on state of charge estimation methods. In: Proceedings of Third International Conference on Communication, Computing and Electronics Systems, no. 08, pp. 191–207 (2022)
15. Jadhav, D.R., Patil, D.: State of charge estimation of electrical vehicle battery. In: 2022 Interdisciplinary Research in Technology and Management (IRTM) (2022)
16. Chen, J., Li, W., Sun, Y., Xu, J., Zhang, D.: Battery SOC prediction based on convolution bidirectional long-term and short-term memory network. Power Technol. 46(05), 532–535 (2022). (in Chinese)
17. Yin, C., Wang, Y., Li, P., Xiao, F., Zhao, Q.: Combined online estimation of SOC and SOH of energy storage battery based on LSTM. Power Technol. 46(05), 541–544 (2022). (in Chinese)
18. Chen, C., Pi, Z., Zhao, Y., Liao, X., Zhang, M., Li, Y.: SOC estimation of lithium-ion battery based on adaptive catastrophe genetic cyclic neural network. J. Electr. Eng. 17(01), 86–94 (2022). (in Chinese)
19. Wang, Z., Wang, S., Yu, C., Xiong, R.: Parameter identification of highly adaptive battery based on SR-HPPC and EKF. Battery 52(01), 35–37 (2022). (in Chinese)
20. Xu, W., Wang, S., Yu, C., Li, J., Xie, W.: Research on SOC estimation method of lithium battery based on Thevenin model and UKF. Automatic Instrument 41(05), 31–36 (2020). (in Chinese)
21. Kuang, H.: Research on SOC estimation method of lithium battery based on adaptive learning. University of Electronic Science and Technology, Chengdu (2020). (in Chinese)

Design of Train Circuit Parallel Monitoring System Based on CTCS

Jianqiu Chen, Yanzhi Pang(B), and Hao Zhang

School of Transportation, Nanning University, Nanning 530200, Guangxi, China
[email protected]

Abstract. With the continuous increase of train speeds in China, the risk factors also increase significantly, so the reliability requirements placed on the train control system are becoming ever higher. Based on an analysis of the characteristics of the track circuit, this paper puts forward the concept of parallel monitoring, designs a camera-based detection module, and compares and arbitrates the collected data and detection results to prevent train accidents. The proposed parallel monitoring system identifies the gradient feature distribution and the similarity distribution of pixels in the target area, quickly and accurately computes and outputs the results, transmits them to the train control center as bit values, and compares the output data to determine the accident prevention scheme and ensure the safety of train operation.

Keywords: Parallel monitoring · Track circuit · Information collection and data processing

1 Introduction

Monitoring is a way to ensure safety [1]. As people's requirements for safety have grown, covering personal safety, social systems and production organizations, various monitoring systems have been popularized [2–4]. For high-speed trains, security measures are also increasing and becoming ever more stringent [5]. In recent years, many train accidents have occurred at home and abroad. One of the main causes of these accidents is the collection of wrong data, which leads to incorrect route handling; at best this causes a wrongly set turnout, derailment or wrong direction of travel, and at worst it causes major accidents such as train collisions and rear-end collisions. Therefore, by studying the shortcomings of the data collected by the track circuit, we should establish a monitoring system that forms parallel monitoring with the track circuit, improves the correctness of the data source, and avoids the setting of wrong routes. The outdoor working environment of railway transportation is particularly harsh, and both natural and human factors can cause vehicle failures. In fact, in addition to failures caused by external factors, internal software and hardware may also lead to accidents. The “7.23” accident occurred when lightning struck the safety tube of the train control center (TCC).


This caused an interaction error between the acquisition and drive unit and the host of the train control center; the failure information and failure data were not processed according to the “fault safety” principle, which eventually led to the “7.23” accident. The so-called “fault safety” principle is also called the “fault oriented safety” principle. The output of railway signal devices, components and systems can be divided into normal, safe-side fault and dangerous-side fault. When a fault is about to occur, technical means in line with the “fault safety” principle should be used to adjust immediately and realize a safe-side output. The core of the “fault safety” principle is therefore to trade efficiency for safety. In this paper, the “fault safety” principle is abbreviated as the F-S principle. In the era of digitalized train operation, the train control system has adopted dual-machine hot standby redundancy and 2-out-of-2 or 2-out-of-3 structures to process the data, so the possibility of errors in computation has become very small. However, if the train operation data are collected with an error, the computed result will also be wrong; therefore, the source of the train operation data is particularly important [6, 7]. The parallel monitoring system based on CTCS (China Train Control System) is a monitoring system built on top of the train's original system, which aims to improve the reliability and safety of train operation, improve transportation efficiency, and ensure that “fault safety” can be achieved quickly in case of a failure during driving.

2 The Function of Parallel Monitoring System Design and Analysis of Existing Problems

For high-speed trains, safety monitoring is undoubtedly an extremely important tool and means [8, 9].

2.1 Concept of Parallel Monitoring

Parallel monitoring is a control mode in which two parallel, independent and mutually different control systems monitor the same control items. In parallel monitoring, two systems usually work at the same time while driving tasks are executed. Existing train control systems are controlled independently in an autonomous control mode: ATP performs the core control of train headway tracking, train running speed, temporary speed limit (TSR) and train receiving/departure routes. If a data reception or transmission error, external signal interference or signal equipment failure occurs during control, ATP, operating autonomously, is not monitored by any other equipment or system; if an interaction error between equipment and host then also occurs, the driver will not receive correct driving information and “fault safety” cannot be realized, which may well lead to accidents. Therefore, a method of mutual monitoring and mutual constraint of train operation (parallel monitoring) is studied to realize “fault safety” [10]. The track circuit and the parallel monitoring camera collect train operation data at the same time: the track circuit outputs its result by picking up or dropping a relay, while the monitoring camera identifies whether there is a train through image recognition. The output of the parallel monitoring camera is then compared with the output of the track circuit.


If the two results are consistent, the control result of the track circuit is output. If they are inconsistent, operation is immediately interrupted and “fault safety” is adopted.

2.2 Existing Problems of Track Circuit System

In the train control system, the existing track circuit works as a closed circuit: it collects information on whether the block section is occupied through the excitation circuit. The track circuit is outdoor equipment and is subject to many sources of external interference. Such interference may cause a track circuit fault and the collection of wrong train running data, and a track circuit fault directly affects the signal display at the entrance of the section [11]. A track circuit fault manifests itself in two ways: (1) there is no train in the block section, but the signal shows red, indicating that the section is occupied; this complies with the “fault safety” measures but reduces operating efficiency. (2) There is a train in the block section, but the annunciator shows green, indicating that the section is clear; this violates the “fault safety” measures and, in the worst case, could lead to a recurrence of the 7.23 accident. Whichever of these situations occurs, it brings great inconvenience to operation, so the track circuit must be monitored. Here, the principle of “traffic violation photography” is borrowed to study parallel monitoring of the track circuit. The reference to “traffic violation photography” does not mean monitoring whether the track circuit or the train performs illegal operations [12]; rather, it refers to the image recognition technology used to photograph traffic violations. The track circuit is analogous to the human sense of touch: when a train wheel set presses on the track circuit, the excitation circuit relay drops due to insufficient power supply. The surveillance camera is analogous to human vision: photographing a train running in a block section can also feed back to the signal so that it shows red, reducing the chance of “losing” a train during operation. The track circuit parallel surveillance camera provides substitutability, parallelism and safety. 1) Substitutability means that when one party has a fault, the other party can still identify whether there is a train running in the block section and can cause the annunciator to display a red light, achieving the “fault safety” measures. 2) Parallelism means that the parallel surveillance camera and the track circuit carry out their respective tasks in parallel; the data collected by each does not affect the independent judgment of the other, which avoids interaction errors. 3) Safety is mainly reflected in the fact that as long as the two parties disagree on whether there is a train in the block section, the annunciator displays a red light, improving the safety of train operation [10].


3 Information Collection of Channel Circuit Parallel Monitoring System

3.1 Camera Layout for Parallel Monitoring of Track Circuit

The identification task of the track circuit parallel monitoring camera includes the identification and processing of accident-section monitoring images and the monitoring of whether a train is running on the track, which amounts to macroscopic monitoring of train operation over the whole line. The surveillance camera is connected with the annunciator through optical fiber, and the image recognition information is transmitted using the exclusive track frequency to prevent interference from external factors. The annunciator is driven through the image recognition computer, which identifies and decides which color the signal light should display. Real-time monitoring of the train running position is therefore realized by adding the corresponding hardware beside the track circuit; since it covers the same sections as the track circuit, parallel monitoring is formed [13]. The monitoring system mainly includes four parts: video image acquisition, train operation recognition, train operation image recognition and processing, and train operation information storage. The composition of the train operation detection system is shown in Fig. 1. The train operation recognition part includes the train operation detection module and the background update module, and the image recognition and processing part includes the image recognition module and the image processing module.


Fig. 1. Composition of train operation detection system

The architecture of the system is composed of four main modules. The first part is the video image acquisition module, which is responsible for detecting whether there are trains running on the track. The CCD camera feeds the train running image as an analog signal into its image acquisition card. The image acquisition module is connected with the train running detection module to capture the train running image in real time and adjust it according to the resolution and brightness of the video image. The second part is train operation recognition, which mainly includes the background update module and the train operation detection module: it reads the information obtained from the video image acquisition module, identifies whether a train is currently on the track, starts the camera function to capture the passing train, and transmits the video image to the image processing module.


At the same time, the background update module continuously updates the track background to reduce repeated confirmation of the background image by the image processing module. The third part consists of the image recognition module and the image processing module. The image recognition module performs image recognition, enhances the contrast of the input image, and identifies whether the moving object is a train by detecting abrupt changes in pixel values; once a train, or anything other than a train, enters the shooting area of the CCD camera, it is detected. The image recognition module makes its judgment from multiple photos of the same train or a very short dynamic video, and transmits the results through the RS-485 communication interface. After receiving the data provided by the image recognition module, the image processing module compares the train-presence output with the track circuit data to check whether they agree, so as to achieve parallel monitoring. The image storage module in the fourth part performs image compression when the parallel monitoring output and the track circuit output differ, and stores the output data and comparison results provided by the parallel monitoring. When the track circuit indicates no train but the camera captures a train, the camera output is taken as the data to act on.

3.2 Design of Subsection Installation Scheme of Track Circuit Monitoring Camera

After the “7.23” accident, in order to reduce the impedance presented to the traction current by the track circuit equipment, the track bars were connected in parallel, lowering the track bar impedance and reducing interference with the signal circuit [14]. After the parallel connection, the length of the track circuit increases.


Fig. 2. Track circuit division monitoring and detection section

At present, six traction rails are generally connected in parallel, and the length of the track circuit is 1200 m. To ensure high operating efficiency of trains in the automatic block system, unnecessary safety margins have to be reduced and equipment investment increased [15]; adding cameras to the track circuit is one option. The installation of surveillance cameras is divided according to the length of the track circuit and the maximum monitoring distance of the cameras, as shown in Fig. 2: the 1200 m track circuit is covered as one monitoring and detection section by three groups of cameras installed facing in opposite directions.


The monitoring distance of each group of cameras is 400 m, and a single camera covers 200 m. The number of cameras to install on a line is obtained as (total line length / 1200 m per monitoring and detection section) × (number of cameras per monitoring and detection section) − 2. Once the monitoring and detection sections have been divided, if a train stops because of a fault, the system can report its specific location and assess the severity of the fault in real time through the video images. At the same time, the track itself is monitored in real time: if the track becomes impassable because of a natural disaster, the amount of additional manpower and material required can be judged from the video images, avoiding waste of resources or the need for repeated reinforcement.
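As a worked example of the installation-count rule above; the line length is an arbitrary assumption, and six cameras per section assumes the three-groups-of-two layout described in the text:

```python
line_length_m = 120_000            # assumed 120 km line (illustrative)
section_length_m = 1200            # one monitoring and detection section
cameras_per_section = 6            # assumed: three groups of two opposing cameras

installed_cameras = (line_length_m // section_length_m) * cameras_per_section - 2
print(installed_cameras)           # -> 598 cameras for this assumed line
```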

4 Data Processing of Track Circuit Parallel Monitoring System

4.1 Processing of Images Collected by Track Circuit Parallel Monitoring Camera

The average installation density of the parallel surveillance cameras is high, and most monitoring areas are short-range, small scenes, which reduces the image area occupied by non-target regions. High-density installation avoids wasting large amounts of computing resources on non-target regions, and dividing out the region of interest (ROI) improves the speed of pixel calculation and recognition. After dividing the image into ROI and non-target areas, the ROI is treated differently, as shown in Fig. 3: the light yellow part is the ROI, and the other parts are non-target areas. In pixel calculation, the ROI is defined as the positive sample and the non-target area as the negative sample; negative samples do not participate in pixel calculation. The positive-sample pixels are normalized. Since the height of the trains to be identified is between 3.65 m and 3.9 m and their width is 3.3 m, the aspect ratio of the train image passed from the train detection module to the recognition module is kept unchanged, and in this paper the sample is normalized to an image size of 24 × 72 pixels.

Fig. 3. ROI and non target areas

Using the ROI feature extraction method, abrupt changes in the red part of the image are recognized and computed, and the image is scanned for the four main features that indicate a running train.


The color of the normalized image is deepened, and the pixel gradient values are then calculated; when three of the four train-running features appear, the condition for pixel gradient calculation is met. The horizontal edge gradient operator is (−1, 0, 1) and the vertical edge gradient operator is (−1, 0, 1)^T. The horizontal and vertical gradient values at pixel point (x, y) in the image are given by formulas (1) and (2), respectively, where z(x, y) denotes the pixel value at the corresponding coordinates:

G_x(x, y) = z(x + 1, y) - z(x - 1, y)    (1)

G_y(x, y) = z(x, y + 1) - z(x, y - 1)    (2)

The gradient magnitude and direction at pixel point (x, y) are calculated by formulas (3) and (4):

G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}    (3)

\theta(x, y) = \tan^{-1}\frac{G_y(x, y)}{G_x(x, y)}    (4)
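A minimal NumPy sketch of the pixel-gradient computation in Eqs. (1)–(4); the input image array is a placeholder, and arctan2 is used so that the direction is defined in all four quadrants:

```python
import numpy as np

def pixel_gradients(z):
    """Central-difference gradients, magnitude and direction, Eqs. (1)-(4)."""
    z = z.astype(float)
    gx = np.zeros_like(z)
    gy = np.zeros_like(z)
    gx[:, 1:-1] = z[:, 2:] - z[:, :-2]     # G_x(x, y) = z(x+1, y) - z(x-1, y)
    gy[1:-1, :] = z[2:, :] - z[:-2, :]     # G_y(x, y) = z(x, y+1) - z(x, y-1)
    magnitude = np.sqrt(gx**2 + gy**2)     # Eq. (3)
    direction = np.arctan2(gy, gx)         # Eq. (4)
    return magnitude, direction
```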

Through gradient segmentation, feature information and image pixel scale are combined to obtain a higher-level feature space; feature extraction and feature combination are completed automatically, and feature vectors that are easy to recognize and compute are generated. Finally, the image recognition module completes the train image recognition function [16].

4.2 Analysis of Data Exchange Mode of Existing Track Circuit

The Radio Block Center (RBC) is the core ground equipment of the CTCS-3 train control system. It is a signal control system built on a “fault safety” computer platform. The RBC is the collection and interaction center for ground signal system information and instructions. Based on the status information of routes, block sections, signals and turnouts provided by computer interlocking, the temporary speed limit information provided by the temporary speed limit server, the train position information provided by the on-board equipment, and the train traffic right information sent by the adjacent RBC, combined with the line parameter information configured in the RBC, it generates control information such as the train operation permit (movement authority) and sends it to the on-board equipment via GSM-R wireless communication to ensure the safe, reliable and efficient operation of trains within its control range [17]. The existing track circuit data are transmitted to the RBC, and this communication is mainly completed through the connection between the RBC external interface and the computer-based interlocking (CBI) interface. The hierarchical structure of the RBC-CBI communication system includes the application layer, the safety function layer and the communication function layer, as shown in Fig. 4. The TCC (train control center) manages the track circuits, which feed track information back to the RBC section by section as occupied or unoccupied states.


Fig. 4. Hierarchical structure of RBC and CBI communication protocol

The RBC transmits these occupied or unoccupied states to the CBI at the application layer, and the CBI controls the annunciators and turnouts to take the corresponding actions. The RBC adopts a dual-machine hot standby redundant computer platform; the redundant computers process the data collected by the track circuit, so the probability of erroneous computation results is extremely low. However, if wrong data are collected before the computation, the results computed by the redundant computers will also be wrong, which affects driving safety. After the track circuit collects the track section status information and equipment information, each item of track circuit information is represented by a variable describing the status of one track circuit section. Each variable occupies one bit, taking the values occupied or idle; the value definition is shown in Table 1. The track circuit information variables are communicated to the TCC through the track circuit communication interface module, and this interface can also send carrier frequency and low frequency coding commands to the track circuit.

Table 1. Values of track circuit information variables

Value    Meaning
0        Occupied
1        Idle

4.3 Comparative Study of Parallel Monitoring Data Output The image recognition module adopts the battleship STM32 single chip microcomputer of punctual atom, which has been connected internally. After writing the change of the recognition pixel value, it can be connected to the camera for use. The image recognition module and the modem of the camera can set the communication protocol and communication rate through the 485 communication interface to control the operation.


The image recognition module starts recognizing the image data after receiving pixel values from the train detection module. Its input pointer resets, reads the high part of the first pixel value, then the low part of the first pixel value, then the high part of the second pixel value, and so on, reading the remaining pixel values cyclically. When the pixel values change only smoothly, image recognition ends and the module enters sleep mode; it is activated again when pixel values from the detection module are next received. The image recognition module has four camera input interfaces. According to the division of detection sections, three groups of surveillance cameras can be installed in each 1200 m detection section, and the image recognition module reserves one camera input interface as a spare. The installation of the image recognition module is shown in Fig. 5.


Fig. 5. Installation mode of image recognition module

After the parallel monitoring camera collects the track section status information and track equipment information, the parallel monitoring system sends it to the TCC in the form of the pixel variables output by the image recognition module. Each pixel variable represents the data output after image recognition by the parallel monitoring system and occupies one bit, with two possible values: train present and no train. The value definition is shown in Table 2. The information variables of the parallel monitoring system are communicated to the TCC through the interface of the image recognition module.

Table 2. Values of pixel variables for parallel monitoring

Value    Meaning
0        There are trains
1        No train

The TCC redundant computer compares the variable data provided by the track circuit and by the image recognition module [18]. The arrangement of the parallel monitoring variables is shown in Fig. 6.



Fig. 6. Design diagram of parallel monitoring

There are four situations in the variable comparison:

(1) The track circuit outputs “1” and the image recognition module outputs “1”: it is determined that there is a train running on the track.
(2) The track circuit outputs “1” and the image recognition module outputs “0”: it is determined that there is a train running on the track.
(3) The track circuit outputs “0” and the image recognition module outputs “1”: it is determined that there is a train running on the track.
(4) The track circuit outputs “0” and the image recognition module outputs “0”: it is determined that there is no train running on the track.

The problem of collecting wrong data during autonomous track circuit control is solved through this variable comparison, which conforms to the “fault safety” principle.
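The intent of the comparison above — report the section clear only when both sources agree it is clear, and otherwise fall back to the safe side — can be captured by a small arbitration function. This is a sketch of the decision rule only: the function name and boolean interface are illustrative, and mapping them onto the bit values of Tables 1 and 2 is left to the surrounding system:

```python
def arbitrate(track_circuit_occupied: bool, camera_sees_train: bool) -> str:
    """Fail-safe arbitration of the two monitoring sources.

    The section is reported clear only when both sources agree that it is
    clear; any occupancy indication, or any disagreement between the two,
    is treated as 'train present', in line with the fault-safety principle.
    """
    if not track_circuit_occupied and not camera_sees_train:
        return "no train"
    return "train present"
```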

5 Conclusion

Through an analysis of the existing train control system, this paper examines the shortcomings of the autonomous control of the track circuit, studies the concept of parallel monitoring, and proposes adding hardware to the track circuit to realize parallel monitoring. The application advantages of CCD cameras are studied, and methods for installing surveillance cameras along the whole line and at special positions are established. Addressing the fact that the autonomously controlled track circuit in the existing control system can collect wrong data, whole-track monitoring is established to form parallel monitoring together with the track circuit.


Based on the pixel variable values produced by the train's characteristic features, and through research on the image acquisition and processing of the track circuit parallel monitoring camera combined with the track circuit data processing method, the system identifies whether there is a train running on the track. The output results are compared and arbitrated against the track circuit results to improve the reliability, accuracy and safety of the information collected by the track circuit.

Acknowledgment. This paper is supported by: (1) Sub-project of Construction of China-ASEAN International Joint Laboratory for Comprehensive Transportation (Phase I) (GuiKeAA210770116); (2) 2021 District Level Undergraduate Teaching Reform Project: Exploration of Innovative Teaching Methods for Seamless Integration of Transportation Courses and Ideological and Political Education Based on Semantic Analysis (2021JGB428); (3) “Curriculum Ideological and Political Education” Demonstration Course of Nanning University: Train Operation Control System (2020SZSFK02); (4) Core Construction Course of the Undergraduate Major of Nanning University: Train Operation Control System (2020BKHXK16).

References

1. Rupasinghe, I.D.M.S., Maduranga, M.W.P.: Towards ambient assisted living (AAL): design of an IoT-based elderly activity monitoring system. Int. J. Eng. Manuf. (IJEM) 12(2), 1–10 (2022)
2. Cabrillas, M.Y., Luciano, R.G., Marcos, M.I.P., Aquino, J.C., Robles, R.C.F.: Mobile-based attendance monitoring system using face tagging technology. Int. J. Inf. Eng. Electron. Bus. (IJIEEB) 13(6), 22–35 (2021)
3. Chaudhary, A.S., Chaturvedi, D.K.: QR code based solar panel data monitoring system. Int. J. Image Graph. Signal Process. (IJIGSP) 12(3), 20–26 (2020)
4. Egejuru, N.C., Ogunlade, O., Idowu, P.A.: Development of a mobile-based hypertension risk monitoring system. Int. J. Inf. Eng. Electron. Bus. (IJIEEB) 11(4), 11–23 (2019)
5. Grebennik, I., Dupas, R., Lytvynenko, O., Urniaieva, I.: Scheduling freight trains in rail-rail transshipment yards with train arrangements. Int. J. Intell. Syst. Appl. (IJISA) 9(10), 12–19 (2017)
6. Li, X.: Research on route control dynamic monitoring method based on runtime verification. Beijing Jiaotong University, Beijing (2019). (in Chinese)
7. Dai, L.: Research on parallel monitoring method of train route. Beijing Jiaotong University, Beijing (2017). (in Chinese)
8. Gan, W., Zhang, T., Zhu, Y.: On RFID application in the information system of rail logistics center. Int. J. Educ. Manag. Eng. (IJEME) 3(2), 52–58 (2013)
9. Javed, A., Qazi, K.A., Maqsood, M., Shah, K.A.: Efficient algorithm for railway tracks detection using satellite imagery. Int. J. Image Graph. Signal Process. (IJIGSP) 4(11), 34–40 (2012)
10. Chen, Y.: Research on the application of cross law control method in train operation control system. Beijing Jiaotong University, Beijing (2013). (in Chinese)
11. Wang, J., Wang, J., Roberts, C., et al.: A novel train control approach to avoid rear-end collision based on geese migration principle. Saf. Sci. 91, 373–380 (2017)
12. Wang, Y.: Research on high-speed railway scene segmentation and recognition algorithm. Beijing Jiaotong University, Beijing (2019). (in Chinese)
13. Tong, S., Cheng, S., Li, J., Fu, P.: Design and implementation of traffic violation processing system based on video technology. Comput. Meas. Control (10), 88–90 (2005). (in Chinese)


14. Yang, S., Chen, B., Chen, H., Cui, Y., Tang, Q.: Suppression method of track circuit to transient traction current interference in phase separation area. J. Southwest Jiaotong Univ. 54(06), 1332–1341 (2019). (in Chinese)
15. Zhu, J.: Research and implementation of section division of interstation rail. Southwest Jiaotong University, Xi'an (2010). (in Chinese)
16. Liu, A.: Research on signal recognition and warning system based on location features. Jilin University, Jilin (2018). (in Chinese)
17. Lu, P., Liu, X.: Overview of technical development of wireless block center system. Railway Commun. Signal 55(S1), 68–74 (2019). (in Chinese)
18. Ban, Y.: Application of redundancy technology in railway signal disaster prevention system. Beijing Jiaotong University, Beijing (2011). (in Chinese)

Information Spaces and Efficient Information Accumulation in Calibration Problems

Peter Golubtsov(B)

Lomonosov Moscow State University, Moscow, Russia
[email protected]

Abstract. We introduce and study information spaces that arise in the problem of calibration of a measuring system in the case when the measurement model is initially unknown. Information extracted from calibration measurements is proposed to be represented by an element of the corresponding information space endowed with a certain algebraic structure. We also consider the possibility of further improving the estimation accuracy by repeatedly measuring an unknown object of study, which leads to yet another information space of a different type. As a result, a processing algorithm is constructed that contains the accumulation of information of two types and the interaction of information flows with the concurrent arrival of calibration and measurement data. The study presents the mathematical basis for a data processing model in which different types of input data are processed as parallel and independent as possible, and finally, the corresponding types of accumulated information interact to produce a complete processing result.

Keywords: Decision making · Big data streams · Calibration data · Information spaces · Information algebra · Distributed parallel processing · MapReduce

1 Introduction

For adequate processing of experimental data, knowledge of the measuring system model is required, which describes the relationship between the input and output. Such knowledge makes it possible to construct an optimal processing algorithm, see, for example, [1, 2] for linear estimation problems. Often, however, the measurement model itself is known imprecisely or not known at all. In this case, a series of measurements of known test signals is usually carried out, based on these results, an approximation of the model is built, and then this approximation is used to solve the estimation problem. Quite often, for example, a model is selected from a certain class in such a way as to provide the best agreement in a certain sense between predictions and measurement results on the training set. It is known, however, that even small deviations of the model used in processing from the true one can lead to large interpretation errors [3]. Moreover, if there are too few calibration measurements, the error in model estimation can become quite large. In this regard, if the measurement model is known inaccurately, it is necessary to adequately take into account this inaccuracy in the problem of interpreting the measurement results [4].


Another problem associated with calibration is the need to collect large amounts of calibration data. In the process of accumulation, the resources required for data storage and processing grow. However, as will be shown, calibration information can be accumulated efficiently and compactly, eliminating the need to store the original calibration data. This leads to a calibration information space in the spirit of [5–7] and a processing algorithm that fits perfectly into the MapReduce approach [8], which is crucial in parallel distributed processing of big data [9, 10].

We will also consider the possibility of further improving the estimation accuracy by repeatedly measuring an unknown object of study, which will lead to a different type of information flow. As a result, we will obtain an algorithm for constructing an optimal estimate of the object of study with the simultaneous accumulation of calibration and repeated measurements, containing the accumulation of information of two types and the interaction of the corresponding information flows. This feature raises the problem to a new level and distinguishes this work from the author's previous works on information spaces. It provides the mathematical basis for a data processing model in which different types of input data are processed in as parallel and independent a manner as possible, and finally the two types of accumulated information interact to produce the result of complete processing.

The main objective of this work is to analyze the calibration algorithm considered in [11] in the context of big data, build the corresponding information spaces, and study the behavior of the estimation accuracy when large amounts of calibration and measurement data are accumulated.

Note that in recent years there has been an explosive growth in the amount of data used in processing. Special attention in big data problems is paid to methods of data analysis that allow parallel and distributed processing [12–15]. This requires a systematic study of the scalability of both existing and new algorithms. However, attention is usually paid mainly to technical details, while the mathematical aspects of scalability of the proposed methods are almost never considered. An important innovative feature of this work, as well as of the author's previous research [5–7, 16], is the explicit identification and study of the mathematical structures that underlie the processing of distributed data or big data streams.

The author believes that a systematic study of information spaces is of great practical importance, since it provides methods for optimizing the processing of big data. More importantly, the study of information structures in big data problems leads to the need to find a special type of information representation that has convenient algebraic properties. In a sense, such a representation reflects the very essence of the information contained in the data, which leads to a completely new view of the phenomenon of information and thus has important academic value.

2 Calibration Problem for Linear Experiment

In this section, we consider the problem of processing the results of a linear experiment when the measurement model is unknown and all information about it is extracted from a special series of measurements of known objects, the calibration measurements. Since the information extracted from calibration measurements is inevitably inaccurate, we will need the machinery of optimal linear estimation under inaccurate information about the measurement model [17, 18].


2.1 Linear Estimation with Inaccurate Information About the Measurement Model

Consider a linear measurement scheme for a vector x ∈ D of the form y = Ax + ν, where y ∈ R is the measurement result, A : D → R is a linear operator, and ν ∈ R is a random noise vector with zero mean Eν = 0 and a positive definite covariance operator S > 0. Also assume that there is prior information about the vector x, represented by its prior mean Ex = x_0 and the covariance operator F > 0. If the linear mapping A is not known exactly, then according to [4, 18], the optimal linear estimate x̂ of the vector x can be represented as

\hat{x} = \left( A_0^* (S + J)^{-1} A_0 + F^{-1} \right)^{-1} \left( A_0^* (S + J)^{-1} y + F^{-1} x_0 \right),

where the operators A_0 = EA : D → R and J = E(A - A_0) \mathcal{F} (A - A_0)^* : R → R describe information about the operator A. Namely, A_0 is an estimate of the operator A, and J represents the effect of the inaccuracy of this estimate on the solution of the final problem, the estimation of the vector x. Here \mathcal{F} : D → D is the noncentral second-moment operator of the random vector x, \mathcal{F} = F + x_0 x_0^*. For simplicity of reasoning, we will identify operators with their matrices in fixed orthonormal bases.

The accuracy of the estimate x̂ is characterized by the covariance operator of the vector x̂ − x:

Q = \left( A_0^* (S + J)^{-1} A_0 + F^{-1} \right)^{-1}.

In particular, the total estimation error is E\|\hat{x} - x\|^2 = \operatorname{tr} Q.

2.2 Calibration Measurements

In the case when information about the operator A is initially absent, it can be extracted from the results of calibration measurements [11] of known signals ϕ_i:

\psi_i = A \varphi_i + \mu_i, \qquad i = 1, \ldots, k.

Here ϕ_i ∈ D are known calibration signals, ψ_i ∈ R are the observed results of calibration measurements, and μ_i ∈ R are independent random error vectors having the same distribution as the vector ν, i.e., zero mean and covariance operator S. The sequence of pairs of vectors (ϕ_1, ψ_1), …, (ϕ_k, ψ_k) forms a set of calibration data.

Let dim D = m and dim R = n. Then Φ = (ϕ_1 ⋯ ϕ_k) and Ψ = (ψ_1 ⋯ ψ_k) are the m × k and n × k calibration signal and result matrices, respectively. According to [11], information about the operator A is given by the expressions

A_0 = \Psi \Phi^* \left( \Phi \Phi^* \right)^{-1}, \qquad J = \alpha S, \quad \text{where} \quad \alpha = \operatorname{tr}\left( \left( \Phi \Phi^* \right)^{-1} \mathcal{F} \right).

Let us note that the volume of calibration data is (m + n)k and grows indefinitely with an increase in the number of calibration measurements k.

2.3 Canonical Calibration Information

However, it can easily be seen that all the information contained in the calibration data which is needed for computing A_0 and J can be represented by a pair of linear mappings of the form

G = \Psi \Phi^* = \sum_{i=1}^{k} \psi_i \varphi_i^* : D \to R, \qquad H = \Phi \Phi^* = \sum_{i=1}^{k} \varphi_i \varphi_i^* : D \to D,

where H is positive semidefinite, H ≥ 0. We will say that the pair (G, H) represents the canonical calibration information for the calibration dataset (Φ, Ψ). The set C of all such pairs (G, H) will be called the calibration information space. Since the matrices G and H have fixed sizes n × m and m × m respectively, the canonical calibration information occupies a fixed volume, independent of the number of calibration measurements k. Moreover, if two such pairs (G_1, H_1) and (G_2, H_2) are obtained from two sets of calibration data, then the combined set will be represented by the pair

(G_1, H_1) \oplus (G_2, H_2) = (G_1 + G_2, H_1 + H_2).

Obviously, any set of calibration data can be represented by such a pair, while the absence of data is represented by the pair 0 = (0, 0). It is easy to verify that (C, ⊕, 0) is a commutative monoid with the cancellation property, i.e., for any a, b, c ∈ C:

a \oplus b = b \oplus a, \quad (a \oplus b) \oplus c = a \oplus (b \oplus c), \quad a \oplus 0 = a, \quad a \oplus b = a \oplus c \Rightarrow b = c,

but it has no invertible elements other than 0, i.e., there is no "negative" information. The cancellation property allows one to "subtract" already accumulated information if it is subsequently found to be unreliable. A more detailed discussion of the properties of information spaces and other examples can be found in [5–7, 16].
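To make the accumulation concrete, the following minimal NumPy sketch (ours; the class and variable names are illustrative, not from the paper) represents the canonical calibration information (G, H), absorbs calibration pairs one by one, and combines two accumulated pairs with the ⊕ operation:

import numpy as np

class CalibInfo:
    """Canonical calibration information (G, H) for an m-dimensional input and n-dimensional output."""
    def __init__(self, m, n):
        self.G = np.zeros((n, m))    # G = sum_i psi_i phi_i^T
        self.H = np.zeros((m, m))    # H = sum_i phi_i phi_i^T

    def add_measurement(self, phi, psi):
        """Absorb one calibration pair (phi, psi); the raw pair can be discarded afterwards."""
        self.G += np.outer(psi, phi)
        self.H += np.outer(phi, phi)

    def combine(self, other):
        """The monoid operation (G1, H1) ⊕ (G2, H2) = (G1 + G2, H1 + H2)."""
        out = CalibInfo(self.H.shape[0], self.G.shape[0])
        out.G, out.H = self.G + other.G, self.H + other.H
        return out

Two chunks of calibration data accumulated independently (for example, on two separate controllers) combine into exactly the same pair (G, H) as if all pairs had been processed in one place, which is the property exploited later in Sect. 3.5.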


As noted in [5], in terms of the algebraic structure of the information space, it is possible to describe uniformly the sequential "accumulation" of information and the "combination" of information obtained from different sources. At the same time, many intuitively expected properties of the very concept of "information" obtain an adequate mathematical representation in terms of the properties of the information space.

2.4 Measurement Model Information

The canonical calibration information (G, H) allows one to obtain explicit information about the operator A, namely, its estimate A_0 and a characterization J of the accuracy of this estimate:

A_0 = G H^{-1}, \qquad J = \alpha S, \quad \text{where} \quad \alpha = \operatorname{tr}\left( H^{-1} \mathcal{F} \right).

Note that the inaccuracy of information about the operator A manifests itself not only in the use of the approximate value A_0 instead of the exact but unknown A, but also in an effective increase of the measurement "noise": \bar{S} = J + S = (\alpha + 1) S instead of S.

Often, when data processing is implemented on the basis of an approximate model, information about the accuracy of this approximate model is not taken into account. This can lead to a large, uncontrolled estimation error. Adequate consideration of the inaccuracy, expressed by the operator J, not only yields the correct estimation error but also has a regularizing effect [3] and helps to reduce the estimation error, especially for a small amount of calibration data.

Under fairly general conditions, with unlimited accumulation of calibration information (i.e., for k → ∞), A_0 → A, J → 0, and the estimation error Q → \left( A^* S^{-1} A + F^{-1} \right)^{-1}, namely, to the error corresponding to a precisely specified measurement model.
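To make these formulas concrete, the following minimal NumPy sketch (ours; the function and variable names are not from the paper) computes the explicit model information (A_0, α) from the canonical calibration information (G, H) and then the estimate x̂ and its error operator Q for a single measurement y:

import numpy as np

def model_info(G, H, F, x0):
    """Explicit model information from the canonical calibration information (G, H)."""
    F_bar = F + np.outer(x0, x0)                # noncentral second-moment operator of x
    A0 = G @ np.linalg.inv(H)                   # estimate of the operator A
    alpha = np.trace(np.linalg.inv(H) @ F_bar)  # accuracy characterization: J = alpha * S
    return A0, alpha

def estimate(A0, alpha, S, F, x0, y):
    """Optimal linear estimate and its error operator; (alpha + 1) * S plays the role of S + J."""
    S_bar_inv = np.linalg.inv((alpha + 1.0) * S)
    Q = np.linalg.inv(A0.T @ S_bar_inv @ A0 + np.linalg.inv(F))
    x_hat = Q @ (A0.T @ S_bar_inv @ y + np.linalg.inv(F) @ x0)
    return x_hat, Q                              # total expected squared error: np.trace(Q)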

3 Improving Estimation Accuracy Through Multiple Measurements

3.1 Multiple Measurements of the Object of Study

To further improve the accuracy of the estimate, consider the possibility of multiple measurements of the unknown vector x:

y_j = A x + \nu_j, \qquad j = 1, \ldots, r.

It turns out that such an r-fold measurement is equivalent to a single measurement of the form \bar{y} = A x + \bar{\nu}, where

\bar{y} = \frac{1}{r} \sum_{j=1}^{r} y_j

and \bar{\nu} ∈ R is a random noise vector with covariance operator \frac{1}{r} S = \beta S. Here the coefficient \beta = \frac{1}{r} reflects the effect of improving the estimation accuracy by repeating the measurements. Thus, in the presence of k calibration measurements and r repeated measurements of the unknown vector, the estimate and its error are determined by the formulas

\hat{x} = Q \left( A_0^* \bar{S}^{-1} \bar{y} + F^{-1} x_0 \right), \qquad Q = \left( A_0^* \bar{S}^{-1} A_0 + F^{-1} \right)^{-1},

where \bar{S} = J + \beta S = (\alpha + \beta) S.
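As a small illustration (ours, in NumPy; names are not from the paper), the estimate with repeated measurements differs from the single-measurement case only in that the average ȳ is used and the effective noise covariance becomes (α + β)S with β = 1/r:

import numpy as np

def estimate_with_repeats(A0, alpha, S, F, x0, ys):
    """Estimate x from a list of repeated measurements ys, given calibration info (A0, alpha)."""
    r = len(ys)
    y_bar = np.mean(ys, axis=0)                  # average of the r repeated measurements
    beta = 1.0 / r                               # noise reduction factor of the averaged measurement
    S_bar_inv = np.linalg.inv((alpha + beta) * S)
    Q = np.linalg.inv(A0.T @ S_bar_inv @ A0 + np.linalg.inv(F))
    x_hat = Q @ (A0.T @ S_bar_inv @ y_bar + np.linalg.inv(F) @ x0)
    return x_hat, Q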

3.2 Asymptotic Behavior of the Estimation Accuracy and Balance of Error Contributions Between Calibration and Repeated Measurements

Let us assume that the calibration signals are chosen randomly from some distribution with second moment \tilde{F}. Then

\frac{1}{k} H = \frac{1}{k} \sum_{i=1}^{k} \varphi_i \varphi_i^* \to \tilde{F}

for k → ∞, so for a sufficiently large number of calibration measurements H \approx k \tilde{F} and \alpha \approx \mu / k, where \mu = \operatorname{tr}\left( \tilde{F}^{-1} \mathcal{F} \right). Note that if the calibration signals are selected from the same "ensemble" as the unknown vector x, then \tilde{F} = \mathcal{F} and \mu = m, i.e., the dimension of the estimated vector x. Thereby,

\bar{S} \approx \left( \frac{\mu}{k} + \frac{1}{r} \right) S \to 0

for a simultaneous increase of calibration and repeated measurements, k, r → ∞. Moreover, if the operator A is non-degenerate, i.e., \ker A = \{0\}, then Q → 0. In other words, by increasing both the calibration and the repeated measurements, the estimation error can be made arbitrarily small. The contributions to the estimation error, determined by the inaccuracy of the calibration information about A and by the error of the multiple measurements, become comparable when \mu/k \approx 1/r, i.e., k \approx \mu r.

3.3 Canonical Information for Repeated Measurements

Note that in the considered scheme of calibration and repetition of measurements, information is accumulated from two types of data: calibration data (ϕ_1, ψ_1), …, (ϕ_k, ψ_k) and repeated measurement data y_1, …, y_r. As shown above, all calibration data can be effectively represented by the canonical calibration information (G, H). Similarly, since the data y_1, …, y_r are only required to construct their average, as shown in [5], it is convenient to accumulate canonical measurement information in the form (u, r), where u = \sum_{j=1}^{r} y_j ∈ R is the sum of all the vectors y_j and r ∈ ℕ is their number. Then \bar{y} = u / r.

Pairs of the form (u, r) constitute an information space with properties similar to those of the calibration information space, namely, a commutative monoid with the cancellation property. As before, ⊕ is the componentwise addition of pairs of the form (u, r), and the neutral element 0 (the absence of measurements) is represented by the pair (0, 0), the zero vector from R and the natural number 0. In its turn, the complete canonical information obtained from the two streams is represented by tuples (G, H; u, r), i.e., elements of the product of the calibration information space and the measurement information space. It is easy to see that the complete information space obtained as a product of commutative cancellable monoids is also a commutative cancellable monoid.

3.4 Accumulation of Canonical Information of Two Kinds in the Calibration Problem with Repeated Measurements

Simultaneous calibration measurements and multiple measurements of the object of study lead to the accumulation and interaction of canonical information of two types, illustrated in Fig. 1.

Fig. 1. Functional scheme of accumulation and interaction of canonical information of two kinds in the problem of calibration.

As can be seen from the figure, the two types of canonical information, the calibration information (G, H) and the multiple-measurement information (u, r), can be accumulated completely independently from the respective data streams. Moreover, there is no need to store the data themselves; they can be discarded immediately after the information contained in them has been added to the canonical information of the corresponding type. In addition, since both representations of canonical information have fixed sizes, the resources required to store and process them remain constant, regardless of the volume of the source data sets. Note that the procedures for accumulating canonical information are quite simple here; they can be implemented on low-power controllers and accumulate canonical information as the fragments of data arrive. The more time-consuming procedures for extracting the explicit information (A_0, α) from calibration measurements and (ȳ, β) from repeated measurements, as well as constructing the final estimation result x̂, Q, can be performed from time to time, only as needed.
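A minimal sketch (ours, in NumPy) of the second accumulator: the pair (u, r) absorbs repeated measurements one by one, combines with another accumulator by componentwise addition, and yields ȳ only when the estimate is actually needed. Pairing it with the calibration accumulator gives the product information (G, H; u, r).

import numpy as np

class MeasInfo:
    """Canonical measurement information (u, r): running sum of measurements and their count."""
    def __init__(self, n):
        self.u = np.zeros(n)          # u = sum_j y_j
        self.r = 0                    # number of accumulated measurements

    def add_measurement(self, y):
        self.u += y                   # the raw vector y can be discarded afterwards
        self.r += 1

    def combine(self, other):
        """The monoid operation: componentwise addition of (u, r) pairs."""
        out = MeasInfo(len(self.u))
        out.u, out.r = self.u + other.u, self.r + other.r
        return out

    def mean(self):
        return self.u / self.r        # y_bar, extracted only when the estimate is computed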


3.5 Distributed Accumulation of Two Types of Information Within the MapReduce Model

In our previous studies [5–7] we have shown that proper information spaces emerge naturally in big data problems. It was shown there that the structure of adequate information spaces makes it possible to effectively parallelize the process of information accumulation using the MapReduce distributed data analysis model [8] and to organize efficient processing without the need to accumulate and store the original data themselves. As a result, the information accumulation procedure organically "fits" into the architecture of distributed data storage and analysis systems, such as, for example, Hadoop MapReduce [19] or Spark [20].

Fig. 1 illustrates the accumulation of information for the case of two data streams: calibration and measurement. In the case of distributed data sets, the processing scheme takes the form shown in Fig. 2. Here (Φ_i, Ψ_i), i = 1, …, K, are calibration data sets, and Y_j = (y_{j1} ⋯ y_{j r_j}), j = 1, …, R, are data sets of repeated measurements.

Fig. 2. Functional scheme of distributed accumulation and interaction of canonical information of two kinds in the problem of calibration within the MapReduce model

Here the Map operation extracts information fragments from multiple datasets, and the Reduce operation combines all these partial information fragments into a single element which represents all the original datasets. In fact, any MapReduce algorithm can be said to be based on a certain information space. In our calibration problem we deal with two types of data. As a result, the whole processing algorithm starts with two MapReduce branches which independently and in parallel accumulate the relevant types of information in the most efficient way. Then the explicit information of the form (A_0, α) or (ȳ, β) is extracted from the respective accumulated canonical information and, finally, these pieces of information interact and produce the final estimation result x̂, Q.
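The two branches can be written down directly in the MapReduce style. The sketch below (ours, in Python with NumPy; all names are illustrative) only shows the structure: Map turns each dataset into its canonical information, Reduce folds the results with ⊕, and the two branches meet only in the final estimation step.

import numpy as np
from functools import reduce

def map_calibration(dataset):
    """Map: one calibration dataset (Phi, Psi) -> canonical calibration information (G, H)."""
    Phi, Psi = dataset                       # Phi is m x k_i, Psi is n x k_i
    return Psi @ Phi.T, Phi @ Phi.T

def map_measurements(Y):
    """Map: one block of repeated measurements (n x r_j) -> canonical measurement information (u, r)."""
    return Y.sum(axis=1), Y.shape[1]

def reduce_info(a, b):
    """Reduce: the monoid operation ⊕, componentwise addition of canonical information."""
    return tuple(x + y for x, y in zip(a, b))

def accumulate(calib_sets, meas_sets):
    """Two independent MapReduce branches; their results interact only in the estimation step."""
    G, H = reduce(reduce_info, map(map_calibration, calib_sets))
    u, r = reduce(reduce_info, map(map_measurements, meas_sets))
    return (G, H), (u, r)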


4 Conclusion

In this article, we analyzed the problem of calibrating a measuring system in parallel with measurements. This problem is not only of important applied value but also presents an elegant example of a real-time/distributed data processing procedure in which two different types of information are accumulated and two corresponding information flows interact. In fact, this is an example of a complex problem in which, in order to obtain a result, the interaction of information flows of different types is fundamental. It is this heterogeneity of data types and corresponding information flows that is an important new feature of the considered problem.

As was shown, for each of the input data streams it is convenient to construct a special information space that allows the accumulation of information as efficiently as possible. Each of these spaces has an algebraic structure reflecting the properties of the accumulated information. Moreover, the complete accumulated information is described by the product of these spaces and inherits their algebraic properties. Thus, the considered problem demonstrates the possibility of decomposing a complex data processing algorithm with many types of input data sources into maximally efficient independent components, while the processes of information accumulation from these streams and their interaction are represented by elegant algebraic constructions.

A systematic study of information spaces and their interaction in complex data processing problems could give not just an understanding of their general properties but also tools for constructing information spaces with an "optimized" mathematical structure, which would lead to the most efficient distributed processing algorithms.

Perhaps a more important feature of this work is that it outlines an approach to the mathematical description of the concept of information contained in data in the context of the interaction of various types of information. Of course, a careful formulation and study of the corresponding concepts will require both a detailed examination of various special cases and the creation of a general mathematical infrastructure. This work is a step in this direction.

Acknowledgment. The reported study was supported by RFBR, research project number 19-29-09044.

References

1. Rao, C.R.: Linear Statistical Inference and its Applications. Wiley, Hoboken (1973)
2. Seber, G.A.F., Lee, A.J.: Linear Regression Analysis. Wiley-Interscience Inc. (2003)
3. Tikhonov, A.N., Arsenin, V.Y.: Solutions of Ill-Posed Problems. V. H. Winston & Sons, Washington, D.C.; Wiley, New York (1977)
4. Pyt'ev, Yu.P.: Reduction problems in experimental investigations. Math. USSR Sb. 48(1), 237–272 (1984)
5. Golubtsov, P.V.: The concept of information in big data processing. Autom. Doc. Math. Linguist. 52(1), 38–43 (2018). https://doi.org/10.3103/S000510551801003X
6. Golubtsov, P.V.: The linear estimation problem and information in big-data systems. Autom. Doc. Math. Linguist. 52(2), 73–79 (2018). https://doi.org/10.3103/S0005105518020024


7. Golubtsov, P.: Scalability and parallelization of sequential processing: big data demands and information algebras. In: Hu, Z., Petoukhov, S., He, M. (eds.) CSDEIS 2019. AISC, vol. 1127, pp. 274–298. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-39216-1_25
8. Dean, J., Ghemawat, S.: MapReduce: simplified data processing on large clusters. Commun. ACM 51(1), 107–113 (2008)
9. Ekanayake, J., Pallickara, S., Fox, G.: MapReduce for data intensive scientific analyses. In: Fourth IEEE International Conference on eScience, Indianapolis, IN, pp. 277–284 (2008)
10. Palit, I., Reddy, C.K.: Scalable and parallel boosting with MapReduce. IEEE Trans. Knowl. Data Eng. 24(10), 1904–1916 (2012)
11. Golubtsov, P.V., Pyt'ev, Yu.P., Chulichkov, A.I.: Construction of the reduction operator by test measurements. In: Discrete Information Processing Systems, pp. 68–71. Udmurt State University, Ustinov (1986). (in Russian)
12. Bekkerman, R., Bilenko, M., Langford, J.: Scaling Up Machine Learning: Parallel and Distributed Approaches. Cambridge University Press, Cambridge (2012)
13. Fan, J., Han, F., Liu, H.: Challenges of big data analysis. Natl. Sci. Rev. 1(2), 293–314 (2013)
14. Farhan, N., Habib, A., Ali, A.: A study and performance comparison of MapReduce and Apache Spark on Twitter data on Hadoop cluster. Int. J. Inf. Technol. Comput. Sci. (IJITCS) 10(7), 61–70 (2018)
15. Roy, C., Rautaray, S.S., Pandey, M.: Big data optimization techniques: a survey. Int. J. Inf. Eng. Electron. Bus. (IJIEEB) 10(4), 41–48 (2018)
16. Golubtsov, P.: Information spaces for big data problems in fuzzy Bayesian decision making. In: Hu, Z., Gavriushin, S., Petoukhov, S., He, M. (eds.) Advances in Intelligent Systems, Computer Science and Digital Economics III, vol. 121, pp. 102–114. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-97057-4_10
17. Pyt'ev, Yu.P.: Pseudoinverse operators. Properties and applications. Math. USSR Sb. 46(1), 17–50 (1983)
18. Pyt'ev, Yu.P.: Methods of Mathematical Modeling of Measurement-Computer Systems. Fizmatlit, Moscow (2012). (in Russian)
19. White, T.: Hadoop: The Definitive Guide. O'Reilly, Sebastopol (2015)
20. Ryza, S., Laserson, U., Owen, S., Wills, J.: Advanced Analytics with Spark: Patterns for Learning from Data at Scale. O'Reilly, Sebastopol (2015)

The Novel Multi Source Method for the Randomness Extraction

Maksim Iavich1(B) and Tamari Kuchukhidze2

1 Department of Computer Science, Caucasus University, Tbilisi, Georgia
[email protected]
2 Georgian Technical University, Tbilisi, Georgia

Abstract. Cryptography, statistical analysis, and numerical simulations are just a few of the areas where randomness is frequently applied; it is also a crucial resource in engineering and science. Usually, we have to supply unbiased, independent random bits for these applications, which raises the question of where such supposedly random bits can be found. Pseudorandom number generators are algorithms that produce seemingly random numbers which are not actually random. When actual randomness is required, we employ true random number generators, which use unpredictable random events as their randomness source. Based on the inherent unpredictability of quantum measurements, quantum random number generators (QRNGs) produce actual random numbers. Unfortunately, due to classical noise, quantum randomness and classical randomness are invariably mixed in practice; the raw randomness is also frequently biased and correlated. The resulting raw bit sequence must therefore be processed in order to provide high-quality output values that are as close to the uniform distribution as feasible. This calls for randomness extractors. We review the randomness produced by quantum random number generators and the numerous types of postprocessing, and we cover the various randomness extractors. Additionally, building on information-theoretically secure extractors, we propose an improved, novel randomness extraction method. Our new hybrid randomness extractor uses multiple sources in the extraction process and can be adapted to usage needs. The extractor is resistant to quantum attacks, and the extraction process can be accelerated, for example by means of parallelism.

Keywords: Quantum · Quantum cryptography · Postprocessing · Entropy · Randomness extractors · Deterministic extractors · Seeded extractors

1 Introduction

Random numbers have many uses, such as cryptography, science, statistics, and simulation. Unfortunately, finding true randomness is incredibly difficult [1, 2]. Pseudo random number generators are algorithms that produce numbers which look randomly distributed but are not genuinely random [3, 5]. They can produce random numbers at high speed while using few resources. Many tasks rely on the unpredictable nature


of random numbers, which cannot be ensured in the case of pseudo random number generators; genuine randomness is then necessary. In such cases we employ true random number generators, whose randomness comes from unpredictable random events. Quantum random number generators (QRNGs) produce actual random numbers by using the intrinsic unpredictability of quantum measurements. Truly random numbers can be obtained by exploiting quantum mechanics and the unpredictable behavior of a photon, which is the basis of many modern cryptography systems [6].

It is theoretically possible to produce perfectly random numbers using a quantum random number generator. In reality this is not the case, since the quantum signals, which for us are the source of actual randomness, are inevitably mixed with classical noise [7]. As a result, additional processing is required in order to expose the actual randomness. This process, known as randomness extraction, is carried out using randomness extractors. In other words, randomness extractors extract the actual randomness and eliminate the effects of classical noise.

Standard random number generators are expected to produce uniformly distributed random characters. The resulting raw bit sequence is therefore processed in the post-processing stage to provide output values of acceptable quality, with a distribution as close to uniform as possible. The post-processing stage may also include tasks that check that the generator is working properly or that test the values being generated before the final strings are produced. In addition to these tasks, which vary between generators, the major purpose of the post-processing phase is randomness extraction. To correct biases and correlations, most physical RNGs include some sort of randomness extractor: even good random sources with high entropy suffer from imperfections that originate in flaws of the measurement and generation equipment.

The two most frequent parts of a random number generator are an entropy source and a randomness extractor. In a quantum random number generator, the entropy source is a physical device with inherently unpredictable, but possibly imperfectly random, output, and the randomness extractor is an algorithm that creates almost perfect random numbers from the output of this entropy source. The two halves of a quantum random number generator are connected by measuring the randomness using min-entropy: the min-entropy determined for the entropy source serves as an input parameter of the randomness extractor.

In information theory, min-entropy is the smallest member of the family of Rényi entropies. It describes the unpredictability of the result as determined only by the probability of the most probable outcome. This is a conservative measure that is well suited for describing passwords and other non-uniform distributions of secrets. Min-entropy is never greater than the ordinary (Shannon) entropy, which measures the average unpredictability of results, and the latter, in turn, is never greater than the Hartley (maximum) entropy, defined as the logarithm of the number of outcomes with non-zero probability. The randomness of a probability distribution is typically measured using min-entropy [8]. The min-entropy of a probability distribution X on {0, 1}^n is defined as

H_\infty(X) = -\log \Big( \max_{v \in \{0,1\}^n} \operatorname{Prob}[X = v] \Big).   (1)
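As a small illustration (ours; not part of the original paper), Eq. (1) can be estimated for an empirical distribution of observed raw samples in a few lines of Python:

import math
from collections import Counter

def min_entropy(samples):
    """Min-entropy H_inf = -log2(max_v Prob[X = v]), estimated from a list of observed values."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

# e.g. 4-bit strings read from a hypothetical raw generator
raw = ["0110", "0110", "1010", "0001", "0110", "1111", "0110", "1010"]
print(min_entropy(raw))   # 1.0: dominated by the most probable outcome "0110"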


High entropy does not always indicate that the resulting random sequence will be appropriate. Although there are techniques for identifying weak sources that can still be used in randomized algorithms, not all protocols function flawlessly with imperfect randomness. Many cryptographic protocols in particular, for operations like bit commitment, encryption, zero knowledge, or secret sharing, are insecure unless we employ an almost uniform random sequence. Some hardware random number generators combine randomness from many sources by performing a logical XOR (exclusive-OR) on their bits or by applying a cryptographic hash function to the strings.

One of the first examples of a randomness extractor is the von Neumann extractor. He suggested a straightforward debiasing technique: for each pair of generated bits, the results 00 and 11 are discarded, 01 is mapped to 0, and 10 is mapped to 1. If there is a systematic bias, this strategy eliminates it, but at the cost of discarding at least half of the pairs and reducing the bit rate to at most 25% of the input rate; the more biased the initial sequence, the more bits are thrown away [9]. It can be demonstrated that the von Neumann extractor produces a uniform output even when the distribution of input bits is not uniform, provided that each bit has the same probability of being one and that there is no correlation between succeeding bits. Of course, this approach has been improved and made more powerful. Refined versions of von Neumann's method reduce the amount of entropy that is discarded and achieve an efficiency close to the information-theoretic limit given by the source's Shannon entropy. Further adjustments lead to algorithms that construct unbiased sequences under more general conditions, for example when the input sequence is produced by a Markov chain.

Before we go into further detail about randomness extractors, we need to specify what qualifies as a nearly uniform result. The key notion for us is the distance between distributions. The statistical distance between two distributions X_1 and X_2 defined on the same finite alphabet A is

d(X_1, X_2) = \max_{a \in A} \left| P_{X_1}(a) - P_{X_2}(a) \right|.   (2)

This metric shows the largest variation in the probability of obtaining any specific result when the two distributions are compared. We say that X_1 and X_2 are ε-close if

d(X_1, X_2) \le \varepsilon.   (3)
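The following short sketch (ours; purely illustrative, in Python) implements the von Neumann debiasing step described above and uses the statistical distance of Eq. (2), here taken against the uniform distribution, to check how close the raw and the debiased bits are to uniform:

import random
from collections import Counter

def von_neumann(bits):
    """Discard the pairs 00 and 11; map 01 -> 0 and 10 -> 1."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)      # a=0,b=1 gives 0; a=1,b=0 gives 1
    return out

def distance_to_uniform(samples, alphabet):
    """d(X, U) = max_a |P_X(a) - 1/|A||, the statistical distance to the uniform distribution."""
    counts = Counter(samples)
    return max(abs(counts.get(a, 0) / len(samples) - 1 / len(alphabet)) for a in alphabet)

random.seed(1)
raw = [1 if random.random() < 0.8 else 0 for _ in range(100000)]   # biased source, Pr[1] = 0.8
debiased = von_neumann(raw)
print(distance_to_uniform(raw, [0, 1]), distance_to_uniform(debiased, [0, 1]))   # bias shrinks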

The goal of randomness extraction is to create a sequence that is as close to uniform as possible. This typically involves taking n bits of the unprocessed output values and converting them into a string of m bits whose distribution is ε-close to U_m, the uniform distribution on {0, 1}^m, where ε is as small as our needs dictate. The best extractors provide as many output bits as possible while using only a small amount of extra resources, such as extra computation time or randomness. The min-entropy of the distribution of the original sequence sets an upper bound on the number of bits that can be recovered: if we take an n-bit string from a raw bit sequence whose distribution X has min-entropy H_\infty(X) = b, we can obtain at most b nearly uniform random bits, regardless of the original length. A random source is called an (n, b)-source if it produces n bits whose distribution X has min-entropy H_\infty(X) = b.


Some simple sources may allow us to successfully use a deterministic extractor; unfortunately, for most imperfect sources it is necessary to take extra measures. When dealing with quantum sources, in many cases it is not enough to use one simple source. Also, because we deal with quantum operations, it is necessary to build on information-theoretically secure extractors, such as the Toeplitz-hashing extractor and Trevisan's extractor. In our case, we use a two-source extractor, in which we obtain information from two sources; if necessary, we can also use more sources for the extraction process. It is impossible to derive randomness from a single source of low quality, and even if two sources are used, at least one of them should be of higher than average quality. It is important for us to relax the quality requirements on the sources without compromising the security of the extractor. We go over the various techniques for creating a sequence of bits whose length is close to the min-entropy limit, and we cover the benefits and drawbacks of the various randomness extractor techniques. Our goal is a good-quality extractor that can withstand quantum attacks, does not impose too many constraints, and is not difficult to implement.

2 Literature Review

The authors of paper [2] study quantum computers. Cryptosystems based on the integer factoring problem can be cracked by quantum computers, which means that the RSA system, one of the most popular public-key cryptosystems, is vulnerable to attack by quantum computers. Several pseudo-random number generators are considered in [3–5], each of which employs a different technique to guarantee the randomness of the sequences and a greater level of security. The study [6] explains the various technologies in quantum random number generation and the various ways to use them to acquire entropy from a quantum origin. Random numbers are widely used in various applications. In their paper [7], the authors offer a general framework for assessing the quantum randomness of quantum random number generators and use it to compare two different quantum random number systems that already exist. The measurement of quantum randomness, several quantum random number generator types, their certification procedures, and various randomness extraction techniques are all covered in paper [8]. The paper [9] presents one of the first examples of a deterministic extractor, the von Neumann extractor, and proposes different variations of it. The paper outlines a design technique for lightweight post-processing modules, along with hardware implementations for debiasing random bit sequences. Based on the iterated von Neumann technique, it presents a method to maximize its effectiveness in applications with space and throughput restrictions. The resulting hardware modules can be used to postprocess raw data in random number generators. In [10–12] the authors present a study of efficient deterministic extractors. Until recently, only sources meeting strict independence requirements could use deterministic extraction techniques. Paper [10] examines sources that can be produced by an efficient sampling algorithm and looks for an efficient deterministic procedure that produces an output which is almost uniformly distributed. The existence of such deterministic extractors is examined by the authors.


Different seeded extractors, which are effective against quantum attacks, are introduced in [13–15]. Papers [17–21] describe the two-universal hashing extraction method, in which the hash values of high-entropy inputs are almost uniformly random. Two-universal hashing methods are capable of successfully extracting randomness from a weak source even in the presence of an adversary.

3 Randomness Extractors

Many fields of computer science benefit greatly from the use of randomized algorithms and protocols. These algorithms and protocols are frequently more effective than deterministic ones. Access to randomness is also necessary for cryptography. When designing randomized algorithms and protocols, it is assumed that computers have access to a stream of truly random bits (that is, a sequence of independent and unbiased coin tosses). In actual implementations, this sequence is created by sampling a "source of randomness." A weak source of entropy can be transformed into a nearly uniform bit generator using a technique known as a "randomness extractor." Although these functions were initially developed to analyze randomized algorithms, they have since grown to be an important tool in many theoretical computer science disciplines.

The typical definition of a randomness extractor is a method that turns an erratic source of data into a nearly uniform distribution. Randomness extractors and related concepts such as dispersers, condensers, and expander graphs have a variety of uses and appear in many areas related to pseudo random number generation, including error-correcting codes, samplers, expander graphs, and hardness amplifiers [10]. We explore some interesting extractors as well as some of the best extraction approaches for quantum random number generators. There are numerous ways to extract randomness, and the choice depends on the efficiency and accuracy of each approach. Before selecting an appropriate randomness extractor, we need to accurately characterize the entropy that is accessible to us, in order to have an efficient approach and keep as many bits as possible; otherwise the output values of the extractor function will not have the required properties. We therefore take it for granted that we have a precise characterization of the source of randomness: we assume that the raw sequence has a well-known min-entropy or, in some situations, at least well-known characteristics such as bit independence, or that it is obtained from a Markov process. By default we assume that we want an (n, m, b, ε)-extractor: a function that converts the n bits of an (n, b)-source into m output bits whose distribution is ε-close to uniform, with m as close as possible to b.

4 Deterministic Extractors

Deterministic extractors are functions

Ext : \{0,1\}^n \to \{0,1\}^m.   (4)


They take n-bit input strings from {0, 1}^n and give us m output bits. These algorithms are especially appealing because they are deterministic and simply need an input sequence to function. They do, however, have some restrictions that make them unsuitable for certain randomness sources. As with other extractors, extraction is only possible if the input sequence has sufficient intrinsic entropy: if the input sequence is an (n, b)-source, the min-entropy must be at least the output length, b ≥ m. Unfortunately, deterministic extractors exist only for a limited class of input distributions.

A simple argument shows that universal deterministic extractors are not viable; there is no single general deterministic extractor that would be appropriate for all forms of input distributions. Consider a function from {0, 1}^n to {0, 1}. We can collect all the n-bit input strings that give the value 0 into one set, Ext^{-1}(0), and those that give 1 into a second set, Ext^{-1}(1). At least one of them has size 2^{n-1} or more. The uniform distribution on that larger set has min-entropy at least n − 1, yet it always gives the same output value, which shows that there is no one-size-fits-all extractor valid for any type of input distribution. However, there are working extractors for the input distributions of specific families of processes that characterize acceptable sources. In addition, there are feasible deterministic extractors for bit-fixing sources, in which an adversary may set a fraction of the bits, with generalizations to affine sources and to sources whose output values are uniformly distributed over an unknown algebraic variety.

Another intriguing class of deterministic extractors, which somewhat deviates from the extractor equation given above, is that of variable-length deterministic extractors

Ext : \{0,1\}^n \to \{0,1\}^*.   (5)

Von Neumann's method, a deterministic technique that succeeds for an unknown distribution and yields an output whose length is not known before extraction, serves as an example of this kind. The only criterion of the von Neumann extractor outlined above is that each input bit must be independent of the bits before and after it. Modern iterations of von Neumann's method reduce the wasted entropy and achieve an efficiency close to the Shannon entropy, the theoretical upper bound of information theory. Further developments led to algorithms that create unbiased sequences under more general conditions, using a Markov chain as the input sequence. The greatest appeal of the original method is its simplicity: it requires very little computation, it can be implemented with very basic equipment, and it does not require a thorough understanding of the source distribution.

The original scheme, however, has some significant drawbacks. If an external attacker is able to change the bias even slightly from bit to bit, the von Neumann extractor no longer works. In fact, if the bias of the input bits changes so that the probability of finding 1 in the n-th bit depends on the value s of the preceding bits, there is no deterministic algorithm that yields a nearly uniform output from the n-bit variable X = (X_1, X_2, \ldots, X_n) satisfying

\delta \le P_{X_n}(1 \mid x_1 x_2 \cdots x_{n-1} = s) \le 1 - \delta,   (6)


where 0 < δ ≤ 1/2. Such a source is known as a Santha-Vazirani source. Santha and Vazirani described it as a model of weak random sources and gave evidence for the impossibility of a deterministic extractor [11]. Despite this restriction, there are deterministic algorithms that enable us to imitate randomized algorithms with a weak Santha-Vazirani source; for some purposes the requirements on the randomness are less strict than in cryptography, and in such circumstances weak sources that fail to yield nearly uniform outputs are still usable [12].

One weak source is insufficient for many cryptographic protocols, even when we implement a deterministic extractor. In contrast to the safe use of weak randomness in signature schemes, high-quality keys are necessary for encryption and other related protocols in order to prevent vulnerabilities. Santha and Vazirani provide an easy fix for devices where it is essential for the output values to be nearly uniform: combine the outputs of two weak, independent Santha-Vazirani sources in order to create an output sequence that cannot be distinguished from the uniform distribution by a polynomial-time method. As long as we have access to a physical process that creates some unpredictability, an efficient algorithm can thus produce bit strings that are impossible to differentiate from truly random strings. In many applications of randomness, including cryptography, this performs as well as true randomness.

The values used by each of the methods mentioned above come from the same source. Multiple-source extractors follow a different concept: they gather information from two or more weak sources, which can be combined to produce a nearly uniform sequence. There are a variety of approaches available, depending on the distribution of the particular input values, the number of sources, and the desired characteristics of the output sequence. A straightforward extractor that is valid for two independent weak sources, each producing an n-bit block, outputs the GF(2) inner product of the two blocks; this amounts to a bitwise AND of the two sequences followed by taking the parity of the result. The min-entropy of both sources should be at least n/2. Naturally, different kinds of randomness can be mixed in various other ways. Seeded extractors, which make up the second large class of randomness extractors, also combine sources. They can be viewed as a special case of multiple-source extractors in which one source is weak and the other is a perfectly uniform source that produces only a few bits.
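A short sketch (ours, in Python) of the two-source inner-product extractor just described: one output bit is the GF(2) inner product of an n-bit block from each source, i.e., the parity of the bitwise AND of the two blocks.

def inner_product_extract(block1, block2):
    """One output bit: GF(2) inner product of two equal-length bit blocks from independent sources."""
    assert len(block1) == len(block2)
    bit = 0
    for a, b in zip(block1, block2):
        bit ^= a & b               # AND the bits, then accumulate the parity (XOR)
    return bit

# e.g. n = 8 bit blocks taken from two independent weak sources
print(inner_product_extract([1, 0, 1, 1, 0, 0, 1, 0], [0, 1, 1, 0, 1, 0, 1, 1]))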

5 Seeded Extractors

Here we need a weaker concept of extractor. For many distributions of raw bits, a nearly uniform output can only be created by including some additional randomness. When deterministic extraction is insufficient, it may be necessary to supply a short additional sequence of true random bits. By utilizing a modest number of random bits as this additional input, we expect the output value of the extractor to be ε-close to the uniform distribution [13]. In order to extract pure randomness from a single weak source, seeded extractors thus employ an additional supply of truly random bits, the seed. It goes


without saying that this is most interesting when the seed is much shorter than the output extracted from the weak source. A seeded extractor is effective if the output values are nearly independent of the seed. A seeded extractor can be formalized as a two-source extractor in which the min-entropy of the first source equals its length. As we previously stated, for many distributions of input bits it is impossible to produce a uniform outcome without the assistance of additional randomness. A seeded extractor is a function

Ext : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m.   (7)

This function takes n bits of the raw sequence together with a uniformly random d-bit seed and generates m output bits. We assume that d is much smaller than the number of output bits. There is a guarantee that extractors with such seeds exist whose output length is nearly the maximum possible and whose output values are nearly uniform. These seeds play a role similar to the seeds of pseudo random number generators. A function is called a (b, ε)-extractor if it generates an output sequence that is ε-close to uniform for every input source whose raw sequence has min-entropy at least b. The seed serves as a catalyst, enabling us to identify universal strategies that will always be effective. A seeded extractor is thus characterized by the following values: input length n, output length m, seed length d, min-entropy threshold b, and extractor error ε. A good extractor should have a small seed length d and a large output length m.

Seeded extractors were initially developed in the framework of randomized algorithms. It has been demonstrated using probabilistic methods that extractors almost always exist which capture essentially all of the entropy hidden in the input b-source: for n-bit blocks of the b-source, we can construct extractors with output size m ≈ b + d that is ε-close to uniform, using seeds of length d of the order of log_2 n. These seeded extractors come in a variety of designs. The requirement for uniform seeds appears paradoxical: we demand the very resource we are attempting to generate. The requirements on the seed, however, are less restrictive than they appear. In several explicit extractors the seed length is logarithmic in the length of the input string. For a sufficiently small d, we can simply run the procedure for all 2^d possible seed values; taking a majority vote over the results is good enough for simulating a uniform source in randomized algorithms. However, in cryptography, where unpredictability is required, this strategy is obviously ineffective.

Seeded extractors can also protect us from external attackers in quantum random number generators. There are certain designs that have been demonstrated to be safe against a variety of quantum attackers. Trevisan's extractor is the first notable result. It sparked a lot of theoretical interest because of its small seed, but also because this extractor was shown to be protected against quantum adversaries [14]. The seed length of Trevisan's extractor is polylogarithmic in the length of the input, and it has been shown to be a strong extractor [15], so its random seeds can be reused. The latter property is also shared by popular universal hashing functions such as Toeplitz hashing [16]. The foundation of Trevisan's extractor is the Nisan-Wigderson pseudo-random number generator. This is comparable to a random function in which a small number


of bits define the truth table. The uniform random d-bit seed is expanded by this random function to produce the values used by the extractor, just as in a pseudo random number generator. A number of versions of Trevisan's extractor have been built for quantum random number generators and quantum key distribution. Their key benefit is that the required uniform random seed is only polylogarithmic in the size of the input blocks. However, because extraction requires computation, a real implementation can slow down the bit generation process. As we have said, Trevisan's extractor is secure against quantum opponents, protecting us from quantum attacks. It is also a strong extractor (its seeds are reusable), and the seed length depends poly-logarithmically on the input. A single-bit extractor and a combinatorial design are the two primary components of Trevisan's extractor.

The Toeplitz-hashing extractor has been used successfully in the privacy amplification step of quantum key distribution systems. This kind of extractor is likewise a strong one. Using the fast Fourier transform technique, the running time of the Toeplitz-hashing extractor can be improved to O(n log n). Two-universal hashing is the second approach. The Leftover Hash Lemma demonstrates that the output of a two-universal hash function applied to high-entropy input values is almost uniformly random [17]. Two-universal hashing methods are capable of successfully extracting randomness from a weak source even in the presence of an adversary. If we only have a reasonable approximation, or a conservative bound, on how strongly our weak random source is correlated with an eavesdropper, then we must use the generalization of the Lemma that employs conditional entropies and side information [18–21]. The side information can, in a broad sense, be quantum. For a quantum random number generator with technical noise, we can attribute to an adversary all the randomness that results from imperfections or that in any other way deviates from our model of the quantum system generating the raw bits. Under these circumstances, it is still possible to construct a seeded extractor that yields an almost uniform output [22–26]. These techniques are also used for privacy amplification in quantum key distribution.

Randomness extraction with two-universal or, more generally, n-universal hashing forces us to pick a somewhat long seed, comparable to the block size n, but the uniformly random seed is capable of being recycled: uniform seeds chosen at random can be used again [19, 27–30]. This method offers a fast extraction procedure that uses less computational power than Trevisan's extractor but requires larger seeds. Some implementations, such as hashing with a random binary Toeplitz matrix, are particularly efficient. Such an extractor is a rectangular matrix that multiplies n-bit vectors from the source to generate almost independent output bits. This method is applied in some commercial devices, which distribute a pre-computed random matrix that serves as the seed and include the extraction function as part of the device. Generating high-quality seed randomness is a challenging procedure, but it only needs to be done once, so long and simple approaches are acceptable, such as repeatedly XOR-ing the outputs of different independent generators [31–33].
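A minimal sketch (ours, using NumPy) of seeded extraction by hashing with a random binary matrix, one simple instance of two-universal hashing: the seed is the matrix itself, and the output length is chosen according to the Leftover Hash Lemma, m = b − 2 log2(1/ε). All concrete numbers below are illustrative assumptions.

import numpy as np

def hash_extract(raw_bits, seed_matrix):
    """Two-universal hashing with a random binary matrix: output = (M x) mod 2 over GF(2)."""
    return (seed_matrix @ raw_bits) % 2

rng = np.random.default_rng(7)
n, b, eps = 256, 100, 2**-20                 # raw length, assumed min-entropy, security parameter
m = int(b - 2 * np.log2(1 / eps))            # Leftover Hash Lemma output length: 60 bits here
seed = rng.integers(0, 2, size=(m, n))       # the (reusable) seed is the m-by-n random matrix
raw = rng.integers(0, 2, size=n)             # n raw bits assumed to come from an (n, b)-source
print(hash_extract(raw, seed))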


6 Novel Randomness Extractor

Theoretically, a quantum random number generator is capable of producing random numbers of acceptable randomness. In reality this is not the case, since the quantum signals, which are our source of actual randomness, are mixed with classical noise. To expose the actual randomness, it is therefore required to use additional processing. This process, known as randomness extraction, is carried out by randomness extractors. In other words, randomness extractors extract the actual randomness and eliminate the effects of classical noise.

We use multiple-source extractors, whose model is similar to that of single-source extractors. We gather information from two or more weak sources, which can be combined to generate a nearly uniform sequence. Depending on the distribution of the particular input values, the number of sources, and the desired properties of the output sequence, many approaches can be used. We take data from a number of weak sources, process them, and create a nearly uniform sequence. The two-source extractor attempts to extract true randomness from two independent weak sources. Consider a class X of random variables consisting of two independent random variables X_1 and X_2 such that H_\infty(X_1) \ge b_1 and H_\infty(X_2) \ge b_2. We use the function

Ext : \{0,1\}^{n_1} \times \{0,1\}^{n_2} \to \{0,1\}^m,   (8)

which is a two-source (b_1, b_2, ε)-extractor: for any independent random variables X_1 and X_2 with H_\infty(X_1) \ge b_1 and H_\infty(X_2) \ge b_2, Ext(X_1, X_2) is ε-close to the uniform distribution on m bits.

Toeplitz hashing and Trevisan's extractor are the two effective randomness extractors that we employ; both rest on strong information-theoretical foundations. Trevisan's extractor is resistant to quantum attacks and has a seed length that is poly-logarithmic in the length of the input. We are able to reuse random seeds because it is a strong extractor. Among the universal hashing algorithms, Toeplitz hashing offers the benefit of a reduced random seed length (the number of random bits needed to create a hashing function) and flexibility of hardware implementation. Toeplitz matrices are used to build universal hashing functions. A Toeplitz matrix of dimension n × m requires only its first row and first column to be supplied; the remaining matrix elements are obtained by moving diagonally down from left to right, so n + m − 1 random bits in total are required to create a Toeplitz matrix, and the output string is shorter than the random seed used to create the matrix. In the Toeplitz-hashing extractor, for raw data of size n with min-entropy b and a security parameter ε, the output length will be

m = b - 2 \log(1/\varepsilon).   (9)

The Toeplitz matrix is constructed from the (n + m − 1)-bit random seed, and the input data are multiplied by the Toeplitz matrix to obtain the extracted random bit string.

The error-correction algorithm and the combinatorial design are the two key components of our upgraded version of the Trevisan extractor. For the combinatorial design portion we employ a sophisticated Nisan-Wigderson design. The advantage is that the required uniform random seed grows only poly-logarithmically with the size of the input blocks. The bit generation process can, however, be slowed down by an actual implementation, because extraction requires computation. Even so, it helps us obtain better results in statistical testing. Although Trevisan's extractor is more secure than Toeplitz hashing, strict speed limits reduce the use of this extractor in real-time applications. Therefore, hashing can be used in speed-critical applications, and Trevisan's extractor in applications where enhanced security is required. In our case, we choose an intermediate option based on our usage needs. Furthermore, introducing parallelism allows us to use the extractors more efficiently, letting us perform multiple extractions in parallel.
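To illustrate the Toeplitz-hashing branch described above, here is a minimal sketch (ours, in NumPy; the sizes are illustrative, and m would in practice be chosen from Eq. (9)) that builds the binary Toeplitz matrix from an (n + m − 1)-bit seed and multiplies the raw bits by it over GF(2):

import numpy as np

def toeplitz_extract(raw_bits, seed_bits, m):
    """Toeplitz hashing: multiply n raw bits by an m-by-n binary Toeplitz matrix over GF(2).

    The matrix has constant diagonals and is fully determined by the n + m - 1 seed bits:
    T[i, j] = seed_bits[i - j + n - 1].
    """
    n = len(raw_bits)
    assert len(seed_bits) == n + m - 1
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    T = seed_bits[i - j + n - 1]             # build the Toeplitz matrix by diagonal indexing
    return (T @ raw_bits) % 2                # the extracted m-bit string

rng = np.random.default_rng(3)
n, m = 256, 64                               # illustrative sizes
raw = rng.integers(0, 2, size=n)             # raw bits from the quantum source
seed = rng.integers(0, 2, size=n + m - 1)    # reusable random seed defining the matrix
print(toeplitz_extract(raw, seed, m))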

7 Conclusion

Quantum random number generators generate genuine random numbers based on the intrinsic unpredictability of quantum measurement. Regrettably, in practice, quantum and classical randomness are always mingled due to classical noise, and the raw randomness is usually correlated and biased as well. In order to provide output values of high quality, with a distribution as close to uniform as practical, the resulting raw bit sequence must be processed; for this we need randomness extractors. Because the output randomness is mixed with classical noise, the generated data cannot pass statistical tests without further processing, and a uniform distribution cannot be obtained. This demonstrates the need for appropriate post-processing in quantum random number generators. The results produced by the extractors pass all standard statistical tests.

In this paper we explored different randomness extraction methods. First, a variety of deterministic extractors, which are secure against quantum attacks, is analyzed. After that, seeded extractors are addressed. They are built on the observation that a deterministic extractor is sometimes insufficient and needs to be supplemented with a short sequence of genuine random bits; even though only a few random bits are used as this extra input, the output of the extractor should be close to the uniform distribution. We used multiple-source extractors, whose model is similar to that of single-source extractors: we obtained results from several weak sources, processed them, and generated a sequence that is close to uniform. We used two strong randomness extractors, Toeplitz hashing and Trevisan's extractor, both of which are information-theoretically proven, and we combined them. We used random sources of average quality for the extraction process, but the quality requirement may be lowered. During an actual implementation our extractor may slow down, because extraction requires computation, but introducing parallelism allows us to use the extractors more efficiently and to perform multiple extractions in parallel. Using our extractor in parallel mode will increase its speed.


References 1. Kabiri Chimeh, M., Heywood, P., Pennisi, M., et al.: Parallelisation strategies for agent based simulation of immune systems. BMC Bioinform. 20, 579 (2019). https://doi.org/10.1186/s12 859-019-3181-y 2. Gagnidze, A., Iavich, M., Iashvili, G.: Novel version of merkle cryptosystem. Bul. Georgian Natl. Acad. Sci. 11(4), 28–33 (2017) 3. Lewis, P.A.W., Goodman, A.S., Miller, J.M.: A pseudo-random number generator for the system/360. IBM Syst. J. 8(2), 136–146 (1969). https://doi.org/10.1147/sj.82.0136 4. Lambi´c, D., Nikoli´c, M.: Pseudo-random number generator based on discrete-space chaotic map. Nonlinear Dyn. 90(1), 223–232 (2017). https://doi.org/10.1007/s11071-017-3656-1 5. Mcginthy, J.M., Michaels, A.J.: Further analysis of PRNG-based key derivation functions. IEEE Access 7, 95978–95986 (2019). https://doi.org/10.1109/ACCESS.2019.2928768 6. Herrero-Collantes, M., Garcia-Escartin, J.C.: Quantum random number generators. Rev. Mod. Phys. 89, 015004 (2016). https://doi.org/10.1103/RevModPhys.89.015004 7. Ma, X., Feihu, X., He, X., Tan, X., Qi, B., Lo, H.-K.: Postprocessing for quantum randomnumber generators: entropy evaluation and randomness extraction. Phys. Rev. A 87(6), 062327 (2013) 8. Ma, X., Yuan, X., Cao, Z., Qi, B., Zhang, Z.: Quantum random number generation (2016) 9. Roži´c, V., Yang, B., Dehaene, W., Verbauwhede, I.: Iterating von Neumann’s post-processing under hardware constraints. In: 2016 IEEE International Symposium on Hardware Oriented Security and Trust (HOST), pp. 37–42. IEEE (2016) 10. Trevisan, L., Vadhan, S.: Extracting randomness from samplable distributions. In: Proceedings 41st Annual Symposium on Foundations of Computer Science, pp. 32–42. IEEE (2000) 11. Santha, M., Vazirani, U.V.: Generating quasi-random sequences from semi-random sources. J. Comput. Syst. Sci. 33(1), 75–87 (1986) 12. Vazirani, U.V.: Towards a strong communication complexity theory or generating quasirandom sequences from two communicating slightly-random sources. In: Proceedings of the Seventeenth Annual ACM Symposium on Theory of Computing, pp. 366–378 (1985) 13. Raz, R.: Extractors with weak random seeds. In: Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing, pp. 11–20 (2005) 14. De, A., Portmann, C., Vidick, T., Renner, R.: Trevisan’s extractor in the presence of quantum side information. SIAM J. Comput. 41(4), 915–940 (2012) 15. Raz, R., Reingold, O., Vadhan, S.: Extracting all the randomness and reducing the error in Trevisan’s extractors. J. Comput. Syst. Sci. 65(1), 97–128 (2002) 16. Trevisan, L.: Extractors and pseudorandom generators. J. ACM 48(4), 860–879 (2001) 17. Stinson, D.R.: Universal hash families and the leftover hash lemma, and applications to cryptography and computing. Faculty of Mathematics, University of Waterloo (2001) 18. Tsurumaru, T., Hayashi, M.: Dual universality of hash functions and its applications to quantum cryptography. IEEE Trans. Inf. Theory 59(7), 4700–4717 (2013) 19. Qoussini, A.E., Daradkeh, Y.I., Al Tabib, S.M., Gnatyuk, S., Okhrimenko, T., Kinzeryavyy, V.: Improved model of quantum deterministic protocol implementation in channel with noise. In: 2019 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), vol. 1, pp. 572–578. IEEE (2019) 20. 
Hu, Z., Gnatyuk, S., Okhrimenko, T., Kinzeryavyy, V., Iavich, M., Yubuzova, K.: High-speed privacy amplification method for deterministic quantum cryptography protocols using pairs of entangled qutrits. In: ICTERI Workshops, pp. 810–821 (2019)


21. Gnatyuk, S., Okhrimenko, T., Azarenko, O., Fesenko, A., Berdibayev, R.: Experimental study of secure PRNG for Q-trits quantum cryptography protocols. In: 2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies (DESSERT), pp. 183–188. IEEE (2020) 22. Ehsan Ali, U.AMd., Emran Ali, Md., Sohrawordi, Md., Sultan, N.: A LSB based image steganography using random pixel and bit selection for high payload. Int. J. Math. Sci. Comput. 7(3), 24–31 (2021). https://doi.org/10.5815/ijmsc.2021.03.03 23. Sathe, M.T., Adamuthe, A.C.: Comparative study of supervised algorithms for prediction of students’ performance. Int. J. Mod. Educ. Comput. Sci. (IJMECS) 13(1), 1–21 (2021). https:// doi.org/10.5815/ijmecs.2021.01.01 24. Eljinini, M.A.H., Tayyar, A.: Collision-free random paths between two points. Int. J. Intell. Syst. Appl. (IJISA) 12(3), 27–34 (2020). https://doi.org/10.5815/ijisa.2020.03.04 25. Sinha, P.K., Sinha, S.: The better pseudo-random number generator derived from the library function rand() in C/C++. Int. J. Math. Sci. Comput. (IJMSC) 5(4), 13–23 (2019). https://doi. org/10.5815/ijmsc.2019.04.02 26. Shrimpton, T., Terashima, R.S.: A provable-security analysis of Intel’s secure key RNG. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015. LNCS, vol. 9056, pp. 77–100. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46800-5_4 27. Gnatyuk, S., Okhrimenko, T., Iavich, M., Berdibayev, R.: Intruder control mode simulation of deterministic quantum cryptography protocol for depolarized quantum channel. In: 2019 IEEE International Scientific-Practical Conference Problems of Infocommunications, Science and Technology (PIC S&T), pp. 825–828. IEEE (2019) 28. Gnatyuk, S., Zhmurko, T., Falat, P.: Efficiency increasing method for quantum secure direct communication protocols. In: 2015 IEEE 8th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), vol. 1, pp. 468–472. IEEE (2015) 29. Iavich, M., Kuchukhidze, T., Gagnidze, A., Iashvili, G.: Advantages and challenges of QRNG integration into Merkle. Sci. Pract. Cyber Secur. J. (2020) 30. Dachman-Soled, D., Gong, H., Kulkarni, M., Shahverdi, A.: Towards a ring analogue of the leftover hash lemma. J. Math. Cryptol. 15(1), 87–110 (2021) 31. Iavich, M., Gnatyuk, S., Odarchenko, R., Bocu, R., Simonov, S.: The novel system of attacks detection in 5G. In: Barolli, L., Woungang, I., Enokido, T. (eds.) AINA 2021. LNNS, vol. 226, pp. 580–591. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-75075-6_47 32. Iavich, M., Kuchukhidze, T., Gnatyuk, S., Fesenko, A.: Novel certification method for quantum random number generators. Int. J. Comput. Netw. Inf. Secur. 13(3), 28–38 (2021) 33. Iavich, M., Kuchukhidze, T., Iashvili, G., Gnatyuk, S.: Hybrid quantum random number generator for cryptographic algorithms. Radioelectron. Comput. Syst. (4), 103–118 (2021)

Post-quantum Scheme with the Novel Random Number Generator with the Corresponding Certification Method

Maksim Iavich(B)

Caucasus University, 1 Paata Saakadze Street, 0102 Tbilisi, Georgia
[email protected]

Abstract. For almost two decades, scientists have been working hard to create quantum computers. Google, together with the Universities Space Research Association and the federal agency NASA, cooperates with D-WAVE, which is considered a leader in the creation of quantum computers, and these organizations are now preparing for the transition to the quantum epoch. In October 2019 Google claimed quantum supremacy, which caused a lot of controversy; but considering that the tech giants are racing to create the first quantum computers, and that they have had reasonable success, the world may be on the verge of a new era. Google believes its current chip design could increase the capacity from 100 to 1000 qubits. IBM follows closely, claiming that it will build a quantum processor with more than a thousand qubits, including 10 to 50 logical qubits, by the end of 2023. To date, the company's quantum processors top out at 65 qubits; it planned to offer a 127-qubit processor in 2021 and a 433-qubit processor in 2022. As a result, quantum computers will be able to break the cryptographic codes that are used today to secure communication sessions and financial transactions in the banking sector. In addition, the digital signature systems used in practice are vulnerable to attack vectors designed on quantum computers, and the whole world must adopt post-quantum cryptography. The security of the digital signatures used in practice today is based on mathematical problems such as calculating discrete logarithms and factoring large numbers. Some commonly used cryptosystems, for instance RSA, are safe against attack vectors designed on classical computers but insecure against attack vectors designed on quantum computers. Scientists are working to create RSA alternatives that can be secure against attacks by quantum computers; hash-based digital signature schemes can be considered as such an alternative. The security of these schemes is usually based on the security of the underlying cryptographic hash functions. In this paper, we offer an improved hash-based digital signature scheme. The offered scheme is significant because it is more secure and more efficient than the classical construction. It uses a quantum seed and a secure hash-based PRNG, and the randomness of the seed is verified by a corresponding testing method. We generate one-time keys by means of a post-quantum secure pseudo-random number generator, and a quantum seed is generated as the seed for the PRNG. We offer a novel hybrid method of generating the quantum seed and a novel testing method to test the seed. The efficiency and security of the scheme are also studied in the paper. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Hu et al. (Eds.): CSDEIS 2022, LNDECT 158, pp. 76–88, 2023. https://doi.org/10.1007/978-3-031-24475-9_7


Keywords: Post-quantum scheme · RSA · Random number generator

1 Introduction

Nowadays the RSA cryptosystem is very common: it is used in many global organizations, for instance government agencies, financial organizations, big corporations, government research laboratories and other institutions. Besides, this cryptosystem is currently used in operating systems, various commercial products, hardware, network and smart cards, Ethernet, and is widely used in hardware with integrated cryptography. RSA BSAFE encryption technology has about 500 million users, and their number is increasing rapidly. The RSA algorithm is one of the most commonly used asymmetric cryptosystems [1–3]. The fact that breaking systems like RSA can lead to easy hacking of sensitive systems is very problematic [4, 5]. Scientists are working hard to create RSA alternatives that can be safe against the attacks of quantum computers. As an RSA alternative we can consider hash-based digital signature schemes, based on cryptographically secure hash functions; collision resistance of the hash function is the main guarantor of the security of these signature schemes. Although hash-based post-quantum alternatives exist, they still have various efficiency and security problems: when researchers improve the efficiency, they harm the security. The goal of this paper is to offer a comparatively efficient and still secure post-quantum alternative.

2 Literature Review

The hash-based one-time signature scheme designed by Lamport and Diffie is considered a working digital signature scheme for the post-quantum epoch [6]. Signature and key generation are rather efficient in this scheme, but the signature size is very large and equals n², where n denotes the size of the hashed message. An improvement of this problem was offered later: the modified one-time signature scheme of Winternitz significantly reduces the signature size by using one string of the key to sign several bits of the hashed message [7]. It must be mentioned that these one-time signature schemes cannot be used to sign a large number of messages, because a unique key pair is needed for each message. To solve this issue, the Merkle digital signature scheme uses a binary tree; by means of this tree it uses one public key instead of a large number of verification keys [8]. This public key is the root of the binary tree.

Key Generation: The tree height H ≥ 2 is selected; one public key can then sign 2^H documents. 2^H key pairs (Xi, Yi) are generated, where Xi is the signature key and Yi is the verification key; the values h(Yi) are computed and used as the leaves of the binary tree. Each node in the tree is the hash value of the concatenation of its children, for example

a[1, 0] = h(a[0, 0] || a[0, 1])    (1)


The public key of the Merkle crypto scheme is the root of the binary tree, pub; to generate it, 2^H pairs of one-time keys must be computed.

Signature Generation: The message m can be of arbitrary size; it is transformed to a fixed size n by hashing, h(m) = hash. The one-time signature is generated using an arbitrary one-time key Xarb; this can be done using the Lamport or Winternitz signature scheme. The final signature of the document is the concatenation of the one-time signature, the index arb, the one-time verification key Yarb, and all sister (authentication) nodes authi along the path of the one-time verification key:

Final signature = (sig || arb || auth0, . . . , authH−1 || Yarb)    (2)

Signature Verification: To verify the whole signature, the one-time signature is first verified using the one-time verification key. If it is verified successfully, all the needed nodes a[i, j] are computed using the sister nodes, Yarb and the index arb. Finally, if the computed root of the tree is equal to the corresponding public key, then the document's signature is correct. The authors of [9–11] offer different improvement mechanisms by means of the integration of a pseudo-random number generator; pseudo-random number generators are widely used in computer science [12–14].
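A minimal sketch of the Merkle key-generation step just described, assuming SHA-256 as the hash function h and toy one-time verification keys; the names and values are illustrative, not those of the original scheme.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(verification_keys):
    """Build the Merkle tree over the hashed verification keys and return its root."""
    level = [h(y) for y in verification_keys]          # leaves: h(Y_i)
    while len(level) > 1:
        # each parent node is the hash of the concatenation of its two children
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Toy example with H = 2, i.e. 2**H = 4 one-time key pairs:
Y = [f"one-time-verification-key-{i}".encode() for i in range(4)]
pub = merkle_root(Y)        # the public key of the scheme
print(pub.hex())
```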

3 Improvements

In order to generate the public key, it is necessary to calculate and afterwards store 2^H pairs of one-time keys. Saving this amount of data is not efficient in real-life applications. To save space, it was suggested to generate the keys by means of a pseudo-random number generator (PRNG): only the seed of the PRNG has to be stored, and this seed is used to generate the one-time keys. The one-time keys need to be generated only twice, first in the key generation phase and a second time in the message signing phase. The PRNG takes a seed of length n and outputs another seed and a random number of the same length n:

PRNG: {0, 1}^n → {0, 1}^n × {0, 1}^n    (3)

Key Generation Using PRNG: The seed seed0 of length n is chosen randomly; from seedi we derive soti as follows:

PRNG(seedi) = (soti, seedi+1), 0 ≤ i < 2^H    (4)

soti changes every time the PRNG is launched. To calculate Xi, it is enough to know only seedi. The work of the PRNG is illustrated in Fig. 1:


Fig. 1. The work of PRNG

Signature generation and verification occur in the same way as in the standard version of the Merkle cryptosystem. Quantum computers can break many types of PRNGs, even those that were considered secure against attacks by classical computers. A polynomial-time quantum attack on the Blum–Micali PRNG has been demonstrated, although this generator is considered secure against attacks by classical computers; the attack uses Grover's algorithm together with the quantum discrete logarithm and can recover the values at the generator's output. Attacks like this pose a serious threat to the PRNGs used in many widely deployed cryptosystems. Because of this, the Merkle cryptosystem with an integrated PRNG can be considered vulnerable to attacks by quantum computers, and it is therefore essential to choose a PRNG that is resistant to such attacks. In this work we suggest using Hash_DRBG, which is a NIST standard. Hash_DRBG needs a truly random seed, and we propose to use a quantum random seed; there exist various efficient ways to obtain quantum seeds.
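The seed-chaining idea of Eq. (4) can be sketched as follows, with SHA-256 standing in for the full Hash_DRBG construction; this shows only the data flow and is not the NIST-standard generator itself.

```python
import hashlib

def prng(seed: bytes):
    """One PRNG step: map seed_i to (sot_i, seed_{i+1}), both n = 32 bytes long."""
    sot = hashlib.sha256(b"out" + seed).digest()         # value used to derive X_i
    next_seed = hashlib.sha256(b"seed" + seed).digest()  # carried to the next step
    return sot, next_seed

def derive_sots(seed0: bytes, count: int):
    values, seed = [], seed0
    for _ in range(count):
        sot, seed = prng(seed)
        values.append(sot)
    return values

# For H = 2 the signer needs 2**H = 4 one-time keys but stores only seed0,
# which in the proposed scheme would be the (tested) quantum seed.
seed0 = bytes(32)
sots = derive_sots(seed0, 4)
print(len(sots), sots[0].hex()[:16])
```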

4 Quantum Random Number Generators

4.1 Optical Quantum Random Number Generators

Although randomness is the cornerstone of cryptography, we still do not have an ideal random number generator, so pseudo-random number generators are still being used. Some of them do not give the desired level of randomness at all, while others meet certain criteria and produce cryptographically secure random numbers, which is why they are called cryptographically secure pseudo-random number generators (CSPRNGs) [12–15]. It is not easy to create a true random number generator. Physical random number generators and quantum random number generators can be used as seed sources for cryptographically secure pseudo-random number generators, but problems can occur here as well, since some attacks specifically target TRNGs (essentially the random input source). Thus, sooner or later, the question of true randomness, which can be guaranteed only by processes that are inherently random, comes onto the agenda.


A quantum random number generator (QRNG) is considered one such source: any quantum process that collapses a superposition can become a source of true randomness. The most practical QRNGs to date are realized in photonic systems, since the optical field can be described at the quantum level in terms of photons. Of the various options for a quantum state, the most useful descriptions of the quantum state of light in random number generators are the Fock and coherent states. The Fock state |n⟩ is the state in which n photons have the same frequency, polarization, temporal profile and common path. A coherent state can be written as a superposition of Fock states:

|α⟩ = e^(−|α|²/2) Σ_{n=0}^{∞} (α^n/√n!) |n⟩    (5)

Here α is a complex number, n is the number of photons, and |α|² is the mean number of photons in the state. Faint laser light is close to a coherent state, so a laser at low intensity can be used to approximate a single-photon state. In most cases we only need to produce uncorrelated photons. This is possible with different technologies; the most popular detectors are single-photon avalanche photodiodes (SPADs), superconducting nanowire detectors and photomultiplier tubes (PMTs), which are usually limited in their ability to count photons. Improved photon-number-resolving detectors exist, but they are expensive, which is why most applications use a binary (click/no-click) method to detect photons. Another limitation of such detectors is the so-called dead time, the time it takes to recover after a photon detection.

4.2 Time of Arrival Quantum Random Number Generators

After detecting the photons, several methods can be used to generate random bits. Usually a time-based QRNG has a faint photon source and is equipped with a detector and timing circuits that record the exact time each photon was detected, or the time between clicks; it does not take long to receive one or more photons [16]. The detector intermittently receives photons from the LED. For a coherent laser source, the waiting time between detections is exponentially distributed, with the mean determined by the average number of photons received per second. The difference between two exponential random variables gives the time between two detections. To obtain a uniform random bit, we compare the times t0 and t1: if t1 > t0 we assign 1, and if t0 > t1 we assign 0.

4.3 Photon Counting Generators of Quantum Random Numbers

Instead of measuring times, some generators count photons: generating random numbers requires counting the photons registered in a fixed time T. For exponentially distributed arrival times, the number of photons arriving in a fixed time T follows the Poisson distribution [17], so the probability of detecting n photons in the time interval T is

Pr(n) = ((ψT)^n / n!) e^(−ψT)    (6)
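A toy simulation of this photon-counting approach, assuming an ideal Poissonian source with mean ψT photons per window; the parameter values are illustrative only.

```python
import numpy as np

def counting_bits(mean_photons: float, n_pairs: int, rng) -> np.ndarray:
    """One bit per pair of counting windows: compare the two photon counts."""
    n0 = rng.poisson(mean_photons, n_pairs)   # photons counted in the first window
    n1 = rng.poisson(mean_photons, n_pairs)   # photons counted in the second window
    keep = n0 != n1                           # equal counts carry no information
    return (n0[keep] > n1[keep]).astype(np.uint8)

rng = np.random.default_rng(1)
bits = counting_bits(mean_photons=5.0, n_pairs=100_000, rng=rng)
print(bits.size, bits.mean())                 # surviving bits are unbiased by symmetry
```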


Such an approach is used by a number of generators that compare counts over time: if we obtain n0 and n1 photons in the first and second measurement, we generate 1 if n0 > n1 and 0 if n0 < n1, so one bit is generated per pair of measurements. This photon-count comparison method is characterized by higher entropy. A few generators assign more than one bit according to the detected photon number; the possible outcomes are grouped into sets of equal probability, which requires additional processing.

4.4 Attenuated Pulse Quantum Random Number Generators

There are attenuated pulse quantum random number generators that have no special features but can still achieve the desired results. Most single-photon detectors today have a limited ability to count the number of photons and give a binary click/no-click response. Photon-counting methods are usually based on multiple clicks over long intervals of time, which the detector divides into small intervals [18]. An attenuated-pulse OQRNG uses a weak light source for which photon generation and non-generation are equally probable. The single-photon state is

(|0⟩1 + |1⟩1)/√2    (7)

In the case of a detection we assign 1, and if no photon is detected we assign 0; we do not care how many photons were involved.

4.5 Self-testing for the Quantum Random Number Generators

Most QRNG models are imperfect or do not fully describe the random source; problems can arise, for example, when a photon hits a beam splitter. In theory everything is ideal: an ideal random bit is possible because there is a 50% probability of transmitting the beam and a 50% probability of reflecting it. In practice, however, detectors, lasers and beam splitters always have imperfections, and their characteristics also depend to some extent on environmental conditions. It is therefore necessary to check the quality of the random numbers generated in physical generators, and there are various methods for doing so; self-test approaches are directly related to the quantum properties of the random number generator. In the classical case, the obtained data can be validated through NIST and Diehard randomness tests. It is also possible to configure a QRNG in such a way that the randomness does not depend on assumptions about the physical implementation, so that true randomness can be certified from the observed statistics even without fine-tuning the implementation. The QRNG self-test framework is based on testing Bell's inequality, independently of the device, by observing quantum entanglement or non-locality. Even if the randomness in the output is mixed with classical noise, it is still possible to obtain a lower bound on the true randomness based on the observed non-locality. QRNGs of this type offer self-testing of randomness; however, their production rate is usually very low, since a self-tested QRNG has to display non-locality.
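As a simplified illustration of bounding the usable randomness, the sketch below estimates the per-bit min-entropy of a raw bit stream under an i.i.d. assumption; a real self-test would instead derive the bound from the observed Bell-inequality violation, as discussed above.

```python
import numpy as np

def min_entropy_per_bit(bits: np.ndarray) -> float:
    """Naive i.i.d. estimate H_min = -log2(max(p0, p1)) of the per-bit min-entropy.

    A genuine self-test would bound the adversary's guessing probability from the
    observed non-locality, independently of the device model.
    """
    p1 = float(bits.mean())
    return -np.log2(max(p1, 1.0 - p1))

rng = np.random.default_rng(2)
raw = (rng.random(100_000) < 0.55).astype(np.uint8)   # slightly biased toy source
print(round(min_entropy_per_bit(raw), 3))             # close to -log2(0.55) ≈ 0.862
```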


4.6 Device-Independent QRNGs

There is a still more radical approach, in which the results are evaluated solely on the basis of the output, with no attention paid to the processes going on inside the device [19, 20]. Device-independent certification can prevent data leakage even when an attacker can manipulate the device and control its output; without such certification, an attacker with access to the device would be able to produce numbers that pass all the tests without any problem.

4.7 Other Forms of Quantum Certification

Instead of Bell's inequality, it is possible to build certified QRNGs based on other quantum experiments. From the Kochen–Specker theorem we conclude that under some conditions the predictions of quantum mechanics do not match any hidden-variable model. Contextuality in quantum mechanics is related to non-commuting observables: there is no pre-defined model, and the main focus is on the measurement sequence [20]. RNGs based on contextuality testing provide access to quantum randomness. In this case we are still dealing with untrusted devices: we hope that the RNG manufacturer is trustworthy, but we admit that the device may be defective or incorrectly designed. The contextuality test determines whether the origin of the bits is a quantum source.

5 Methodology

5.1 Novel Hybrid QRNG

Our goal is to obtain random numbers quickly and cheaply while ensuring a high level of randomness. True randomness is achieved by the collapse of quantum processes; only the generation rate depends on the capabilities of the detector. We suggest a refined QRNG that relies on an arrival-time QRNG. In the best case we can obtain only one random bit from each photon detected, and side effects such as detector inefficiency or dead time reduce this further. The rate of RNGs is usually measured in Mbps, but for fast applications like QKD this is not sufficient. When multiple detectors are used to obtain more random bits, a certain bias arises due to the detectors' different performance; this bias can be ruled out by comparing successive detection times using a single detector, and it is useful to use simple detectors with lower requirements. We also suggest using the method of attenuated-pulse QRNGs: a weak-light-source OQRNG in which the state of a single photon, under conditions of equal probability of producing or not producing a photon, is

(|0⟩1 + |1⟩1)/√2    (8)

In the case of a detection, 1 is assigned, and if no photon is detected, 0. Regardless of the number of photons involved, the superposition is written as

(1/√2)|0⟩1 + Σ_{c=1}^{∞} αc |c⟩1    (9)


We act on the first click, and it does not matter how many photons were involved. To avoid bias, the detector should see an appropriate average number of photons. We can remove the remaining bias using Von Neumann extraction: for two detection windows with photon numbers n0 and n1, the output is 1 if n0 > 0 and n1 = 0, and 0 if n0 = 0 and n1 > 0. If two clicks occur in a row, or two empty windows occur in a row, the result is discarded. For a Poissonian source the probability of each accepted pair is Pr(n > 0) · Pr(n = 0) = e^(−ηψT) (1 − e^(−ηψT)). The resulting bits are obtained about four times more slowly, but without bias. To improve efficiency we propose to exploit the Poisson statistics: a few random bits are generated for each photon detected, and the events unfold independently during operation. Generators of this type are photon-counting quantum random number generators; outcomes of equal probability are divided into groups, and a single detector is sufficient. The photon arrival time is a quantum random variable: with the help of a counter running in parallel with the detector, the detection time can be divided into time bins, and the detection time interval of each photon gives us a few bits. To increase efficiency and obtain more random numbers, we propose to conduct measurements in a multidimensional quantum space, for example in photonic time and spatial modes. When measuring the photon arrival time we obtain several bits by registering two events within the interval t, so registering one photon yields several random bits. Based on the spatial mode, random numbers can be produced in parallel across a detector matrix. The dead time can limit the speed of the counter, so care must be taken to improve detection rates; this gives us the possibility to choose the number of bits used from the counted number of photons and to obtain high randomness.

5.2 Novel Semi Self-testing Method

It is impossible to achieve true randomness based only on classical procedures, so we rely on quantum processes. There are several categories of quantum random number generators. We first analyzed self-testing, device-independent QRNGs, which have the advantage of self-certified randomness but whose generation rate is usually very low, since they must demonstrate non-locality. The second category includes device-dependent QRNGs, whose components are fully trusted and which, when properly configured, can achieve high output rates; however, if an adversary can control the device, the result can no longer be considered random. Analyzing these methods, we can say that in a real implementation any intermediate certification method based on certain verified characteristics is acceptable. By combining the two approaches, device-independent and self-testing QRNGs, we obtain a partially device-independent, semi-self-testing generator. We suggest using a QRNG with semi-self-testing features, which combines self-test functions with device-independent QRNG functions. It is also possible to use self-testing in a QRNG designed to operate on a single-photon polarization superposition

ψ = (|H⟩ + |V⟩)/√2    (10)


Or on an entangled state:

ψ = (|H⟩1|V⟩2 + |V⟩1|H⟩2)/√2    (11)

The QRNG applies the principle of path branching. In theory the probability of transmitting a photon at a beam splitter is 50% and so is the probability of reflecting it, but in practice the situation is different, because detectors, lasers and beam splitters have imperfections and their operation also depends on the surrounding conditions. When a photon arrives at the beam splitter, problems can arise: detector inefficiency, imbalance in the splitting process, source faults and many unknown sources of correlation. There are specific treatments for specific devices, but usually post-processing is then applied to correct the unequal distribution of probability. The polarizer outputs each photon with 50% probability, and theoretically the coincidence counter records a perfect anti-correlation. The device has a test phase: at this stage, on the basis of measurements, full tomography of the input states is performed, the main goal being to determine the two-dimensional matrix describing the two-level photonic system for one photon and, if we have two photons, to determine the effective two-dimensional Hilbert space. The generator takes the measurement results and determines the minimum possible entropy of the joint state of the user and the listener, H∞(p̂r), for the worst case p̂r over all possible states. Afterwards the bits are passed to a randomness extractor to produce an unbiased random string matching the available entropy. The described system can resist attacks in which the attacker controls the quantum state from which the entropy is derived. It is also possible to estimate the entropy through tomography in models where errors in execution are expected or irregularities may occur during operation. Such a model is described by a self-testing QRNG in which the quantum source of randomness is separated from the technical noise by means of a measurement witness:

ψ = (|H⟩1|V⟩2 + |V⟩1|H⟩2)/√2    (12)

Denote by pr(o | s, m) the conditional probability of obtaining the outcome o (±1) for the setting s = 0, 1, 2, 3; the measurement parameter m can be zero or one. There is another approach in which the uncertainty principle is applied, but in this case an attacker can obtain part of the information. We aim to generate random bits, but they must also be secure. For example, if we measure the polarization of a photon in the horizontal/vertical basis on an entangled state, we can obtain perfectly random numbers, but the attacker will be able to determine their sequence, since access to the second half of the entangled pair is open; as a result we get a sequence of bits that is uniform but not confidential. A good self-testing generator is needed to obtain a good result, for which we combine self-testing QRNGs with device-independent QRNGs. The latter are constructed from trusted devices and work fast if handled correctly, but when an opponent has access to the device, the result will not be random. The Clauser–Horne–Shimony–Holt (CHSH) formulation of the Bell inequality helps us overcome this problem. As a result of our observations on two similar devices we obtain two variables, s and m, from each module.


The variables can have binary values 0 and 1. The measurement taken with setting s gives a binary value a, and the measurement defined by m gives b. We are interested in the following correlation function:

I = Σ_{s,m} (−1)^{s·m} [Pr(a = b | s, m) − Pr(a ≠ b | s, m)]    (13)

Pr(a = b | s, m) and Pr(a ≠ b | s, m) are the probabilities of a = b and a ≠ b, and s and m are the measurement settings. For any local model I ≤ 2 must hold, so any value higher than 2 implies non-locality. We need to perform this experiment n times to be able to estimate the Bell inequality correctly. Each (s, m) measurement setting is generated by an independent and identically distributed probability Pr(s, m). The output of the n rounds is r = (a1, b1; …; an, bn), and the input is s = (s1, m1; …; sn, mn). The estimator Ĩ of the CHSH quantity is defined by

Ĩ = (1/n) Σ_{s,m} (−1)^{s·m} [N(a = b | s, m) − N(a ≠ b | s, m)] / Pr(s, m)    (14)

where N(a = b, s, m) is the number of rounds with setting (s, m) in which a and b take the same value, and N(a ≠ b, s, m) is determined in the same way.
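A small sketch of the estimator of Eq. (14), assuming uniformly chosen settings Pr(s, m) = 1/4 and measurement rounds supplied as (s, m, a, b) tuples; the data and names are illustrative, not those of an actual experiment.

```python
import numpy as np

def chsh_estimate(records) -> float:
    """Estimate the CHSH value from (s, m, a, b) rounds with uniform settings."""
    data = np.asarray(records)
    total = 0.0
    for s in (0, 1):
        for m in (0, 1):
            sel = data[(data[:, 0] == s) & (data[:, 1] == m)]
            if sel.shape[0] == 0:
                continue
            p_equal = np.mean(sel[:, 2] == sel[:, 3])
            corr = p_equal - (1.0 - p_equal)                  # Pr(a = b) - Pr(a != b)
            total += -corr if (s == 1 and m == 1) else corr   # sign (-1)^(s*m)
    return total

# Toy data: outcomes that are always equal give I = 2, the classical bound;
# values above 2 would indicate non-locality.
rng = np.random.default_rng(3)
rounds = []
for _ in range(10_000):
    s, m = int(rng.integers(0, 2)), int(rng.integers(0, 2))
    a = int(rng.integers(0, 2))
    rounds.append((s, m, a, a))
print(round(chsh_estimate(rounds), 3))
```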

6 New Scheme

Key Generation: The signer selects H ∈ N, H ≥ 2, which determines the number of key pairs to be generated. To generate the keys, the signer must first generate a seed value; we propose to use a quantum seed, generated using the new hybrid QRNG described above and checked using the new semi-self-testing method, also described above. The seed is passed to the Hash_DRBG pseudo-random number generator as input, and Hash_DRBG outputs the signature keys; in the next step the corresponding verification keys are created. As a result we obtain 2^H one-time key pairs (Xj, Yj), 0 ≤ j < 2^H, where Xj is the signature key and Yj the verification key. This gives us the opportunity to sign 2^H documents. The keys are bit strings. A hash function is used to hash each verification key, and in this way we obtain the leaves of the tree:

H: {0, 1}* → {0, 1}^n    (15)

Parent nodes are found by hashing the concatenation of the two child nodes: the value of a parent node is the hash of the concatenation of its two children. The public key is the root of the tree, pub, so we obtain the public key as the root of the MSS tree. After hashing the message, the signer obtains an n-bit digest, H(m) = hash. A single one-time key Xarb is used to sign the message; it is recomputed using the PRNG with the same seed derived from the new QRNG. In this step Hash_DRBG generates the signature keys again, taking as input the seed obtained from the attenuated-pulse generator, and the required signature key is extracted.


A signature consists of the one-time signature, the one-time verification key, the index arb, and all authentication nodes on the path of the selected key:

Signature = (sig || arb || Yarb || auth0, . . . , authH−1)    (16)

Signature verification is performed as follows: the one-time signature is verified using the verification key; if this check is successful, all required nodes are calculated using the authentication nodes auth, the key Yarb and the index arb. If the recomputed tree root and the public key match, then the signature is correct.
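A minimal sketch of this verification walk, complementing the key-generation sketch above; it assumes SHA-256 and that auth_i is the sibling node at height i, and it omits the one-time signature check itself.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_path(arb: int, Y_arb: bytes, auth: list, pub: bytes) -> bool:
    """Recompute the root from h(Y_arb) and the authentication path auth.

    auth[i] is the sibling node at height i; the bits of arb decide whether the
    current node is a left or right child at each level.
    """
    node, index = h(Y_arb), arb
    for sibling in auth:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == pub

# Toy check against a 4-leaf tree (H = 2), verifying the key with index arb = 2:
Y = [f"one-time-verification-key-{i}".encode() for i in range(4)]
leaves = [h(y) for y in Y]
level1 = [h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])]
pub = h(level1[0] + level1[1])
auth = [leaves[3], level1[0]]           # siblings of leaf 2 on the way to the root
print(verify_path(2, Y[2], auth, pub))  # True
```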

7 Results and Security

As a result, we offer a model of an improved post-quantum digital signature scheme based on the classic Merkle scheme. Into the scheme we integrate a hash-based PRNG with the novel hybrid quantum random seed; the novel semi-self-testing method is also integrated into the whole scheme. The Merkle algorithm works in much the same way as before, but the hash-based PRNG is integrated and the PRNG seed is generated using the quantum QRNG. The offered QRNG is secure, since it is a hybrid of standard secure approaches, but it is more efficient. The QRNG is checked by means of the semi-self-testing method; this testing method is also trusted, since it uses existing secure standard approaches but combines them to make the testing process more efficient. The PRNG we use is protected from the attacks of quantum computers and is a NIST standard. The seed is obtained using the QRNG, so the resulting seeds are certified. Therefore, the proposed Merkle-based scheme is safe.

8 Conclusion and Future Plans

The offered system can be used in the post-quantum epoch; it is efficient and has a security guarantee. The system contains four modules: the post-quantum digital signature, the pseudo-random number generator, the quantum seed and the testing method. In the future, it is planned to change the structure of the signature and to replace the Merkle tree with another, similar but more efficient primitive.

References 1. Kocher, P.C.: Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other systems. In: Koblitz, N. (ed.) CRYPTO 1996. LNCS, vol. 1109, pp. 104–113. Springer, Heidelberg (1996). https://doi.org/10.1007/3-540-68697-5_9 2. Wiener, M.J.: Cryptanalysis of short RSA secret exponents. IEEE Trans. Inf. Theory 36(3), 553–558 (1990). https://doi.org/10.1109/18.54902 3. Yu, H., Kim, Y.: New RSA encryption mechanism using one-time encryption keys and unpredictable bio-signal for wireless communication devices. Electronics 9, 246 (2020). https:// doi.org/10.3390/electronics9020246


4. Soni, K.K., Rasool, A.: Cryptographic attack possibilities over RSA algorithm through classical and quantum computation. In: 2018 International Conference on Smart Systems and Inventive Technology (ICSSIT), pp. 11–15 (2018). https://doi.org/10.1109/ICSSIT.2018.874 8675 5. Wang, Y., Zhang, H., Wang, H.: Quantum polynomial-time fixed-point attack for RSA. China Commun. 15(2), 25–32 (2018). https://doi.org/10.1109/CC.2018.8300269 6. Lamport, L.: Constructing digital signatures from a one way function. Technical report SRICSL-98, SRI International Computer Science L 7. Merkle, R.C.: A certified digital signature. In: Brassard, G. (ed.) CRYPTO 1989. LNCS, vol. 435, pp. 218–238. Springer, New York (1990). https://doi.org/10.1007/0-387-34805-0_21 8. Iavich, M., Iashvili, G., Bocu, R., Gnatyuk, S.: Post-quantum digital signature scheme for personal data security in communication network systems. In: Hu, Z., Petoukhov, S., He, M. (eds.) AIMEE 2020. AISC, vol. 1315, pp. 303–314. Springer, Cham (2021). https://doi.org/ 10.1007/978-3-030-67133-4_28 9. Buchmann, J., García, L.C.C., Dahmen, E., Döring, M., Klintsevich, E.: CMSS – an improved merkle signature scheme. In: Barua, R., Lange, T. (eds.) INDOCRYPT 2006. LNCS, vol. 4329, pp. 349–363. Springer, Heidelberg (2006). https://doi.org/10.1007/11941378_25 10. Iavich, M., Kuchukhidze, T., Gnatyuk, S., Fesenko, A.: Novel certification method for quantum random number generators. Int. J. Comput. Netw. Inf. Secur. (IJCNIS) 13(3), 28–38 (2021). https://doi.org/10.5815/ijcnis.2021.03.03 11. Iavich, M., Kuchukhidze, T., Okhrimenko, T., Dorozhynskyi, S.: Novel quantum random number generator for cryptographical applications. In: 2020 IEEE International Conference on Problems of Infocommunications. Science and Technology (PIC S&T), pp. 727–732 (2020). https://doi.org/10.1109/PICST51311.2020.9467951 12. Saravana Kumar, R., Manikandan, P.: Medical big data classification using a combination of random forest classifier and K-means clustering. Int. J. Intell. Syst. Appl. (IJISA) 10(11), 11–19 (2018). https://doi.org/10.5815/ijisa.2018.11.02 13. Akyol, K.: A study on test variable selection and balanced data for cervical cancer disease. Int. J. Inf. Eng. Electron. Bus. (IJIEEB) 10(5), 1–7 (2018). https://doi.org/10.5815/ijieeb.2018. 05.01 14. Abisoye, B.O., Abisoye, O.A.: Simulation of electric power plant performance using Excel®VBA. Int. J. Inf. Eng. Electron. Bus. (IJIEEB) 10(3), 8–14 (2018). https://doi.org/10.5815/iji eeb.2018.03.02Reference 15. Stefanov, A., Gisin, N., Guinnard, O., Guinnard, L., Zbinden, H.: Optical quantum random number generator. J. Mod. Opt. 47(4), 595–598 (2000). https://doi.org/10.1080/095003400 08233380 16. Ma, X., Yuan, X., Cao, Z., et al.: Quantum random number generation. NPJ Quantum Inf. 2, 16021 (2016). https://doi.org/10.1038/npjqi.2016.21 17. Rarity, J.G., Owens, P.C.M., Tapster, P.R.: Quantum random-number generation and key sharing. J. Mod. Opt. 41(12), 2435–2444 (1994). https://doi.org/10.1080/095003494145 52281 18. Yang, J., et al.: 5.4 Gbps real time quantum random number generator with simple implementation. Opt. Express 24, 27475–27481 (2016) 19. Wayne, M.A., Jeffrey, E.R., Akselrod, G.M., Kwiat, P.G.: Photon arrival time quantum random number generation. J. Mod. Opt. 56(4), 516–522 (2009). https://doi.org/10.1080/095003408 02553244 20. Tisa, S., Villa, F., Giudice, A., Simmerle, G., Zappa, F.: High-speed quantum random number generation using CMOS photon counting detectors. IEEE J. Sel. Top. Quantum Electron. 
21(3), 23–29 (2015). Art no. 6300107. https://doi.org/10.1109/JSTQE.2014.2375132


21. Li, Y.H., Han, X., Cao, Y., et al.: Quantum random number generation with uncharacterized laser and sunlight. NPJ Quantum Inf. 5, 97 (2019). https://doi.org/10.1038/s41534-0190208-1 22. Pivoluska, M., Plesch, M., Farkas, M., et al.: Semi-device-independent random number generation with flexible assumptions. NPJ Quantum Inf. 7, 50 (2021). https://doi.org/10.1038/ s41534-021-00387-1 23. Avesani, M., Marangon, D.G., Vallone, G., et al.: Source-device-independent heterodynebased quantum random number generator at 17 Gbps. Nat. Commun. 9, 5365 (2018). https:// doi.org/10.1038/s41467-018-07585-0

Prediction of UWB Positioning Coordinates with or Without Interference Based on SVM

Hua Yang, Haikuan Yang(B), Junxiong Wang, Dang Lin, and Kang Zhou

Institute of Mathematics and Computer, Wuhan Polytechnic University, Wuhan, China
[email protected]

Abstract. Through UWB positioning technology, accurate indoor positioning can be achieved. However, indoor obstructions reduce the positioning accuracy, and UWB cannot judge whether there is signal interference when collecting data; judging whether there is signal interference is therefore a key difficulty of accurate UWB positioning. In order to determine which data are collected under signal interference and which are collected without it, Python is used as a fast and effective programming tool to handle the preprocessing of the data set, and an effective algorithm based on the support vector machine is used to establish a binary classification prediction model that judges whether the collected data contain interference or not. Ultimately, 486 data samples were used to train the algorithm, and 162 samples were used to test whether there was any interference to the signal. Six prediction methods (the k-nearest neighbor algorithm, the light gradient boosting machine, the decision tree, naive Bayes, logistic regression, and a neural network) were compared with the support vector machine algorithm, and the results were visualized with Python and other tools. The F1-score, recall, accuracy, and AUC of the support vector machine model were the highest, and the model achieved good classification performance. After predicting whether the data contain interference or not, the data set can then be used with an appropriate algorithm to predict the coordinates of the target points.

Keywords: UWB · Python · Support vector machine · Artificial neural network · Light gradient boosting machine · Logistic regression · K-nearest neighbor algorithm · Decision tree

1 Introduction

UWB is the abbreviation of Ultra-Wideband [1]. This is a short-range wireless communication technology that completes data transmission by sending out nanosecond pulses without any carrier wave, and the power consumption during signal transmission is only a few tens of µW. UWB-based positioning technology has real-time indoor and outdoor accurate tracking capability and high positioning accuracy, which can reach centimeter-level or even millimeter-level positioning, and has been applied to accurate indoor positioning. In indoor positioning applications, UWB technology can achieve centimeter-level positioning accuracy (generally referring to 2D plane positioning) and has good anti-multipath interference and attenuation performance and strong penetration ability [2].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Hu et al. (Eds.): CSDEIS 2022, LNDECT 158, pp. 89–99, 2023. https://doi.org/10.1007/978-3-031-24475-9_8


Nevertheless, owing to the complex and variable indoor environment, UWB communication signals are very easily blocked. Although UWB technology has penetration ability, it still produces errors: under strong interference the data fluctuate abnormally, indoor positioning basically fails, and this can even cause fatal accidents. When UWB positioning is performed under signal interference, it is not known whether the collected data set contains signal interference or not, so judging whether there is signal interference has become an urgent problem to be solved in UWB positioning. Li Nan et al. [3] studied a UWB-based three-dimensional positioning and optimal filtering method suitable for indoor positioning and established a UWB-based three-dimensional positioning model. Liu Qi et al. [4] analyzed the influence of linearization error on the positioning accuracy of a UWB system. Ding Yanan [5] studied the mainstream positioning technologies and their solutions in indoor environments, compared their advantages and disadvantages, and gave the reasons for choosing ultra-wideband positioning technology. Li Huan et al. [6] proposed a binary classification algorithm for multiple observation samples based on SVM, aimed at the classification problem of multiple observation samples. Chen Yu [7] noted that for indoor positioning systems based on Bluetooth Low Energy (BLE), machine learning algorithms such as the multilayer perceptron (MLP) used as positioning algorithms give insufficient positioning accuracy, and proposed an indoor positioning method based on LSTM (long short-term memory) networks that uses time-domain information in the positioning process to improve the accuracy. Wu Chao et al. [8] put forward an RFID indoor positioning algorithm based on a BP neural network, which introduces reference tags to assist positioning. Classification models in machine learning are generally divided into two categories, linear models and nonlinear models: logistic regression, for example, is a linear model, while deep neural networks (DNN), decision tree regression and the light gradient boosting machine are nonlinear models. The support vector machine (SVM) is a generalized linear classifier that performs binary classification of data according to supervised learning; it has obvious advantages when dealing with a small amount of data and works well on binary classification problems. Based on this, in this study the distances between the four anchor points and the target points and the coordinates of the target points are measured with UWB technology without knowing whether there is signal interference or not. Data processing is carried out first, whether there is signal interference or not is taken as the dependent variable, and a two-class prediction model of whether there is signal interference is established with the support vector machine algorithm.

2 Problem Modeling

2.1 Support Vector Machine

The support vector machine is a two-class classification model. It differs from the perceptron in that its fundamental model is the linear classifier with the largest margin in the feature space. This technique has clear advantages when working with small data sets [11].


The support vector machine is used in this investigation to predict whether interference exists or not. The procedure is:

1) The null values and missing values in the data set are preprocessed, and the relevant variables are normalized.
2) The training and test sets are separated: 70% of the data set is randomly selected as the training set used to fit the model parameters, and 30% is randomly selected as the test set used to assess the model's generalization ability.
3) The SVM algorithm is used to build the classification model and classify the data set; the label is 1 when the signal is interfered with and 0 when it is not.
4) Precision, accuracy, recall, F1-score and AUC are used to evaluate the performance of the model.

2.2 Evaluating Indicators

According to the prediction results of the model, the precision, accuracy, recall, F1-score and AUC are used for comparison. Their specific meanings are as follows.

2.2.1 Accuracy

Accuracy is the proportion of correctly classified samples among the total number of samples. Its calculation formula is:

Accuracy = (TP + TN) / (TP + FP + TN + FN)    (1)

Here TP (True Positive) denotes true positive examples, FP (False Positive) denotes false positive examples, TN (True Negative) denotes true negative examples, and FN (False Negative) denotes false negative examples.

2.2.2 Precision

Table 1. Confusion matrix

Forecast category    Real category
                     1     0
1                    TP    FP
0                    FN    TN

Basically, a threshold is applied to the probability value predicted by the model: if the probability exceeds the threshold, the sample is predicted to be 1 (positive), otherwise it is predicted to be 0 (negative). TP (True Positive) in the table is a true positive example, meaning that the predicted value is 1 and the true value is also 1, i.e. the prediction is correct.


FP (False Positive) is a false positive example, meaning that the predicted value is 1 while the true value is 0, i.e. the prediction is wrong. TN (True Negative) is a true negative example, meaning that both the predicted value and the true value are 0. FN (False Negative) is a false negative example, meaning that the predicted value is 0 while the true value is 1 (Table 1). Precision is the proportion of true positive samples among the samples judged positive by the classifier, that is, how many of all the samples judged positive by the classifier are truly positive. Its formula is given in (2):

Precision = TP / (TP + FP)    (2)

Here Precision denotes the precision, TP the true positive examples, and FP the false positive examples.

2.2.3 Recall

Recall is the proportion of positive samples correctly identified by the classifier among all positive samples, that is, how many of the positive samples are judged positive by the classifier. The formula is:

Recall = TP / (TP + FN)    (3)

Here Recall denotes the recall, TP the true positive examples, and FN the false negative examples.

2.2.4 F1-Score

The F1-score is an index used in statistics to measure the accuracy of binary classification models; it takes both the precision and the recall of the classification model into account. The F1-score can be viewed as the harmonic mean of the model's precision and recall, balancing the two, and its value lies between 0 and 1. The formula is:

F1-score = 2 × (Precision × Recall) / (Precision + Recall)    (4)

Here Precision denotes the precision and Recall denotes the recall.

2.2.5 ROC Curve

The ROC curve is the receiver operating characteristic curve. As shown in Fig. 1, the horizontal axis represents the FP rate and the vertical axis the TP rate; the ROC curve plots the TP rate against the FP rate at different classification thresholds. Lowering the classification threshold causes more samples to be classified as positive, thus increasing the number of both false positives and true positives. One of the characteristics of the ROC curve is that it can remain unchanged even when the distribution of positive and negative samples changes.


The TP rate and FP rate are calculated as follows:

TPR = TP / (TP + FN)    (5)

FPR = FP / (FP + TN)    (6)

Fig. 1. TP rate and FP rate under different classification thresholds

2.2.6 AUC

AUC is the area under the ROC curve. In many cases the ROC curve alone cannot clearly show which classifier is better, whereas AUC is a single numerical value: the larger its value, the better the classifier. Suppose we have a set of labeled samples containing positive and negative samples. After training with the machine learning algorithm, the probability value assigned to each sample is its score, and AUC is calculated as follows:

AUC = (Σ_{i ∈ positiveClass} rank_i − M(M + 1)/2) / (M × N)    (7)

Here i denotes the i-th sample, i ∈ positiveClass means that the sample is a positive sample, and rank_i is the rank of the i-th sample when all n samples are ordered by their probability value (score). M is the number of positive samples and N is the number of negative samples.
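A short sketch of how the evaluation indices above can be computed with scikit-learn; the labels and scores below are placeholders, not the paper's experimental data.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Placeholder labels and scores standing in for real test-set results.
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.6, 0.8, 0.4, 0.3, 0.4, 0.7])  # predicted P(interference)
y_pred  = (y_score >= 0.5).astype(int)                         # classification threshold 0.5

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_score))   # area under the ROC curve
```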

3 Methods and Analysis

3.1 Data Preprocessing

Most of the initially collected data are normal, but some variables have null values, missing data, etc., so it is necessary to preprocess the data before model prediction. Different preprocessing methods are used for different kinds of abnormal data.


1) If an entire row of the collected data is null, we delete the sample.
2) If a variable has many missing values, we delete the whole column for that variable.
3) If there are individual null values in a row or a column, the null value is replaced by the mean of the variable or according to the actual meaning of the variable.

3.2 Build Classification Model

3.2.1 Selection of Prediction Model

1) Comparison of prediction models. Predicting whether the signal is subject to interference or not is a typical binary classification problem, in which multiple variables jointly affect the prediction result. After preprocessing the data, we use different models for prediction, observe the effects of the different prediction models, and select the best algorithm. Machine learning algorithms are capable of efficiently organizing and fitting parameters, so we employ machine learning to address this nonlinear modeling problem. Machine learning focuses primarily on how computers replicate or realize human learning ability in order to acquire new knowledge and skills, reorganize the existing knowledge structure, and continuously improve their own performance. SVM is a machine learning technique based on statistical learning theory, introduced in 1995 by Cortes et al. It can be used for various machine learning tasks, such as function fitting [9], and possesses a number of distinct advantages when handling small-sample, binary, nonlinear and high-dimensional pattern recognition problems; it is particularly effective for binary classification. On the basis of the characteristics of the data set, the support vector machine algorithm is chosen to construct a binary classification model of whether the signal contains interference or not.
2) Comparative experimental method. The choice of algorithm determines the quality of the results, and a single algorithm alone cannot demonstrate the quality of the prediction. Therefore, this paper selects multiple algorithms to build the binary classification model of whether the signal is interfered with and carries out a comparative analysis. Comparison with logistic regression, the k-nearest neighbor algorithm, the decision tree, the light gradient boosting machine, naive Bayes and artificial neural network algorithms shows that the SVM model has a better prediction effect than the other models.

3.2.2 Model Framework

After the model is determined, we create, train and verify the binary classification model of whether there is signal interference in three steps [10].


1) Preprocess the collected original data set, including normalization and data set division.
2) Put the training data into the prediction model for training, adjust the model parameters and select an appropriate number of training rounds. When the error is reduced to an acceptable level, predict the test set and obtain the classification result for each test sample.
3) Train and test the seven models. The prediction results are compared with the original labels, the evaluation parameters such as precision, accuracy, recall, F1-score and AUC are obtained using the corresponding formulas, and the results are analyzed and presented [11].

3.2.3 Data Analysis and Preprocessing
We get 648 sample data after dimensionality reduction of the data sets. Every sample embodies 7 variables and a label of whether the signal has interference or not. Since the data are real distance samples obtained with TOF technology and are provided in the form of txt files, we need to convert and clean the data [12]. It is also necessary to carry out appropriate analysis and preprocessing in the light of the data characteristics and the input formats of the subsequent models.
1) Data missing value processing
We fill the missing values in each column of the data set with the average value of that column, so as not to affect the prediction of the model.
2) Similarity matrix
The similarity matrix between the different variables is shown in Fig. 2; a pandas sketch of these two steps follows the figure.

Fig. 2. Correlation among variables
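Purely as an illustration of the preprocessing rules of Sect. 3.1 and of items 1) and 2) above (the file name and column handling are assumptions, not the paper's actual pipeline), the steps can be expressed with pandas:

```python
import pandas as pd

df = pd.read_csv("uwb_distances.csv")            # assumed file name

# Sect. 3.1, rule 1): drop samples whose entire row is null
df = df.dropna(how="all")

# Sect. 3.1, rule 2): drop variables with too many missing values
# (here: columns with less than half of their values present)
df = df.dropna(axis=1, thresh=len(df) // 2)

# Sect. 3.1, rule 3) / item 1) above: fill remaining nulls with the column mean
df = df.fillna(df.mean(numeric_only=True))

# item 2) above: similarity (correlation) matrix between variables, cf. Fig. 2
print(df.corr(numeric_only=True).round(2))
```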


3) Normalization treatment
Dimensionless processing refers to converting data of different scales to the same scale. Common dimensionless methods include standardization and normalization [13]. The premise of standardization is that the feature values obey a normal distribution; after standardization, the feature values conform to the standard normal distribution. Normalization uses the information of the boundary values to scale the value range of a feature to a specific interval. Here, we scale the range to [0, 1]. The formula is as follows:

x_normalization = (x − x_min) / (x_max − x_min)   (8)

where x_normalization indicates the normalized value, x indicates the variable value, x_min is the minimum value of this variable in the sample and x_max is the maximum value of this variable in the sample [14].
4) Data set division
Among the samples of the data set, 70% are put into the algorithm as a training set, and the remaining 30% are used as a test set to evaluate the strengths and weaknesses of the algorithm [15].
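A minimal sketch of steps 3) and 4) with scikit-learn (the array names X and y are placeholders for the prepared features and labels; the scaler implements formula (8) feature by feature and is fitted on the training part only):

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# X: feature matrix of the 7 variables, y: interference labels (assumed prepared)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, test_size=0.3, random_state=0)

scaler = MinMaxScaler()              # scales each feature to [0, 1], i.e. formula (8)
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```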

4 Simulation

4.1 ROC Curve of SVM Algorithm
See Fig. 3.

Fig. 3. Evaluation index diagram of seven models


Fig. 4. Comparison of experimental results

4.2 Error Comparison Between Models
The line charts of precision, accuracy, recall, F1-score and AUC of the different models are depicted in Fig. 4. It can be seen from the figure that KNN has the lowest evaluation indexes; the accuracy and AUC of the decision tree, light gradient boosting machine, naive Bayes and artificial neural network are relatively low; and the indexes of logistic regression and the support vector machine are relatively high. Among them, the indexes of the support vector machine are the highest, with a recall rate of 0.69048 and an AUC of 0.62729. The experimental results of the seven prediction models are listed in Table 2, which reports the precision, accuracy, recall, F1-score and AUC of each method.

Table 2. Statistical table of experimental results

Algorithm        Precision   Accuracy rate   Recall rate   F1-score   AUC
LR               0.61111     0.65217         0.53571       0.58824    0.61401
KNN              0.37654     0.4             0.40476       0.40237    0.37546
Decision tree    0.56173     0.59701         0.47619       0.5298     0.56502
LGBM             0.53704     0.56164         0.4881        0.52229    0.53892
Naive Bayes      0.46296     0.48052         0.44048       0.45963    0.46383
Neural network   0.50617     0.4875          0.46429       0.47561    0.46932
SVM              0.62963     0.63043         0.69048       0.65909    0.62729

The prediction results of each algorithm are displayed visually. In practical terms, we put the training data set into every model for training. We then put the test data set into


the trained model and compare the final result with the label values to compute the precision, accuracy, recall rate, F1-score and AUC of each model. The closer each of the five evaluation indexes is to 1, the better the prediction result of the model. All experimental results are shown in Fig. 4 and Table 2, and the collation map of each evaluation index is given in Fig. 4. In order to compare the prediction results of the models more intuitively, we use the different models as the abscissa and the corresponding values of each evaluation index as the ordinate, plot these values for each model, and analyze the training and testing results by showing the binary classification prediction effect of each model. As can be seen from the chart, the prediction effect of the K-nearest neighbor algorithm is far inferior to that of the other prediction models. Because of the small amount of data, the time taken by the models for prediction does not differ much, so running time is not compared here. The experimental results further verify that the prediction effect of the K-nearest neighbor and naive Bayes algorithms is not particularly good, while the prediction effect of the support vector machine is relatively good. The AUC and recall rate of the binary classification model predicted by this algorithm are 0.62729 and 0.69048, and every evaluation index is the highest among all models, which makes it clearly superior to the other models. The experimental results further verify that the SVM model performs well on the binary classification model of signal interference.
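As a sketch of the comparison pipeline described above (the hyperparameters shown are library defaults or simple choices, not necessarily those used by the authors; X_train, X_test, y_train and y_test are assumed to come from the split of Sect. 3.2.3):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from lightgbm import LGBMClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "Decision tree": DecisionTreeClassifier(),
    "LGBM": LGBMClassifier(),
    "Naive Bayes": GaussianNB(),
    "Neural network": MLPClassifier(max_iter=1000),
    "SVM": SVC(probability=True),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    y_prob = model.predict_proba(X_test)[:, 1]   # probability of the positive class
    print(f"{name}: precision={precision_score(y_test, y_pred):.5f} "
          f"accuracy={accuracy_score(y_test, y_pred):.5f} "
          f"recall={recall_score(y_test, y_pred):.5f} "
          f"F1={f1_score(y_test, y_pred):.5f} "
          f"AUC={roc_auc_score(y_test, y_prob):.5f}")
```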

5 Summary
This paper studies the binary classification problem of judging whether there is interference in the data set of positioning coordinates collected with TOF technology. After processing, cleaning and normalizing the measured data set of distances between the target point and the anchor points, seven prediction algorithms, namely SVM, the K-nearest neighbor algorithm, logistic regression, the light gradient boosting machine, a decision tree, naive Bayes and a neural network, are used for prediction. The comparison shows that the SVM algorithm has the highest precision, accuracy, recall rate, F1-score and AUC when predicting whether the signal is subject to interference, and the model fits well. The model built in this experiment provides a solution to the problem of detecting signal interference in data set measurement during TOF ranging. It makes it possible to clearly judge whether there is signal interference in the measured data set before further coordinate prediction, which is conducive to solving the problem of ultra-wideband (UWB) precise positioning, improving the accuracy of UWB precise positioning, and promoting the application of UWB precise positioning in practice.

Acknowledgment. This paper was supported by the following projects: 1) Wuhan Polytechnic University School-level Teaching and Research Project (Grant No. 2020Y20), Hubei Postgraduate Workstation Project (Grant No. 8021341), Ministry of Education Industry-University Cooperation Project (Grant No. 202101130033).


2) Name of scientific research project established by the university: multi-objective intelligent optimization algorithm and its application in pork storage and transportation, No. 2020Y20.
3) Name of the school-established teaching and research project: the construction of an artificial intelligence technology innovation talent training program, No. XM2021015.
4) Name of school-enterprise cooperation project: Research on Application of Artificial Intelligence and Big Data Analysis System, No. whpu-2021-kJ-762 (8021341).
5) Name of school-enterprise cooperation project: production and operation information management platform for Qianjiang shrimp rice supply chain traceability, No. whpu-2021-kJ-1145.
6) Name of school-enterprise cooperation project: R&D of packaging equipment for small granular materials or powder materials, No. whpu-2022-kJ-1586.
7) Hubei Provincial Teaching and Research Project Name: Research on Assessment and Evaluation of Higher Mathematics Teaching Process after the Cancellation of the Qingkao System, No. 2018368.

References
1. Zhang, D., Niu, R., Liao, C.: Design of indoor positioning system. Sci. Technol. Innov. (28), 7–9 (2021)
2. Yang, G., Zhu, S., Li, Q., Zhao, K., Zhao, J., Guo, J.: An indoor cooperative positioning algorithm for firefighters based on UWB. Comput. Eng. Appl. 58(04), 106–117 (2022)
3. Nan, L., Bing, L.: 3D positioning and optimal filtering method based on UWB. J. Hebei Univ. (Nat. Sci. Ed.) 41(03), 329–336 (2021)
4. Liu, Q., Gao, C., Shang, R.: Analysis of the influence of linearization error on the positioning accuracy of UWB system. Surveying Mapp. Eng. 30(03) (2021)
5. Yanan, D., Zhang, X., Xu, L.: Overview of indoor positioning technology based on UWB. Intell. Comput. Appl. 9(05), 91–94 (2019)
6. Huan, L., Shitong, W.: Multi-observation sample binary classification algorithm based on support vector machine. J. Intell. Syst. 9(04), 392–400 (2014)
7. Chen, Y., Jiqing, Q., Wenjing, T., Ying, Z., Kexue, S.: Design and implementation of indoor positioning system based on LSTM. Electron. Meas. Technol. 44(19), 161–166 (2021)
8. Chao, W., Lei, Z., Chang, K.: Research on RFID indoor location algorithm based on BP neural network. Comput. Simul. 32(07), 323–326 (2015)
9. Haiyan, W., Jianhui, L., Fenglei, Y.: Review of support vector machine theory and algorithm. Comput. Appl. Res. 31(05), 1281–1286 (2014)
10. Hoang, M.T., Yuen, B., Dong, X., et al.: Recurrent neural networks for accurate RSSI indoor localization. IEEE Internet Things J. 6(6), 10639–10651 (2019)
11. Zhu, Y., Luo, H., Zhao, F., et al.: Indoor/outdoor switching detection using multisensor DenseNet and LSTM. IEEE Internet Things J. 8(3), 1544–1556 (2020)
12. Rigopoulos, G.: Assessment and feedback as predictors for student satisfaction in UK higher education. Int. J. Mod. Educ. Comput. Sci. (IJMECS) 14(5), 1–9 (2022). https://doi.org/10.5815/ijmecs.2022.05.01
13. Ojo, J.S., Ijomah, C.K., Akinpelu, S.B.: Artificial neural networks for earth-space link applications: a prediction approach and inter-comparison of rain-influenced attenuation models. Int. J. Intell. Syst. Appl. (IJISA) 14(5), 47–58 (2022). https://doi.org/10.5815/ijisa.2022.05.05
14. Dhaliwal, P., Sharma, S., Chauhan, L.: Detailed study of wine dataset and its optimization. Int. J. Intell. Syst. Appl. (IJISA) 14(5), 35–46 (2022). https://doi.org/10.5815/ijisa.2022.05.04
15. Yang, H., Yang, H., Wang, J., Zhou, K., Cai, B.: RON loss prediction based on model of light gradient boosting machine. In: Hu, Z., Gavriushin, S., Petoukhov, S., He, M. (eds.) Advances in Intelligent Systems, Computer Science and Digital Economics III. LNCS, vol. 121, pp. 187–199. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-97057-4_17

Analysis and Comparison of Routing and Switching Processes in Campus Area Networks Using Cisco Packet Tracer

Kvitoslava Obelovska(B), Ivan Kozak, and Yaromyr Snaichuk

Lviv Polytechnic National University, S. Bandera Street, 12, Lviv 79013, Ukraine
[email protected]

Abstract. The routing process enables the exchange of data on a network and is one of the most important components that determine network performance. The authors analyzed the operation of Campus Area Networks (CAN) using routers and switches as data communication equipment. The former organize the forwarding process at the third (network) layer of the network architecture, and the latter at the second (data link) layer. The Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF) protocols were chosen as network layer routing protocols. The time indicators obtained by simulating the operation of the network with these protocols were compared with the time indicators obtained by replacing Layer 3 routing with Layer 2 switching. The results of the experiments showed that replacing the routing process with the switching process makes it possible to reduce the packet delivery time, and as the CAN size increases, the positive effect of replacing third-layer routing with second-layer switching grows. The study was conducted using the Cisco Packet Tracer simulator.

Keywords: Routing · Routing protocols · Layer 2 switching

1 Introduction
The modern development of society is characterized by an intensive growth of data exchange flows and of the requirements for their transfer. Different types of data require different quality of service; for example, multimedia data is time critical. In turn, the time indicators of data transmission networks depend on many factors, in particular on the method of data transmission, the method of determining transmission paths, the algorithms for calculating the shortest paths, the criteria for their optimization, the parameters used in the calculation of transmission paths, as well as the paths themselves. Research, analysis and improvement of more effective routing methods are therefore urgent tasks. Many works are devoted to the problem of routing, both of a general nature, in which, for example, static and dynamic routing are compared, and works where specific issues are considered. These can be an analysis and comparison of specific routing protocols, or solutions for improving the routing process in different types of networks. Examples of such applications are underwater acoustic sensor networks [1], mobile ad hoc networks


[2, 3], Smart Grid neighborhood networks [4], Internet of Things networks (IoT) [5], and Wireless Sensor Networks [6, 7]. Our work is dedicated to studying and improving the process of transferring data between remote nodes in Campus Area Networks (CAN). A campus area network is larger than a Local Area Network (LAN), smaller than a Metropolitan Area Network (MAN), and obviously smaller than a Wide Area Network (WAN). A CAN is a computer network that connects local area networks within a limited geographic area. Network equipment and transmission facilities are usually owned by the campus owner. The purpose of this work is to increase the efficiency of Campus Area Networks by improving the time indicators of the network. The main contributions of this article can be summarized as follows:
• A research methodology for comparing routing and switching processes in campus area networks is proposed;
• It is shown that replacing routing using the OSPF and EIGRP protocols with Layer 2 switching improved the time performance of the investigated network by an amount that depends on the protocol and network configuration;
• It is also shown that the effect of replacing routing with Layer 2 switching depends on the size of the network.

2 State-of-the-Art
The architecture of computer networks can be represented as a set of layers. Each of these layers is responsible for certain functions, the realization of which requires resources, including time. When transferring time-critical data, it is important that the total time spent at all layers stays within acceptable values. Many researchers use an architectural approach when studying various functions of systems and networks [8, 9]. This is especially true for the network layer when studying the influence of routing protocols on the time characteristics of the network. The choice of routing protocols can optimize the time indicators and thereby reduce the total message delivery time. Routing protocols were analyzed in many articles. Researchers studied various protocols, paying the most attention to the Open Shortest Path First (OSPF) protocol, the Routing Information Protocol (RIP) and the Enhanced Interior Gateway Routing Protocol (EIGRP) [9–12], using various simulators such as OMNET++, GNS3, Cisco Packet Tracer and Riverbed Modeler [11–14]. Thus, in work [15], the use of the Riverbed Modeler simulator made it possible to study the behavior of protocols in networks of different topologies and at different data transfer speeds, and to analyze various indicators, including the most important ones: delay and bandwidth. Routing protocols were also compared among themselves [10–13]; for example, in [13], researchers observed and compared the performance of the routing protocols EIGRPv6, OSPFv3 and RIPng in an IPv6 network environment. In addition to the routing provided at the third, network, layer, our research object was also the second, data link, layer of the network architecture. Networking equipment at the network layer consists of routers, and at the data link layer of switches. Both routers and switches can be part of campus networks. Data received at a device input must be forwarded to one or more of its outputs for delivery to the


remote destination device(s). To calculate a route, routers use Internet Protocol addresses (IP addresses), which are third-layer addresses, while switches use second-layer Media Access Control addresses (MAC addresses). Routing protocols used today in IP networks are divided, depending on the algorithm used, into protocols that use:
• the Distance Vector Algorithm (DVA);
• the Link State Algorithm (LSA).
For the research, we took one protocol from each category: EIGRP from the category that uses the Distance Vector Algorithm and the OSPF protocol from the category that uses the Link State Algorithm. The Enhanced Interior Gateway Routing Protocol, developed by Cisco Systems, is a distance vector routing protocol but includes features of link-state routing protocols [11]. EIGRP is suited for many different topologies and media. The OSPF protocol is one of the most common routing protocols, especially in autonomous Internet systems, and most router manufacturers support it. The OSPF protocol ensures the transmission of packets along the shortest paths calculated according to a certain criterion; by default this is usually the bandwidth of the channel. The larger the bandwidth of the channel, the lower its cost, so the packet is sent along the path that provides the maximum possible bandwidth. Based on the tree of shortest paths, a routing table is built, which is stored in the memory of the routers and is used to select the next node on the path of the packet to the destination node. Typical routing algorithms also use the number of channels (hops) on the path between the source node and the destination node as a criterion for computing the tree of shortest paths. In the general case, it can be not only the bandwidth of the channel and the number of hops, but also other parameters, for example, distance, signal propagation delay, cost or reliability [15]. However, a route optimized for a certain criterion may not be optimal or even satisfactory for other criteria. If in specific cases there is a need to take into account more than one criterion at the same time, there are modified algorithms for constructing the shortest paths tree that allow several criteria to be taken into account simultaneously, for example, three [9, 16]. The integral criterion that is formed takes into account both each individual criterion and its importance. Routing protocols have been analyzed for their impact on various parameters; we focus on the effect of routing on time parameters. For specific scenarios, we compare the routing process based on the EIGRP and OSPF dynamic routing protocols with the switching process from the point of view of message delivery time. For this purpose, we conduct a ping test similar to what was done in [11] when studying routing protocols using the GNS3, ENSP and Packet Tracer network simulators. Ping is a utility to test the reachability of a destination host. Ping measures the Round-Trip Time (RTT) for messages sent from the source host to a destination computer that are echoed back to the source. The time between sending a request and receiving a response makes it possible to determine the two-way delays in the route. The most expedient and effective research of networks is carried out using their simulation. Therefore, it is important to create a model of an information system or network for its research and optimization [17]. The presence of the model allows the network to be investigated in different modes of operation and its parameters to be evaluated.
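The measurements in this paper are made with the Packet Tracer ping utility. Purely to make the round-trip-time idea concrete, the following small Python sketch times a TCP handshake to a host; this is an approximation of a one-packet round trip, not ICMP ping, and the address used is a documentation placeholder:

```python
import socket
import time

def tcp_rtt(host: str, port: int = 80, timeout: float = 2.0) -> float:
    """Rough RTT estimate: the time needed to complete a TCP handshake with the host."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return time.perf_counter() - start

print(f"RTT ~ {tcp_rtt('192.0.2.10'):.3f} s")   # placeholder address
```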


There are both commercial and free simulators that can be used to simulate how routing protocols work. Among them, in our opinion, the Cisco Packet Tracer simulator, which was chosen by us for research, deserves special attention. In this work, we chose two networks for research, one of which includes 3 subnets, the other - 6 subnets. In each of them, we set up the process of dynamic routing using the EIGRP and OSPF protocols, as well as the switching process.

3 Modeling and Results
The modeling process and its results are presented on the example of two Campus Area Networks:
• a small network, which includes three subnets and contains three routers and five switches;
• an extended network, in which the number of subnets is doubled and equal to six.
Two scenarios are considered:
• Scenario A, when the subnets are interconnected by routers;
• Scenario B, when the routers are replaced by switches.
Routers implement the third-layer functions of the network architecture, and dynamic routing protocols were configured for them using Cisco IOS operating system commands. For the switches present in these networks, the default parameters of the Cisco Packet Tracer environment were used. When implementing Scenario A, we determine the time indicators for the dynamic routing protocols EIGRP and OSPF; when implementing Scenario B, the time indicators of the switching method are determined. The time between sending a request and receiving a response, known as the Round Trip Time (RTT), is chosen as the time indicator. This allows us to determine the two-way delays in the route. We consider the solution with the shortest RTT to be the best.

3.1 Comparison of Time Indicators When Using EIGRP and OSPF Routing Protocols – Scenario A
Small Network. First, consider a small network (see Fig. 1) that combines three subnets and contains three routers and five switches.


Fig. 1. A small network topology for routing research

We will alternately examine the operation of the EIGRP and OSPF routing protocols, provided that the IPv6 protocol is used at the network layer. Accordingly, the Internet Control Message Protocol ICMPv6 was used to transmit requests to the remote node and receive responses. Figure 2 illustrates the Round Trip Time values for the Enhanced Interior Gateway Routing Protocol and the Open Shortest Path First protocol, and provides a visual representation for comparing them.

Fig. 2. Round Trip Time for a small network when using routing protocols EIGRP and OSPF

Extended Network. To conduct the next similar experiment, an extended network (See Fig. 3) consisting of 6 subnets was modeled.


Fig. 3. Extended network topology

The extended network contains six routers and six switches. Thirteen hosts are connected to the network as shown in the figure. The results of similar studies on the definition of RTT for the extended network when using EIGRP and OSPF routing protocols are shown in Fig. 4.

Fig. 4. Round Trip Time for extended network when using routing protocols EIGRP and OSPF

As can be seen from Fig. 4, different routing protocols also correspond to different RTT values. The round trip time for the network in which EIGRP is configured is 0.034 s. When using the OSPF protocol, the RTT value is 0.026 s, which is about 30% better than EIGRP.


3.2 Time Performance When Using Layer 2 Switching Technology – Scenario B
In the studies described above, we have shown the impact of routing protocols on the time performance of campus area networks and its dependence on the choice of routing protocol and network size. Our goal now is to evaluate the feasibility of using Layer 2 switching technology instead of routing technology in campus area networks. Therefore, in the networks that were modeled for RTT estimation with routing protocols (see Fig. 1 and Fig. 3), we replace the routers with Layer 2 switches and conduct a similar study.
Small Network. First, let us consider a small network (see Fig. 5) formed by replacing the three routers (Router0 – Router2) in the network of Fig. 1 with Layer 2 switches.

Fig. 5. A small network topology for switching research

The round trip times obtained in the network with switches are shown in Fig. 6. The same figure shows the results of a similar experiment when using routing with the OSPF and EIGRP protocols, which makes it possible to visually compare the time characteristics of routing and switching technology. With

Fig. 6. Round Trip Time for a small network when using Layer 2 switching


the Layer 2 switching technology, the RTT equals 0.020 s, while with routing it is 0.022 s when using the OSPF protocol and 0.026 s when using EIGRP. This means that in a network configured with Layer 2 switches, the Round Trip Time is lower than with the routing process: the routed RTT is 10% higher with the OSPF protocol and 30% higher with EIGRP.
Extended Network. Figure 7 shows an extended network formed by replacing the six routers (Router0 – Router5) in the network of Fig. 3 with Layer 2 switches.

Fig. 7. Extended network topology for switching research

The RTT values obtained in this network are shown in Fig. 8.

Fig. 8. Round Trip Time for the extended network when using Layer 2 switching

For comparison, the same figure shows the results of a similar experiment when using routing with the OSPF and EIGRP protocols in the extended network. With the


Layer 2 switching technology, the RTT equals 0.023 s, while with routing it is 0.026 s when using the OSPF protocol and 0.034 s when using EIGRP. This means that in this network configured with Layer 2 switches, the Round Trip Time is lower: the routed RTT is 48% higher with EIGRP and 13% higher with OSPF than with switching.

4 Comparison and Discussion
We modeled and investigated two networks (small and extended) that were configured for the switching process and for dynamic routing with the EIGRP and OSPF protocols. To visualize and analyze the obtained results, Table 1 summarizes the Round Trip Time values for the EIGRP and OSPF dynamic routing protocols and for the Layer 2 switching process.

Table 1. The Round Trip Time values for the EIGRP and OSPF protocols and the Layer 2 switching process.

Protocol                   Routing (Scenario A)          Layer 2 switching (Scenario B)
                           EIGRP        OSPF
RTT, seconds (3 Subnet)    0.026        0.022             0.020
RTT, seconds (6 Subnet)    0.034        0.026             0.023
K, %                       31           18                15

The summarized results demonstrate that the transition to the switching process in campus area networks reduces the round trip time: in the case of 3 subnets the routed RTT exceeds the switched one by 10% for OSPF and by 30% for EIGRP, and in the case of 6 subnets by 13% and 48%, respectively. Let us denote by K the coefficient of RTT increase when the network dimension grows from small to extended. Table 1 and Fig. 9 show the values of this coefficient for all three cases; a small check script reproducing them is given at the end of this section.

Fig. 9. The RTT increase coefficient for the EIGRP and OSPF protocols as well as for the switching process when the network size increases

As the network size increases, the RTT also increases, fastest for EIGRP (31%) and slowest for the switched case (15%).
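The values of K in Table 1 can be reproduced directly from the measured RTTs as the relative increase when going from the small to the extended network (a small check script over the data of Table 1, not part of the simulation itself):

```python
# RTT values from Table 1, in seconds
rtt = {
    "EIGRP":             {"small": 0.026, "extended": 0.034},
    "OSPF":              {"small": 0.022, "extended": 0.026},
    "Layer 2 switching": {"small": 0.020, "extended": 0.023},
}
for name, t in rtt.items():
    k = (t["extended"] - t["small"]) / t["small"] * 100
    print(f"{name}: K = {k:.0f}%")   # EIGRP: 31%, OSPF: 18%, switching: 15%
```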


5 Summary and Conclusion
A research methodology is proposed that allows the time indicators of a network to be evaluated and compared when using third-layer routing and Layer 2 switching of the network architecture. The Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF) protocols were chosen as third-layer dynamic routing protocols. The research was conducted using the Cisco Packet Tracer simulator. The results have shown that replacing Layer 3 routing with Layer 2 switching in campus networks can significantly improve their performance by reducing packet delivery times.

References
1. Deepanshu, Singh, B., Gupta, B.: An energy efficient optimal path routing (EEOPR) for void avoidance in underwater acoustic sensor networks. Int. J. Comput. Netw. Inf. Secur. 14(3), 19–32 (2022). https://doi.org/10.5815/ijcnis.2022.03.02
2. Poluboyina, L., Mallikarjuna Prasad, A., Sivakumar Reddy, V., et al.: Multimedia traffic transmission using MAODV and M-MAODV routing protocols over mobile ad-hoc networks. Int. J. Comput. Netw. Inf. Secur. 14(3), 47–62 (2022). https://doi.org/10.5815/ijcnis.2022.03.04
3. Debnath, S.K., Saha, M., Islam, M., et al.: Evaluation of multicast and unicast routing protocols performance for group communication with QoS constraints in 802.11 mobile ad-hoc networks. Int. J. Comput. Netw. Inf. Secur. 13(1), 1–15 (2021). https://doi.org/10.5815/ijcnis.2021.01.01
4. Mohammadinejad, H., Mohammadhoseini, F.: Proposing a method for enhancing the reliability of RPL routing protocol in the smart grid neighborhood area networks. Int. J. Comput. Netw. Inf. Secur. 11(7), 21–28 (2019). https://doi.org/10.5815/ijcnis.2019.07.04
5. Mohamed, S.I., Abdelhadi, M.: IoT bus navigation system with optimized routing using machine learning. Int. J. Inf. Technol. Comput. Sci. 13(3), 1–15 (2021). https://doi.org/10.5815/ijitcs.2021.03.01
6. Zagrouba, R., Kardi, A.: Comparative study of energy efficient routing techniques in wireless sensor networks. Information 12(1), 42 (2021)
7. Abidoye, A.P.: Energy efficient routing protocol for maximum lifetime in wireless sensor networks. Int. J. Inf. Technol. Comput. Sci. (IJITCS) 10(4), 33–45 (2018). https://doi.org/10.5815/ijitcs.2018.04.04
8. Trakadas, P., Sarakis, L., Giannopoulos, A., et al.: A cost-efficient 5G non-public network architectural approach: key concepts and enablers, building blocks and potential use cases. Sensors 21(16), 5578 (2021). https://doi.org/10.3390/s21165578
9. Liskevych, R.I., Liskevych, O.I., Obelovska, K.M., et al.: Improved algorithm for the packet routing in telecommunication network. Ukranian J. Inf. Technol. 3(1), 114–119 (2021). https://doi.org/10.23939/ujit2021.03.114
10. Hossain, M.A., Akter, M.: Study and optimized simulation of OSPFv3 routing protocol in IPv6 network. Glob. J. Comput. Sci. Technol. 11–16 (2019). https://doi.org/10.34257/gjcstevol19is2pg11
11. Thu, D.K.A.: Simulation of campus area network using routing protocol. Int. J. Adv. Res. Dev. (IJARnD) 3(9), 55–59 (2018). www.IJARnD.com
12. Gajendra, S., Binay, S.: Comparison of routing protocols in-terms of packet transfer having IPV6 address using packet tracer. Eng. Technol. Open Acc. 2(4), 555593 (2018). https://doi.org/10.19080/ETOAJ.2018.02.555593


13. Akter, M.S., Hossain, M.A.: Analysis and comparative study for developing computer network in terms of routing protocols having IPv6 network using cisco packet tracer. Softw. Eng. 7(2), 16–29 (2019). https://doi.org/10.11648/j.se.20190702.11
14. Warsame, M.A., Sevin, A.: Comparison and analysis of routing protocols using riverbed modeler. Sakarya Univ. J. Sci. 23(1), 16–22 (2019). https://doi.org/10.16984/saufenbilder.447345
15. Teslyuk, V., Sydor, A., Karovič, V., Pavliuk, O., Kazymyra, I.: Modelling reliability characteristics of technical equipment of local area computer networks. Electronics 10(8), 955 (2021). https://doi.org/10.3390/electronics10080955
16. Greguš, M., Liskevych, O., Obelovska, K., et al.: Packet routing based on integral normalized criterion. In: Proceedings of the 7th International Conference on Future Internet of Things and Cloud (FiCloud), Istanbul, Turkey, pp. 393–396 (2019). https://doi.org/10.1109/FiCloud.2019.00064
17. Kovtun, V., Izonin, I., Gregus, M.: Model of information system communication in aggressive cyberspace: reliability, functional safety, economics. IEEE Access 10, 31494–31502 (2022). https://doi.org/10.1109/ACCESS.2022.3160837

A Parallel Algorithm for the Detection of Eye Disease

Lesia Mochurad(B) and Rostyslav Panto

Lviv Polytechnic National University, S. Bandera Street, 12, Lviv 79013, Ukraine
[email protected]

Abstract. Over the past few years, the growth of eye diseases among people has become exponential. Eye problems affect not only the elderly but also the younger generation, yet detecting these diseases at an early stage remains a major challenge. Diagnosing an eye disease at an early stage will therefore be useful for the decision-making process of the medical system and help save human vision, since some diseases in their late stages lead to blindness. In addition, eye disease is increasing rapidly worldwide, even when the growth of the population in recent decades is taken into account. This, in turn, leads to a rapid increase in the amount of medical data, and this growth in the healthcare industry means that artificial intelligence will be used much more often in this field. At the same time, for artificial intelligence methods to work effectively, not only their accuracy but also their speed of execution must be high. In this work, a parallel algorithm based on a combination of the Adaptive Boosting and Bagging algorithms is proposed for determining the risk of eye disease in patients, and a study of the efficiency of the improved algorithm is conducted. Parallelization is implemented using ThreadPoolExecutor through a thread pool and CUDA technology; both the training process and the training sample are parallelized. To investigate the accuracy of the classification model, a data set containing symptoms that affect the risk of vision disease was used, and a theoretical assessment of the computational complexity of the proposed algorithm is given. It was possible to achieve an acceleration of more than 6 for the CPU and more than 8 for the GPU, while the accuracy of the model was approximately 98% according to the Accuracy metric and 87% according to the F1-score.

Keywords: Artificial intelligence · Machine learning · CUDA technology · Adaptive Boosting algorithm · Bagging algorithm

1 Introduction
Nowadays, the medical field is increasingly relying on computer technology, and artificial intelligence (AI) is being used for diagnosis in a large number of projects. In medical diagnostics, an accurate diagnosis is very important for choosing the right therapy at an early stage, but in many cases it is very difficult for an expert to determine the patient's condition. With the help of clinical records, machine learning (ML) methods can be used


for descriptive analysis of clinical features. ML algorithms are widely used in the diagnosis of various diseases, such as diabetes, heart problems, cancer, the psychophysical state of a person, and COVID-19 [1–5]. The use of AI in medicine has a number of significant advantages: it reduces human errors, saves time and costs, and improves the pace of service delivery. Moreover, ML methods repeatedly show better accuracy compared to medical personnel. The basis of medical diagnosis is the problem of classification: diagnosis can be reduced to the problem of mapping data to one of N different results [6, 7]. Medical data classification tasks have their own difficulties, one of which is the amount of input data. The collection of medical data grows every day with the number of people and patients and can comprise tens and hundreds of thousands or millions of data units and tens or hundreds of fields, and the development of an optimal sequential algorithm that can satisfy both the time cost and the accuracy is extremely difficult, and sometimes an impossible task [8]. Therefore, the use of parallel algorithms is becoming more relevant. The latter can significantly speed up data processing, i.e. bring classification results to the desired accuracy faster than sequential ones. Eye injuries and eye diseases are one of the most important medical and social problems in all countries of the world [9]. About 5% of cases of incapacity for work of the working population of the planet are due to occupational diseases and eye injuries. According to the International Agency for the Prevention of Blindness, around 285 million people worldwide suffer from visual impairment, of which 39 million are affected by blindness. About 90% of people suffering from visual impairment live in low-income countries. According to experts, 82% of people suffering from blindness are in the age group of 50 years and older. 19 million children in the world suffer from visual impairments, of which 12 million have refractive errors. The main causes of visual impairment are: uncorrected refractive errors (myopia, farsightedness or astigmatism) – 43%; cataract – 33%; glaucoma – 2% [10]. The purpose of this work is to develop a parallel algorithm that combines the Adaptive Boosting and Bagging algorithms to achieve greater stability and higher accuracy, speed up execution and reduce computational complexity, based on the use of modern parallel programming technologies, for eye disease risk classification problems. The object of the study is the Adaptive Boosting and Bagging algorithms. The subject of the research is the process of combining these two algorithms and the parallelization methods. In our work, we used a medical dataset and Data Mining for its processing. A medical dataset has a very large number of parameters with indicators of research, reviews, surveys, etc. For our classification task, that is, the task of determining the patient's risk of an eye disease, the main thing is to determine the target parameter for classification, that is, the risk level itself, and the fields that correlate with this parameter, i.e. the results of examinations. The level parameter must be divided into specific categories or classes. Parameters such as age, sex, genetic affiliation, average blood pressure, etc. are also important for this type of disease [11]; therefore, such parameters must be included in the classification.
The classification task considered in this work consists of two main parts: the target parameter of disease risk, which takes categorical values, and the


parameters on which the classification is based, namely the results of the patient's examinations.

2 Related Works
The healthcare information technology industry is changing every day. As a result of this rapid development, new scientific advances in the field of ML are giving the healthcare sector the opportunity to take advantage of a whole range of revolutionary tools that use natural language processing, pattern recognition and deep learning. Of course, the industry still has a long way to go before machine learning and AI can meet the needs of modern medicine, but it is worth noting that innovative technologies are already making their mark in the medical data analysis environment [12]. Since the problem investigated in this work is relevant, an analysis of the relevant literature was carried out. This, in turn, provided an opportunity to study the investigated algorithms in detail, to determine achievements in their use for the diagnosis of astigmatism, and to examine already existing approaches for improving the Adaptive Boosting algorithm. In [13], the authors developed the idea of ensemble learning, combining adaptive boosting and bagging for the purpose of binary classification. The algorithms were tested on different data sets, showing improvements in accuracy and a reduction in error rates; however, the algorithm increased the calculation time. That work was supported by the Faculty of Computer Engineering at the Ss. Cyril and Methodius University, Skopje, Macedonia. This study further actualizes the problem of parallelization, which was emphasized in the conclusions of the work. C. Yu and D. B. Skillicorn parallelized boosting and bagging [14]. The studies were conducted using decision trees and two standard datasets. The main result is that sample sizes limit the achievable accuracy, regardless of computational time. The research was published in the International Journal of Computer Applications. Therefore, this work encourages experiments on larger data and attempts to improve accuracy. The paper also demonstrated the parallelization of boosting and bagging separately, which confirms the novelty of our proposed algorithm. The authors of [15] improved the parallel efficiency of the decision tree by proposing a new GBDT system, HarpGBDT. This approach contains a block parallelism strategy and an extension of the TopK tree growing method (which selects the best K candidate tree nodes to allow more levels of parallelism to be used without sacrificing the accuracy of the algorithm). In addition to the description of this approach, the paper provides a comparative analysis with other parallel implementations, which indicates that despite numerous advantages over other approaches, the running time is longer. This indicates the need to find a parallelization method that, among its other advantages, also provides significant acceleration. Boosting and bagging have also been applied to enhance the decision tree algorithm in disease risk prediction [16], where the authors managed to achieve an accuracy of 89.65%. Despite the high accuracy, the issue of the algorithm's execution time remains open: a small amount of data was used in that work, which suggests that with a larger amount of data the time spent would be too large. This necessitates the search for methods of speeding up these algorithms.


Therefore, the analysis of the literature showed that the use of the Adaptive Boosting algorithm for diagnosing eye diseases has not been sufficiently studied: the existing studies usually use a small amount of data in binary classification tasks, and the implemented parallelization approaches do not give the expected speedup. In this work, we use a larger amount of data and also conduct multi-class classification, which leads to more realistic results, since the classification of diseases and the risks of their detection is usually a multi-class classification task, not a binary one. In addition, no works were found that describe the development of the proposed parallel algorithm, which confirms the value and relevance of this work.

3 Materials and Methods
AdaBoost is currently one of the most powerful recognition algorithms. One of the many advantages of the Adaptive Boosting algorithm is that it is easy, fast and simple to program. In addition, it is flexible enough to be combined with any ML algorithm. It can be extended to learning tasks more complex than binary classification and is quite versatile, as it can be used with numerical or textual data [17]. The algorithm works sequentially and, like any boosting algorithm, uses data from the previous iteration. For medical data, because of its large volume as described above, bringing the algorithm to high accuracy can take a very long time. Splitting the data set into subsets and training on each of them is a non-trivial task, because the obvious question "How to combine and organize the results?" arises. Even after balancing the amount of data in the subsamples, their content will differ, which can lead to unpredictable results of model training. For this, the Bagging method is used [18].

3.1 Description of the Proposed Algorithm
1. We specify the required number of Adaptive Boosting models n that we want to build.
2. We split the initial data set N into n subsamples, where n is the number of Adaptive Boosting models specified in step 1. We obtain subsamples N_1, N_2, ..., N_n of size m, where m = N/n is the amount of data for each Adaptive Boosting model.
3. Next, we implement the Bagging algorithm by running training for each Adaptive Boosting model separately. Each model is given the corresponding subsample for training (the first model receives the first subsample, etc.). We obtain w_1(), w_2(), ..., w_n() independent weak learners (one per subsample).
4. At each iteration, we fit the weak learner to the gradient of the current fitting error with respect to the current ensemble model: s_n() = s_{n−1}() − c_n · ∇_{s_{n−1}} E(s_{n−1}), where E() is the fitting error of the model, c_n is the coefficient corresponding to the step size, and −∇_{s_{n−1}} E(s_{n−1}) is the gradient of the fitting error with respect to the ensemble model.
5. At the output of the Bagging algorithm, we obtain n trained models (classifiers) of the Adaptive Boosting algorithm.
6. To make a prediction (assign an object to a certain class), we pass the object, namely the features that need to be classified, to each of the models.


7. As a result, each Adaptive Boosting model assigns the object to the class for which its estimated probability of membership is the highest. The probabilities of object x belonging to each of the classes are:

P(y = 1 | x) = 1 / (1 + exp(−a(x))),    P(y = −1 | x) = 1 / (1 + exp(a(x))).

8. After that, we find the average value of all predictions received from the n models.
9. We round the obtained average value to the nearest whole number; this is the result of the combined Bagging and Adaptive Boosting algorithm and, therefore, the class to which the object belongs.

3.2 Defining the Parallelization Step
Parallelization of the combination of the algorithms described above is made possible by step 3, during which the models are trained sequentially but independently of each other. Because the models are trained independently, we can run their training in parallel. This is one of the main reasons why we used the idea of combining the Bagging algorithm with Adaptive Boosting. When training a single Adaptive Boosting model, several classifiers are also trained, but that process cannot be parallelized, since each subsequent classifier uses the results of the previous one and does not work independently, as in Bagging. So, when parallelizing, we change step 3 in the algorithm combining Bagging with Adaptive Boosting, running the training for each Adaptive Boosting model in parallel. Each thread works with its own subsample and its own model.

3.3 Data Review and Analysis
We use a data set formed on the basis of a long-term survey conducted among US residents. The purpose of the classification is to predict whether a patient is at risk of developing eye disease. The dataset provides information on 50,000 patients and contains 24 attributes [19]. Each attribute is one of the potential risk factors. The main attributes that we use are the following:

• Sex: male or female (nominal value);
• Age: age of the patient (continuous value);
• Genetic: whether the patient has relatives with vision problems (binary value);
• Vision (Left Eye %): a percentage field with the quality of vision of the patient's left eye;
• Vision (Right Eye %): a percentage field with the quality of vision of the patient's right eye;
• Prevalent stroke (PrevalentStroke): whether the patient has suffered a stroke (nominal value);
• Systolic blood pressure (Sys BP) (continuous value);
• Diastolic blood pressure (Dia BP) (continuous value);
• Body mass index (BMI) (continuous value);


Prediction variable:
• Risk factor (Risk_Factors): the risk that the patient will develop an eye disease. Values: High, Low, Medium, Current.
The dataset also contains fields such as 'year', locationID, State and so on, but they are not taken into account in the calculation, as they do not meet the purpose of this work. In this work, the analysis and preliminary preparation of the data was carried out first, which provided the basis for the transition to the experimental part of the work.

3.4 Application of the Proposed Approach
We divide the sample into validation and test sets in the ratio of 25% to 75%, where 25% is test data and 75% is validation data. After the data distribution, we perform classification. To implement classification, we use AdaBoostClassifier from the sklearn library. To begin with, we separate the classification into a dedicated function, which we will then parallelize. Within this function, adaptive boosting is declared and trained; the function returns an already trained submodel and receives at its input the subsamples formed using bagging. We also need to choose the parameters for boosting: the number of trees, the learning rate, the maximum depth of the tree and the loss function to be minimized. After a certain selection of parameters, we obtain the following set:
• number of trees (n_estimators) – 150;
• learning rate (learning_rate) – 0.01;
parameters such as the random initial state and the algorithm are left unchanged as None and SAMME.R, respectively. The iter_node function creates an additive model at the initial stage; leaving the random initial state as None lets it be selected by the algorithm, and arbitrary differentiable loss functions can be optimized. Next, we declare our function, which we will then parallelize: adaptive boosting and its training are declared inside the iter_node function, and the function returns an already trained model. We use f1_score and score to evaluate the results of the research. Adaptive boosting is a sequential tree learning algorithm, because the output of the previous weak learner is the input for the next one; it is because of this that training cannot be performed in parallel within a single model. However, using adaptive boosting as a base for bagging allows the training to be performed in parallel. To parallelize bagging, the training set is divided equally among the available processors (threads). Each processor (thread) executes a sequential algorithm until the corresponding predictions are built. Then, by bagging, the results are combined and we obtain the final result. In general, it is a good idea to split the training set in random order to ensure that the predictions produced by each processor (thread) do not contain unnecessary biases. The parameters to be chosen


for the parallel ensemble technique are the sample size used and the number of iterations. Parallelization is implemented using the Python module concurrent.futures, namely its ThreadPoolExecutor class, and CUDA technology [20]. We parallelize the learning process; the training sample is also parallelized. In a single-threaded system, the training vectors are sent to the classifier one at a time; in a parallel system, these training vectors are split between threads.

3.4.1 Parallelization Using ThreadPoolExecutor Through a Thread Pool
Threads in Python are a form of parallel programming that allows a program to execute multiple procedures simultaneously. Thread-based parallelization is especially good for accelerating applications that work with large amounts of data. ThreadPoolExecutor is a utility built into Python 3, located in the concurrent.futures module, and designed to distribute code execution among threads (a pool of threads is formed). Without this utility, Python works in single-threaded mode regardless of how many threads are created; this is a consequence of the GIL, and the utility helps to work around it. The ThreadPoolExecutor must first be imported from the specified module, and then a ThreadPoolExecutor() object must be initialized. The map function is also used, with the following syntax: map(func, *iterables). The map method applies the function func to one or more iterable objects; in this case the function builds and trains a model of the Adaptive Boosting algorithm, and the iterable objects are the parts into which the total data sample is divided. Each function call is launched in a separate thread. The map method returns an iterator with the results of the function execution for each element of the iterable object. The number of threads in which the code will be executed is specified in the max_workers parameter when declaring the ThreadPoolExecutor() object on which map is called. So, ThreadPoolExecutor().map accepts the function that needs to be parallelized and the arguments that need to be passed to it, and at the output we obtain an array of models trained on the subsamples (a compact sketch of this scheme is given at the end of Sect. 3). Next, a quality check is performed independently for each model and its quality metrics (score and f1_score) are calculated. At the end, when we exit the parallel region of the program, the quality check is repeated, but for the composition of all models: the prediction of such a composition is averaged and rounded.

3.4.2 Parallelization Using GPU
The system that executes the software implementations presented in this paper supports CUDA technology, which makes it possible to use the GPU for parallelization and acceleration of program execution. We use lightgbm to access CUDA. To install lightgbm, enter the following command:
pip install lightgbm --install-option=--cuda


LightGBM is a gradient boosting framework available for Python. It is designed for efficiency and provides the following benefits:

• faster training speed and higher efficiency;
• less memory usage;
• better accuracy;
• support for parallel, distributed and GPU learning;
• the ability to process large-scale data.

This framework makes it possible to build its CUDA version, which provides for the parallel execution of the framework's functions on a graphics processor. Classification is performed by calling the LGBMClassifier() method from the LightGBM library described above.

3.5 Computational Complexity of the Algorithm. Theoretical Evaluations
The computational complexity of Adaptive Boosting is O(n · p² · m), where m is the number of trees, p is the number of features and n is the total amount of data. The complexity of boosting for k iterations (k being the number of Adaptive Boosting models) is O(k · n · p² · m). The complexity of Adaptive Boosting with Bagging is O(n) + O(k · t · p² · m), where t is the size of the data subsample used and O(n) is the complexity of sampling, with n the total data size. Now let us estimate the complexity of one round of training with parallelization: O(n/l, t) + O(k · t · p² · m), where t is the size of the data subsample used, O(n/l, t) is the sampling complexity (which depends both on the size of the local data set in the round and on the size of the set selected according to the specified number of threads), n is the total data size and l is the number of threads. The complexity of boosting in one round is O(t · p² · m).
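To make Sects. 3.1–3.4 concrete, here is a minimal sketch of the CPU thread-pool variant. It is our illustration under stated assumptions, not the authors' exact code: data loading is omitted, X and y are assumed to be NumPy arrays, class labels are assumed to be encoded as integers so that rounding the averaged prediction (steps 8–9) is meaningful, and the function names and the n_models/n_threads values are placeholders.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from sklearn.ensemble import AdaBoostClassifier

def train_submodel(subsample):
    """Step 3: fit one Adaptive Boosting model on one bagging subsample."""
    X_sub, y_sub = subsample
    model = AdaBoostClassifier(n_estimators=150, learning_rate=0.01)
    return model.fit(X_sub, y_sub)

def parallel_bagged_adaboost(X, y, n_models=8, n_threads=8):
    # steps 1-2: split the training data into n_models subsamples of equal size
    parts = np.array_split(np.random.permutation(len(X)), n_models)
    subsamples = [(X[idx], y[idx]) for idx in parts]
    # step 3 (parallelized): each thread trains its own model on its own subsample
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(train_submodel, subsamples))

def predict(models, X):
    # steps 6-9: average the submodel predictions and round to the nearest class
    preds = np.stack([m.predict(X) for m in models])
    return np.rint(preds.mean(axis=0)).astype(int)
```

For the GPU variant of Sect. 3.4.2, a drop-in replacement under the same assumptions would be lightgbm's `LGBMClassifier(n_estimators=150, learning_rate=0.01, device="gpu")`, which requires a GPU/CUDA-enabled LightGBM build.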

4 Research Results
A dataset containing symptoms that affect the risk of vision disease was used to investigate the accuracy of the classification model. Table 1 summarizes the characteristics of the data set used in the experiments.

Table 1. Characteristics of the dataset

Examples

Train data

Class

No. of features

Vision_Eye_Health

49980

31800

4

10

Table 1 indicates the use of the Vision_Eye_Health dataset, which contains 49980 examples, of which 31800 are used for training [7].


Table 2. Program execution time (s) during serial and parallel processing (ThreadPoolExecutor)

Number of algorithms in the composition | Sequential | 2 processes | 4 processes | 8 processes | 16 processes
1  | 0.6014  | 0.8144  | 0.8068 | 0.848  | 0.7704
2  | 1.6847  | 0.9028  | 0.8555 | 0.903  | 0.8384
4  | 3.1446  | 1.7672  | 0.9764 | 1.1219 | 1.0623
8  | 6.5627  | 3.3591  | 1.8922 | 1.5195 | 1.6055
16 | 12.8199 | 6.5196  | 3.9812 | 3.4717 | 3.0916
30 | 23.8434 | 12.6693 | 9.4319 | 6.1079 | 5.9624

Next, we present the results of the execution time of the sequential algorithm and of the proposed parallel algorithm using the ThreadPoolExecutor module and analyze them. It can be seen from Table 2 that, compared to sequential execution, the program's running time is significantly reduced with parallelization. At the same time, the greater the number of threads, the shorter the time spent; only when using more than eight threads does the time value hardly change. This is explained by the architecture of the system on which the program was implemented (only 4 cores and 8 logical processors, so the maximum efficiency is seen when using eight threads). Now let us conduct a comparative analysis of the time costs of sequential and parallel execution on the CPU and on the GPU.

Table 3. Comparison of CPU-based and GPU-based sequential and parallel execution times (s)

Number of composition algorithms | Sequential | CPU (parallel) | GPU (parallel)
1  | 0.7959  | 0.7653 | 0.0832
2  | 1.6734  | 0.8328 | 0.2415
4  | 3.1235  | 1.0551 | 0.3453
8  | 6.5187  | 1.5093 | 0.7773
16 | 12.7338 | 3.0708 | 1.4742
30 | 23.6833 | 5.9224 | 2.7813

Table 3 shows how the value of the program’s execution time drops rapidly when using GPU-based parallelization. To compare the time costs when working with the CPU and GPU, the best results achieved on the CPU were used, and at the same time, we received a fairly significant speedup on the GPU. This once again confirms the fact that using the GPU in parallelization is very efficient and can deliver impressive results. In this case, from 24 s of sequential execution, the GPU made it possible to speed up the program to 3 s, which is a significant improvement.


Now let us calculate the experimental indicators of acceleration and efficiency of the parallel algorithm for different numbers of boosting algorithms in the bagging composition. The following formulas are used:

S_p(n) = T_1(n) / T_p(n) – acceleration (speedup) indicator,
E_p(n) = S_p(n) / p = T_1(n) / (p · T_p(n)) – efficiency index,

where T_1(n) is the execution time of the sequential algorithm and T_p(n) is the execution time of the parallel algorithm on p processors (threads).
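As a small illustration (not taken from the paper's code), the two indicators can be computed directly from measured wall-clock times:

```python
def speedup(t_seq, t_par):
    # S_p(n) = T_1(n) / T_p(n)
    return t_seq / t_par

def efficiency(t_seq, t_par, p):
    # E_p(n) = S_p(n) / p = T_1(n) / (p * T_p(n))
    return speedup(t_seq, t_par) / p
```

Applying such helpers to measured sequential and parallel times yields indicators of the kind reported in Tables 4, 5 and 6.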

Table 4. Indicators of acceleration of the parallel algorithm with different numbers of threads and variation of the number of algorithms in the composition (CPU)

Number of algorithms | 2 threads | 4 threads | 8 threads | 16 threads
1  | 0.9078 | 0.8994 | 0.9453 | 0.8588
2  | 0.4786 | 1.7590 | 1.6665 | 1.7949
4  | 1.5894 | 2.8766 | 2.5036 | 2.6441
8  | 1.7450 | 3.0979 | 3.8578 | 3.6510
16 | 1.7563 | 2.8762 | 5.0847 | 4.5970
30 | 1.7702 | 3.1512 | 6.1667 | 6.2514

Table 4 shows the acceleration indicators obtained with parallel execution for different numbers of threads and different numbers of algorithms in the composition when working on a CPU.


Fig. 1. Acceleration indicators for different number of threads and different number of algorithms in composition (CPU)

From Fig. 1, we see that with an increase in the number of threads and in the number of algorithms in the composition, the value of the acceleration indicator increases. With a small number of algorithms, the results are mixed and do not show a stable and significant increase in speedup. This is due to the fact that when a small


number of algorithms is used, more time is spent on distributing the data between threads, that is, on parallelization, than on the work of the algorithms themselves. Also, the speedup tends towards the number of threads in use, which is in line with the basic idea behind the speedup metric. The highest acceleration was recorded when using 16 threads and 30 boosting algorithms in the bagging composition.

Table 5. Efficiency indicators of the parallel algorithm with different numbers of threads and variations in the number of algorithms in the composition

Number of algorithms | 2 threads | 4 threads | 8 threads | 16 threads
1  | 0.4539 | 0.2248 | 0.1182 | 0.1074
2  | 0.2393 | 0.4397 | 0.2083 | 0.2244
4  | 0.7947 | 0.7192 | 0.3130 | 0.3305
8  | 0.8726 | 0.7745 | 0.4822 | 0.4566
16 | 0.8782 | 0.7190 | 0.6356 | 0.5746
30 | 0.8853 | 0.7878 | 0.7708 | 0.7815

Table 5 shows the efficiency indicators obtained with parallel execution for different numbers of threads and different numbers of algorithms in the composition when working on a CPU.


Fig. 2. Efficiency indicators for different numbers of threads and different numbers of algorithms in the composition

Analyzing Table 5 and Fig. 2, we can say that, in contrast to acceleration, efficiency indicators decrease with an increase in the number of threads. This decrease in efficiency


is explained by the increased load on the system when more threads are invoked. However, as the number of algorithms increases, the efficiency also increases and approaches unity, which indicates that the parallelization method works well. Let us move on to the comparison of the parallel execution of the program on the CPU and on the GPU.

Table 6. Acceleration indicators for parallel execution of the program using CPU and GPU

Number of composition algorithms | CPU | GPU
1  | 0.9453 | 8.1854
2  | 1.6665 | 6.1880
4  | 2.5036 | 7.1860
8  | 3.8578 | 7.4905
16 | 5.0847 | 7.7154
30 | 6.1667 | 8.4989

From Table 6, we can see that with an increase in the number of algorithms in the composition the acceleration value increases both when working on the CPU and when using the GPU, but the GPU gives higher values, which indicates much better performance of the program. So, according to the results of the experiments, the parallel implementation of the algorithm is most effective when using graphics processors based on CUDA technology. Now let us evaluate the quality of the model. For a more accurate assessment, we use two metrics: Accuracy and F1-score.

Table 7. Accuracy and F1-score metric values for different numbers of algorithms in the composition

Number of composition algorithms | 1 | 2 | 4 | 8 | 16 | 30
Accuracy | 0.9749 | 0.9757 | 0.9797 | 0.9809 | 0.9811 | 0.9863
F1-score | 0.7032 | 0.7698 | 0.7864 | 0.8289 | 0.8629 | 0.8709
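For completeness, a hedged sketch of the final quality check described in Sect. 3.4.1 is given below. The variables models, X_test and y_test are assumed to come from the parallel training step, and the macro averaging mode for the multi-class F1-score is an assumption, since the paper does not state which averaging was used.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Composition prediction: average the individual models' outputs and round.
y_pred = np.rint(np.mean([m.predict(X_test) for m in models], axis=0)).astype(int)

print("Accuracy:", accuracy_score(y_test, y_pred))
print("F1-score:", f1_score(y_test, y_pred, average="macro"))
```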

Analyzing Fig. 3 and Table 7, it can be concluded that the accuracy is already high when using only one algorithm, but still increases slightly with the increase in the number of algorithms. The value of the F1-score metric is smaller than Accuracy, but here we already see more clearly the influence of the number of algorithms on the accuracy of the model - the more algorithms in the composition, the higher the value of the metric. Therefore, the use of the method of combining the Adaptive Boosting algorithm with Bagging tends to increase the accuracy of the constructed model. However, it must be



Fig. 3. Visualization of the values of the Accuracy and F1-score metrics depending on the number of algorithms trained in parallel

said that for problems of a medical nature, the accuracy of 98% and 87% according to F1-score is insufficient because in medicine every error is significant.

5 Conclusions The introduction of modern technologies provides an opportunity to improve the life of mankind. Medicine is only one of the fields where technology can take the main place. In this work, the issue of the relevance of the research topic was considered, data analysis was carried out for diagnosing the disease in a patient based on a set of indicators, such as symptoms, test results, and others. The pre-processed Vision_Eye_Health dataset was used for the study. A search and analysis of significant features and regularities between various factors affecting the disease, removal of insignificant characteristics and invalid data was carried out. In addition, in this work, the method of combining Adaptive Boosting with Bagging was used to improve accuracy in predicting the risks of disease in patients and the possibility of parallelizing program execution. Parallelization of this algorithm was carried out and a good time value and high acceleration indicators were achieved. Parallelization is carried out using two technologies - a thread pool through the Python ThreadPoolExecutor utility on the CPU and CUDA on the GPU. High acceleration rates were achieved. So, with the number of threads of 8, an acceleration of more than 6 was obtained. When using CUDA technology on graphics processors, an acceleration of approximately 8.5 was obtained. After analyzing the obtained results, it was concluded that CUDA works significantly more efficiently and prevails over the thread pool for the selected method and data set. It can be highlighted that graphics processors are much better suited for parallel classification tasks due to the peculiarity of their construction. This is a large number of small processors, which are designed for the calculation of the same type of small problems, which, when detailed, is included, or rather, is the basis of classification problems.


Also, an important result of the research is the better accuracy of the algorithm achieved by increasing the number of algorithms in the composition. By using more algorithms thanks to parallelization, which are trained and whose predictions are then averaged, we obtain an even higher accuracy, and therefore this is a good opportunity to increase the efficiency of the model. Further research will be carried out in three main areas for increasing the accuracy of solving the problem of determining a patient's risk of eye disease:

1) the use of different machine learning methods [21];
2) the use of modern trends in the development of multi-core computer architecture [22];
3) investigation of a method that uses a backpropagation neural network for classification problems based on parallel calculations [23].

References 1. Izonin, I., Trostianchyn, A., Duriagina, Z., Tkachenko, R., Tepla, T., Lotoshynska, N.: The combined use of the wiener polynomial and SVM for material classification task in medical implants production. Int. J. Intell. Syst. Appl. (IJISA) 10(9), 40–47 (2018) 2. Mochurad, L., Hladun, Ya.: Modeling of psychomotor reactions of a person based on modification of the tapping test. Int. J. Comput. 20(2), 190–200 (2021) 3. Santra, A., Dutta, A.: A comprehensive review of machine learning techniques for predicting the outbreak of Covid-19 cases. Int. J. Intell. Syst. Appl. (IJISA) 14(3), 40–53 (2022) 4. Mochurad, L., Ilkiv, A.: A novel method of medical classification using parallelization algorithms. Int. Sci. J. «Comput. Syst. Inf. Technol.» 1, 23–31 (2022) 5. Jassar, S., Adams, S.J., Zarzeczny, A., Burbridge, B.E.: The future of artificial intelligence in medicine: medical-legal considerations for health leaders. Healthc. Manage. Forum 35(3), 185–189 (2022) 6. Deo, R.C.: Machine learning in medicine. Circulation 132(20), 1920–1930 (2015) 7. Mochurad, L., Dereviannyi, A., Antoniv, U.: Classification of X-ray images of the chest using convolutional neural networks. In: IDDM 2021 Informatics & Data-Driven Medicine. Proceedings of the 4th International Conference on Informatics & Data-Driven Medicine, Valencia, Spain, 19–21 November 2021, pp. 269–282 (2021) 8. Liu, J., Liang, X., Ruan, W., et al.: High-performance medical data processing technology based on distributed parallel machine learning algorithm. Supercomputing 78, 5933–5956 (2022) 9. Omar, R., Anan, N.S., Azri, I.A., Majumder, C., Knight, V.F.: Characteristics of eye injuries, medical cost and return-to-work status among industrial workers: a retrospective study. BMJ Open 12(1), e048965 (2022) 10. Jeganathan, V.S.E., Robin, A.L., Woodward, M.A.: Refractive error in underserved adults: causes and potential solutions. Curr. Opin. Ophthalmol. 28(4), 299–304 (2017) 11. Types of Retinal Eye Disease. https://www.verywellhealth.com/retinal-diseases-5212841 12. Pramanik, P.K., Pal, S., Mukhopadhyay, M.: Healthcare big data: a comprehensive overview. In: Bouchemal, N. (ed.) Intelligent Systems for Healthcare Management and Delivery, pp. 72– 100 (2019) 13. Arsov, N., Pavlovski, M., Basnarkov, L., et al.: Generating highly accurate prediction hypotheses through collaborative ensemble learning. Sci. Rep. 7(44649), 9 (2017)


14. Yu, C., Skillicorn, D.B.: Parallelizing boosting and bagging. Technical report 2001-442, Queen’s University Department of Computing and Information Science Technical Report, pp. 1–22 (2001) 15. Peng, B., et al.: HarpGBDT: optimizing gradient boosting decision tree for parallel efficiency. In: 2019 IEEE International Conference on Cluster Computing (CLUSTER), pp. 1–11 (2019) 16. Taser, P.Y.: Application of bagging and boosting approaches using decision tree-based algorithms in diabetes risk prediction. Multidiscip. Digit. Publ. Instit. Proc. 74(1), 6 (2021) 17. Shen, Y., Jiang, Y., Liu, W., Liu, Y.: Multi-class AdaBoost ELM. In: Cao, J., Mao, K., Cambria, E., Man, Z., Toh, K.-A. (eds.) Proceedings of ELM-2014 Volume 2. PALO, vol. 4, pp. 179–188. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-14066-7_18 18. González, S., García, S., Ser, J., Del, R.L., Herrera, F.: A practical tutorial on bagging and boosting based ensembles for machine learning: algorithms, software tools, performance study, practical perspectives and opportunities. Inf. Fusion 64, 205–237 (2020) 19. Vision_Eye_Health Database. https://www.kaggle.com/datasets/rosberum/risk-factorsvision 20. Kalaiselvi, T., Sriramakrishnan, P., Somasundaram, K.: Performance of medical image processing algorithms implemented in CUDA running on GPU based machine. Int. J. Intell. Syst. Appl. (IJISA) 10(1), 58–68 (2018) 21. Khan, M.Z.: Hybrid ensemble learning technique for software defect prediction. Int. J. Mod. Educ. Comput. Sci. (IJMECS) 12(1), 1–10 (2020) 22. Mochurad, L., Kryvinska, N.: Parallelization of finding the current coordinates of the lidar based on the genetic algorithm and OpenMP technology. Symmetry 13, 666 (2021) 23. Barman, D., Singha, R.K., Chowdhury, N.: Prediction of possible business of a newly launched film using ordinal values of film-genres. Int. J. Intell. Syst. Appl. (IJISA) 5(6), 53–60 (2013)

Systems Theory, Mechanics with Servoconstraints, Artificial Intelligence

G. K. Tolokonnikov
VIM RAS, Moscow, Russia
[email protected]

Abstract. The categorical theory of systems, the theory of functional and biomachsystems are based on a system-forming factor that assembles a system from disparate parts to achieve the planned result. In living organisms, ergatic and biomachsystems (human-machine-living), within their categorical models, the system-forming factor includes, among other things, both the principle of survival (for the living, physiology) and the principle of stationary action (for the inanimate, mechanics). An extension of the concept of a system within the framework of the categorical theory of systems to describe a system-forming factor for nonliving systems is considered. The living controls the mechanical part of the system (musculoskeletal subsystem, machine, etc.), the movement of which is described by equations with servoconstraints, through which the control formalized with the help of these equations takes place. System principles distinguish varieties of mechanical systems, Newton-Galileo mechanics, Lagrange-D’Alembert mechanics, vakonomic and other mechanics. The intellectual properties of the living, which controls the mechanical parts, are modeled within the framework of artificial intelligence. Keywords: Categorical and functional systems · Hamilton’s principle · Galileo’s principle · Lagrange’s equations · Vakonomic mechanics · Servoconstraints · Artificial intelligence

1 Introduction There are numerous approaches to the concept of a system, the most profound of which is the theory of functional systems by P.K. Anokhin [1], formalized in the categorical theory of systems [2], the apparatus of which is used in artificial intelligence [3]. The categorical theory of systems, the theory of functional and biomachsystems are based on a system-forming factor that assembles a system from disparate parts to achieve the planned result. The system-forming factor was discovered by P.K. Anokhin [1] as a useful result for the body. Since the concept of “useful” for inanimate objects does not exist, P.K. Anokhin, having not found a system-forming factor in this case, did not attribute dynamic and other physical systems to systems. We expand the concept of a system based on the intuitive idea of the transition from chaos to order, which reflects the concept of a system, which allows us to consider the principle of stationary © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Hu et al. (Eds.): CSDEIS 2022, LNDECT 158, pp. 126–135, 2023. https://doi.org/10.1007/978-3-031-24475-9_11


action of Hamilton as a system-forming factor for physical systems. In living organisms, ergatic and biomachsystems (human-machine-living), within their categorical models, the system-forming factor includes the principle of survival (for the living, physiology) and the principle of stationary action (for the inanimate, mechanics). It is necessary to resolve the issues of combining these two components of the system-forming factor, of managing the living mechanical part of the system. The living controls the mechanical part of the system (musculoskeletal subsystem, machine, etc.), the movement of which is described by equations with servoconstraints. Mechanics with constraints arose in the works of Lagrange, Poincaré, Chetaev, Chaplygin and other prominent scientists [4]. The servoconstraints apparatus was discovered in 1921 by Beghin M.H. and comprehended within the framework of the general axiomatics of mechanics in [5, 6]. The implementation of control with the help of servoconstraints is a branch of control theory, on which a number of artificial intelligence methods are based. The necessary bridge from the control object (machine, mechanical parts of the body) to the control subject (living) can be built within the framework of the categorical systems theory. This work can be considered the first step in this direction. Already at the first steps, one has to deal with a number of unanswered questions in the foundations of mechanics and with the question of classifying theories (various mechanics) on the basis of their axioms and the systemforming factor. We propose conditions for the reduction to Newton-Galilean mechanics of traditional Lagrangian mechanics, discuss from a systemic point of view other varieties of mechanics, nonholonomic and vakonomic mechanics, mechanics with servoconstraints, necessary for control theory and artificial intelligence. The next section discusses the theory of systems and introduces a generalized concept of a system with a system-forming factor for physical systems. In the third section, we prove a theorem on the conditions for the reduction of traditional Lagrangian mechanics to Newton-Galilean mechanics. The fourth section discusses the various mechanics from a systemic point of view. In conclusion, the results are summarized and further steps in the development of the discussed direction are outlined.

2 Systems and Systemforming Factor In contrast to numerous systemic approaches in the theory of functional, categorical and biomachsystems, the concept of a system-forming factor, discovered by P.K. Anokhin [1], is at the forefront. Since he studied living organisms, he postulated a “useful result” for them as a system-forming factor. For mechanical objects, the concept of "utility" does not make sense, for them P.K. Anokhin did not find a system-forming factor, therefore, in his opinion, they are not systems. For systems of classical mechanics, based on a deeper understanding of the term “system” than in the theory of functional systems, we indicate the system-forming factor [2, 7]. The functional system of the body is formed [1] to satisfy the need that has arisen in the body, its satisfaction is called the result or the system-forming factor. The emerging need of the body causes motivation, felt in the form of desire, which mobilizes the afferent synthesis of the search for actions to solve the problem. The decision-making block selects a variant from memory, transfers the parameters of the required result to the action result acceptor, and mobilizes the action program, which sends signals to the effectors for execution. The effects exerted by


effectors on the environment lead to some result, data about it are sent through receptors to the acceptor of the result of the action. If the parameters of the achieved result coincide with the expected values of the parameters, then the result is achieved, the functional system is disbanded. If the parameters is not coincide, then the cycle repeats. Let’s see what is called systems in mathematical theories of systems. Let two sets be given: a set X of inputs and a set Y of outputs. Then the system, or system according to Mesarovich, is the relation on the Cartesian product XxY. Mesarovic’s system is a black box with inputs and outputs. V.M.Matrosov [8] generalized the definition to systems of processes. The most general definition of a system, covering both systems of processes and many other approaches, was given by S.N. Vasiliev [8]. This approach covers algebraic systems, topology, however, it does not reflect any concept of a systemforming factor. One of the intuitive postulates of PK Anokhin is that the formation of a system begins from the whole to the parts. The language of category theory corresponds to such processes, in particular, convolution polycategories introduced in [2], collections of polyarrows with convolution operations, analogues of compositions in category theory. In category theory, systems are built from polyarrows without recourse to set theory, as in Mesarovitch or Vasiliev. It turns out that already a set of functional systems forms a non-classical topos [2], so that a departure from set theory to more general categories in systems theory is inevitable. The intuitive concept of a system correlates with the concept of order, and the systemforming factor with the mechanism of transition from chaos to order. For a functional system, we have a triple: the existing order (with the presence of a need) - the transition mechanism (functional system) - the necessary order (required result). During formalization, it is necessary to separate these concepts in order not to lose the “active” essence of the system-forming factor, which differs from the “passive” concept of the desired result. According to P.K. Anokhin, the backbone factor forms a system from its future subsystems. This function is formalized in the categorical language by the concept of convolution. The formalization of the concept of order makes it possible to generalize the “useful result” of the theory of functional systems and cover non-living ones. So, let there be a collection of some objects. An object with its specific description may correspond to a set of parameters. The set of values for these parameters will correspond to the state of the object. Let’s call the order of a set of objects some predicate (relation) on their collection. If objects have states, the order can be described by relations on the values of state parameters. The absence of relations on a set of objects is called systemic chaos or simply chaos. For example, for N points on the plane, the point is the object, and the coordinates of the point are its parameters. An example of a relation that is an order is the arrangement of points on the unit circle. Polyarrows can be defined on objects, and convolutions on polyarrows, we get a convolutional polycategory. Polyarrows of a polycategory (and convolutions of polyarrows) we call categorical systems. If the system is obtained as a convolution of polyarrows, then the latter are called its subsystems. 
Systems translate collections of objects into collections of objects, the same or different with the same or changed parameters. The resulting collection of objects can have an order, called the result of the system, either the same one that was available, or a new order, including, as a special case, chaos. A


convolution that transfers the source system with the order available for it to the target system with the planned order is called a categorical systemforming factor. Polyarrows themselves can be used as objects, then we come to the usual ideas about higher polycategories. Studying the properties of polycategories or their generalizations in the form of categorical splices [2, 7], we interpret the results as properties of the corresponding categorical systems. The simplest physical system in classical mechanics is the “system” of N material points (mi , r i ), i = 1,…, N with a mass mi , in R3 , acting on each other with some forces F ij . Let us turn it into a system according to P.K. Anokhin (in our generalization), having determined, in particular, the system-forming factor corresponding to the system. Particles can be located at any point (particle parameters) in space R3 . Over time t  [t 1 , t 2 ], the position is not limited by anything. Each point (mi , r i , t), occupies arbitrary positions r i (t)  R3 at each moment t  [t 1 , t 2 ]. In other words, we have N arbitrary functions r i :[t 1 , t 2 ] → R3 . So, for the objects of the system N of material points moving in space-time R3 × [t 1 , t 2 ], we choose pairs (mi , r i (t)). This set of points is in a state of systemic chaos. What makes particles move in time not randomly, but in an orderly manner? The answer is known, it is Hamilton’s principle of stationary action. The variation of the action leads to the equations of motion, solving which we obtain the law of motion of material particles, as a system by P.K. Anokhin in our generalization. We choose Hamilton’s principle as a system-forming factor. But nothing prevents us from taking for it the equations of motion themselves or other principles (the d’AlembertLagrange principle, the Gauss principle, and others) that lead to these equations of motion. However, today it is Hamilton’s principle of stationary action that is the basis of the most fundamental physical theories, classical mechanics, quantum mechanics, quantum electrodynamics, quantum field theory, the standard model and string theory. The proposed generalization of the concept of a system supplements the systemforming factor for living systems with the Hamilton principle. Thus, within the framework of the categorical theory of systems, in particular, the problem of studying composite system-forming factors, primarily containing the principle of survival and the principle of stationary action of Hamilton for living systems, has been posed and can be solved.
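For reference, the principle invoked here as the system-forming factor can be written in its standard textbook form (this formulation is general and not specific to the present paper):

```latex
% Hamilton's principle of stationary action: among all conceivable motions
% q(t) with fixed endpoints, the actual motion makes the action stationary.
S[q] \;=\; \int_{t_1}^{t_2} L\bigl(q(t),\dot q(t),t\bigr)\,dt,
\qquad \delta S = 0,
% which for smooth variations is equivalent to the Euler--Lagrange equations
\frac{d}{dt}\,\frac{\partial L}{\partial \dot q_i} \;-\; \frac{\partial L}{\partial q_i} \;=\; 0 .
```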

3 Newton-Galilean Mechanics, Lagrangian and Other Mechanics Three laws of mechanics, established by Newton for closed systems of material points from experimental data, later generalized by d’Alembert and Lagrange (the d’AlembertLagrange principle) to bodies are: (1) the law of uniform rectilinear motion of a material point dr/dt = ds/dt = const in the absence of other material objects acting on this point (closed a system consisting of one point) observed in an inertial frame of reference associated with fixed stars (here we include the principle of relativity of Galileo: the laws of mechanics act the same in all inertial frames); (2) the law d 2 r/dt 2 = F of equality of the acceleration of a material point multiplied by the mass of the resultant force F acting on it; (3) the law of equality of forces of action and reaction. The principle of d’Alembert-Lagrange is often postulated as the basis for constructing mechanics. The second approach consists in taking Hamilton’s principle of


stationary action as a postulate and the basis for constructing mechanics. For a vast number of mechanical systems, these two principles are equivalent. However, for systems with nonholonomic constraints, the principles change and lead to different equations of motion, that is, to different mechanics, nonholonomic classical mechanics and vakonomic mechanics. We fix the inertial frame of reference associated with the “fixed” stars. Galilean transformations carry out the transition from this frame of reference to another, which moving uniformly and rectilinearly with velocity v relative to the first one. Let there be N points with masses mi , i = 1,…, N considered in the specified reference frame in three-dimensional Euclidean space E 3 . Time t forms a one-dimensional Euclidean space, we consider t  R. The simplest example of the fall of a particle of mass m near the earth’s surface, taking into account air resistance, g = 9.8 m/s2 , gives the equation md 2 x/dt 2 = −g − β dx/dt, the force may depend on coordinates and speed. However, Galileo’s principle of relativity, which means that the equations of mechanics md 2 x/dt 2 = f (dx/dt, x, t) are invariant with respect to the group Γ of affine transformations of space g: E 3 × R → E 3 × R, g  Γ imposes restrictions on the form of functions f (dx/dt, x, t). In contrast to the book [4], this rather subtle issue of restrictions on the form of expressions for forces is not mentioned in many monographs, including classical monographs on mechanics [9–13], as a result, it remains unclear what to do with Galileo’s principle of relativity, since, having been accepted, it contradicts the presentation of mechanics proposed in these monographs. In fact, the authors of [9–13] consider not Newton’s mechanics, but Lagrange’s mechanics, in which the Galilean principle may not hold. If the manifold (in our case, E 3 × R) has a group of transformations G, then in a natural way for the fields on the manifold, which are the forces f (dx/dt, x, t), the representation of the group is also defined f (gdx/dt, gx, gt) = g f (dx/dt, x, t), g  G, whence for we have the condition of invariance of the equations of motion (forces) with respect to the Galilean group f (dx/dt, x, t) = F(dx i /dt − dx j /dt, x i − x j ), f (Adx/dt, Ax, t) = f (dx/dt, x, t), Ax - orthogonal transformation. Thus, the forces do not depend on time, and the dependence of the forces exists only on the difference in coordinates and velocities with invariance under orthogonal transformations. If we consider the problem of three bodies, then the system satisfies Galileo’s principle of relativity. If one of the masses can be neglected, then a system arises (the restricted three-body problem) that no longer satisfies Galileo’s principle of relativity. “All laws of motion encountered in Newtonian mechanics that are not Galilean invariant are derived from invariant laws of motion using similar simplifying assumptions” [4]. We will not confine ourselves to this phrase, as it was done in [4], especially since it requires substantiation. Moreover, not all systems with broken Galilean invariance are obtained in this way. Let there be a system of N interacting material points (a possible passage to the limit from points to bodies has not been made). We can consider some of these points x 1 ,…, x k , k < N as a subsystem of the original closed and Galilean invariant system of N points. To explicitly take into account the influence on the subsystem of other points not


included in it from the system of N points, we substitute into the equations of motion v = dx/dt, a = dv/dt, mi ai = F i (…, vα − vβ , …, x α − x β , …), i = 1, …, N including, in the equations for the points of the subsystem explicit values x k+1 = x k+1 (t), …, x N = x N (t), vk+1 = vk+1 (t), …, vN = vN (t). As a result we get m1 a1 = f1 (v, x, t), . . . , mk ak = fk (v, x, t), k < N , v = (v1 , . . . , vk ), x = (x1 , . . . , xk ), fi (v, x, t) = Fi (. . . , vα − vβ , . . . , xα − xβ , . . . , vα − vμ (t), . . . , xα − xμ (t), . . . , vπ (t) − vμ (t), . . . , xπ (t) − xμ (t), . . . , α, β ≤ k, k < μ, π ≤ N .

Since the particles not included in the subsystem move, changing their position and speed over time, the forces acting from their side on the particles allocated to the subsystem clearly depend on time, and the forces depend directly on the coordinates, and not only on their differences. In other words, for an open subsystem, the Galilean relativity principle can be violated, for open systems, the forces f can explicitly depend on time. Thus, to the statement from [4] we must add this case of Galilean non-invariant systems. When we say that the forces in Newton’s equation depend on time (on coordinates, velocities, and not only on their difference), as is done in Lagrange mechanics [9–13], then we, in order to remain within the framework of Newton-Galileo mechanics, must show that there is such a closed system for which this system is a subsystem. Then we can assume that the Lagrangian system for the case of Galilean non-invariant forces is a non-closed system of Newton-Galilean mechanics, but we must show the presence of a closed system containing the original Lagrangian system. In this case, it suffices to consider the case of systems of material points (as it is declared in [9–13] bodies in mechanics consist of material points). Let us prove that for points on a straight line the Lagrangian system is a subsystem of Newton-Galileo mechanics under the conditions below. We emphasize that we are, in fact, solving the problem of control theory by ensuring the movement of the points of the Lagrangian system by choosing additional points that exercise control and complete the system to a closed system. We assume that the smoothness of the occurring functions is sufficient for the operations performed on them. Theorem 1. Let a material point move on a straight line according to the law a1 = d 2 x/dt 2 = f 1 (x 1 , t), then for any closed system with forces independent of velocities, consisting of a given point and a second point, there is a smooth function α(t), for which f 1 ( x 1 , t) satisfies in some area the partial differential equation ∂u(x1 , t)/∂x1 + (1/(d α/dt)) ∂u(x1 , t)/∂t = 0. Proof. For a closed system of points, the function F of the force a1 = F(x 1 − x 2 ) obviously satisfies the partial differential equation  ∂u(x, t)/∂xk = 0, k = 1, 2 (1) The system of equations d 2 x l /dt 2 = F l ( x 1 − x 2 ) in some area has a solution x i (t) = α i (t), i = 1, 2. There is a neighborhood where dα i /dt = 0. In this neighborhood      xj (t) = αj (t), t = αj−1 xj = βj xj , ∂/∂x2 = β2 (x2 ) ∂/∂t,      β2 (α2 (t)) = 1, β2 α2 = 1, ∂/∂x2 = 1/α2 (t) ∂/∂t.


Let u(x 1 , t) = v(x 1 , α 2 (t)) = f 1 (x 1 , t), substituting this u into the Eq. (1) we get    ∂u(x1 , t)/∂x1 + 1/α2 ∂u(x1 , t)/∂t = 0 and   ∂u(x1 , t)/∂x1 + 1/α  ∂u(x1 , t)/∂t = 0, if we put 

1/α  = 1/α2 . The theorem has been proven. We see that for arbitrary forces there is no suitable closed system of two points in Newton-Galilean mechanics. Consider the possibility of controlling a point with two particles. Consider the possibility of controlling a point with an arbitrary number of particles. Theorem 2. Let a material point move on a straight line according to the law d 2 x/dt 2 = f (v, x, t), then the closed Newton-Galileo system, consisting of this point and additional m points on the same straight line, exists if there is a generating function G (…, vi , …, x i ,…, t), i, j = 1,…, m, such that f (v, x, t) = G(. . . , vi , . . . , xi , . . . , t), v = v1 = . . . = vm , x = x1 = . . . = xm , and which satisfies the equations   ∂G/∂t + η˜ i (t) ∂G/∂vi + α˜ i (t) ∂G/∂xi = 0, ∂/∂xi (∂G/∂xi ) = 0, ∂/∂xi (∂G/∂xi ) = 0, i = 1, . . . , m.

(2)

Here, Ái , α˜ i respectively, are the accelerations and velocities of the additional (control) m points. Proof. Let us assume that the system of m + 1 points indicated in the theorem exists as a mechanical system of Newton-Galileo. Then the system of equations.   d 2 xl /dt 2 = Hl . . . , vi − vj , . . . , xi − xj , . . . , l = 0, 1, . . . , m (x = x 0 , v = v0 the coordinate and the speed of the controlled point) in some area has a solution vi = ηi (t), x i = α i (t), i = 1,…,m. We substitute these solutions into the above system of equations. Since the system is closed, the resultant of all forces has the form  F0 (. . . , v − ηi (t), . . . , x − αi (t), . . .) = Ai (v0 − ηi (t), x0 − αi (t)), F0 = H0 (. . . , v0 − ηi (t), . . . , ηr (t) − ηs (t), . . . , x0 − αi (t ) αr (t) − αs (t), ), i = 1, . . . , m, r  = 0, s  = 0.

Here Ai is the force with which point i acts on the controlled point, while it is easy to check that they satisfy the partial differential equation ∂Ai (v, x, t)/∂t + η˜ i (t) ∂Ai (v, x, t)/∂v + α˜ i (t) ∂Ai (v, x, t)/∂x = 0, i = 1, . . . , m.


We introduce a generating function G (…, vi , …, x i ,…, t) = F 0 (…, vi – ηi (t), …, x i – α i (t),…), it is directly verified that it satisfies the Eqs. (2), as required. The theorem has been proven. The generating function G does not explicitly depend on unknown forces in the resultant, which is convenient when studying the control of a material point with the help of additional control material points. Newtonian mechanics, in which Galileo’s principle of relativity is fulfilled, we will call Newton-Galilean mechanics. Thus, closed systems in which forces depend on time and directly on the coordinates and velocities of particles, and not on their differences, and are not invariant with respect to orthogonal transformations, are not Newton-Galilean mechanical systems, but can be considered as subsystems of the systems of Newton-Galileo mechanics under the conditions specified in the theorems. In ordinary Lagrangian mechanics, forces depend directly on coordinates, velocities and time, as a result, the Galilean relativity principle is violated, we get a different set of axioms and Lagrangian mechanics different from Newton-Galilean mechanics. The most important section is made up of Lagrangian mechanics with nonholonomic constraints (rolling of wheels on surfaces, and so on). Nonholonomic classical mechanics is based on the d’Alembert-Lagrange principle (here the principle of Hamilton’s stationary action is violated), which serves as a system-forming factor in our approach. With corrections to the method of variation in vakonomic mechanics, discovered in [14], the principle of stationary action of Hamilton is restored, but the equations of motion here are not the same as the equations in nonholonomic mechanics, which should be, since this is another theory different from nonholonomic mechanics. The relevance and possibility of other mechanics, in addition to Newtonian mechanics, is stated, in particular, in [6] in response to the untenable criticism of vakonomic mechanics: “… Misunderstanding of the essence of the issue lies in a priori confidence in the only possible way to describe the dynamics of systems with constraints (as Euclidean geometry was once considered the only possible one)”.

4 Mechanics with Servoconstraints, Control, Artificial Intelligence In living organisms, the living (brain and the like) controls the movements of the musculoskeletal mechanical subsystem, and in human-machine systems, the living controls the machine. Such systems are modeled in robots, while decision-making about movements is carried out by artificial intelligence. Here we have an integrated backbone factor containing a functional system and Hamilton’s principle. The equations of motion of the mechanical part contain control forces that implement servo-constraints discovered by Béguin in 1922 [5, 6] and vakonomic constraints interpreted as servo-constraints, which are controlled not by external forces, but by changing the inertial properties of the system [15]. An analysis of the axiomatics of nonholonomic mechanics led (see, in particular, [15]) to the discovery of the need for an independent axiom that defines possible displacements. As a result, in addition to the discussed Newton-Galilean mechanics, Lagrangian mechanics, nonholonomic Lagrangian mechanics, and vakonomic mechanics, examples of various physical theories include mechanics with servoconstraints, which differ from


each other in the axiom of determining possible displacements. In [16], new principles of mechanics were established that enriched its axiomatics and generalized the indicated d’Alembert-Lagrange principle, as well as the Hamilton-Ostrogradsky principle, from which the latter are derived by passing to the limit with the addition of anisotropic viscous friction forces. At this stage, the problem of describing the movements of systems with living subsystems is concentrated on modeling the living and its intellectual properties of motion control, which refers to artificial intelligence, including strong artificial intelligence. The fundamental contribution to this area is the further developed works of N.A. Bernshtein [17], in which a number of properties of the movement of living organisms are revealed, including feedback and, in fact, other blocks of the functional system. The categorical model for biomechanical systems proposed in this paper concentrates the direction of research on the composite system-forming factor of such systems, the mechanical part of which is formalized by the principle of Hamilton’s stationary action, as well as by the principles of mechanics that develop it from [16].

5 Summary and Conclusion The paper develops the author’s proposed extension of the concept of a system with a system-forming factor to the case of mechanical and other inanimate systems, and introduces the definition of systemic chaos. This generalization of the concept of a system made it possible to identify a system-forming factor for mechanical systems in the form of the principle of stationary action of Hamilton and its generalizations. The systems approach applied to mechanics poses the problem of their explicit classification based on axioms. Newton-Galilean mechanics and Lagrangian mechanics, in which the Galilean relativity principle is not fulfilled, are separate physical theories (the Lagrangian mechanical system can be considered under the conditions given in the work as a subsystem of the Newton-Galilean system), as well as vakonomic mechanics, classical nonholonomic mechanics and other notable mechanics. The task of further categorical research of mechanical systems is set, steps for its solution are outlined. It has been established those biomechanical systems, including living systems with musculoskeletal subsystems as mechanical subsystems, satisfy the composite systemforming factor integrating functional systems systemforming factor and the mechanical principle of Hamilton’s stationary action. The intellectual properties of biomechanical systems serve as an important basis for their modeling in artificial intelligence, while traditional methods of neural networks [18–21] are not enough, here the possibilities of the categorical approach should be used [3]. The task of mathematical modeling of system-forming factors that reflect the control by living for mechanical movements of both organisms and machines in the case of ergatic and biomachsystems is set, which is an important area in artificial intelligence.

References 1. Anokhin, P.K.: Fundamental Questions of the General Theory of Functional Systems, Principles of Systemic Organization of Functions, pp. 5–61. Nauka, Moscow (1973). (in Russian)


2. Tolokonnikov, G.K.: Informal categorical systems theory. Biomachsystems 2(4), 7–58 (2018). (in Russian) 3. Tolokonnikov, G.K.: Convolution polycategories and categorical splices for modeling neural networks. In: Zhengbing, Hu., Petoukhov, S., Dychka, I., He, M. (eds.) ICCSEEA 2019. AISC, vol. 938, pp. 259–267. Springer, Cham (2019). https://doi.org/10.1007/978-3-03016621-2_24 4. Arnol’d, V.I., Kozlov, V.V., Neishtadt, A.I.: Mathematical Aspects of Classical and Celestial Mechanics. Encyclopaedia of Mathematical Sciences, vol. 3, p. 518. Springer, Berlin (2006). https://doi.org/10.1007/978-3-540-48926-9 5. Kozlov, V.V.: The dynamics of systems with servoconstraints. I. Regul. Chaot. Dyn. 20(3), 205–224 (2015). https://doi.org/10.1134/S1560354715030016 6. Kozlov, V.V.: The dynamics of Systems with servoconstraints. II. Regul. Chaot. Dyn. 20(4), 401–427 (2015). https://doi.org/10.1134/S1560354715040012 7. Tolokonnikov, G.K.: Categorical splices, categorical systems and their applications in algebraic biology. Biomachsystems 5(4), 148–237 (2021). (in Russian) 8. Matrosov, V.M., Anapolsky, L.Yu., Vasiliev, S.N.: Comparison Method in Mathematical Theory of Systems, 480 p. Nauka, Novosibirsk (1980). (in Russian) 9. Whittaker, E.T.: Analytical Dynamics, Izhevsk, Udmurt, 588 p. University (1999). (in Russian) 10. Pars, L.A.: Analytical Dynamics, 636 p. Nauka, Moscow (1971). (in Russian) 11. Trofimovn, V.V., Fomenko, A.T.: Algebra and geometry of integrable Hamiltonian differential equations, 448 p. Factorial, Moscow (1995). (in Russian) 12. Zhuravlev, V.F.: Fundamentals of teretic mechanics, Moscow, 320 p (2001). (in Russian) 13. Appel, P.: Theoretical Mechanics, vol. 2, 487 p. Physico-math. Literature, Moscow (1960). (in Russian) 14. Kozlov, V.V.: Dynamics of systems with non-integrable constraints: I-V, Vestnik Mosk. un-ta, Ser. 1, Mat. Mekhan. 1982(4): 70–76, 1983(3): 102–111, 1987(5): 76–83, 1988(6):51–54. (in Russian) 15. Kozlov, V.V.: Principles of dynamics and servo coupling. Nonlinear Dyn. 11(1), 169–178 (2015). (in Russian) 16. Kozlov, V.V.: On variational principles of mechanics. Prikladnaya Math. Mech. 74(5), 707– 717 (2010). (in Russian) 17. Bernstein, N.A.: Biomechanics and physiology of movements. Selected Psychological Works, 688 p. MPSI, Moscow (2008). (in Russian) 18. Karande, A.M., Kalbandez, D.R.: Weight assignment algorithms for designing fully connected neural network. IJISA 10(6), 68–76 (2018) 19. Rao, D.T.V.D., Ramana, K.V.: Winograd’s inequality: effectiveness for efficient training of deep neural networks. IJISA (6), 49–58 (2018) 20. Hu, Z., Tereykovskiy, I.A., Tereykovska, L.O., Pogorelov, V.V.: Determination of structural parameters of multilayer perceptron designed to estimate parameters of technical systems. IJISA 9(10), 57–62 (2017) 21. Awadalla, H.A.: Spiking Neural Network and Bull Genetic Algorithm for Active Vibration Control. IJISA 10(2), 17–26 (2018)

A New Approach to Search Engine Optimization Based on the Synthesis of a Virtual Promotion Map

Sergey Orekhov
Intelligent Information Technologies and Software Engineering Department of National Technical University Kharkov Polytechnical Institute, Kharkiv 61002, Ukraine
[email protected]

Abstract. Over the past ten years, we have been conducting research in the field of search engine optimization. More than 30 WEB projects of varying complexity were completed (online services, medicine, household goods, jewelry, lumber, auto parts). The results of these projects show two key problems. The first problem is that classic search engine optimization approaches such as Google AdWords advertising, HTML code optimization and building the semantic kernel of a WEB resource no longer provide significant results in a given period of time. The second problem is that there are no exact metrics that show when and with what budget the desired result can be achieved. The main reason for this is that these classic search engine optimization techniques are static, and the online market is constantly changing due to competition. Therefore, we have proposed a new information technology – virtual promotion of a product. Its skeleton is a virtual promotion map. This map is created based on the concept of a cognitive map. That is, it is a method of analyzing the current situation and automatically making a decision. By introducing a virtual promotion map into the project, we create a new decisionmaking mechanism for the implementation of a WEB search engine optimization project. The virtual promotion map includes two levels. The first one describes the Internet nodes where the semantic core of the e-content of our WEB project will be placed. The second level describes the cost of placing the semantic kernel. The second level is a mathematical programming problem, the solution of which gives us an answer on which nodes and for how long it is necessary to place the semantic core in order to obtain the desired result. The article shows a real example (WEB project in the American market of online services), in which a virtual promotion map was applied. Its application gave a tenfold increase in the main key performance indicator (traffic) over a given period of time (9 months) and a given promotion budget. Keywords: Virtual promotion · Search engine optimization · Customer journey map

1 Introduction and Related Works

It is known from the literature [1–3] that the logistics channel represents the physical movement and placement of goods based on a number of logistics operations, such


as: transportation, storage, order processing and cargo processing. Logistics costs for product transportation and storage prevail in relation to other types of operations in the logistics channel. But in the case of virtual promotion (VP) [4], the main emphasis also shifts to the consolidation of knowledge about the product and to the dispersion of this knowledge in the virtual space. Therefore, in the future, we will consider two types (types) of operations: consolidation or transportation and dispersion or storage. In turn, the marketing channel is formed by a number of firms that participate in the purchase and sale of goods from their producers to end consumers. The participants of the marketing channel are: wholesalers; retailers; agents or brokers; retailers, wholesalers. In virtual promotion, these are WEB resources that represent various promotion options: marketplace (retail), Telegram channel (retailers), corporate WEB site (wholesaler), YouTube channel (agent or broker), WEB resources for partner programs (retailers). Thus, we have two levels: the distribution of knowledge about the product and the organizational system for managing this distribution process. The first level actually describes the configuration of the distribution system, and the second level is the basis for building the organizational management structure (OMS). It should be emphasized that these two tasks are closely related, because depending on the configuration of the distribution channel, the organizational structure of management will depend. In turn, OMS can affect the channel configuration. Therefore, these two tasks must be solved on the basis of an iterative process. There are three main logistics operations, such as: consolidation (concentration), customization and dispersion. Consolidation presupposes the presence of appropriate nodes on which knowledge about the product accumulates. It can be a corporate WEB site or the sites of partners (trade intermediaries). According to the theory of logistics, the main goal of consolidation is to reduce the number of transactions in order to minimize costs. Then the correct placement of knowledge gives a quick response from the potential buyers of the product. In the work, this operation should be considered as the operation of creating a semantic core of e-content. That is, concentrated knowledge about the product is formed for further dissemination in the virtual space. After concentration, there is the task of sorting and grouping knowledge about the product into some combinations. This process is called customization. The result of customization determines such options of semantic kernels that encourage consumers to generate leads. The third stage is dispersion, which consists in sending semantic kernels to virtual space nodes, where end consumers can place an order and pay for it online. Next, we will determine (classify) the structure of the distribution channel by the number of nodes where end users read the semantic kernels describing the product. In our case, an intensive distribution channel is considered, which has a large number of virtual space points, which allows it to be intensively filled. Literary sources [1–3] determine that a characteristic feature of the logistics channel is straightness, which is characterized by the number of links of the vertical distribution channel. A distinction is made between direct and indirect distribution. In the first case, the promotion and sale of products is carried out directly to the consumer. 
In the second case, the products successively go through the stages of consolidation, customization


and dispersion. This corresponds to the phased delivery of products, which is considered in this work.

2 Problem Statement

On the basis of a more detailed description of the object of research, we proceed directly to a verbal statement of the problem and of the method of synthesis of the knowledge distribution system. The method proposed in this work is a development of the technology of forming a system of organizational management of logistics in strategic planning [1–3]. The prototype includes the following stages. Stage 1. The configuration of the distribution system corresponds to the formation of the logistics channel. This channel is synthesized on the basis of structural and parametric synthesis of a two-level system. Let us consider these concepts in more detail in relation to the object of research (Fig. 1).

[Fig. 1 shows the two-layer structure of the virtual promotion map: a layer of Internet nodes (Facebook, Amazon, OLX, a corporate WEB site, WordPress) is coordinated with a layer describing the expenses of spreading the semantic kernels (the VP map); the supporting information technology comprises Google API, an XML schema and a CRM/CMS on the front end, and NodeJS, MySQL and WordPos on the back end.]

Fig. 1. The process of building a virtual promotion map

When conducting a structural synthesis, first of all, it is necessary to consider the channel from the point of view of two structural dimensions (components): horizontal and vertical. In our case, the horizontal component has four sectors.


The vertical component includes two levels (Fig. 1). That is, the channel is divided into two levels of distribution. Each level is determined by the number and location of the corresponding nodes. The Internet identifies the location of nodes (logistics centers for online knowledge processing). The best locations for nodes that process product knowledge are marketplaces, social networks and corporate Web resources of partners. They are designed for the virtual distribution of goods and are located between the businesses whose products they promote and the markets they serve. In this work, specific consumers of products, their location and demand for products are considered unspecified, but data about the manufacturer of the product, its price and volumes, location and method of payment for the product are known. Thus, in our case, structural synthesis determines the number of consolidation and customization nodes. Usually, at the first stage, the number of customization nodes is determined, and then, taking into account the interdependence and interconnection between them, a decision is made about the number of nodes for the consolidation of knowledge about the product, that is, logistics centers. The following quantitative factors influence the number and quality of product knowledge consolidation nodes:

– product promotion budgets in the node;
– bandwidth of the node or traffic in the node;
– expert assessment of the attractiveness of the consolidation node;
– the number of opinion leaders working with this node.

Various methods already exist for determining the optimal territorial placement of logistics centers depending on the chosen optimization criterion; they are considered in [1–3]. But there is no optimization criterion and method that solves this problem of structural synthesis. The first attempt to introduce a metric for assessing the quality of a product knowledge consolidation node was presented in [5], where a metric and criterion describing the cost of placing a semantic kernel in an Internet node are given. The special class of optimization problems that allow the coordinates of a logistics center to be determined includes production-transport problems, in which production enterprises, distribution warehouse centers and consumers of products participate. The objective function of such a model is the total cost of movement, storage and other logistics operations associated with the supply of finished products [6–8]. In this research, the main goal is the formation of the organizational structure for managing the product knowledge distribution system. Therefore, in what follows, the problem of selecting consolidation nodes is reduced to selecting some subset of nodes from a predetermined set on the basis of optimization methods.


After the candidate nodes for inclusion in the knowledge distribution level have been determined, parametric synthesis problems are solved on the basis of two main classes of problems:

– various modifications of transport problems, including transport problems with intermediate nodes;
– inventory management tasks.

The criterion for solving these problems is the total financial cost of placing the semantic kernel in the nodes. Thus, a series of nodes is a framework for the implementation of virtual promotion. The construction of a classic logistics channel is usually based on two logistics strategies [1–3]: minimization of financial costs and maximization of the level of consumer service. As an example, customer service can be measured as the percentage of fulfilled orders in relation to the total volume. In the case of virtual promotion, customer service should be considered from the standpoint of the simplicity and availability of online payment for the product by a potential buyer. That is why this work considers the task of strategic planning. It is proposed to introduce the concept of a strategy for ensuring the stability of the distribution channel with respect to various emergency situations. The method (technology) of multi-criteria synthesis of the configuration of the two-level system of distribution of knowledge about the product should then be based on three groups of criteria:

1. Criteria related to the financial costs of constructing and operating the distribution channel.
2. Criteria that ensure the appropriate level of service for product buyers.
3. Criteria for channel resistance to various emergency situations.

Each group corresponds to a channel construction strategy. The specification of the knowledge distribution channel consists of two types of business processes: consolidation and dispersion of semantic kernels. We assume that the following are under consideration: M1 – a set of generators of semantic kernels; M2 – a set of kernel consolidation centers (corporate WEB resources); M3 – a set of customization centers (WEB resources of partners); M4 – a set of consumers, that is, those users who are ready to pay for a product online (Fig. 2). The sets M3 and M4 then coincide, because the user can also make a payment in a customization node. To form a general model of multi-criteria synthesis of the configuration of the knowledge distribution system (semantic kernels), we introduce a number of parameters and variables that are used when solving the problem. First, there are the parameters that are set at the beginning of the synthesis problem and do not change during the structural and parametric synthesis of the channel configuration. Such parameters include the following:

• locations of the set M1 – generators of semantic kernels;
• locations of the set M2 – consolidators of semantic kernels on the Internet;
• locations of customization centers on the Internet (set M3);

Fig. 2. Source information for the synthesis of the distribution channel (diagram: the sets M1 → M2 → M3 → M4).

• nomenclature and laws of distribution of demand for the product by users;
• financial indicators related to the generation of the semantic kernel;
• financial indicators related to the placement of the semantic kernel in the customization centers.

Second, consider the variables that are responsible for structural and parametric channel synthesis based on three strategies (three complex criteria). The specified criteria are related to the financial costs of constructing and operating the distribution channel, as well as to the level of quality of consumer service. These two sets of criteria are contradictory. Therefore, it is necessary to analyze the variables that influence them and to separate the most important (weighty) ones in strategic planning. It is these variables that must be considered during the structural-parametric synthesis of the distribution channel. The first group of criteria is related to the capital and current costs of forming the channel. These criteria include net present value, profitability and discounted payback period. Criteria such as total logistics costs, efficiency of use of capital costs and return on investments in logistics infrastructure are also used. The data in Fig. 2 show the structure of the product knowledge distribution channel on the Internet. The first set of nodes, the generators of semantic kernels, in real life usually includes only one node. Such a node is a corporate WEB site or a customer interaction management system of an enterprise that wants to promote a given product [9]. In turn, the power of the sets M2 and M3 is potentially unlimited. But if we take into account the criteria of the first group, the number of nodes in these sets should be limited. Thus, it is actually necessary to solve the synthesis problem with two levels of intermediate nodes, the sets M2 and M3. Let us create a list of input and output information, as well as the control variables that affect its solution. It should first be understood that the level of demand from users on the Internet is unknown. Next, we assume that the demand for the product follows a normal distribution law and that the length of the cycle is the same. Since this is a task of strategic planning, the duration


of the management cycle reaches one year. This assumption is made on the basis of the classical product life cycle described in marketing theory [10–12]. Verbally, then, the task of synthesizing the configuration of the knowledge distribution channel can be formulated as follows: build the sets M2 and M3 so that the number of nodes in these sets is minimal, given one generator of semantic kernels (the set M1) and under the condition of the maximum number of consumers (the set M4). Let us consider the conditions under which such a problem is solved.

3 Proposed Method

First, to solve such a problem according to the business process of virtual promotion [4, 5], it is necessary to solve the problems of consolidation (set M2) and customization (set M3) in turn; these problems are interdependent when determining the minimum number of nodes in the sets. Second, given the list of nodes from the sets M2 and M3, we do not know the number of potential buyers in the set M4; that is, the set M4 is unknown. The power of the set M4 can be determined only after the technology of virtual promotion has been launched [4, 5]. Thus, in order to build an effective configuration of the distribution system, that is, the sets M2 and M3, it is necessary to proceed iteratively: solve the problem at the first stage and obtain a rational option, place this option on the Internet and receive feedback from potential buyers, that is, calculate the maximum power of the set M4. If the power of M4 is unacceptable, we solve the problem again to replace the configuration with a new one. The process can be stopped when the level of profitability is satisfactory or at the request of the enterprise. Accordingly, the synthesis task is divided into two. The first is synthesis proper, and the second is coordination, when it is necessary to obtain the maximum number of elements of the set M4. The first problem can be solved statically, while the second works in dynamics, that is, a simulation mode is required. According to [4, 5], our task is the task of dispersing knowledge over the nodes included in the sets M2 and M3. This also solves the problem of linking consolidation nodes with customization nodes. The binding of the nodes of the sets M2 and M3 follows from the fact that search on the Internet is carried out by search servers that rely on the PageRank algorithm [13, 14]. The main idea of the algorithm is that the more links there are to the nodes of the set M2 from the nodes of M3, the higher the WEB resource with a given semantic kernel will be placed in the response to a potential buyer's request. Thus, the probability of online ordering and payment increases. In this work, it is proposed to formulate the problem of dispersing knowledge between the nodes of the sets M2 and M3 as a transport problem, which may degenerate into a problem of structural and topological synthesis. Let us introduce a number of parameters and variables used in the formulation of the dispersion problem model: m4 – the given number of users who are ready to pay for the product online; m2 – the possible number of consolidation nodes; m3 – the possible number of customization nodes; μ – the number of consolidation nodes that will be used to place the semantic kernel, under the condition 0 < μ ≤ (m3 + m2); C_kj – costs for placing the


semantic kernel in the jth customization node for the activation of the kth consumer; C_ki – costs for storing the semantic kernel in the ith consolidation node for the activation of the kth consumer; x_kj ∈ {0, 1} – a Boolean variable that equals one if the kth consumer makes an online payment at the jth customization node; x_ki ∈ {0, 1} – a Boolean variable that equals one if the ith consolidation node is used in solving the dispersion problem. Based on the introduced parameters and variables, the model of the dispersion problem is written in the following form. Find the values of the variables x_kj, x_ki that ensure the minimum value of the objective function:

$$F(x_{kj}, x_{ki}) = \sum_{j \in m_3}\sum_{k \in m_4} C_{kj}\, x_{kj} + \sum_{i \in m_2}\sum_{k \in m_4} C_{ki}\, x_{ki}, \qquad (1)$$

subject to

$$\sum_{j \in m_3} x_{kj} \ge 1, \quad x_{kj} \in \{0, 1\}, \; k \in m_4, \; j \in m_3, \qquad (2)$$

$$\sum_{i \in m_2}\sum_{k \in m_4} x_{ki} \le \mu \cdot m_4, \quad x_{ki} \in \{0, 1\}, \; k \in m_4, \; i \in m_2, \qquad (3)$$

$$\sum_{j \in m_3}\sum_{k \in m_4} x_{kj} \le \sum_{i \in m_2}\sum_{k \in m_4} x_{ki}, \quad x_{ki} \in \{0, 1\}, \; k \in m_4, \; i \in m_2. \qquad (4)$$

Objective function (1) determines the total cost of receiving orders through customization and consolidation centers. Condition (2) specifies that each product consumer must be associated with at least one customization center. Condition (3) limits the number of consolidation nodes; their number depends on the number of potential customers and the total number of both customization and consolidation nodes. Condition (4) states that if a node is not used, the consumer cannot place an order from it. The physical meaning of condition (4) is that the number of consolidation nodes should be no smaller than the number of customization nodes. This reflects the real situation, because the buyer trusts the customization node that has the largest number of links from the consolidation nodes. Thus, we have the linear programming problem (1)–(4) with Boolean variables. Problem (1)–(4) covers the criteria of the first group, i.e. minimization of the financial costs of constructing the distribution channel. In addition, the measure of demand, i.e. the number of orders m4, is uncertain. For its definition or evaluation, consider the second group of criteria. The second group of criteria expresses the desired level of service in relation to the buyer of the promoted product. In our case, the concept of service level is expressed in the online payment operation at the customization node. Under modern conditions, this operation is based on a percentage of the sale amount, which the node receives from the buyer. It is this amount that matters for the user's decision to buy the product online. Thus, it is necessary to construct a problem that minimizes user costs and thereby ensures the maximum level of service at the customization node: the lower the cost of online payment, the higher the user's satisfaction with the purchase of the product. The second group of criteria can be described using a neural network, which provides an estimate of the level of service at the node. Thus, problem (1)–(4) describes the virtual promotion map from the perspective of the costs of processing the semantic kernel in a separate node. The solution of this problem provides the list of nodes that will be included in the virtual promotion map.
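To make the structure of model (1)–(4) tangible, here is a minimal sketch of how such a Boolean linear program could be solved with an off-the-shelf solver. It is purely illustrative and not the author's implementation: the set sizes, the cost matrices and the parameter μ are invented, and the open-source PuLP package is assumed to be available.

```python
# Illustrative instance of problem (1)-(4): pick customization (j) and consolidation (i)
# nodes that minimize the total cost of placing/storing the semantic kernel.
import pulp

m2, m3, m4 = 3, 4, 2      # assumed numbers of consolidation nodes, customization nodes, consumers
mu = 2                    # assumed parameter, 0 < mu <= m2 + m3
C_kj = [[3, 4, 2, 5],     # assumed placement costs C[k][j]
        [2, 6, 3, 4]]
C_ki = [[3, 3, 3],        # assumed storage costs C[k][i]
        [3, 3, 3]]

x_kj = {(k, j): pulp.LpVariable(f"xkj_{k}_{j}", cat="Binary") for k in range(m4) for j in range(m3)}
x_ki = {(k, i): pulp.LpVariable(f"xki_{k}_{i}", cat="Binary") for k in range(m4) for i in range(m2)}

prob = pulp.LpProblem("dispersion", pulp.LpMinimize)
# Objective (1): total placement + storage cost
prob += (pulp.lpSum(C_kj[k][j] * x_kj[k, j] for k in range(m4) for j in range(m3)) +
         pulp.lpSum(C_ki[k][i] * x_ki[k, i] for k in range(m4) for i in range(m2)))
# Constraint (2): each consumer is tied to at least one customization node
for k in range(m4):
    prob += pulp.lpSum(x_kj[k, j] for j in range(m3)) >= 1
# Constraint (3): limit on the use of consolidation nodes
prob += pulp.lpSum(x_ki.values()) <= mu * m4
# Constraint (4): customization links cannot exceed consolidation links
prob += pulp.lpSum(x_kj.values()) <= pulp.lpSum(x_ki.values())

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("total cost:", pulp.value(prob.objective))
print("customization nodes used:", sorted({j for (k, j), v in x_kj.items() if v.value() == 1}))
print("consolidation nodes used:", sorted({i for (k, i), v in x_ki.items() if v.value() == 1}))
```

In the paper's setting, the costs would come from the node metrics of [5] and the demand estimate m4 from the traffic forecast described in the next section.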


The linear programming problem (1)–(4) is solved by standard methods [6–8]. To implement the method for solving the synthesis problem, we consider a verbal description of the algorithm. The main difficulty in implementing the algorithm is that, in order to solve problem (1)–(4), it is necessary to know an estimate of consumer demand on the Internet, that is, the value of traffic. This estimate can be determined by solving an additional problem, either by implementing a neural network or by expert evaluation. In turn, today the total cost of renting a node for hosting the semantic kernel depends on the number of nodes in the product knowledge distribution system.

4 Proposed Algorithm

The paper proposes the following stages of the algorithm. We assume that we have one generator of semantic kernels on the Internet. As a rule, the generator of the semantic kernel is a content management system or a customer interaction management system. Step 1. According to the method proposed in [15], build the semantic kernel of e-content, which is placed on a given corporate WEB resource as the main source of e-content about the product. Step 2. Analyze possible consolidation nodes of the semantic kernel. The possible number of such nodes is m2. The analysis is carried out on the basis of Google Analytics WEB statistics for Internet queries similar to the semantic kernel. The final number of such nodes is the value M2. Step 3. Analyze possible customization nodes of the semantic kernel. The possible number of such nodes is m3. The analysis is likewise carried out on the basis of Google Analytics WEB statistics for Internet queries similar to the semantic kernel. Each customization node also has its own internal statistics of end-user visits; therefore, it is necessary to compare these statistics with Google Analytics. The final number of nodes is M3. Step 4. Given the estimates of the parameters M2 and M3, a neural network can be constructed to predict the traffic at a given consolidation or customization node. We form the neural networks β_i(t) = NN_i(t), β_j(t) = NN_j(t), j ∈ m3, i ∈ m2. The parameter t defines the time interval; usually it is equal to either 15 days or one month. The paper suggests a time interval of one month. Step 5. Finally, the sum of the two quantities Σ_j β_j(t) and Σ_i β_i(t) gives us the estimated value of m4(t). Step 6. We solve problem (1)–(4) to determine the final list of consolidation and customization nodes. That is, a virtual promotion map is formed as a set of consolidation and customization nodes. In addition, we present the description of the algorithm in the UML language – Fig. 3. The last stage of the algorithm is devoted to the formation of a virtual promotion map. It looks similar to Fig. 1, which shows the consolidation and customization nodes. But each node has links to others. This is an important point that reflects the


Fig. 3. Algorithm of the method (activity diagram: activate a generator → synthesis of semantic kernel of e-content → forming a set of consolidation nodes → forming a set of customization nodes → forecasting traffic in nodes → solving task (1)–(4); if no solution — repeat, otherwise defining the final set of nodes → forming the map of virtual promotion).

performance of the classic PageRank search algorithm [13, 14]. Thus, the map takes the form of a tree of nodes, where the generator, i.e. the corporate WEB resource, acts as the primary source of the semantic kernel. The semantic kernel describes the product, which means that the number of kernels corresponds to the number of needs that the product covers. The number of virtual promotion maps is then equal to the number of need classes, i.e. the number of semantic kernels that can be built. The algorithm also covers the situation when problem (1)–(4) has no solution, that is, when the proposed list of nodes and the costs they entail are unacceptable both for a potential buyer and for the enterprise promoting the product on the Internet. In this case, it is necessary to form a new list of nodes and solve the problem again. It is quite possible that even changing the number of nodes does not make the problem solvable; then it is necessary to change the method, using neural networks and a genetic algorithm [16, 17].
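To summarize the control flow of Steps 1–6 and the retry branch just described, the following skeleton is offered purely as an illustration; every helper in it is a toy stub standing in for the real operations (kernel synthesis per [15], Google Analytics queries, the neural-network traffic forecast, and the solution of problem (1)–(4)), none of which is specified by this code.

```python
# Hypothetical skeleton of the proposed algorithm (Sect. 4); all helpers are toy stubs.
def build_semantic_kernel(site):                      # Step 1: kernel of e-content (method of [15])
    return ["product keyword 1", "product keyword 2"]

def select_candidate_nodes(kernel, kind):             # Steps 2-3: candidate nodes from WEB statistics
    return [f"{kind}-node-{n}" for n in range(3)]

def forecast_traffic(node, horizon_days=30):          # Step 4: per-node traffic forecast (stub value)
    return 10.0

def solve_dispersion_problem(m2_nodes, m3_nodes, m4_estimate):   # Step 6: problem (1)-(4)
    return {"consolidation": m2_nodes[:1], "customization": m3_nodes[:1]}

def build_vp_map(site, acceptable_m4=50, max_iterations=5):
    kernel = build_semantic_kernel(site)
    for _ in range(max_iterations):
        m2_nodes = select_candidate_nodes(kernel, "consolidation")
        m3_nodes = select_candidate_nodes(kernel, "customization")
        m4_estimate = sum(forecast_traffic(n) for n in m2_nodes + m3_nodes)   # Step 5
        chosen = solve_dispersion_problem(m2_nodes, m3_nodes, m4_estimate)
        if chosen is not None and m4_estimate >= acceptable_m4:
            return chosen                              # final node sets = virtual promotion map
        kernel.append("refined keyword")               # no acceptable solution: re-form and retry
    return None

print(build_vp_map("corporate-web-site.example"))
```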


5 Experiment

In this work, a WEB project for the American market of WEB services was tested. The basic WEB resource of this project is the URL www.celestialtiming.com. Currently, the project is fully operational and is constantly online. Figure 4 shows the scheme of construction of such a map, with the main directions of its construction.

Fig. 4. Sample of VP map (diagram labels: Google catalogue; M1 — www; M2 — US catalogues 1…5; M3 — US blogs in NY, Chicago, Carolina, Denver, Hawaii).

Fig. 5. Real list of nodes in VP map.


In turn, Fig. 5 presents the real WEB resources (map of nodes) where the semantic kernel was placed. We must remember that this map was built in 2017. At that time, in the American segment of online services, site catalogues should be considered as nodes of consolidation. First of all, such a directory is the search server from Google and its web services, in particular Google Search Console. Existing WEB blogs (forums) in different parts of the United States of America were chosen as the centers of customization of the semantic kernel. A more detailed description of the nodes is given in Fig. 5. We test the process of solving problem (1)–(4) using the example of data from our WEB project at the beginning of 2017. According to Figs. 4 and 5, we have thirteen nodes of consolidation and customization. Then, under the condition of receiving only one online payment, one can construct the following mathematical programming problem:

$$F(x_j, x_i) = \sum_{j=1}^{24} 3x_j + \sum_{i=1}^{8} 3x_i, \qquad (5)$$

$$\sum_{j=1}^{24} x_j \ge 1, \quad x_j \in \{0, 1\}, \; j = 1, \ldots, 24, \qquad (6)$$

$$\sum_{i=1}^{8} x_i \le 32, \quad x_i \in \{0, 1\}, \; i = 1, \ldots, 8, \qquad (7)$$

$$\sum_{j=1}^{24} x_j \le \sum_{i=1}^{8} x_i. \qquad (8)$$

Let us analyze the solution of problem (5)–(8). Among the nodes indicated in Fig. 5, thirty-two positions could be used as consolidation and customization nodes. Moreover, at that time, the budget for placing the semantic kernel in the nodes was approximately one hundred dollars, the duration of placement was unlimited, and the cost per node was approximately three dollars. Condition (7) reflects the fact that the customization nodes lie above the consolidation nodes; but without consolidation nodes, online ordering in the customization nodes is impossible, because of the principle of links from one node to another. The minimum value of criterion (5) is obtained at the point {1, 0, 1, 0}. This confirms the real situation in which at least one consolidation node must correspond to each customization node; that is, at least one link to the semantic kernel in the customization node comes from a third-party URL other than the corporate WEB site itself. Figure 6 shows the dependence of the costs of placing the semantic kernel on the number of online orders according to the solution of problem (5)–(8). It is taken into account that the number of orders grows depending on the task set by the virtual promotion customer. The obtained results coincide with the real WEB statistics recorded during the implementation of the test project (Fig. 7). Namely, according to our map, a budget of one hundred dollars is guaranteed to give a number of orders at the level of 70–80 units. This can be seen in Fig. 7. Furthermore, we understand that solving problem (5)–(8) is a tuning step that shows minimal results.
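The minimum just described can be checked by direct enumeration. Because every node costs the same three dollars in (5), the objective depends only on how many customization and consolidation nodes are switched on, so a tiny, purely illustrative script suffices:

```python
# Sanity check of the toy problem (5)-(8): enumerate node counts instead of all 2^32 vectors.
best = None
for n_j in range(0, 25):                   # active customization nodes (24 candidates)
    for n_i in range(0, 9):                # active consolidation nodes (8 candidates)
        if n_j >= 1 and n_i <= 32 and n_j <= n_i:     # constraints (6), (7), (8)
            cost = 3 * n_j + 3 * n_i                  # objective (5)
            if best is None or cost < best[0]:
                best = (cost, n_j, n_i)
print(best)   # (6, 1, 1): one customization node backed by one consolidation node, six dollars
```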


Fig. 6. Costs for placing the semantic kernel according to the virtual promotion map (chart: expenses on semantic kernel spreading, 0–1400, versus the amount of online orders, 1–300).

But it was the use of the map that produced the positive effect on the number of orders, which for this WEB resource means the number of online user registrations. This fact is also confirmed by the internal statistics of this WEB project (Fig. 8). Accordingly, the solution of problem (5)–(8) confirms the trends in the development of the virtual promotion of this first project, in accordance with the requirements for the solution of problem (1)–(4). Thus, using the semantic kernel together with the promotion map had a positive effect in 2017–2019 for the first test project of virtual promotion, which can be seen from the statistics in Fig. 8.

Fig. 7. Traffic WEB statistics of the WEB project (the moment of applying the VP map is marked).


Fig. 8. Database of online users of the WEB project (customers within the testing time period).

6 Conclusion

The article presents a real example of using a new search engine optimization methodology based on a cognitive map of the online market [18, 19]. Such a map shows a virtual path from the seller of goods to a potential buyer through Internet nodes. The map shows the movement of the semantic kernel from the WEB resource of the seller to the point where goods are ordered online. In turn, the semantic kernel is a concise description of a product on the Internet in the form of a set of keywords. The map has two levels. The first level describes a set of Internet nodes, which are divided into three groups (Figs. 4 and 5). The first group of nodes is the generators of the semantic kernel. The second group is the nodes of consolidation and dispersion of the semantic kernel. The third group is the customization nodes, i.e. the nodes for online ordering of our goods. Therefore, we generate a semantic kernel and then disperse it on the Internet. As soon as the semantic kernel is in the order node, we receive an online order. At the same time, we also need an estimate of the costs of implementing these three operations. Therefore, the second level of the map is the mathematical model for assessing the quality and effectiveness of the map (formulas (1)–(4)). The article also shows a real example of building a virtual promotion map for a WEB resource selling online services in the US market in 2017–2019. This example showed the effectiveness of the map: the traffic and the number of online users of this WEB resource increased tenfold. In addition, if we compare the effectiveness of the virtual promotion map and the classic search engine optimization method, we can draw the following conclusions: 1) Firstly, the map does not contradict, but complements the classical approach. For example, in the case of conventional optimization, we place the semantic kernel in partner nodes. The map does not negate this approach, but strengthens it by placing


the kernel in those Internet nodes where the probability of an online order in this segment of the online market is maximal. 2) Secondly, in the classical approach there is no direct relationship between the actions that we perform and the result in the form of an online order. That is, we cannot clearly predict whether orders will follow from the results of our promotion; the classical promotion methodology gives no such guarantees, and much has been written about this fact. In the case of the map, its second level (problem (1)–(4)) gives a clear answer to this question. 3) Thirdly, the virtual promotion map shows the behavior of a potential buyer on the Internet when he wants to place an online order for a given product. That is, if the map is successful, then it actually describes the map of the online market. Such a result is important for any marketer or marketing management service in the enterprise. The purpose of our further work is the software implementation of a special virtual promotion component and its testing on a wider class of WEB resources. Such a component will work as a machine learning system that accumulates information about successful maps in order to generate new ones automatically.

References 1. Christopher, M., Peck, H.: Marketing Logistics, 2nd edn, 169 p. Butterworth-Heinemann, Great Britain (2003) 2. Liang, Z., Chaovalitwongse, W.A., Shi, L.: Supply Chain Management and Logistics. Innovative Strategies and Practical Solutions, 277 p. CRC Press, Boca Raton (2016) 3. Reveillac, J.-M.: Modeling and Simulation of Logistics Flows. Theory and Fundamentals, 362 p. Wiley, New York (2017) 4. Orekhov, S.: Analysis of virtual promotion of a product. In: Hu, Z., Petoukhov, S., Yanovsky, F., He, M. (eds.) ISEM 2021. LNNS, vol. 463, pp. 3–13. Springer, Cham (2022). https://doi. org/10.1007/978-3-031-03877-8_1 5. Orekhov, S., Malyhon, H.: Metrics of virtual promotion of a product. In: Bulletin of the National Technical University “KhPI”. System Analysis, Control and Information Technology, №. 2(6), pp. 23–26. NTU «KPI», Kharkiv (2021) 6. Thie, P.R., Keough, G.E.: An Introduction to Linear Programming and Game Theory, 3rd edn, 476 p. Willey, New York (2008) 7. Matousek, J.: Understanding and Using Linear Programming, 229 p. Springer, Berlin (2007). https://doi.org/10.1007/978-3-540-30717-4 8. Luenberger, D.G.: Linear and Nonlinear Programming, 547 p. Springer, Cham (2015) 9. Ziakis, C., Vlachopoulou, M., Kyrkoudis, T., Karagkiozidou, M.: Important factors for improving Google search rank. Future Internet 11(32), 3–14 (2019) 10. Kotler, P., Keller, K.L.: Marketing Management, 812 p. Prentice Hall, Hoboken (2012) 11. Swaim, R.W.: The Strategic Drucker. Growth Strategies and Marketing Insights from the Works of Peter Drucker, 324 p. Wiley, Singapore (2010) 12. Bendle, N.T., Farris, P.W., Pfeifer, P.E., Reibstein, D.J.: Marketing Metrics. The Manager’s Guide to Measuring Marketing Performance, 3rd edn, 456 p. Pearson Education, Inc. (2016) 13. Gozali, A., Borna, K.: SMAODV: a novel smart protocol for routing in ad-hoc wireless networks using the PageRank algorithm. Int. J. Comput. Netw. Inf. Secur. 9, 46–53 (2015) 14. Srivastava, A.K., Garg, R., Mishra, P.K.: Discussion on damping factor value in PageRank computation. Int. J. Intell. Syst. Appl. 9, 19–28 (2017)


15. Orekhov, S., Malyhon, H., Goncharenko, T.: Mathematical model of semantic kernel of WEB site. In: CEUR Workshop Proceedings, vol. 2917, pp. 273–282 (2021) 16. Moza, M., Kumar, S.: Finding K shortest paths in a network using genetic algorithm. Int. J. Comput. Netw. Inf. Secur. 5, 56–73 (2020) 17. Ogedengbe, I.I., Akintunde, M.A., Dahunsi, O.A., Bello, E.I., Bodunde, P.: Multi-objective optimization of subsonic glider wing using genetic algorithm. Int. J. Intell. Syst. Appl. 2, 14–25 (2022) 18. Smarandache, F., Kandasamy, W.B.V.: Fuzzy Cognitive Maps and Neutrosophic Cognitive Maps, 213 p. Xiquan Phoenix (2003) 19. Peterson, J.B.: Maps of Meaning: The Architecture of Belief, 403 p. Material (1999)

Software Reliability Models: A Brief Review and Some Concerns

Md. Asraful Haque(B)

Computational Unit, Zakir Husain College of Engineering and Technology, Aligarh Muslim University, Aligarh, India
[email protected]

Abstract. Software reliability growth models (SRGMs), which statistically interpolate past failure data from the testing phase, are widely used to assess software reliability. Over the last four decades, researchers have devoted much effort to the problem of software reliability and suggested more than 200 reliability growth models. The common aim of all the models is to reduce the cost of the testing process and improve the reliability of the end product. In this review article, the different types of SRGMs, examples of them and current research trends on this topic are discussed. It also discusses a few issues that should be addressed and might become the focus of further research. Keywords: Software reliability model · SRGM · NHPP model · Reliability engineering · Software testing

1 Introduction

A software system's reliability improves as testing progresses. Software must be released at some point; any additional delay results in unacceptable loss. In order to decide when to terminate testing and release the software at an intended level of reliability, the domain of software reliability modeling has drawn a lot of interest. There are mainly two approaches to software reliability estimation: the deterministic method and the probabilistic method. Performance measures of the deterministic type are obtained by using different software metrics, such as Halstead's metric and McCabe's complexity metric. These metrics are produced by studying the source code and its structure; they do not involve any probabilistic events. The probabilistic approaches include different statistical and analytical models that mostly rely on failure incidences for the reliability assessment. These statistical models are known as SRGMs. Researchers have concentrated on developing new SRGMs that could better match historical data, which are then utilized to produce quantitative findings. The models are based on a simple concept: if the past record of fault detection and correction processes follows a certain pattern, it is possible to derive a mathematical form of that pattern. The non-homogeneous Poisson process (NHPP) models are capable of providing an analytical set-up for the phenomenon of software defect elimination. In NHPP, the testing progress pattern is represented by


the mean value function 'm(t)', which counts the total number of errors identified over the time period 't'. The formula used in NHPP to determine the probability of exactly 'η' failures being observed within the time frame (0, t) is as follows [1, 2]:

$$P\{N(t) = \eta\} = \frac{e^{-m(t)}\,[m(t)]^{\eta}}{\eta!}, \qquad \eta = 0, 1, 2, 3, \ldots$$

where N(t) denotes the counting process of the NHPP. When reality is abstracted via a model, a few assumptions are required. Based on these presumptions, the model's parameters and structure are chosen. Initial fault count and fault detection rate are two frequently utilized parameters in SRGMs. Researchers keep coming up with novel reliability models at regular intervals. Those models have their pros and cons. A good software reliability model should be:

1) simple, flexible and easy to use;
2) designed with the least number of parameters;
3) based on realistic assumptions;
4) based on the consideration of all variables that impact reliability;
5) widely applicable across different software failure datasets;
6) a good predictor of future failures.
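Returning to the NHPP formulation above, the probability expression is easy to evaluate once a mean value function is chosen. The snippet below is only an illustration with assumed parameter values, using the G-O form m(t) = a(1 − e^{−bt}) discussed later in Sect. 3; the final line uses the standard NHPP property that the number of failures in an interval is Poisson with mean m(t+Δ) − m(t).

```python
# Illustrative evaluation of the NHPP failure-count probability with an assumed G-O mean value function.
import math

def m_go(t, a=100.0, b=0.05):                 # assumed a (initial faults) and b (detection rate)
    return a * (1.0 - math.exp(-b * t))

def prob_exactly(eta, t, m=m_go):
    mt = m(t)                                  # P{N(t) = eta} = e^{-m(t)} m(t)^eta / eta!
    return math.exp(-mt) * mt ** eta / math.factorial(eta)

t = 10.0                                       # e.g. ten weeks of testing
print(f"expected faults by t={t}: {m_go(t):.1f}")
print(f"P(exactly 40 failures by t={t}) = {prob_exactly(40, t):.4f}")
print(f"P(no failure in (t, t+1])      = {math.exp(-(m_go(t + 1) - m_go(t))):.4f}")
```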

This is a review paper. The types of reliability models and a brief history of those models are covered in Sects. 2 and 3, respectively. The main aim of this article is to alert researchers to the limitations of software reliability models in order to improve the accuracy of estimated reliability. These limitations are discussed in Sect. 4.

2 Types of SRGMs

Since the 1970s, a wide range of SRGMs have been designed under a variety of presumptions and testing conditions. The fundamental justification for recommending such models is to enhance the precision of software reliability prediction, which subsequently assists in reducing testing costs. The SRGMs examine the failure pattern employing different probabilistic or statistical techniques. Based on the strategies adopted, these models may be divided into two classes: data-domain models and time-domain models (Fig. 1).

2.1 Data Domain Models

They are premised on the notion that if all possible input configurations of a system are recognized, an estimate of the system's reliability may be obtained by checking all the corresponding outputs. However, in a practical context, it is difficult to determine all input/output combinations. The data-domain models are further divided into input-domain models and fault seeding models. Input-domain models estimate software reliability by testing the software with some randomly selected inputs. Examples of such models include the Nelson model [3], the Ramamurthy and Bastani model [4], etc. In fault seeding


models, some faults are artificially seeded in a software system. The system reliability is then measured by determining the percentage of seeded faults that remain undiscovered at the end of the testing activities. Some models of this category are Mills seeding model [5], Cai model [6] etc. 2.2 Time Domain Models The time-domain models keep track of the pattern of software failure throughout the testing and use it to illustrate how software performance changes over time. They are further classified as NHPP models, Markov models, Bayesian models etc. In NHPP models, the debugging process is defined as a counting process with a mean value function. These models are again categorized as finite-failure models and infinite-failure models. Finite-failure models consider that a software product will be error-free after an endless amount of testing. According to infinite-failure models, it is impossible for software to be absolutely error-free. The Markov models represent software as a set of finite states with each state equivalent to a fault or failure. The key feature of such models is that future failure is determined by the current state of the software rather than the history of previous failures. In Bayesian models, the parameters are treated as random variables. They first create a prior distribution for the model parameters before examining the test data and later utilize the test data to update the initial assessment.
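As a concrete illustration of the fault seeding idea, the commonly used capture–recapture style estimate (not spelled out in the text above) infers the total indigenous fault content from the share of seeded faults that testing manages to rediscover. All counts below are invented:

```python
# Illustrative fault-seeding estimate with assumed (invented) counts.
seeded_total = 20         # faults deliberately injected before testing
seeded_found = 15         # injected faults rediscovered by the test team
indigenous_found = 60     # genuine (non-seeded) faults discovered

# Assuming seeded and indigenous faults are equally likely to be found:
estimated_total = indigenous_found * seeded_total / seeded_found
estimated_remaining = estimated_total - indigenous_found
print(f"estimated total indigenous faults:   {estimated_total:.0f}")     # 80
print(f"estimated faults still undiscovered: {estimated_remaining:.0f}") # 20
print(f"seeded faults still hidden:          {1 - seeded_found / seeded_total:.0%}")
```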

Fig. 1. Types of software reliability models

3 Literature Review

A huge and diverse collection of literature exists on reliability modeling. It is infeasible to mention all of the SRGMs due to their large number as well as the non-availability of all related journals and publications. This section briefly covers the development history of many models that have received a lot of attention from researchers.


In 1972, the Jelinski-Moranda (J-M) model [7] was suggested as a time-between-failures model. Many SRGMs have since been developed as upgraded and modified variants of this model. For example, the Schick-Wolverton or S-W model [8] is analogous to the J-M model, with the exception that the hazard function is considered to be proportional to the system's present fault count and the interval since the previous failure. The Goel-Okumoto (G-O) model [9] is a very simple and popular NHPP model that uses a constant fault detection rate in reliability modeling. Goel [10] later modified this model by introducing the idea of a time-dependent failure rate. A model proposed by John Musa and Okumoto [11] represents the idea that faults identified earlier have a bigger influence on decreasing the failure intensity function than faults discovered later. Yamada et al. [12] developed the delayed S-shaped (DSS) model by suggesting that the cumulative number of failures discovered usually follows an S-shaped curve rather than an exponential one. This model considers the experience and learning capacity of a testing team. Ohba's inflection S-shaped (ISS) model considers the fact that some defects are not visible until the removal of others [13]. Yamada et al. [14, 15] presented three reliability models that account for the amount of testing effort invested in the testing stage. The time-variant nature of the testing-effort function was characterized by exponential and Rayleigh curves in [14] and by a Weibull curve in [15]. All three models are formulated as non-homogeneous Poisson processes. In 1838, the Belgian mathematician P. Verhulst introduced the idea of the logistic model as a biological model to calculate population growth. Yamada and Osaki [16] used this model to estimate software reliability growth under the assumption that the fault detection rate will not be exponential for very long. Duane [17] proposed a reliability model in 1962 to examine the reliability of various aircraft systems. Later a modified Duane model [18] was proposed to evaluate software reliability. Xie and Zhao also proposed a modification of the Duane model, which is known as the Log Power model [19]. It is an NHPP-based model with a convenient graphical representation. Khoshgoftaar [20] first introduced a generalized k-stage model based on the Erlang cumulative distribution function, where k = 1 provides the structure of the G-O model and k = 2 provides the DSS model. The versions with k = 3 and k = 4 are widely used in practice. Hossain and Dahiya [21] suggested a modified and improved version of the G-O model, known as the H/D-GO model. The authors showed that the maximum likelihood estimation method (MLE) does not always produce reasonable results for model parameters. Thus they added the important criterion that the parameter estimates have positive and finite values. Chen suggested a two-parameter Weibull extended model [22] that offers certain useful qualities when compared to other Weibull extended models. First, the bathtub-shaped failure rate function can be modeled using only two parameters. Second, the shape parameter's confidence intervals, as well as the joint confidence regions for the two parameters, are closed. However, this model is not flexible and lacks a scaling parameter. Kapur et al. [23] developed a model which is flexible enough to describe a variety of software reliability/failure curves. In terms of testing effort, the model implies that the rate of fault reduction is time dependent.
It indirectly integrates the testing team’s ability to learn. The model also divides the errors into two categories based on their severity. The quality and effectiveness of a testing process may be assessed using testing coverage. Chang et al. [24] presented a software reliability model with the help of a


time-dependent testing coverage function. It is an NHPP-based model that considers the uncertainty of the operational stage. Recent research on the reliability estimation process has focused on improving the prediction accuracy of the models. New modeling approaches are based either on the use of various machine learning techniques, on the removal of unrealistic assumptions, or on the evaluation of the influencing aspects present in the testing and/or operational environment [25–29]. Standard machine learning methods may be effectively used to create a probability distribution and evaluate faults in software reliability assessment, which addresses learning processes and fault types. Park et al. [30] introduced a neural network-based model that depends entirely on failure data from the testing phase and requires no prior assumptions. For training, the model employs the cascade-correlation algorithm. Tian and Noore [31] proposed a software reliability prediction method in which the SVM learning method is applied to the failure dataset to comprehend the intrinsic temporal features of the software failure sequence. Kiran and Ravi [32] constructed ensemble models based on several AI techniques (i.e. neural network, neuro-fuzzy system, TreeNet, etc.) to effectively estimate software dependability. Zheng [33] demonstrated that neural network based models are effective alternatives to NHPP models. Sudharson and Prabha [34] proposed an efficient model incorporating an EM algorithm that uses both a Gaussian function and the k-means algorithm to identify faults in software. Haque and Ahmad [35] developed a model based on the idea of fault removal efficiency "ϕ", the ratio of fixed to discovered defects during testing. The model is adaptable to all real values of "ϕ" (i.e. ϕ ≤ 1 or ϕ ≥ 1). One major issue that is inevitable in the software development process is uncertainty. Various forms of uncertainty may exist in an SRGM and interact with one another, affecting the overall estimated reliability. Software reliability prediction under uncertain circumstances has lately piqued the interest of researchers, and a few models have been suggested in this regard. Zhang et al. [36] developed a model that captures the variation between the testing phase and the operational phase. They included a calibration factor to account for the discrepancy between the estimated failure rate and the observed failure rate. Leung [37] suggested three optimization models considering an uncertain operational profile, with the aim that users should not suffer if the operational profile deviates from the expected one. Garmabaki et al. [38] suggested a Weibull model to determine software reliability using a fuzzy approach that considers the uncertainty effects of both the testing and operational profiles through a unit-free environmental factor. Haque and Ahmad [39] modified the Goel-Okumoto model by simply inserting an uncertainty parameter into the equation. Haque and Ahmad [40] developed a logistic growth model that illustrates how, in an uncertain environment, the fault detection rate grows with time until the number of detected faults reaches its maximum value. The model uses a special variable to indicate the overall impact of all potential uncertainty. A systematic survey was conducted based on expert judgment and historical data to point out the major uncertain factors and assign some weightage to them. Pham et al. have presented a wide range of models [41–46] that consider the uncertainty of operating environments under different testing assumptions. Software failure is commonly acknowledged to be a deterministic process. However, we are unable to properly capture all of the aspects that determine the failure process of sophisticated software due to our limited knowledge.


Most likely, this is why researchers have been searching for an accurate SRGM for the past four decades. A set of selected models along with their equations is listed in Table 1. Most of the models (Sl. No. 1 to 14) employ "a" to represent the starting fault count, "b" to represent the rate at which faults are discovered and "t" to represent time. Additional symbols are typically employed for constants, scale and shape parameters of the fault distribution curve, etc. However, the last model (Sl. No. 15) uses "N" for the initial fault count, "a" for the scale parameter and "b" for the shape parameter.

Table 1. A set of well-known SRGMs

Sl. No. | Model | Mean value function m(t)
1 | G-O Model [9] | $a(1-e^{-bt})$
2 | Generalized Goel Model [10] | $a(1-e^{-bt^{c}})$
3 | Musa-Okumoto Model [11] | $a\ln(1+bt)$
4 | DSS Model [12] | $a\left(1-(1+bt)e^{-bt}\right)$
5 | ISS Model [13] | $\dfrac{a(1-e^{-bt})}{1+\beta e^{-bt}}$
6 | Yamada Exponential Model [14] | $a\left(1-e^{-b\alpha(1-e^{-\beta t})}\right)$
7 | Yamada Rayleigh Model [14] | $a\left(1-e^{-b\alpha\left(1-e^{-\beta t^{2}/2}\right)}\right)$
8 | Yamada Weibull Model [15] | $a\left(1-e^{-b\alpha\left(1-e^{-\beta t^{\gamma}}\right)}\right)$
9 | Logistic Growth Model [16] | $\dfrac{a}{1+ke^{-bt}}$
10 | Modified Duane Model [18] | $a\left(1-\left(\dfrac{b}{b+t}\right)^{c}\right)$
11 | Log Power Model [19] | $a\ln^{b}(1+t)$
12 | Generalized K-stage Model [20] | $a\left(1-\sum_{n=0}^{k-1}\dfrac{e^{-bt}(bt)^{n}}{n!}\right)$
13 | H/D-GO Model [21] | $\log\left[\left(e^{a}-c\right)/\left(e^{a e^{-bt}}-c\right)\right]$
14 | Chen's Model [22] | $1-e^{-\lambda\left(e^{(at)^{\beta}}-1\right)}$
15 | TC Model [24] | $N\left(1-\left(\dfrac{\beta}{\beta+(at)^{b}}\right)^{\alpha}\right)$
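To show how such mean value functions meet failure data in practice, the sketch below fits the G-O entry of Table 1 to a small, invented failure dataset by nonlinear least squares (maximum likelihood, mentioned above in connection with the H/D-GO model, is the other common choice). NumPy and SciPy are assumed to be available; nothing here comes from the cited studies.

```python
# Illustrative least-squares fit of the G-O mean value function m(t) = a(1 - e^{-bt})
# to invented weekly cumulative failure counts.
import numpy as np
from scipy.optimize import curve_fit

def m_go(t, a, b):
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 13)                                                   # 12 weeks of testing
cum_failures = np.array([9, 17, 25, 31, 37, 41, 45, 48, 51, 53, 55, 56])   # assumed observations

(a_hat, b_hat), _ = curve_fit(m_go, weeks, cum_failures, p0=(60.0, 0.1))
mse = float(np.mean((cum_failures - m_go(weeks, a_hat, b_hat)) ** 2))

print(f"estimated initial fault content a = {a_hat:.1f}")
print(f"estimated fault detection rate  b = {b_hat:.3f}")
print(f"mean squared error of the fit     = {mse:.2f}")
print(f"predicted cumulative faults at week 20 = {m_go(20.0, a_hat, b_hat):.1f}")
```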


4 Open Issues and Future Directions

It is in the industry's best interest, as well as a big challenge, to employ an effective SRGM that accurately predicts software reliability. There are some risks when applying SRGMs to estimate reliability.

4.1 Lack of Universally Accepted Model

There isn't a single model that is applicable to all kinds of software. Each model is good for a certain data set, but none of them is good for all data sets [46]. Therefore, a universally accepted SRGM is yet to be developed. The type and amount of failure data obtained during the testing phase determine an SRGM's performance. Data is often collected weekly, resulting in a lower amount of information. A lower amount of data does not provide reasonable results in the parameter estimation process, and the parameter estimation process plays a vital role in prediction accuracy. Prior to determining the parameters, a lower bound on the database size should be established. Furthermore, weekly variation in testing effort and many other factors, such as inappropriate assumptions, may also hamper the performance of an SRGM [39].

4.2 Lack of Standard Practices

In the literature survey, we have seen that two common parameters in most of the models are the starting fault content and the fault discovery rate. Some models use 'a' to represent the initial faults and some use 'n' or 'N' to represent the same quantity. A standard practice for mathematical notation should be followed to avoid any ambiguity during the model comparison process. SRGMs are developed based on certain assumptions about testing activities. The SRGMs should counter the issue when any prior assumption is violated.

4.3 Not Applicable to Open Source Software

The majority of the existing models are useful and effective for closed source software, since they are based on the idea that software faults are reported only by the testing team and that the testing profile is representative of the operating environment. The fundamental SRGM assumptions do not align with the idea of "open source software". Some researchers are now attempting to do something in this field (e.g. refs. [47, 48]). However, the problem is that, unlike closed-source software, the knowledge base of open source software (OSS) is not restricted to a single firm. As a result, for open-source software such as Linux operating systems, Mozilla browsers and the VLC media player, reliability monitoring is critical and difficult. In the future, a substantial effort will be required either to adjust the layout of traditional SRGMs or to build a parallel version of SRGMs based on the special characteristics of OSS, so that they may be applied to open source software as well.
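Since Sect. 4.1 stresses that no model dominates on every dataset, practical model choice is usually an empirical comparison on the data at hand. Continuing the invented dataset from the fitting sketch after Table 1, the fragment below compares two candidate mean value functions by their mean squared error; it illustrates the selection step only and is not a procedure prescribed by the paper.

```python
# Illustrative comparison of two candidate SRGMs on the same invented failure data.
import numpy as np
from scipy.optimize import curve_fit

weeks = np.arange(1, 13)
cum_failures = np.array([9, 17, 25, 31, 37, 41, 45, 48, 51, 53, 55, 56])

candidates = {
    "G-O model":         (lambda t, a, b: a * (1 - np.exp(-b * t)), (60.0, 0.1)),
    "Delayed S-S model": (lambda t, a, b: a * (1 - (1 + b * t) * np.exp(-b * t)), (60.0, 0.3)),
}

for name, (model, p0) in candidates.items():
    params, _ = curve_fit(model, weeks, cum_failures, p0=p0, maxfev=10000)
    mse = float(np.mean((cum_failures - model(weeks, *params)) ** 2))
    print(f"{name:18s} parameters = {np.round(params, 3)}  MSE = {mse:.2f}")
# The candidate with the lower MSE (or AIC, predictive error, etc.) would be preferred
# for this dataset; a different dataset may well reverse the ranking.
```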


4.4 Problem in the Definition of Software Reliability

Last but not least, there is a misinterpretation of what software reliability is. According to ANSI/IEEE (Std.-729-1991) [49, 50], "Software Reliability is defined as the probability of failure-free operation of a software application for a specified period of time in a specified environment". The likelihood of error-free operation, the time duration of error-free operation, and the execution environment are all important aspects of the concept. Electronic and mechanical components may get "old" and deteriorate over time. Software, however, does not change over time unless it is modified or upgraded deliberately. The standard definition, and likewise the representations in all SRGMs, make it the basis to express software reliability as a function of time. Multiple representations of time exist, including clock time, calendar time, program execution time, etc. This definition of software reliability does not seem accurate, because the reliability quality is related to system usage, not to time. The author believes that software reliability should be determined based on the number of services. The paper suggests the following simple definition of software reliability: "Software reliability is the number of services or transactions provided by the software without any failure". This definition is time independent. The research community should consider this definition for review and comments.

5 Conclusion

A software package's performance and quality are judged by its reliability. The software industry strives hard to produce trustworthy software. SRGMs are regarded as a useful tool for tracking software reliability at the testing stage. Identifying realistic assumptions and properly simulating the testing operations within a reasonable analytical framework are essential components of constructing these models. This paper provides a brief overview of many software reliability growth models with their underlying assumptions. It is a fact that the number of SRGMs has grown significantly over time and, concurrently, software reliability performance has improved. However, the existing models are not widely accepted, as they have many flaws and do not always achieve the expected accuracy. The research to develop a perfect model is still going on. The paper highlights some major flaws of the existing models. A new, time-independent definition of software reliability has been suggested at the end of the paper. In the future, we will concentrate on finding solutions to the SRGM-related issues raised in the study.

References 1. Anjum, M., Haque, M.A., Ahmad, N.: Analysis and ranking of software reliability models based on weighted criteria value. Int. J. Inf. Technol. Comput. Sci. (IJITCS) 5(2), 1–14 (2013)


2. Iqbal, J.: Analysis of some software reliability growth models with learning effects. Int. J. Math. Sci. Comput. 2(3), 58–70 (2016) 3. Nelson, E.: Estimating software reliability from test data. Microelectron. Reliab. 1(1), 67–74 (1978) 4. Ramamurthy, C.V., Bastani, F.B.: Software reliability: status and perspective. IEEE Trans. Softw. Eng. 8(4), 354–371 (1982) 5. Mills, H.D.: On the statistical validation of computer programs. Technical report FSC 72-6015, IBM Federal Systems Division (1972) 6. Cai, K.Y.: On estimating the number of defects remaining in software. J. Syst. Softw. 40(2), 93–114 (1998) 7. Jelinski, Z., Moranda, P.B.: Software reliability research. In: Freiberger, W. (ed.) Statistical Computer Performance Evaluation, pp. 465–484. Academic Press, New York (1972) 8. Schick, G.J., Wolverton, R.W.: An analysis of competing software reliability models. IEEE Trans. Softw. Eng. SE-4(2), 104–120 (1978) 9. Goel, A.L., Okumoto, K.: Time-dependent error-detection rate model for software reliability and other performance measures. IEEE Trans. Reliab. R-28(3), 206–211 (1979) 10. Goel, A.L.: Software reliability models: assumptions, limitations, and applicability. IEEE Trans. Softw. Eng. SE-11(12), 1411–1423 (1985) 11. Musa, J.D., Okumoto, K.: A logarithmic poisson execution time model for software reliability measurement. In: Proceedings of the 7th International Conference on Software Engineering, Piscataway, NJ, USA, pp. 230–238. IEEE Press (1984) 12. Yamada, S., Ohba, M., Osaki, S.: S-shaped software reliability growth models and their applications. IEEE Trans. Reliab. R-33(4), 289–292 (1984) 13. Ohba, M.: Inflection S-shaped software reliability growth model. In: Osaki, S., Hatoyama, Y. (eds.) Stochastic Models in Reliability Theory, pp. 144–162. Springer, Heidelberg (1984). https://doi.org/10.1007/978-3-642-45587-2_10 14. Yamada, S., Ohtera, H., Narihisa, H.: Software reliability growth models with testing-effort. IEEE Trans. Reliab. 35(1), 19–23 (1986) 15. Yamada, S., Hishitani, J., Osaki, S.: Software-reliability growth with a Weibull test-effort: a model and application. IEEE Trans. Reliab. 42(1), 100–106 (1993) 16. Yamada, S., Osaki, S.: Software reliability growth modeling: models and applications. IEEE Trans. Softw. Eng. SE-11(12), 1431–1437 (1985) 17. Duane, J.T.: Learning curve approach to reliability monitoring. IEEE Trans. Aerosp. 2(2), 563–566 (1964) 18. Xie, M.: Software Reliability Modeling. World Scientific Publishing (1991). ISBN: 9789810206406 19. Xie, M., Zhao, M.: On some reliability growth-models with simple graphical interpretations. Microelectron. Reliab. 33, 149–167 (1993) 20. Khoshgoftaar, T.M.: Nonhomogeneous Poisson processes for software reliability growth. In: Proceedings of 8th Symposium in Computational Statistics, pp. 11–12 (1988) 21. Hossain, S.A., Dahiya, R.C.: Estimating the parameters of a non-homogeneous Poissonprocess model for software reliability. IEEE Trans. Reliab. 42(4), 604–612 (1993) 22. Chen, Z.: A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function. Statist. Probab. Lett. 49(2), 155–161 (2000) 23. Kapur, P.K., Goswami, D.N., Bardhan, A., Singh, O.: Flexible software reliability growth model with testing effort dependent learning process. Appl. Math. Model. 32, 1298–1307 (2008) 24. Chang, I.H., Pham, H., Lee, S.W., Song, K.Y.: A testing-coverage software reliability model with the uncertainty of operation environments. Int. J. Syst. Sci. Oper. Logist. 1(4), 220–227 (2014)


25. Yamada, S.: Recent developments in software reliability modeling and its applications. In: Dohi, T., Nakagawa, T. (eds.) Stochastic Reliability and Maintenance Modeling, pp. 251–284. Springer, London (2013). https://doi.org/10.1007/978-1-4471-4971-2_12 26. Park, J., Baik, J.: Improving software reliability prediction through multi-criteria based dynamic model selection and combination. J. Syst. Softw. 101, 236–244 (2015) 27. Kaswan, K.S., Choudhury, S., Sharma, K.: Software reliability modeling using soft computing techniques: critical review. Int. J. Inf. Technol. Comput. Sci. (IJITCS) 7(7), 90–101 (2015) 28. Haque, M.A., Ahmad, N.: A software reliability growth model considering mutual fault dependency. Reliab. Theory Appl. 16(2), 222–229 (2021) 29. Cheng, J., Zhang, H., Qin, K.: Safety critical software reliability model considering multiple influencing factors. In: Proceedings of 12th International Conference on Machine Learning and Computing, New York, pp. 560–566 (2020) 30. Park, J.Y., Lee, S.U., Park, J.H.: Neural network modeling for software reliability prediction from failure time data. J. Electr. Eng. Inf. Sci. 4, 533–538 (1999) 31. Tian, L., Noore, A.: Dynamic software reliability prediction: an approach based on support vector machines. Int. J. Reliab. Qual. Saf. Eng. 12(4), 309–321 (2005) 32. Kiran, N.R., Ravi, V.: Software reliability prediction by soft computing techniques. J. Syst. Softw. 81(4), 576–583 (2008) 33. Zheng, J.: Predicting software reliability with neural network ensembles. Expert Syst. Appl. 36(2), 2116–2122 (2009) 34. Sudharson, D., Prabha, D.: Improved EM algorithm in software reliability growth models. Int. J. Powertrains 9(3), 186–199 (2020) 35. Haque, M.A., Ahmad, N.: A software reliability model using fault removal efficiency. J. Reliab. Stat. Stud. 15(2), 459–472 (2022) 36. Zhang, X., Jeske, D.R., Pham, H.: Calibrating software reliability models when the test environment does not match the user environment. Appl. Stoch. Model. Bus. Ind. 18, 87–99 (2002) 37. Leung, Y.-W.: Software reliability allocation under an uncertain operational profile. J. Oper. Res. Soc. 48(4), 401–411 (1997) 38. Garmabaki, A.H.S., Ahmadi, A., Kapur, P.K., Kumar, U.: Predicting software reliability in a fuzzy field environment. Int. J. Reliab. Qual. Saf. Eng. 20(3), 1–10 (2013) 39. Asraful Haque, Md., Ahmad, N.: Modified Goel-Okumoto software reliability model considering uncertainty parameter. In: Sahni, M., Merigó, J.M., Sahni, R., Verma, R. (eds.) Mathematical Modeling, Computational Intelligence Techniques and Renewable Energy: Proceedings of the Second International Conference, MMCITRE 2021, pp. 369–379. Springer Singapore, Singapore (2022). https://doi.org/10.1007/978-981-16-5952-2_32 40. Haque, M.A., Ahmad, N.: A logistic growth model for software reliability estimation considering uncertain factors. Int. J. Reliab. Qual. Saf. Eng. 28(5), 1–15 (2021) 41. Teng, X., Pham, H.: A new methodology for predicting software reliability in the random field environments. IEEE Trans. Reliab. 55(3), 458–468 (2006) 42. Pham, H.: A new software reliability model with Vtub-shaped fault-detection rate and the uncertainty of operating environment. Optimization 63(10), 1481–1490 (2014) 43. Song, K., Chang, I., Pham, H.: A software reliability model with a Weibull fault detection rate function subject to operating environments. Appl. Sci. 7(10), 983 (2017) 44. 
Song, K., Chang, I., Pham, H.: An NHPP software reliability model with S-shaped growth curve subject to random operating environments and optimal release time. Appl. Sci. 7(12), 1304 (2017) 45. Lee, D.H., Chang, I.H., Pham, H., Song, K.Y.: A software reliability model considering the syntax error in uncertainty environment, optimal release time, and sensitivity analysis. Appl. Sci. 8(9), 1483 (2018)

162

Md. Asraful Haque

46. Li, Q., Pham, H.: A generalized software reliability growth model with consideration of the uncertainty of operating environments. IEEE Access 7, 84253–84267 (2019) 47. Nidhi, N., Anu, G.A., Vikas, D.: An SRGM for multi-release open source software system. Int. J. Innov. Technol. Manag. 15(2), 1–20 (2018) 48. Islam, S.R., Hany, M.H., Hamdy, M.M., Mohammed, G.M.: Reliability assessment for opensource software using deterministic and probabilistic models. Int. J. Inf. Technol. Comput. Sci. 14(3), 1–15 (2022) 49. Haque, M.A., Ahmad, N.: Key issues in software reliability growth models. Recent Adv. Comput. Sci. Commun. 15(5), 741–747 (2022) 50. Iannino, A., Musa, J.D.: Software reliability. Adv. Comput. 30, 85–170 (1990)

An Enhanced Session Based Login Authentication and Access Control Scheme Using Client File

Bello A. Buhari1(B), Afolayan A. Obiniyi2, Sahalu B. Junaidu2, and Armand F. Donfack Kana2

1 Department of Computer Science, Usmanu Danfodiyo University, Sokoto, Nigeria
[email protected]
2 Department of Computer Science, Ahmadu Bello University, Zaria, Nigeria

Abstract. This research enhances a session based login authentication and access control scheme by storing session login information in a client file kept in external memory instead of in a database. AES-256 is used to encrypt the user ID and biometric prior to storage in the client file to ensure user privacy, and the password is protected using the SHA-2 hash function. Access control is further enforced using a time stamp, device ID and network IP address. The scheme prevents session hijacking and SQL injection because these parameters are stored neither in the session nor on the server but only in the client file. It also remedies the impact of session hijacking and weak session management, which commonly occur in session based login authentication, because access control is verified via the client file against parameters such as login time, device ID and network IP address in addition to the session ID. User privacy is ensured by encrypting the user ID, which is used to automatically password-protect the client file and is stored in both the database and the client file. The scheme is also more efficient than the previous scheme because session login is verified at the client rather than at the server. It is simulated using models that serve as documentation for its implementation in any programming language. The enhanced session based login authentication scheme is analyzed using cryptanalysis, and the results show that it ensures user privacy, unauthorized access control and defense against impersonation attack in addition to the properties provided by the previous research under consideration. Keywords: Session · Login authentication · Access control · Client file · Session login · Login

1 Introduction

With rapid technological advancement, more and more people rely every day on the internet and the web for both primary and secondary activities. These internet and web applications are therefore vulnerable to dangerous attacks, so there is a strong need for security, privacy and access control in these applications. These vulnerabilities include SQL injection, session hijacking, code inclusion, file inclusion, cross-site scripting,


buffer overflow attack, weak session management, cross-site request forgery, security misconfiguration, impersonation attack, unauthorized access and broken authentication [1]. Therefore, a comprehensive web authentication system is significant to the logical operation of web applications. This may include the use of single or hybrid cryptography [2, 3], or RFID authentication [4]. The password is still the most popular means of authentication even though many authentication schemes have been proposed [5]; that is why a password is part of every authentication scheme. An authentication policy is a technological solution that is applied after complete deliberation of security judgments, credential attributes, implementation costs and user familiarity [6]. Normally, text passwords are the most common method employed for authentication, but text passwords are susceptible to eavesdropping, dictionary attacks and shoulder surfing. Graphical passwords have therefore been introduced as a substitute for text passwords and are said to be more secure than traditional text passwords; however, most of the existing graphical password schemes are not safe from spyware and shoulder surfing. A novel graphical password scheme, color login, was implemented to make login more engaging for the user and to avoid boredom [7]. This research enhances a session based login authentication and access control scheme by storing session login information in a client file kept in external memory instead of in a database. The previous scheme under consideration provides the best solution but is prone to SQL injection because the session login information is stored in the database, and other important parameters for authentication and access control, such as the device ID and IP address, are not taken into consideration. The client file exhibits the properties of a binary file and is automatically password-protected with the encrypted user ID. The time stamp, device ID and network IP address are stored neither in the session nor on the server but only in the client file, to prevent session hijacking and SQL injection. Access control is verified via the client file against these parameters in addition to the session ID, to remedy the impact of session hijacking and weak session management that commonly occur in session based login authentication. User privacy is ensured by encrypting the user ID, which is used to automatically password-protect the client file and is stored in both the database and the client file. AES-256 is used to encrypt the user ID and biometric prior to storage in the client file, and the password is protected using the SHA-2 hash function. The scheme is simulated using models that serve as documentation for its implementation in any programming language. The enhanced session based login authentication scheme is analyzed using cryptanalysis, and the results of the analysis show that it ensures user privacy, unauthorized access control and defense against impersonation attack in addition to the properties provided by the previous research under consideration. The contributions of this research are as follows: 1) We introduce the use of a client file in external memory for the storage of login session information instead of a database, together with mechanisms to protect the client file by automatically password-protecting it with the encrypted user ID and to ensure access control. 2) We model the enhanced scheme for implementation on different web application platforms and in different programming languages.


2 Related Works

Much research has been conducted on session authentication. Castellano in [8] presented a tutorial on PHP login authentication. In his technique, account session information such as the user ID, session ID and login time is stored in the database in addition to session cookies. After a successful login, the session login is used to continually access different sections of the web application. Once the login time expires, the user is logged out and the account session is deleted from the database.

Rao and Akula in [9] proposed a new session password authentication scheme using a virtual keyboard, called the pair based authentication technique. Alphanumeric grids are allowed to be chosen as the password. Their objective is to use the pair based technique to combat shoulder surfing attacks, because the technique generates a new session password for every session or transaction.

Bavani and Markco in [10] highlighted that sufficient security and efficiency are not guaranteed by the existing multi-factor protocols using the 3DES algorithm. They therefore proposed a new security standard for a multi-factor technique that makes use of four-factor authentication. They merge textual, graphical, biometric and device passwords for login and use an efficient AES algorithm for data operations from user to server. This combination provides a single 3-D virtual environment.

Kumar et al. in [7] proposed two techniques, based on images and colors, to generate session passwords. The legitimate user's login time is minimized by introducing an image background color, which is critical to the usability of the password method. According to their scheme, a new password is generated every time; that is, a password is only used on one occasion.

Nizamani et al. in [11] changed the password input method and added a password conversion layer to enhance the security of textual password methods. Users do not need to remember any new kind of password, such as those used in graphical authentication, because their enhanced technique utilizes alphanumeric character based passwords. The alphanumeric password characters are presented as random decimal numbers. The limitation of their scheme is that its mean login time is higher than that of the textual password technique.

Arak et al. in [12] employ the magic rectangle generation algorithm (MRGA) to combine text and color in order to generate session passwords. This generates a new 6x6 grid of alphanumeric characters, which aids in generating a new session password each time the user logs in. The singly even magic rectangle is formed from a seed number, start number, row sum and column sum, which makes the values of the row sum and column sum very hard to compromise.

Tandon et al. in [13] proposed three authentication schemes based on text and colors. There is no need for a special type of registration during login because a pair based method is used, and a session password is generated based on the grid displayed. They assign ratings to colors that generate session passwords for the hybrid textual method by combining them with the grid presented during login. They recommend their proposed scheme as appropriate for personal digital assistants.

Yerne and Qureshi in [14] proposed a method that employs a two-level authentication scheme as a password for mobile users, and they discuss the security and ease of use of their proposed scheme. A text based graphical password is the first authentication stage and a 3D image, which provides high security to the user, is the next stage. The 3D image is adjusted every time a session is started.

Amlani et al. in [15] proposed a new session authentication technique for PDAs. PDAs store confidential information such as passwords and PINs, so authentication is essential. Their technique is secure against dictionary attacks, brute force attacks and shoulder surfing.

Wang et al. in [16] proposed a session-based access control (SAC) method for Information-Centric Networking (ICN). Their approach merges symmetric and asymmetric encryption and adapts dynamic naming to ensure anonymity of the content name and the cache. According to their analysis, the access control technique can ensure communication security and anonymity for both sides of the session.

3 Methodology

The enhanced session based login authentication and access control scheme is based on AES and SHA-256. These algorithms are discussed in this section.

3.1 AES Cryptographic Algorithm

AES is a private key (symmetric) block cipher. It is used in many industry standards and commercial systems such as IPsec, Skype, IEEE 802.11i and TLS [16]. AES has three versions, each operating with a different secret key length. Based on the number of bits in the secret key, AES is categorized as AES-128, AES-192 or AES-256 [17]. The AES-128 encryption algorithm is as follows [18]:

Input: the 128-bit plaintext block P and key K.
Output: the 128-bit ciphertext block C.

X ← AddRoundKey(P, K)
for i ← 1 to 10 do
    X ← SubBytes(X)
    X ← ShiftRows(X)
    if i ≠ 10 then
        X ← MixColumns(X)
    end
    K ← KeySchedule(K)
    X ← AddRoundKey(X, K)
end
C ← X
return C
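For illustration only, the following is a minimal Python sketch of how a field such as the user ID could be encrypted with AES-256 before being written to the client file. It assumes the pycryptodome library; the mode of operation, key handling and function names are not specified by the scheme and are chosen here purely for the example.

from Crypto.Cipher import AES          # pycryptodome
from Crypto.Random import get_random_bytes

def encrypt_field(plaintext: str, key: bytes) -> dict:
    # AES-256: the key must be 32 bytes long
    cipher = AES.new(key, AES.MODE_GCM)
    ciphertext, tag = cipher.encrypt_and_digest(plaintext.encode("utf-8"))
    return {"nonce": cipher.nonce, "ct": ciphertext, "tag": tag}

def decrypt_field(blob: dict, key: bytes) -> str:
    cipher = AES.new(key, AES.MODE_GCM, nonce=blob["nonce"])
    return cipher.decrypt_and_verify(blob["ct"], blob["tag"]).decode("utf-8")

k = get_random_bytes(32)               # 256-bit secret key (illustrative)
eid = encrypt_field("user-001", k)     # encrypted user ID, EID
assert decrypt_field(eid, k) == "user-001"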

3.2 SHA-256 Hash Function

The SHA-256 hash algorithm is a compression operation for a message whose length is less than 2^64 bits, and the length of the output hash value is 256 bits [19]. The specific calculation process is described as follows [20, 21].


3.2.1 Message Padding A "1" bit and several "0" bits are appended to the message. The number of "0" bits must be just sufficient to bring the length of the message to 448 modulo 512, and the 64-bit representation of the original message length is appended afterwards so that the total length of the padded message is 512 × L bits. 3.2.2 Variable Definition The initial link variable values, the intermediate link variable values and the final hash values are all stored in eight registers A, B, C, D, E, F, G, H, where the initial link variable values are H0 = 6a09e667, H1 = bb67ae85, H2 = 3c6ef372, H3 = a54ff53a, H4 = 510e527f, H5 = 9b05688c, H6 = 1f83d9ab, H7 = 5be0cd19. 3.2.3 Compression Function Operation The compression function of SHA-256 is shown in Fig. 1. 64 rounds of cyclic operations are performed on each message block. The values of registers A, B, C, D, E, F, G, H are used as the input of each round of computation, and their current values come from the output link variable values of the prior round.

Fig. 1. The compression function operation [20, 21]


4 Enhanced Session Based Login Authentication and Access Control Scheme

In this section, we propose an enhanced session based login authentication scheme in which login session information is stored in a client file kept in external memory instead of in a database. Login session information such as the user ID, session ID, login time, login device ID and network IP address is stored in this client file. The user ID and biometric are encrypted using AES to ensure the privacy of users. The scheme comprises two phases, namely the credentials login phase and the session login (access control) phase. The enhanced scheme is also simulated by creating a model of each of these phases to explain how it can be implemented in any programming language of one's choice.

4.1 Login Phase

The user Uj must already be registered with the web server WSj before he/she can log in. The user provides a user ID, password, biometric and secret key to log in. The login phase has the following steps:

1) The user Uj provides: IDj, PWj, Bj and Kj.
2) The web server WSj computes:

EIDj = Ek(IDj)    (1)
hPWj = h(PWj)     (2)
EBj = Ek(Bj)      (3)

3) The web server WSj compares the encrypted user ID computed from the provided IDj with the EIDj′ stored on the server. That is, if EIDj = EIDj′ it continues with the login request; otherwise the login request is discarded.
4) The web server WSj compares h(PWj) computed from the provided password with h(PWj)′ on the server. If h(PWj) = h(PWj)′ it continues with the login request; otherwise the login request is discarded.
5) The web server WSj compares the EBj computed from the provided biometric with EBj′ on the server. If Ek(Bj) = Ek(Bj)′ it continues with the login request; otherwise the login request is discarded.
6) The web server WSj generates a session ID SIDj.
7) The web server WSj creates or updates a binary client file BCFj and automatically password-protects it using EIDj. The file must exhibit all the characteristics of a binary file.
8) The web server WSj creates session cookies SCj and stores the information {SIDj, EIDj}.
9) The web server WSj gets the login time LTj, device ID DIDj and network IP address NIPj.
10) The web server WSj stores {SIDj, EIDj, LTj, DIDj, NIPj} in the client file BCFj.

The model of the login phase is shown in Fig. 2.


Fig. 2. Login phase
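The following Python sketch is a rough illustration of the login steps above, not the authors' implementation. The JSON-based client file, the helper names, the way enc() is supplied and the omission of the biometric check (step 5) are simplifications made only for the example.

import hashlib, json, secrets, time

def login(user_id, password, stored_eid, stored_hpw, enc, key, device_id, ip):
    # enc(plaintext, key) is assumed to be an AES-256 encryption routine
    eid = enc(user_id, key)                                # Eq. (1): EID = Ek(ID)
    hpw = hashlib.sha256(password.encode()).hexdigest()    # Eq. (2): hPW = h(PW)
    if eid != stored_eid or hpw != stored_hpw:             # steps 3-4: discard on mismatch
        return None                                        # (biometric check of step 5 omitted)
    sid = secrets.token_hex(16)                            # step 6: session ID
    record = {"SID": sid, "EID": eid, "LT": time.time(),
              "DID": device_id, "NIP": ip}                 # steps 9-10: client file content
    with open("client_file.bin", "w") as f:                # stand-in for the protected BCF
        json.dump(record, f)
    return {"SID": sid, "EID": eid}                        # step 8: values kept in the session cookie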

4.2 Access Control Phase

Access control is ensured through session based authentication. After the user Uj has logged in with his/her credentials, he/she can continue to navigate through the web application sections and perform the various operations provided by the web application in accordance with their role and privilege. As they navigate through the web application sections, session based login is used to authenticate their legitimacy and control their access to the web application sections or pages. The session based login has the following steps:

1) The web server WSj gets EIDj and SIDj from the session cookies SCj created on the browser and opens the client file BCFj using EIDj as the password.
2) The web server WSj gets SIDj, EIDj, LTj, DIDj and NIPj from the client file BCFj.
3) The web server WSj compares EIDj from the client file with EIDj′ from the session cookies; if they are equal it continues with the access request, else the user is logged out.
4) The web server WSj compares SIDj from the client file with SIDj′ from the session cookies SCj; if they are equal it continues with the access request, else the user is logged out.
5) The web server WSj gets the current time CTj and computes the change in time from the login time LTj as ΔT, that is

ΔT = CTj − LTj    (4)

If the permissible login time PTj has expired, the user is logged out; else it continues with the access request.
6) The web server WSj gets the accessing device ID and compares it with DIDj from the client file; if they are equal it continues with the access request, else the user is logged out.
7) The web server WSj gets the accessing network IP address and compares it with NIPj from the client file; if they are equal it continues with the access request, else the user is logged out.

The model of the access control phase is shown in Fig. 3.


Fig. 3. Access control phase
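A corresponding rough sketch of the access-control check is given below, again with illustrative names only; the permissible time PT, the file format and the way the device ID and IP address are obtained are assumptions made for the example.

import json, time

PERMISSIBLE_SECONDS = 30 * 60           # assumed permissible login time PT

def check_access(cookie, current_device_id, current_ip):
    with open("client_file.bin") as f:  # stand-in for the protected client file BCF
        rec = json.load(f)
    if rec["EID"] != cookie["EID"]:     # step 3
        return False
    if rec["SID"] != cookie["SID"]:     # step 4
        return False
    if time.time() - rec["LT"] > PERMISSIBLE_SECONDS:   # step 5: ΔT = CT − LT
        return False
    if rec["DID"] != current_device_id: # step 6
        return False
    if rec["NIP"] != current_ip:        # step 7
        return False
    return True                         # access granted; otherwise the user is logged out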

5 Analysis of the Enhanced Session Based Login Authentication and Access Control Scheme

In this section, we analyse the security of the enhanced scheme using cryptanalysis. The security of the enhanced scheme is based on the use of a client file, private key cryptography, a hash function, a biometric, the device identification number, the IP address and a time stamp. User privacy is also imposed. Based on the analysis, the enhanced scheme ensures user privacy, unauthorized access control and defense against impersonation attack in addition to the properties provided by the previous research under consideration.

5.1 User Privacy

In our enhanced scheme, the user Uj identity IDj is not actually available in {SIDj, EIDj, LTj, DIDj, NIPj} stored in the client file BCFj or in {SIDj, EIDj} stored in the session cookies, but only a cryptographic representation of IDj. This means an unauthorized user cannot identify the exact user from these messages, or even on the server. Also, the secret key Kj is not stored anywhere in the session cookies, the client file or the server.

5.2 Unauthorized Access Control

Access control is a security technique that regulates who or what can view or use resources in a computing environment. The web server WSj's comparison of EIDj and EIDj′, and of SIDj and SIDj′, prevents an attacker who modifies these values in BCFj or SCj from gaining access to the server or web application. Also, the comparison of DIDj and DIDj′, and of NIPj and NIPj′, prevents an attacker who uses a different device or a different network from gaining access to the server or web application. Lastly, the use of a time stamp to compute ΔT = CTj − LTj restricts access to the server or web application to within the permissible time.

5.3 Defense Against Impersonation Attack

For an attacker to impersonate user Uj, he/she must compute a valid login request message {SIDj, EIDj, LTj, DIDj, NIPj}, and one of the criteria for obtaining this message is that the web server WSj compares the EBj provided with the EBj′ on the server. This is impossible because a biometric cannot be easily forged and redistributed, and also because the secret key Kj is not stored anywhere on the server, in the client file or in the session cookies but is only known by the user Uj.

6 Conclusion

Authentication plays an important role in protecting resources against unauthorized and criminal use, and user authentication is currently the most essential part of the field of information security. This research enhanced a session based login authentication and access control scheme by storing session login information in a client file kept in external memory instead of in a database. The previous scheme under consideration provides the best solution but is prone to SQL injection because the session login information is stored in the database, and other important parameters for authentication and access control, such as the device ID and IP address, are not taken into consideration. The client file exhibits the properties of a binary file and is automatically password-protected with the encrypted user ID. The time stamp, device ID and network IP address are stored neither in the session nor on the server but only in the client file, to prevent session hijacking and SQL injection. Access control is verified via the client file against these parameters in addition to the session ID, to remedy the impact of session hijacking and weak session management that commonly occur in session based login authentication. User privacy is ensured by encrypting the user ID, which is used to automatically password-protect the client file and is stored in both the database and the client file. AES-256 is used to encrypt the user ID and biometric prior to storage in the client file, and the password is protected using the SHA-2 hash function. The scheme is simulated using models that serve as documentation for its implementation in any programming language. The enhanced session based login authentication scheme is analyzed using cryptanalysis, and the results of the analysis show that it ensures user privacy, unauthorized access control and defense against impersonation attack in addition to the properties provided by the previous research under consideration. This enhanced scheme can be evaluated empirically in future work to demonstrate its practical application both academically and commercially.

References 1. Rasheed, B.H., Ahamed, B.B.: Calibration techniques for securing web application in dual delegation interoperability network model with green communication. J. Green Eng. 10, 6681–6693 (2020)


2. Buhari, B.A., Obiniyi, A.A.: Web applications login authentication scheme using hybrid cryptography with user anonymity. Int. J. Inf. Eng. Electron. Bus. (IJIEEB) 14(5), 42–50 (2022). https://doi.org/10.5815/ijieeb.2022.05.05 3. Buhari, B.A., Mubarak, A., Bodinga, B.A., Sifawa, M.D.: Design of a secure virtual file storage system on cloud using hybrid cryptography. Int. J. Adv. Comput. Sci. Appl. 13(5), 5143–5151 (2022) 4. Pourpouneh, M., Ramezanian, R., Salahi, F.: An improvement over a server-less RFID authentication protocol. Int. J. Comput. Netw. Inf. Secur. 7(1), 31–37 (2015) 5. Kaur, A.A., Mustafa, K.K.: A critical appraisal on password based authentication. Int. J. Comput. Netw. Inf. Secur. 11(1), 47–61 (2019) 6. Wang, Z., Sun, W.: Review of web authentication. In: Journal of Physics: Conference Series, vol. 1646, no. 1, p. 012009. IOP Publishing, September 2020 7. Kumar, G.V., Yugandhar, M., Ramesh, B.: Session passwords authentication using colors and images. Int. J. Res. Adv. Comput. Sci. Eng. 4(11), 1–8 (2019) 8. Castellano, A.: PHP login and authentication: the complete tutorial. Alex Web Develop, May 2021. https://alexwebdevelop.com/user-authentication. Accessed 7 Oct 2022 9. Rao, M.S.B., Akula, V.G.: Improved session based password security system. Int. J. 9(4), 1–4 (2020) 10. Bavani, S., Markco, M.: Authentication and key management with session based automated key updation. High Technol. Lett. 26(9), 565–571 (2020). https://doi.org/10.37896/HTL26. 09/1745 11. Nizamani, S.Z., Khanzada, T.J., Hassan, S.R., Jali, M.Z.: A text based authentication scheme for improving security of textual passwords. Int. J. Adv. Comput. Sci. Appl. 8(7), 513–521 (2017) 12. Arak, M.V., Salunke, D.T., Merai, T.A., Sutar, P.B.: Session password authentication using magic rectangle generation algorithm (MRGA). Int. J. Innov. Sci. Res. Technol. 2(4), 264–269 (2017) 13. Tandon, S., Singh, R., Sonkar, S.: User authentication scheme using session passwords based on color and numerical matrix. Int. J. Sci. Res. Dev. 4(2), 950–952 (2016) 14. Yerne, B.S., Qureshi, F.I.: Design 3D password with session based technique for login security in smartphone. In: 2016 Online International Conference on Green Engineering and Technologies (IC-GET), pp. 1–4. IEEE, November 2016 15. Amlani, S., Jaiswal, S., Patil, S.: Session authentication using color scheme. Proc. Int. J. Comput. Sci. Inf. Technol. (IJCSIT) 6(2), 1420–1423 (2015) 16. Wang, Y., Xu, M., Feng, Z., Li, Q., Li, Q.: Session-based access control in information-centric networks: design and analyses. In: 2014 IEEE 33rd International Performance Computing and Communications Conference (IPCCC), pp. 1–8. IEEE, December 2014 17. Kumar, T.M., Reddy, K.S., Rinaldi, S., Parameshachari, B.D., Arunachalam, K.: A low area high speed FPGA implementation of AES architecture for cryptography application. Electronics 10(16), 2023 (2021) 18. Buhari, B.A., Obiniyi, A.A., Sunday, K., Shehu, S.: Performance evaluation of symmetric data encryption algorithms: AES and blowfish. Saudi J. Eng. Technol. 4, 407–414 (2019) 19. Wang, J., Liu, G., Chen, Y., Wang, S.: Construction and analysis of SHA-256 compression function based on chaos S-box. IEEE Access 9, 61768–61777 (2021) 20. Martino, R., Cilardo, A.: SHA-2 acceleration meeting the needs of emerging applications: a comparative survey. IEEE Access 8, 28415–28436 (2020) 21. Martino, R., Cilardo, A.: A flexible framework for exploring, evaluating, and comparing SHA-2 designs. IEEE Access 7, 72443–72456 (2019)

Electric Meters Monitoring System for Residential Buildings

Fedorova Nataliia, Havrylko Yevgen, Kovalchuk Artem, Smakovskiy Denys, and Husyeva Iryna(B)

National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", Kyiv 03056, Ukraine
[email protected]

Abstract. The process of creating an electric meters monitoring system for residential buildings is considered. Specific technologies, methods and tools have been chosen and compared with existing alternatives. The Eastron SDM630-MODBUS electric meter was considered for sending data on consumed electricity to the server. Big data processing platforms and their necessary functions are analyzed. An analysis of existing data clustering algorithms was carried out, the most suitable ones for the task set in the article were selected, and a basic implementation of the algorithm is provided. The basic structure of the controllers and electric meters is considered. A script was created to collect data from the sensors for further sending to the server and processing. The data load from the sensors was simulated to check the correct operation of the developed system. The results of the development of the cross-platform front-end part for visualizing the operation of the service connecting the application to the server are shown. An analysis of the behaviour of the sensors in emergency situations was carried out, and the algorithm was tested on input data that do not fall under the standard classification. The analysis of data clusters that appear as a result of a hypothetical accident, based on the centres of mass of the clusters, is also considered. Keywords: Monitoring system · Energy consumption · Energy meters · Big data · Sensors · Big data processing

1 Introduction

In today's world, electricity consumption is an acute problem. It is obvious that in order to reduce the use of electricity, it is necessary to save it, but without knowing the actual consumption, it is impossible to reduce the amount of electricity consumed. For this purpose, electric meters that can provide information in real time are used. The Internet of Things uses a large number of technologies that must be combined with each other for the full operation of a monitoring system, and the question of choosing the necessary technologies always arises during work planning [1, 2]. Therefore, the choice of technologies is based on compliance with certain criteria, namely: the possibility of scaling the monitoring system; ease of development; combination of


different components among themselves (the possibility of their joint operation); financial justification of use; and availability of support from the manufacturer/developer. In addition, the electric meters monitoring system for residential buildings involves working with a large volume of data coming from sensors in real time and its subsequent processing and display to the user, which is a rather complex process.

2 Literature Review and Problem Statement

The principles of working with large volumes of data are considered in many literature sources. For example, with the help of sensors connected to the network, it is possible to collect data from city residents and robotic systems in real time [3]. By collecting and processing information in real time, it is possible to use available resources more efficiently and thus save money and provide a higher level of service to the public. Among the most promising technologies are "Smart City" technologies, which have the greatest potential in case of implementation, namely [4]:

– intelligent housing and communal service systems, including accounting for resources (electricity, heat, gas, water) and management of street lighting;
– elements of a "smart" urban transport system;
– solutions in the field of intelligent technologies for managing the quality of the urban environment;
– "smart" health care platforms.

A sufficient number of literature sources [3–5] are devoted specifically to data transmission technologies and methods of processing and visualizing large data sets. One of the most popular radio technologies today, LoRaWAN, provides an environment for intelligent measurement and control, allowing smart cities to collect and analyse data from thousands of connected devices. A large number of projects have been implemented on the basis of LoRaWAN in various countries of Western Europe. For example, in Copenhagen the garbage can collection system was optimized with the help of this technology. About 6 million euros were spent annually on this process; after installing smart sensors, costs were reduced by about 15–20% simply because the utility trucks only went out to collect when the bins were full rather than on a fixed schedule.

The DBSCAN clustering algorithm (density-based spatial clustering of applications with noise), which clusters a set of points based on their density among themselves, can also be used for the electric meters monitoring system. According to the algorithm, only those points that have a sufficient number of "neighbours" within a certain radius around them can become the "supporting" (core) points of a cluster. If the number of neighbours is insufficient, the point cannot be considered a core point and its neighbours will not be considered as candidates for the cluster; however, the point itself can be part of the cluster if it is a neighbour of a core point. As a result, if we represent the sensor readings as a separate vector, we obtain a multidimensional structure of points that are clustered according to the Euclidean distance between them.


It is worth noting that it is not entirely correct to cluster the sensor readings as they are. Meter readings, for example, obviously increase continuously, and over a long period of operation of the clustering algorithm one would notice that the cluster becomes "stretched" because the readings cannot stay within a fixed neighbourhood. Therefore, it is advisable to cluster not the readings themselves but the differences between readings, that is, data on the increase in consumption. The main task of this article is therefore to present the developed electric meters monitoring system, with a clear choice of the basic structure of the controllers and electric meters; the creation of a script for collecting data from the sensors and sending them to the server for processing; the simulation of the data load from the sensors to check the correct operation of the developed monitoring system; and big data processing algorithms for the electric meters monitoring system.
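As a minimal illustration of this idea (not the implementation used in the article), the consumption differences can be clustered with scikit-learn's DBSCAN; readings whose deltas do not join the "normal" cluster are labelled −1 (noise) and can be treated as candidate emergencies. The eps and min_samples values and the sample data below are placeholders.

import numpy as np
from sklearn.cluster import DBSCAN

readings = np.array([100.0, 101.2, 102.5, 103.6, 104.9, 140.0, 141.3])  # cumulative kWh
deltas = np.diff(readings).reshape(-1, 1)   # cluster the increases, not the raw readings

labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(deltas)
print(labels)   # e.g. [0 0 0 0 -1 0]; -1 marks an abnormal jump in consumption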

3 The Aim and Objectives of the Study

The purpose of this article is to develop an electric meters monitoring system that will allow:
– processing larger volumes of data compared to "standard" scenarios;
– working with data that arrive quickly and in very large volumes;
– working with structured and unstructured data in parallel and in different aspects.

The main tasks are:
– analysis and selection of sensors;
– choosing a big data processing platform: researching the principles of working with big data, the technologies and trends of working with big data, and the use of big data algorithms for the implementation of an electric meters monitoring system;
– selection of data visualization methods for the user.

The software offered in the article is intended for monitoring data from sensors and electricity meters. It addresses data collection and processing so that the amount of consumed electricity can be reduced, unifies the process of detecting emergency situations and monitoring the state of the infrastructure, and automates the use of human resources. The input information for the software is the results of sensor measurements, which, as a result of processing, are provided to the user in a convenient and understandable form for further decision-making.

4 Technologies and Research Methods

To fulfil the tasks of the article, the following technologies were used.

The Raspberry Pi is an inexpensive credit card-sized computer that plugs into a computer monitor or TV and uses a standard keyboard and mouse. The Raspberry Pi has the ability to interact with the outside world and is used in a wide range of digital maker projects, from jukeboxes to weather stations and tweeting birdhouses with infrared cameras [8, 9, 12, 13].

Modbus is a protocol that defines the rules of device communication. For example, it defines that one device should be the master and the rest should be slaves. The master sends a message of a certain format to the communication bus, in which either the address of the desired slave device is indicated or the message is intended for all devices. The slave to which the message was sent can reply to the master. The protocol regulates the format of the message, its length and the possible values of the message elements. There is also a checksum that is required to verify that the message arrived uncorrupted. However, the Modbus protocol does not regulate the commands or the data transmission medium used. There is Modbus serial, which works over RS-485 or RS-232, that is, one twisted pair of cables, and there is Modbus TCP, which works in a TCP/IP computer network where each device has an IP address and a port [11].

MinimalModbus is an easy-to-use Python module for communicating with instruments (slaves) from a computer (master) using the Modbus protocol, and it is designed to run on the master. Its only dependency is the pySerial module (also pure Python) [10, 14]. The module supports the Modbus RTU and Modbus ASCII versions of the serial communication protocol and is intended for use on the Linux, OS X and Windows platforms.

An analogue-to-digital converter (ADC) is one of the most important electronic components in measurement and test equipment. The ADC converts a voltage (analogue signal) into a code on which the microprocessor and software perform certain actions.

The SDM630M CT electric energy meter is a three-phase, DIN rail-mounted multifunctional meter. It can measure and display characteristics for the 1P2W, 3P3W and 3P4W appliance connection options, including voltage, current, power, and active and reactive energy imported or exported. Energy is measured in kWh. Maximum current values can be measured over specified periods of up to 60 min. To measure large energy values, external current transformers must be connected, and the device can be configured to work with a wide range of current transformers. The meter works with the Modbus RTU communication protocol over RS485 on a communication line up to 1000 m long. The input parameters of the SDM630 sensor are shown in Fig. 1.

Fig. 1. Input parameters for the SDM630 sensor

The clustering problem belongs to statistical processing as well as to the broad class of unsupervised learning problems. It can also be described through the problem of classification: the problem of clustering is in fact a problem of classification, because in both cases we divide objects based on their similarity to each other. Clusters can be formed based on the distance between points, on the density of areas in the data space, on intervals, or on specific statistical distributions; it all depends on the specific data set and the purpose of using the results [14].

The solution is ambiguous, and there are several reasons for this. First, there is no single best criterion for clustering quality. A number of fairly effective criteria are known, as well as a number of algorithms that do not have a clearly defined criterion but still perform fairly high-quality clustering by construction, and all of them can give different results. Secondly, the number of clusters is, as a rule, not known in advance and is set according to some subjective criterion. Thirdly, the result of clustering depends significantly on the metric ρ, the choice of which is, as a rule, also subjective and determined by a specialist.

Data used in cluster analysis can be interval, ordinal or categorical. However, having a mixture of different types of variables makes the analysis more complex. This is because cluster analysis requires some way of measuring the distance between observations, and the type of measure used depends on the type of data.

5 Discussion of Experimental Results

Figure 2 shows the general scheme of the system design. First, an analysis of existing data clustering algorithms was performed, after which the first basic implementation of the algorithm was created. A set of real water meter readings was found to test the operation of the clustering algorithm and to further understand the behaviour of the sensors. A data processing service and a basic version of the front-end part for visual display were developed. The application was connected to the server, and basic requests to the API were configured. An analysis of the behaviour of the sensors in hypothetical emergencies was carried out. Clusters of emergency data were analysed based on the centres of mass of the clusters, which makes it possible to determine the sensor that is producing the emergency data. Graphic visualization and extension of the project model (addition of sensors and infrastructure units such as apartments, an elevator, etc.) were also performed.


The software presented in this article consists of two main parts: a server part developed in JavaScript using Node.js and Express, which performs the "strict" calculations and handles requests to interact with the data, and a client application, also developed in JavaScript, using the React Native framework and Expo technology, which makes it possible to create a cross-platform application for smartphones based on iOS and Android.

Fig. 2. System design scheme

JavaScript was chosen as the development language because of its fairly large base of built-in functionality and a relatively easy visualization process, which speeds up development without losing the quality of the user experience offered. The basis of the application, the logic of interaction with the user, is developed using React Native, an open framework released by Facebook in 2015. Previously, mobile development required the Java programming language for the Android platform and Objective-C for the iOS platform. As a result, expertise in two different languages was required, and the software itself had to be developed twice. When Apache Cordova was released, a project using HTML, CSS and JavaScript could be opened in the smartphone browser as a website [6, 7]. This approach significantly sped up development but had a number of disadvantages: a project using Cordova required optimization, plugins aged quickly and custom development was often needed, so implemented projects could not compete with native applications, which in turn were fast and had reliable support. That is why Facebook decided to find a way to write the same code that would work on both platforms, and React Native was invented. React Native applications are developed using JavaScript, but the end result that runs on a smartphone uses native code (Java for Android and Objective-C for iOS), so the result is the same user experience as when developing a program in the native language of the OS.


Fig. 3. Implementation scheme of the monitoring system

Figure 3 shows the implementation scheme of the monitoring system. The procedure is: request to the server – data processing (clustering/classification) – detection of emergency situations – sending data for processing by the client part – data visualization. As a result of running the DBSCAN algorithm on data taken from an open library of meter datasets, a cluster of "normal" data was obtained, after which testing was performed on data simulating an "emergency situation", such as a gas leak or a burst pipe. The advantage of the chosen algorithm is that noise is not of interest to the user, because a hypothetical emergency will form a separate cluster of emergency data, which can be seen at the clustering stage, and after obtaining the centres of mass of the clusters we can understand which parameter lies in the "emergency" region. This data is sent to the client application, which shows the user in which apartment and which indicator should be checked or, if necessary, which measures should be taken. Data collection from electric meter sensors. The data collection script was developed for communication between the Eastron SDM630-MODBUS electricity meter (Fig. 4) and the Raspberry Pi controller (Fig. 5). The RS-485 interface and the minimalmodbus library were used to read data from the electric meter.


Fig. 4. Eastron SDM630-MODBUS

Fig. 5. Raspberry PI

The SDM630 sensor is connected via an analogue-to-digital converter MODBUS RS485 shield to the Raspberry PI. The working diagram is shown in Fig. 6.

Fig. 6. Working diagram of the developed system

The programming language used for writing the script is Python, and the MinimalModbus library was used to read data from the electric meter sensors. After a reading is received, the command to send data to the server is triggered. It is necessary to:
1) Specify the URL address of the server
2) Specify DEVICE_KEY
3) Transfer the time of receiving the data
4) Transfer the received value


DEVICE_KEY is a unique counter key that can be obtained from the web portal that displays the data (Fig. 7).

Fig. 7. List of counters on the web portal
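A minimal sketch of such a collection script is shown below, using the minimalmodbus library and an HTTP POST to the server. The serial settings, the register address for total active energy and the request fields (URL, DEVICE_KEY) are assumptions for illustration and should be checked against the SDM630 register map and the actual server API.

import time
import minimalmodbus, requests

SERVER_URL = "http://example.com/api/readings"   # assumed endpoint
DEVICE_KEY = "METER-001"                         # key obtained from the web portal

meter = minimalmodbus.Instrument("/dev/ttyUSB0", 1)   # RS-485 adapter, slave address 1
meter.serial.baudrate = 9600
meter.serial.timeout = 1
meter.mode = minimalmodbus.MODE_RTU

# Input register 0x0156 is assumed to hold total active energy (kWh) as a 32-bit float.
energy_kwh = meter.read_float(0x0156, functioncode=4, number_of_registers=2)

requests.post(SERVER_URL, json={
    "device_key": DEVICE_KEY,
    "timestamp": int(time.time()),
    "value": energy_kwh,
})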

The sensor load simulation was implemented using a data load script based on the described sensors. Figure 8 shows the visualization of the created method. As can be seen from the figure, as time passes (x-axis), the value from the electricity meter increases or remains constant. Data processing. The final result required is the electricity consumption of the electric meter for the period specified by the user; that is, if the period is a week, the interval will be days, and accordingly it is necessary to calculate the consumption delta per day between the starting point and the final point of each interval.

Fig. 8. Data visualization from the data load simulation script
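A minimal sketch of the delta computation described above, assuming readings arrive as (timestamp, cumulative kWh) pairs; the data format and values are illustrative only.

from collections import OrderedDict
from datetime import datetime

readings = [                       # (timestamp, cumulative kWh) - illustrative data
    ("2022-06-01 00:05", 1520.4),
    ("2022-06-01 23:50", 1534.1),
    ("2022-06-02 23:55", 1549.8),
    ("2022-06-03 23:45", 1561.2),
]

last_per_day = OrderedDict()
for ts, kwh in readings:
    day = datetime.strptime(ts, "%Y-%m-%d %H:%M").date()
    last_per_day[day] = kwh        # keep the last cumulative reading of each day

days = list(last_per_day)
daily_delta = {days[i]: round(last_per_day[days[i]] - last_per_day[days[i - 1]], 1)
               for i in range(1, len(days))}
print(daily_delta)                 # e.g. {2022-06-02: 15.7, 2022-06-03: 11.4}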

Algorithm for improving the system. A clustering algorithm (cluster analysis) was used to improve the system. For the clustering of electric meters, it was decided to use the k-means method, which seeks to minimize the mean squared distance between points in one cluster. Despite the fact that it does not guarantee absolute accuracy, its simplicity and speed make up for it.


The advantage of the k-means method is that it is more convenient for clustering a large number of observations than, for example, hierarchical cluster analysis, where dendrograms quickly become overloaded and tend to lose visibility [2]. Among the disadvantages of the method, it can be noted that it is sensitive to outliers, which can distort the mean value, and that the number of clusters (here, k) must be determined by a specialist in advance. The algorithm is as follows:

1. Determine k, the number of clusters to be created.
2. Choose k random objects from the data set as the initial cluster centres.
3. Assign each observation to the nearest centroid based on the Euclidean distance between the object and the centroid.
4. For each of the k clusters, recalculate the cluster centroid by computing the new mean value of all data points in the cluster.
5. Iteratively minimize the total within-cluster variation. Repeat steps 3 and 4 until the centroids stop changing or the maximum number of iterations is reached (R uses 10 as the default maximum number of iterations).

The total within-cluster variation is defined as follows:

Σ_{k=1}^{K} W(C_k) = Σ_{k=1}^{K} Σ_{x_i ∈ C_k} (x_i − µ_k)²
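For illustration only, a minimal scikit-learn sketch of the steps above applied to illustrative per-meter consumption values; the number of clusters and the sample data are placeholders and this is not the system's actual implementation.

import numpy as np
from sklearn.cluster import KMeans

# Rows: meters; columns: daily consumption deltas (kWh) - illustrative data
X = np.array([[5.1, 4.8, 5.3],
              [5.0, 5.2, 4.9],
              [12.4, 11.9, 12.8],
              [12.1, 12.6, 12.3],
              [0.4, 0.5, 0.3]])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_)            # cluster index of each meter
print(km.cluster_centers_)   # centroids µ_k of the k clusters
print(km.inertia_)           # total within-cluster variation Σ W(C_k)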

Thus, as a result, we get a graph showing all the meters and their energy consumption for the period, which will make it possible to determine which of the meters consume the most energy, and which ones consume the least (Fig. 9).

Fig. 9. Model of the graph with readings of electricity meters

As a result of the algorithm, a cluster of “normal” data was obtained, after which testing began on data simulating an “emergency situation”.


Figure 10 shows the result of data clustering.

Fig. 10. “Emergency situation” simulation

To identify an error, it is possible to use the graph of the sensor readings growth (Fig. 11).

Fig. 11. Data visualization


6 Summary and Conclusion

As a result, the process of creating an electric meters monitoring system for residential buildings has been described; the system is capable of processing data from sensors and of clustering and classifying them, which makes it possible to monitor the level of electricity consumption. In the course of the work, data clustering algorithms, data processing methods, and the creation of cross-platform software applications communicating with the server via REST API requests were used. Specific technologies, methods and tools were chosen and compared with existing alternatives. The Eastron SDM630-MODBUS electric meter was considered for sending data on consumed electricity to the server. Big data processing platforms and their necessary functions were also analysed. During the development of the monitoring system, the basic structure of the controllers and electric meters was considered; a script was created to collect data from the sensors for further sending to the server and processing; and the data load from the sensors was simulated to check the correct operation of the developed system. The software is implemented in the PyCharm 2021.10 programming environment in the Python programming language. With the help of sensors connected to the network, it is possible to collect data from city residents and robotic systems in real time. By collecting and processing information in real time, the available resources are utilized efficiently, providing a higher level of service to the public.

References 1. What is the Internet of Things (IoT)? Electronic resource. https://bit.ly/3D5F6rS 2. McEwen, A., Cassimally, H.: Designing the Internet of Things, 1st edn, 336 p. Wiley, Hoboken (2013). 3. Fedorova, N.V., Kovalchuk, A.M., Nikolaev, N.A.: Concepts and solutions for the organization Smart City. Magyar Tudományos J. (52), 42–46 (2021) 4. Fedorova, N.V., Nikolaev, N.O.: Automation of urban infrastructure processes using the Smart City concept. In: Collection of Science Proceedings of the II International Scientific and Practical Conference “An Integrated Approach to Science Modernization: Methods, Models and Multidisciplinarity, Vinnytsia, Vienna, 27 October 2021, pp. 202–205 (2021) 5. Anthopoulos, L.G.: Understanding the smart city domain: a literature review. In: RodríguezBolívar, M.P. (ed.) Transforming City Governments for Successful Smart Cities, pp. 9–21. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-03167-5_2 6. Wieruch, R.: The Road to React: Your Journey to Master Plain Yet Pragmatic React.js, 1st edn, 250 p. Independently Published (2017) 7. Vanderkam, D.: Effective TypeScript: 62 Specific Ways to Improve Your TypeScript, 2nd edn. 264 p. O’Reilly Media (2019) 8. Whaley, B.: Amazon Web Services in Action, 2nd edn, 528 p. Manning (2018) 9. Petin, V.A.: Raspberry PI Microcomputers. Practical Guide, 132 p. BHV-Kyiv, St. Petersburg (2007) 10. Hiller, G.S.: IoT with Python, 456 p. Packt Publishing (2016) 11. Modbus Documentation. Electronic resource. Access Mode. https://bit.ly/3zkx6Ca 12. Energy Meter Logger. Electronic resource. Access Mode. https://bit.ly/3DBYNJ9


13. Energy monitoring on Raspberry PI. Electronic resource. Access mode. https://bit.ly/3Fk Rt5Y 14. Work with monitoring on Python. Electronic resource. Access mode. https://habr.com/ru/ post/401313/

Implementation of Blockchain Technology for Secure Image Sharing Using Double Layer Steganography

Lalitha Kandasamy(B) and Aparna Ajay

SRM Institute of Science and Technology/ECE, Ramapuram Campus, Chennai 600087, India
[email protected]

Abstract. Healthcare information management has received a great deal of attention in recent times due to its enormous potential for delivering more precise and cost-effective patient care. A blockchain network can be used in healthcare to exchange user data among hospitals, diagnostic laboratories and pharmaceutical enterprises. Nowadays, securing images is a big challenge in maintaining confidentiality and integrity, as the technology developed in the health industry might be misused over public networks, giving a chance for unauthorized access. Blockchain provides a public record of peer-to-peer transactions so that everyone can view them. This technology helps medical organizations obtain insight and enhance the analysis of medical records, and it provides a robust and secure framework for storing and sharing data throughout the healthcare business. In the health sector, image-based diagnostics is an essential process. The research proposed here uses blockchain technology to allow sharing of patient records in a secured way for telemedicine applications. These images are shared geographically and pass through public networks, so security issues such as integrity and authentication may occur. The images are encrypted using a cover image and a final steganography image is created. Steganography is used as a major tool to improve the security of one's data. The proposed system provides two layers of medical image security by using the LSB (Least Significant Bit) method with encryption. The medical image is inserted into a cover image by LSB; the result is known as the stego image. Encryption, which is a part of cryptography, provides integrity, and the medical image is thus secured in the steganography process. The entire process is executed using the MATLAB 2021 version. The simulation results show that the medical images are secured against various attacks, and the extracted image shows a minimum mean square error of 0.5. Keywords: Encryption · Blockchain technology · Steganography · Telemedicine · Medical image encryption



1 Introduction

Blockchain is a decentralized public ledger database, maintained by a system of verified users or nodes, that contains unchangeable blocks of data that may be safely transferred without the involvement of a third party. Data is maintained and documented using cryptographic signatures and consensus techniques, which are crucial facilitators of its implementation. This capacity to preserve data is a crucial argument for the blockchain, where a huge volume of data is exchanged and distributed extensively. Blockchain has progressed through three stages: 1.0, 2.0 and 3.0. Blockchain versions 1.0 and 2.0 focused on finance and transactions, respectively. The blockchain 3.0 revolution includes educational, governmental, research and healthcare applications, and it has given the healthcare industry reason to be optimistic. This research's primary objective is to implement and study blockchain technology as it applies to healthcare, especially biomedical imaging.

In recent years, the advancement of technology has made securing data quite challenging, and many tools have been developed to steal data [1–3]. To tackle this, one must improve the security of one's data and applications, and steganography is used as a major tool for this purpose [4, 5]. In this paper, the concepts of both steganography and cryptography are used to avoid data loss to unauthorized users. Generally, people want to hide their confidential data, such as medical images, passwords and house assets, by applying the above security strategies [6, 7]. Here, we secure two images in a single image by the least significant bit (LSB) method. Steganography is the process of hiding confidential data in another file (image/audio): the secret image is protected by covering it with another object, which may be an image or text, and combining cryptography with steganography provides more security. Confidentiality ensures that the message is not disclosed to unauthorized parties, and integrity ensures that the image has not been tampered with. Steganalysis is the study of detecting messages hidden using steganography and is analogous to cryptanalysis applied to cryptography. Steganography is classified into several major types, such as text steganography, audio steganography, video steganography, image steganography, folder steganography and email steganography. In this proposed approach, image steganography is employed, which hides information such as an image or data in an image file. The applications of image steganography include secret data storage, access control and protection of data against alteration by unauthorized users.

M. Saravanan et al. used a novel approach for steganography which results in the transformation of all pixels; it is one of the most efficient technologies used for secure data transmission in public networks. In their work, the confidential image is embedded in an audio file. With the help of clustering modification direction, the image was divided into sub-parts and embedded, converting the image into a one-dimensional audio file. The image was reshaped into one dimension, normalized, converted into a .wav file, transmitted, and then decoded to reconstruct the image [8]. Many algorithms have been developed to improve the steganography process to ensure privacy [9–11].


A. H. Mohsin et al. proposed blockchain technology to update and share a medical COVID-19 dataset between hospitals in a network. Their scheme hides secret medical data to improve healthcare services, but the architecture required to eliminate the central point in the network during transmission is complex [12]. Irina V. Pustokhina et al. developed a blockchain-based secure data-sharing scheme to communicate patient records in a secure and private way for telemedicine services. The method involves a three-stage process of image steganography, encryption, and secure data sharing, but the presented result reaches a peak signal-to-noise ratio (PSNR) of only 51.75 dB [13]. Recent research uses image segmentation and cryptography to secure medical image transmission in IoT-based blockchain networks with machine learning algorithms, but transmission of the encryption key remains a major issue [14–17]. The proposed method therefore applies a steganography process for hiding the image. The main objective of this research is to apply steganography and an encryption algorithm to transfer medical images securely. Two-level security is provided by hiding two images inside one image with the least significant bit method; each image is implanted into another image through a pixel arrangement. To extract the original image file, an encryption key in the form of a bitmap image file is used. The method is proposed to increase the PSNR and to provide double-layer image steganography for improving quality of service.

2 Proposed Model

In the proposed model, the password is hidden in a base image so that an observer cannot tell that any password is concealed in the image. The image also carries a secure key to protect the hidden content. In symmetric encryption, the message is secured with an encryption algorithm that uses a key shared by the sender and the receiver, and the same key is used for decryption. A diagnostic centre usually stores data such as CT, MRI, and X-ray scans of a patient as part of its policy, and while sharing these across public networks there is a risk of a data breach; asymmetric encryption can be used to mitigate this. The method is carried out with three basic components: the cover media, the secret message, and the steganography algorithm; the embedding algorithm used here is LSB. In general, the least significant bits can be used to store information, because modifying them has little visible effect on the image. In LSB the hidden image should be in bitmap format, since bitmaps are uncompressed and lossless, and the least significant bits of the cover image are replaced by the binary bits of the secret message. In a purely cryptographic technique a secret key must be sent, and a third party can easily recognize that secret information is being exchanged. The proposed approach overcomes this by using steganography: the confidential file is hidden inside an ordinary file, so no one suspects that it carries sensitive data. In this paper, the concepts of both steganography and cryptography are used to avoid data loss to unauthorized users, and two images are secured in a single image by the least


significant bit method. The secret image is implanted into another image through a pixel arrangement before transmission. The secured image is obtained through the LSB method, and the resulting stego image is encoded, split, and encrypted with the encryption algorithm so that it remains protected even against a determined attacker.

3 Methodology

The image or data is hidden in the base image using steganography and an encryption method. The block diagram in Fig. 1 illustrates the method used in the proposed system. A reference image with dimensions of 256 × 256, used to hide both input messages, is called the base image. The normal image is converted into gray scale and then transformed into binary format. The LSB substitution technique is used to embed the secret image into the cover image. At the receiver, when the embedded image reaches the other end of the network, it is extracted to recover the two hidden images.

3.1 Base Image

This is the image taken as reference to hide both input images. The base image has dimensions of 256 × 256.

3.2 Gray Scaling

The normal image is converted into gray scale. Gray scaling converts a color image into a gray-scale image; with background thresholding, a pixel is set to black if its RGB value falls below the corresponding threshold. This changes the character of the image.

3.3 Binarize

Binarization replaces pixel values with 0 or 1, converting the gray-scale image into a binary image. The binarized image is obtained using a threshold of the thresh-binary type.

3.4 Resize

Resizing compresses the size of the image so that quality is preserved with less data. Image formats such as JPG and PNG are resized to the base image size. To perform the steganography process, the base image and the hidden image must be of the same size.


3.5 LSB Substitution

LSB substitution is the technique that embeds the secret image into the cover image. Because only the least significant bits carry the confidential data, the embedding process has little effect on the pixel values of the cover image.

3.6 Embedding

The least significant bits of the base image are replaced by the least significant bits of the input images to generate the embedded image, which is saved as a bitmap image file.

3.7 Extraction

To perform extraction, the bitmap image file is processed to recover the two hidden images.
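The sketch below illustrates the embedding and extraction steps in Python, assuming 8-bit grayscale NumPy arrays and using the two lowest bit planes to realize the double-layer hiding; the function names are illustrative, and the paper's MATLAB implementation may differ.

```python
# Minimal LSB embedding/extraction sketch (illustrative names, not the paper's code).
import numpy as np

def embed_two_images(base, secret1, secret2):
    """Hide two binarized secrets in the two lowest bit planes of an 8-bit base image."""
    base = base.astype(np.uint8)
    s1 = (secret1 > 0).astype(np.uint8)            # binarize first secret
    s2 = (secret2 > 0).astype(np.uint8)            # binarize second secret
    stego = (base & 0b11111100) | s1 | (s2 << 1)   # clear two LSBs, then write both bits
    return stego.astype(np.uint8)

def extract_two_images(stego):
    """Recover the two hidden bit planes from the stego image."""
    s1 = (stego & 0b00000001) * 255                # bit 0 -> first secret
    s2 = ((stego >> 1) & 0b00000001) * 255         # bit 1 -> second secret
    return s1.astype(np.uint8), s2.astype(np.uint8)

# Usage: all three images must share the same size (e.g. 256 x 256), as in Sect. 3.4.
base = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
secret1 = np.random.randint(0, 2, (256, 256), dtype=np.uint8) * 255
secret2 = np.random.randint(0, 2, (256, 256), dtype=np.uint8) * 255
stego = embed_two_images(base, secret1, secret2)
rec1, rec2 = extract_two_images(stego)
assert np.array_equal(rec1, secret1) and np.array_equal(rec2, secret2)
```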

Fig. 1. Block diagram


4 Results and Discussion

The algorithm is executed in MATLAB. A folder is selected from which the image to be hidden is read, and the RGB or color-map image is converted to a gray-scale image. For hiding an RGB image, the dimensions of the images must first be verified to match. LSB substitution is applied to cover the secret image with the base image: the image is placed in another image to be secured, its color is changed to gray scale or RGB, and the steganography for the two images is then performed using the LSB algorithm. Extraction is done in a double layer to separate the steganographic images from the base image.

Figure 2 shows the base image; this is the reference image in which the hidden images are stored, and it is converted into binary. Figure 3 shows the first secured image, a text image in which a password is stored. This secured image is embedded in the base image, and the embedded image represents a key to be secured. The first embedded image is recovered and shown in Fig. 4. Similarly, the second example image and its steganography process are shown in Fig. 5 and Fig. 6; Fig. 5 is converted into a binary image and resized. The extracted images for the respective processes are shown in Fig. 6 and Fig. 7.

The quality of image compression and reconstruction is verified by two parameters, namely the Peak Signal-to-Noise Ratio (PSNR) and the Mean Square Error (MSE). PSNR and MSE are expressed in decibels (dB) and are calculated from Eqs. (1) and (2):

PSNR = 10 log10 (R^2 / MSE)   (1)

MSE = (1 / (M * N)) * Σ_{m,n} [I1(m, n) − I2(m, n)]^2   (2)

where R is the maximum fluctuation in the input image data type, I1(m, n) is the compressed image, I2(m, n) is the original image, M is the number of rows, and N is the number of columns. The PSNR values for the first and second images are 99.33 dB and 99.36 dB respectively; a high PSNR value indicates high quality of the image compression technique. The MSE achieved for the two images is 0.5005 dB and 2.4904 dB; a low MSE value represents better quality of image compression (Fig. 8 and Table 1).
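A minimal Python sketch of Eqs. (1) and (2), assuming 8-bit images so that R = 255; the names are illustrative.

```python
# PSNR/MSE computation following Eqs. (1)-(2); R = 255 for 8-bit images (illustrative sketch).
import numpy as np

def mse(i1, i2):
    i1 = i1.astype(np.float64)
    i2 = i2.astype(np.float64)
    return np.mean((i1 - i2) ** 2)        # (1/(M*N)) * sum of squared differences

def psnr(i1, i2, r=255.0):
    err = mse(i1, i2)
    if err == 0:
        return float("inf")               # identical images
    return 10.0 * np.log10((r ** 2) / err)
```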


Fig. 2. Base image

Fig. 3. First secured image

Fig. 4. First embedded image


Fig. 5. Second secured image

Fig. 6. Second embedded image

Fig. 7. First extracted image


Fig. 8. Second extracted image

Table 1. Simulation results

S. no | First image PSNR (dB) | First image MSE (dB) | Second image PSNR (dB) | Second image MSE (dB)
01    | 99.33                 | 0.5005               | 92.36                  | 2.4904

5 Conclusion

The confidential medical image of a person is concealed from security attacks using a robust double-layer steganography technique. The least significant bits of the base image are replaced by the least significant bits of the input images to generate the embedded image, which is saved as a bitmap image file. Thus the two images are secured at the transmission side and retrieved at the receiver side. The secured image sharing achieves a PSNR of 99.33 dB and an MSE of 0.5. In future work, the stego image can be converted into a QR code so that the security level is further increased.

References 1. Lu, W., Xue, Y., Yeung, Y., Liu, H., Huang, J., Shi, Y.-Q.: Secure halftone image steganography based on pixel density transition. IEEE Trans. Dependable Secure Comput. 18(3), 1137–1149 (2020) 2. Sarmah, D.K., Bajpal, N.: Proposed system for data hiding using cryptography and steganography. J. Comput. Appl. 8(9), 7–10 (2010) 3. Hassaballah, M., Hameed, M.A., Awad, A.I., Muhammad, K.: A novel image steganography method for industrial Internet of Things security. IEEE Trans. Ind. Inform. 17(11), 7743–7751 (2021) 4. Singh, A.K., Singh, J., Singh, H.V.: Steganagraphy in images using LSB technique. Int. J. Latest Trends Eng. Technol. 5(1), 426–430 (2015) 5. Vinothkanna, R.: A secure steganography creation algorithm for multiple file formats. J. Innov. Image Process. 1(01), 20–30 (2019)


6. Hamid, N., Yahya, A., Ahmad, R.B., Al-Qershi, O.M.: Image steganography techniques. Int. J. Comput. Sci. Secur. (IJCSS) 6(3), 168–187 (2019) 7. Lu, W., Huang, F., Huang, l.: Edge adaptive image steganography based on LSB matching revisited. IEEE Trans. Inf. Forensics Secur. 5(2), 201–214 (2010) 8. Saravanan, M., Priya, A.: An algorithm for security enhancement in image transmission using steganography. J. Inst. Electron. Comput. 1(1), 1–8 (2019) 9. Feng, B., Lu, W., Sun, W.: Secure binary image steganography based on minimizing the distortion on the texture. IEEE Trans. Inf. Forensics Secur. 10(2), 243–255 (2015) 10. Anderson, R.J., Petitcolas, F.A.P.: On the limits of steganography. IEEE J. Sel. Area Commun. 16(4), 474–481 (1998) 11. Sedighi, V., Cogranne, R., Fridrich, J.: Steganography by minimizing statistical detectability. IEEE Trans. Inf. Forensics Secur. 11(2), 221–234 (2016). https://doi.org/10.1109/TIFS.2015. 2486744 12. Mohsin, A.H., et al.: PSO–blockchain-based image steganography: towards a new method to secure updating and sharing COVID-19 data in decentralised hospitals intelligence architecture. Multimed. Tools Appl. 80(9), 14137–14161 (2021). https://doi.org/10.1007/s11042020-10284-y 13. Pustokhina, I.V., Pustokhin, D.A., Shankar, K.: Blockchain-based secure data sharing scheme using image steganography and encryption techniques for telemedicine applications. In: Wearable Telemedicine Technology for the Healthcare Industry, pp. 97–108. Academic Press (2022). ISBN: 9780323858540. https://doi.org/10.1016/B978-0-323-85854-0.00009-5 14. Jassbi, S.J., Agha, A.E.A.: A new method for image encryption using chaotic permutation. Int. J. Image Graph. Sig. Process. (IJIGSP) 12(2), 42–49 (2020). https://doi.org/10.5815/iji gsp.2020.02.05 15. Isinkaye, F.O., Aluko, A.G., Jongbo, O.A.: Segmentation of medical X-ray bone image using different image processing techniques. Int. J. Image Graph. Sig. Process. (IJIGSP) 13(5), 27–40 (2021). https://doi.org/10.5815/ijigsp.2021.05.03 16. Tiwari, C.S., Jha, V.K.: Enhancing security of medical image data in the cloud using machine learning technique. Int. J. Image Graph. Sig. Process. (IJIGSP) 14(4), 13–31 (2022). https:// doi.org/10.5815/ijigsp.2022.04.02 17. Agrawal, S., Kumar, S.: MLSMBQS: design of a machine learning based split & merge blockchain model for QoS-aware secure IoT deployments. Int. J. Image Graph. Sig. Process. (IJIGSP) 14(5), 58–71 (2022). https://doi.org/10.5815/ijigsp.2022.05.05

Nature-Inspired DMU Selection and Evaluation in Data Envelopment Analysis Seyed Muhammad Hossein Mousavi(B) Pars AI Company, Tehran, Iran [email protected]

Abstract. To obtain the best efficient frontier, most Decision Management Units (DMUs) in Data Envelopment Analysis (DEA) should have similar efficiency values. Selecting the best DMUs for a business before running it is therefore highly important, and having the best DMUs pushes the business toward the ideal point in the data space: the closer the features or DMUs are to the ideal point, the more efficient the system is. Nature-inspired optimization algorithms obtain highly optimized solutions based on natural selection, which differentiates them from non-intelligent mathematical selection models. Here the Biogeography-Based Optimization (BBO) algorithm is employed to select the best features or DMUs for business benchmark datasets containing a large number of samples, which is the main significance of this research. The system is evaluated with four common DEA methods: Charnes, Cooper and Rhodes (CCR), Input-Oriented Banker, Charnes and Cooper (IOBCC), Output-Oriented BCC (OOBCC), and the Additive model. The proposed approach is compared with the original DEA model, mathematical Lasso regularization, and other nature-inspired methods such as GA and PSO feature selection for all four evaluation components. The returned results show that in most cases the more efficient values belong to the proposed nature-inspired DEA method. To present more comparisons, another optimization-based (Firefly) fuzzy regression algorithm is used for the final regression part, as it yields a higher correlation coefficient than traditional regression methods. An average value of 0.9149 is returned over all ranks, all datasets, and all DEA methods after the proposed feature selection. Additionally, an average correlation coefficient of 0.9451 and an MSE of 0.1327 are returned by the fuzzy firefly regression method for all datasets and ranks. Keywords: Decision management units · Data envelopment analysis · Optimization · Nature-inspired feature selection · Nature-inspired fuzzy regression

1 Introduction

Businesses and companies are looking for the best ways to increase the productivity and efficiency of their facilities in Data Envelopment Analysis (DEA) [1] through various factors; in management, these factors are called Decision Management Units, or DMUs for short [1]. Consider the efficiency of hospitals (the DMUs), with the number of Doctors (nD),


the number of Nurses (nN), and the number of Patients (nP) as input features and Treatments per day as the output: from these, the total efficiency can be calculated. For instance, nD = 10, nN = 20 and nP = 50 is more efficient than nD = 15, nN = 27 and nP = 25, since more patients are cured with fewer resources. Selecting these DMUs wisely is therefore crucial for the most efficient business operation. To this end, a number of techniques and algorithms have been proposed by different researchers over the years; most of them used old, traditional feature selection methods, and some used these algorithms for other purposes within DEA, which are the main limitations. Here, DMUs or features are selected using nature-inspired (bio-inspired) algorithms [2], which is the main objective. Two of the best nature-inspired algorithms for feature selection are Particle Swarm Optimization (PSO) and Genetic Algorithm feature selection [3], but neither is as powerful as the Biogeography-Based Optimization (BBO) algorithm [4] for this task; we arrived at this conclusion through multiple experiments with these algorithms for feature selection. BBO is faster and returns more optimized results in fewer iterations, which leads to selecting the best DMUs (with values closer to 1) for the business. Essentially, BBO feature selection removes any weak DMUs that may yield low efficiency values in DEA. Furthermore, when the number of samples increases to, for example, 100 and the number of DMUs or features to, for example, 10, selecting the best DMUs becomes a laborious task, so when dealing with big datasets it is rational to remove some of the weak samples. The main goal is to select those DMUs which are closer to the efficient frontier. This paper consists of four main sections. Section 1 covers the basics and fundamentals. Section 2 reviews relevant research conducted by other researchers in the field of feature selection. Section 3 first describes the proposed method in detail and then presents the validation and results along with comparisons with other methods. Section 4 includes the conclusion, future work, and suggestions.

2 Literature Review

Feature selection, or dimensionality reduction, is a vital task in data mining [5] and big data [6]. By decreasing the number of features and selecting the best ones, not only does processing speed increase, but outliers, which are less desired, are also eliminated from the main process. Principal Component Analysis (PCA) [7, 24] is one of the most effective traditional feature selection techniques in data mining. A more advanced feature selection algorithm is Lasso regularization [8, 25], which is more time consuming but ends up with more effective features. Another interesting DEA-based feature selection study is that of Zhang, Yishi, et al. [9]; they did the reverse of this research's purpose, using the DEA value of each DMU as the feature selection criterion, so that DMUs with higher DEA values were selected. Among nature-inspired algorithms, PSO feature selection [10] and Genetic feature selection [11] are two of the best to mention. Nature-inspired techniques generally return more impactful features but come with more complexity. The proposed BBO feature selection for DMUs is compared with all four of these algorithms in the validation section.


3 The Proposed Method and Evaluation

Clearly, the DMUs in DEA define the system's performance, and in a small business the DMU values can be determined easily. In a big business, however, or when dealing with real big data, things are different. Nature has the best approach for selection, namely natural selection, and a mathematical implementation of it can handle a certain class of mathematical problems. One of these nature-inspired algorithms is the BBO algorithm, which we intend to use for feature (DMU) selection in DEA for the first time. Based on multiple experiments with different algorithms, BBO was selected, as it returned more optimized results in a faster runtime. Here, each DMU is considered a feature in the feature space, and together they make up the final feature matrix for the main process. After feature selection by the BBO algorithm, the efficiency and DEA calculation starts, and the process ends with a non-linear fuzzy [15] Firefly [12] regression algorithm, which is another bio-inspired algorithm. Based on multiple experiments, nature-inspired regression algorithms returned a higher correlation coefficient than traditional methods, and the best performer among them, the Firefly algorithm, is used for this part. Figure 1 represents the whole process of our system.

Fig. 1. Workflow of proposed method

• BBO Feature Selection. The BBO algorithm [4] is built around a few important parameters: the number of habitats H, the Habitat Suitability Index (HSI), the emigration rate μ, the immigration rate λ, and the Suitability Index Variable (SIV). The algorithm models living creatures moving from one habitat to another with better living conditions and room to grow. In feature selection, we deal with the number of features NF, a feature weight w, and the Mean Square Error (MSE), which should be minimized to select the features. If x_i are the values of the NF features, then x̂_i are the features selected out of NF. So,


considering the features entering the system, y is the output and t is the target. To calculate the final error, e_i = t_i − y_i is computed for each sample, so the objective to minimize is

min MSE = (1/n) Σ_{i=1}^{n} e_i^2 + w * NF.

This is evaluated over all candidate features, and finally the features with the lowest MSE are selected. In the combination of BBO and feature selection, each feature vector or DMU is considered a habitat with a different HSI. The habitats that survive to the final iteration are selected, along with their related features with the lowest error, as mentioned above. The pseudocode of BBO feature selection is presented in Table 1.

Table 1. Pseudo code of BBO feature selection

Start
  Load dataset (DMUs)
  Generate a random set of habitats H1, H2, …, Hn (DMUs or features)
  Define NF (number of features) and w (weights for DMUs)
  Compute HSI value (fitness function) and sort best to worst
  While termination criterion is not satisfied
    Keep the best individuals (elites, i.e. best DMUs)
    Calculate immigration rate λ and emigration rate μ for each habitat based on HSI
    Start Migration
      Select Hi with probability proportional to λ
      Select Hj with probability proportional to μ
      Randomly select a SIV from Hj
      Replace a random SIV of Hi with the one from Hj
    End of Migration
    Start Mutation
      Select a SIV in Hi with probability equal to the mutation rate
      If Hi(SIV) is selected
        Replace Hi(SIV) with a randomly generated SIV
      End if
    End of Mutation
    Recalculate the HSI value of the new habitats
    Calculate the MSE of the DMUs
    Sort population (best to worst cost)
    Replace the worst with the previous generation's elites (DMUs with best cost)
    Sort population (best to worst cost)
  End of While
  Select the first NF habitats
End
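The following Python sketch illustrates the idea of Table 1 under simplifying assumptions: habitats are binary feature masks, the fitness is the least-squares MSE plus w * NF, and the migration and mutation rules are a reduced form of BBO. It is not the paper's exact implementation, and all names are illustrative.

```python
# Simplified BBO-style DMU/feature selection sketch (assumptions: binary masks, lstsq fitness).
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y, w=0.01):
    """Cost = MSE of a least-squares fit on the selected DMUs + w * number of features."""
    if mask.sum() == 0:
        return np.inf
    Xs = X[:, mask.astype(bool)]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    err = y - Xs @ coef
    return np.mean(err ** 2) + w * mask.sum()

def bbo_select(X, y, n_hab=20, n_iter=200, n_elite=2, p_mut=0.1):
    n_feat = X.shape[1]
    habitats = rng.integers(0, 2, size=(n_hab, n_feat))       # random feature masks
    costs = np.array([fitness(h, X, y) for h in habitats])
    for _ in range(n_iter):
        order = np.argsort(costs)                              # best (lowest cost) first
        habitats, costs = habitats[order], costs[order]
        mu = np.linspace(1.0, 0.0, n_hab)                      # emigration: best emigrate most
        lam = 1.0 - mu                                         # immigration: worst immigrate most
        new = habitats.copy()
        for i in range(n_elite, n_hab):                        # keep elites untouched
            for k in range(n_feat):
                if rng.random() < lam[i]:                      # immigrate this SIV
                    j = rng.choice(n_hab, p=mu / mu.sum())     # source habitat by emigration rate
                    new[i, k] = habitats[j, k]
                if rng.random() < p_mut:                       # mutation: flip the bit
                    new[i, k] = 1 - new[i, k]
        habitats = new
        costs = np.array([fitness(h, X, y) for h in habitats])
    best = habitats[np.argmin(costs)]
    return np.flatnonzero(best)                                # indices of the selected DMUs

# Toy usage with synthetic data (10 candidate DMUs, 100 samples).
X = rng.normal(size=(100, 10))
y = X[:, [1, 4, 7]] @ np.array([1.5, -2.0, 0.7]) + 0.05 * rng.normal(size=100)
print(bbo_select(X, y))
```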

• Data Envelopment Analysis (DEA). Originally introduced by Charnes, Cooper and Rhodes in 1978 [13] based on Farrell's ideas from 1957 [14], DEA has taken multiple forms in the hands of different researchers around the world but is still one of the most flexible tools in various areas, particularly in management. DEA is employed to evaluate the performance of different kinds of entities across a wide range of activities and applications [1]. Its main target is to estimate the system's efficiency from different input and output factors, which leads to selecting the best DMUs for the most efficient system. Evaluation takes place with four common DEA methods: Charnes, Cooper and Rhodes (CCR), Input-Oriented Banker, Charnes


and Cooper (IOBCC), Output-Oriented BCC (OOBCC), and the Additive model [1]. The CCR model assumes that constant returns to scale exist at the efficient frontier, whereas BCC assumes variable returns to scale. The CCR model measures the Overall Technical Efficiency (OTE), while the BCC model assesses the Pure Technical Efficiency (PTE). The CCR model has a straight-line efficiency frontier, whereas the BCC model has a convex efficiency frontier, as Fig. 2 shows.

Fig. 2. CCR and BCC models
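As an illustration of how a single CCR efficiency score can be computed, the sketch below solves the input-oriented CCR envelopment linear program with SciPy. The data are toy values, and the BCC, output-oriented, and Additive variants used in the paper are not reproduced here.

```python
# Input-oriented CCR efficiency via the envelopment LP (a sketch, not the paper's code).
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """X: (m inputs x n DMUs), Y: (s outputs x n DMUs); returns theta* for DMU index o."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, o]                       # X @ lambda - theta * x_o <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                            # -Y @ lambda <= -y_o
    b_ub[m:] = -Y[:, o]
    bounds = [(0, None)] * (n + 1)               # theta >= 0, lambda >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

# Toy data: 2 inputs and 1 output for 4 DMUs; efficiency is 1 for frontier DMUs.
X = np.array([[10.0, 15.0, 12.0, 8.0],
              [20.0, 27.0, 22.0, 18.0]])
Y = np.array([[60.0, 40.0, 55.0, 35.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(X.shape[1])])
```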

• Fuzzy Firefly Regression. The Firefly Algorithm (FA) [12] is one of the most robust and well-known bio-inspired optimization algorithms and is well suited to learning for regression tasks. This regression can be aided by a clustering technique applied to the input data, one of which is Fuzzy [15, 26] C-means clustering (FCM) [16]. Fuzzy C-means is the fuzzy counterpart of the K-means (Lloyd's) clustering algorithm [17]. By clustering the data in the initial step, the data are organized in an optimized manner for the training step; clearly, more clusters means more accuracy but also more computational time. The goal here is to adjust the base fuzzy parameters according to the modeling error using FA and to return the best fuzzy parameter values as the final result. Considering p_i* as the final optimized regression value, its two parameters x_i and p_i^o are determined by FA and by fuzzy logic, respectively. First, the data (inputs and targets) passing through the fuzzy system are divided into training and test parts of 70% and 30%, respectively. The second step is to define linguistic variables, construct membership functions, sets and rules, and finally convert the crisp feature (input and target) matrix into a fuzzy model (fuzzification), which yields an initial fuzzy model ready for training by the FA algorithm. The fuzzy part uses a Sugeno inference system, as it performs better than Mamdani. Each input represents one feature (with three membership functions each) and three rules (using the "and" operator), followed by an output which contains the targets in this step.


The fuzzy model of the DMUs is sent to FA as input for adjusting the basic fuzzy parameters through the nature-inspired behavior of FA under the x_i value, as mentioned above. The adjustment acts on the membership functions, changing the range and variance of the Gaussian curves to their fittest form via FA. As in any other bio-inspired algorithm, the population size and number of iterations play an important role; here the population and iterations are set to 15 and 1000, respectively. Also, the number of decision variables and their lower and upper bounds are set to 10, −10 and 10, respectively.

Table 2. Pseudo code of fuzzy firefly regression

Start
  Load data (extracted features or DMUs)
  Divide train and test data (both into inputs and targets)
  Generate basic Fuzzy C-Means model
  Define linguistic variables
  Construct membership functions, sets and rules
  Initial training using fuzzy logic
  Fuzzification (crisp inputs to fuzzy)
  Training using Firefly Algorithm (input: fuzzy sets and rules)
    Goal: adjust base fuzzy parameters according to modeling error by Firefly
    Objective function f(x), x = (x1, x2, …, xd)^T
    Generate population of fireflies xi (i = 1, 2, …, n)
    Define light intensity Ii at xi by f(xi)
    Define light absorption coefficient γ (gamma)
    While maximum generation is not satisfied
      For i = 1 to n fireflies
        For j = 1 to n fireflies (inner loop)
          If (Ii < Ij), firefly i moves toward firefly j
          End if
          Change attractiveness with distance r via exp[−γr]
          Evaluate new solutions and update light intensity
        End
      End
      Sort and rank fireflies and find the current global best g*
    End of while
  Inference (evaluating fuzzy rules and combining results based on Firefly model error)
  Defuzzification (fuzzy outputs to crisp)
  Get optimized value of (…)
  Calculate polynomial nonlinear regression between targets (input labels) and outputs (…)
  Return MSE, RMSE, error mean and error STD
End
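A compact sketch of the firefly search loop embedded in the training step of Table 2, assuming a generic cost function in place of the fuzzy modeling error; parameter defaults loosely follow Table 3, but this is an illustration rather than the paper's code.

```python
# Minimal firefly algorithm sketch for minimizing a cost function (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def firefly_minimize(cost, dim=10, n_fireflies=15, n_iter=1000,
                     lb=-10.0, ub=10.0, beta0=2.0, gamma=0.1, alpha=0.2):
    pos = rng.uniform(lb, ub, size=(n_fireflies, dim))
    intensity = np.array([cost(p) for p in pos])       # lower cost = brighter firefly
    best_x, best_f = pos[np.argmin(intensity)].copy(), intensity.min()
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity[j] < intensity[i]:         # firefly i moves toward brighter j
                    r = np.linalg.norm(pos[i] - pos[j])
                    beta = beta0 * np.exp(-gamma * r)   # attractiveness decays with distance
                    step = alpha * (rng.random(dim) - 0.5)
                    pos[i] = np.clip(pos[i] + beta * (pos[j] - pos[i]) + step, lb, ub)
                    intensity[i] = cost(pos[i])
        if intensity.min() < best_f:                    # track the global best g*
            best_f = intensity.min()
            best_x = pos[np.argmin(intensity)].copy()
        alpha *= 0.98                                   # slowly reduce randomness
    return best_x, best_f

# Toy usage: minimize a sphere function in place of the fuzzy modeling error.
x_best, f_best = firefly_minimize(lambda x: float(np.sum(x ** 2)), n_iter=300)
print(round(f_best, 4))
```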

FA consists of five main parts: the population of fireflies x, the light intensity I of each firefly, the light absorption coefficient γ, the attraction coefficient β, and the mutation rate. It works by moving fireflies with lower light intensity toward brighter ones, applying mutation, and updating the old and new solutions. The fuzzy input model is thereby transformed into a better fuzzy model after FA acts on its membership functions and parameters. By evaluating the fuzzy FA model with the fuzzy inference engine, the final trained


data (train and test) become available. To calculate the error, the fuzzy data must be returned to their original crisp form, an operation called defuzzification. The inputs are the train and test inputs, and their evaluated versions are the train and test outputs. The difference between the train/test outputs and the train/test targets provides the system error, reported here as MSE, RMSE, mean error and error STD. The pseudocode of fuzzy Firefly regression is given in Table 2, and Table 3 presents the BBO and Firefly algorithm parameters used in the experiments.

Table 3. Nature-inspired algorithms' parameters

Parameter | BBO algorithm | Firefly algorithm
Iteration | 1000 | 1000
Population | 20 habitats | 15 fireflies
Habitat suitability index (HSI) | Objective function | *
Light intensity I | * | Objective function
Variables | 10 | 10
Lower bound (var min) | −10 | −10
Upper bound (var max) | 10 | 10
Keep rate | 0.2 | *
No. of kept habitats | Keep rate * habitats | *
No. of new habitats | Habitats − kept habitats | *
Emigration rate | μ = 0.2 | *
Immigration rate | λ = 0.3 | *
Alpha | α = 0.9 | α = 0.2 (mutation coefficient)
Mutation probability | 0.1 | *
Sigma | ς = 0.02 * (var max − var min) | *
Light absorption coefficient | * | γ = 0.1
Attraction coefficient | * | β = 2
Delta (mutation range) | * | δ = 0.05 * (var max − var min)

Table 4 shows the details of the datasets used in the experiments. Figure 3 illustrates the DEA calculation for the first 15 samples of the daily demand forecasting orders dataset before BBO feature selection and after feature selection by the BBO algorithm with 7, 8 and 10 features (DMUs) out of 13, respectively. Figure 4 shows the BBO algorithm training stage over 1000 iterations. Table 5 reports the results returned by the proposed method and four other methods, including the original DEA and three feature selection methods applied before DEA. These results are the average of the CCR, IOBCC, OOBCC and Additive methods on the four datasets at the three ranks of 25%, 50% and 75% of the features. The proposed BBO DMU


selection method is compared with traditional Lasso regularization [8], Genetic Algorithm (GA) feature selection [11] and Particle Swarm Optimization (PSO) feature selection [10]. A DEA value closer to 1 means higher efficiency for the system. Figure 5 presents the results of the experiment in Table 5 as box plots.

Table 4. Experiment's datasets

Name | Area | Associated task(s) | Instances | Features | Reference
Clickstream data for online shopping | Business | Classification, regression, clustering | 165474 | 14 | [18]
Daily demand forecasting orders | Business | Regression | 60 | 13 | [19]
Online news popularity | Business | Classification, regression | 39797 | 61 | [20]
Statlog (Australian credit approval) | Financial | Classification, regression | 690 | 14 | [21]

Fig. 3. Testing bar plot of BBO feature selection on samples of daily demand forecasting order dataset versus DEA on original data


Fig. 4. BBO algorithm training stage over 1000 iterations based on Table 3 parameters

Table 5. Average of CCR, IOBCC, OOBCC and additive methods on datasets and comparison with other methods in three ranks

Features = DMUs | Clickstream | Daily demand | Online news | Statlog
Original DEA (all features) | 0.837 | 0.926 | 0.844 | 0.883
Lasso DEA
  Rank 1 = 75% of features | 0.691 | 0.839 | 0.799 | 0.817
  Rank 2 = 50% of features | 0.601 | 0.782 | 0.749 | 0.781
  Rank 3 = 25% of features | 0.588 | 0.754 | 0.694 | 0.758
GA features DEA
  Rank 1 = 75% of features | 0.825 | 0.937 | 0.905 | 0.899
  Rank 2 = 50% of features | 0.799 | 0.879 | 0.866 | 0.857
  Rank 3 = 25% of features | 0.767 | 0.867 | 0.805 | 0.796
PSO features DEA
  Rank 1 = 75% of features | 0.812 | 0.951 | 0.857 | 1.000
  Rank 2 = 50% of features | 0.786 | 0.928 | 0.817 | 0.881
  Rank 3 = 25% of features | 0.772 | 0.876 | 0.800 | 0.856
BBO features DEA
  Rank 1 = 75% of features | 0.919 | 1.000 | 0.961 | 1.000
  Rank 2 = 50% of features | 0.884 | 0.969 | 0.863 | 0.957
  Rank 3 = 25% of features | 0.803 | 0.900 | 0.840 | 0.883

Figure 6 illustrates the fuzzy firefly regression result on 250 samples of the online news popularity dataset using 25% of the features (DMUs). The returned Correlation Coefficient (CC) [23] and errors show an outstanding match between targets and outputs for both train and test data. Figure 7 shows the related errors for the same experiment, and Fig. 8 presents the fuzzy Firefly regression performance during the training stage. Table 6 reports the Correlation Coefficient (CC) and Mean Square Error (MSE) [23] for the experiment on the four datasets at the three ranks of 25%, 50% and 75% of the features, achieved with fuzzy firefly regression and


Fig. 5. Comparison of different methods as box plots

Fig. 6. Fuzzy Firefly non-linear regression test on samples of online news popularity dataset after BBO feature selection by 25% of DMUs

compared with fuzzy regression [22] on the DMUs extracted in the previous step by the BBO feature selection algorithm. Looking at the results in Table 5 and Fig. 5, the performance of Lasso regularization DEA is the most modest, while the original DEA on all data achieves


Fig. 7. MSE, RMSE, Error Mean, Error STD for regression experiment of Fig. 6

Fig. 8. Firefly algorithm training stage over 1000 iterations based on Table 3 parameters

medium performance, as is clear from Table 5. GA DEA performs slightly better than the original DEA at all ranks, second place goes to PSO DEA, and the best results belong to BBO DEA. An average value of 0.9149 is returned over all ranks, all datasets and all DEA methods after the proposed feature selection method. Additionally, an average correlation coefficient of 0.9451 and an average MSE of 0.1327 are returned by the fuzzy firefly regression method over all datasets and ranks, which is significant.


Table 6. CC and MSE comparison for fuzzy regression and fuzzy firefly regression in three ranks of features on four datasets

Features = DMUs | Clickstream | Daily demand | Online news | Statlog
Fuzzy regression
  Rank 1 = 75% of features | CC = 0.836, MSE = 0.285 | CC = 0.882, MSE = 0.154 | CC = 0.944, MSE = 0.081 | CC = 0.970, MSE = 0.022
  Rank 2 = 50% of features | CC = 0.779, MSE = 0.300 | CC = 0.875, MSE = 0.181 | CC = 0.937, MSE = 0.099 | CC = 0.969, MSE = 0.082
  Rank 3 = 25% of features | CC = 0.783, MSE = 0.319 | CC = 0.850, MSE = 0.187 | CC = 0.912, MSE = 0.058 | CC = 0.920, MSE = 0.061
Fuzzy firefly regression
  Rank 1 = 75% of features | CC = 0.936, MSE = 0.211 | CC = 0.928, MSE = 0.119 | CC = 0.991, MSE = 0.004 | CC = 0.998, MSE = 0.001
  Rank 2 = 50% of features | CC = 0.889, MSE = 0.260 | CC = 0.905, MSE = 0.117 | CC = 0.980, MSE = 0.059 | CC = 0.983, MSE = 0.036
  Rank 3 = 25% of features | CC = 0.849, MSE = 0.297 | CC = 0.900, MSE = 0.162 | CC = 0.987, MSE = 0.032 | CC = 0.996, MSE = 0.007

4 Conclusion

By empowering DEA with nature-inspired algorithms, it is possible to achieve more efficient results compared to traditional DEA. Evolutionary feature selection (here BBO) can remove inefficient DMUs better than non-intelligent mathematical methods such as PCA or Lasso. Furthermore, to better understand the relations between variables after DMU selection, a nature-inspired regression (here Firefly) combined with fuzzy logic is used, as it provides a higher correlation coefficient than older techniques. Overall, by employing the mentioned techniques, more efficient and precise results are returned in comparison with other methods; however, due to the use of optimization algorithms, runtime is sacrificed. Using more business and management datasets, and comparing the proposed system with deep-learning-based feature selection and regression methods, are left as future work. For this research the system had 7 CPU cores; it is suggested to run the proposed system with over 5000 iterations and more than 50 individuals, as optimization algorithms demand more hardware resources for higher parameter values and will obviously return better results under those conditions. The proposed system can serve as a useful tool for resource allocation in business and management applications when the number of DMUs is very high.

References 1. Charles, V., Kumar, M. (eds.): Data Envelopment Analysis and Its Applications to Management. Cambridge Scholars Publishing, Cambridge (2013) 2. Mousavi, S.M., Hossein, V.C., Gherman, T.: An evolutionary pentagon support vector finder method. Expert Syst. Appl. 150, 113284 (2020)


3. Ghosh, M., et al.: Binary genetic swarm optimization: a combination of GA and PSO for feature selection. J. Intell. Syst. 29(1), 1598–1610 (2020) 4. Simon, D.: Biogeography-based optimization. IEEE Trans. Evol. Comput. 12(6), 702–713 (2008) 5. Mousavi, S.M.H., MiriNezhad, S.Y., Mirmoini, A.: A new support vector finder method, based on triangular calculations and K-means clustering. In: 2017 9th International Conference on Information and Knowledge Technology (IKT). IEEE (2017) 6. Charles, V., Gherman, T.: Big data analytics and ethnography: together for the greater good. In: Emrouznejad, A., Charles, V. (eds.) Big Data for the Greater Good. SBD, vol. 42, pp. 19–33. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-93061-9_2 7. Abdi, H., Williams, L.J.: Principal component analysis. Wiley Interdisc. Rev. Comput. Stat. 2(4), 433–459 (2010) 8. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. Roy. Stat. Soc. Ser. B (Methodol.) 58(1), 267–288 (1996) 9. Zhang, Y., et al.: Feature selection using data envelopment analysis. Knowl.-Based Syst. 64, 70–80 (2014) 10. Tu, C.-J., et al.: Feature selection using PSO-SVM. Int. J. Comp. Sci. 33, 1–18 (2007) 11. Babatunde, O.H., et al.: A genetic algorithm-based feature selection (2014) 12. Yang, X.-S.: Nature-Inspired Metaheuristic Algorithms. Luniver Press (2010) 13. Charnes, A., Cooper, W.W., Rhodes, E.: Measuring the efficiency of decision making units. Eur. J. Oper. Res. 2(6), 429–444 (1978) 14. Farrell, M.J.: The measurement of productive efficiency. J. Roy. Stat. Soc. Ser. A (Gener.) 120(3), 253–281 (1957) 15. Zadeh, L.A.: Fuzzy logic. Computer 21(4), 83–93 (1988) 16. Bezdek, J.C., Ehrlich, R., Full, W.: FCM: the fuzzy c-means clustering algorithm. Comput. Geosci. 10(2–3), 191–203 (1984) 17. Likas, A., Vlassis, N., Verbeek, J.J.: The global k-means clustering algorithm. Pattern Recogn. 36(2), 451–461 (2003) 18. Łapczy´nski, M., Białow˛as, S.: Discovering patterns of users’ behaviour in an E-shopcomparison of consumer buying behaviours in poland and other European countries. Studia Ekonomiczne 151, 144–153 (2013) 19. Ferreira, R.P., et al.: Study on daily demand forecasting orders using artificial neural network. IEEE Latin Am. Trans. 14(3), 1519–1525 (2016) 20. Fernandes, K., Vinagre, P., Cortez, P.: A proactive intelligent decision support system for predicting the popularity of online news. In: Pereira, F., Machado, P., Costa, E., Cardoso, A. (eds.) EPIA 2015. LNCS (LNAI), vol. 9273, pp. 535–546. Springer, Cham (2015). https:// doi.org/10.1007/978-3-319-23485-4_53 21. Quinlan, J.R.: Simplifying decision trees. Int. J. Man-Mach. Stud. 27(3), 221–234 (1987) 22. Zuo, H., et al.: Fuzzy regression transfer learning in Takagi-Sugeno fuzzy models. IEEE Trans. Fuzzy Syst. 25(6), 1795–1807 (2016) 23. Murphy, A.H.: Skill scores based on the mean square error and their relationships to the correlation coefficient. Mon. Weather Rev. 116(12), 2417–2424 (1988) 24. Javed, A.: Face recognition based on principal component analysis. Int. J. Image Graph. Sig. Process. 5(2), 38 (2013) 25. Singh, Y., Tiwari, M.: A novel hybrid approach for detection of type-2 diabetes in women using lasso regression and artificial neural network (2022) 26. Singh, A., Singh, P.I., Kaur, P.: Digital image enhancement with fuzzy interface system. Int. J. Inf. Technol. Comput. Sci. (IJITCS) 4, 51 (2012). https://www.mecs-press.org/ijitcs/ijitcsv4-n10/IJITCS-V4-N10-6.pdf

Hybrid Convolution Neural Network with Transfer Learning Approach for Agro-Crop Leaf Disease Identification Md Shamiul Islam1 , Ummya Habiba2 , Md Abu Baten3 , Nazrul Amin3 , Imrus Salehin4(B) , and Tasmia Tahmida Jidney5 1 Bangladesh University of Business and Technology, Dhaka, Bangladesh 2 Bangladesh Agricultural University, Mymensingh, Bangladesh 3 Northern University Bangladesh, Dhaka, Bangladesh 4 Dongseo University, Busan 47011, South Korea

[email protected] 5 Ahsanullah University of Science and Technology, Dhaka, Bangladesh

Abstract. Transfer learning is an optimization that assumes faster development and superior performance when modeling a second task. In this study, we construct an integrated model based on improved VGG16 and ResNet-50 networks. To improve the CNN architecture, one fully connected layer of VGG16 is dropped and the remaining layers are connected directly, reducing the computational complexity; ResNet-50 is updated at its input image size layer. For agro-crop leaf disease identification, the BAU-Agro data are trained with this improved CNN architecture, and the trained model reaches 95.89% accuracy with the integrated algorithm architecture. The proposed approach uses fewer parameters and consumes less time for the overall experiment and identification. The high-dimensional image feature data output by the improved VGG16 and ResNet-50 are fed into the Convolutional Neural Network (CNN) for training so as to recognize the disease classes with high accuracy. The main novelty of this research is the focus on a minimally time-consuming image classification architecture, whose performance is compared with other parallel pre-trained neural network models. Keywords: VGG-16 · Resnet-50 · Image identification · Agriculture · CNN

1 Introduction

At the present time, food safety is a first priority all over the world. The quality and quantity of agro-production suffer if food production does not become automated and advanced [1], and the situation becomes more dangerous without sufficient progress in disease control. In the field of crop disease identification, primitive or manual assessment is still widely used. With the rapid development of Machine Learning (ML) and Artificial Intelligence (AI) applications, the precision and accuracy of crop leaf disease identification


methods developed on Deep Learning (DL) in real agricultural scenes have surpassed those of experienced agricultural specialists. However, because of limited datasets and the impact of the network structure, complex networks suffer from overfitting and thus lower image recognition accuracy, which cannot meet the need for efficient analysis of authentic agricultural work scenes. In recent years, improved VGG-16 [12], VGG16-transfer [13], and pre-trained VGG-16 [14] have been applied to image classification, but the accuracy of these architectures fluctuates with fine-tuning and the classification model. For this reason, we propose our improved model to perform better. The main novelties of our study are as follows:

• We reduce the number of parameters from the existing CNN model to our improved model; reducing the parameters makes the model perform well and also reduces the time complexity.
• Concatenation algorithms are applied for better performance and model accuracy evaluation.
• The identification system is more advanced than previous CNN-based classification models.

Past experiments used separate, predefined CNN architecture models, which have batch size and fine-tuning limitations. In our improved model we address this problem with a hybrid method and transfer learning. To overcome the problem, we propose a developed transfer learning network [2], in which the image analysis model is built from the VGG16 and ResNet-50 networks, while fine-tuning further improves the image analysis model. Both algorithms have been adapted to reduce the overfitting problem.

2 Related Work

High-volume agro-production is a major concern all over the world, so it is essential to ensure its quality and security; however, crop disease has a strong impact on crop production. Advances in big data and AI provide new ideas for crop-pest analysis and experimentation [3, 4]. D. Xiao et al. introduced a potato insect identification model based on Faster R-CNN and a residual convolution network [5]. The MobileNetv2-YOLOv3 lightweight network model applies normalization and segmentation for pest image identification [6]. Deep network structures tend to overfit during model training and testing, which degrades image recognition accuracy; moreover, images of diseases and insects are difficult, and the existing methods are not supported by dependable and complete datasets. In [7], the authors compared six different CNN architectures, VGG16, InceptionV3, Xception, ResNet50, MobileNet and DenseNet121, and obtained the best test accuracy of 95.48% with DenseNet121; they took their data from the PlantVillage dataset of 15 different plant species, using 10 classes of potato and tomato plants, with 8984 images for training, 1176 for validation and 1173 for testing [7]. Using Xception and DenseNet


(CNN architectures), the authors of [8] built a multi-plant disease diagnosis method; they collected their dataset from various online sources (tomato, potato, rice, corn, grape, apple), covering a total of 28 diseases of 6 plants [8]. To solve the problems above, this study proposes a leaf disease image analysis approach based on an improved transfer learning network.

3 Identification Process of Crop Diseases and Recognition Model

Based on the idea of transfer learning, we propose the flowchart model of Fig. 1, in which the crop disease data pass through the pre-trained convolutional network with fine-tuning to generate the output.

Fig. 1. Diagram of leaf diseases identification on a multilayer network model

Figure 1 shows the VGG16- and ResNet-50-based identification system, which uses pre-training and fine-tuning. The initial network design is obtained by using ResNet-50 trained on the dataset, and enhanced validation sets are used to fine-tune the pre-trained model. After a number of iterations to optimize network convergence, the corresponding convolutional neural network disease detection models, dl-VGG16 and ResNet-50, are obtained.

4 Data Processing

4.1 Dataset

In this experiment, we use an image dataset from the Bangladesh Agricultural University agro-disease database. For the analysis, the dataset contains a total of 14 classes and 13,024 images. The BAU-Agro Disease dataset contains a large number of images of crop leaves such as corn, rice, potato and wheat. A total of 10,000 images are used for training and validation, and the remaining 3,024 are used for testing. The resolution of each sample image is 224 × 224 × 3. Some samples are shown in Fig. 2.


Fig. 2. Data sample BAU-Agro Disease

4.2 Dataset Processing

The number of leaf disease images per class in the BAU-Agro dataset is uneven, so the dataset has to be balanced and divided for training and testing [9]. Furthermore, training deep learning models requires large datasets, so image augmentation is used to increase the size of the dataset. After the augmentation stage, the image dataset goes through another pre-processing step known as size normalization, in which each photo is resized to a standardized size; in this research the dimensions of each image were set to 224 × 224 pixels. The images are then divided into batches and processed batch by batch; the batch size employed is 32, the default value of this parameter. Training the model: the proposed model is trained with the deep architectures ResNet50 and VGG16, whose pre-trained weights are used to initialize the weights of the learned model. The network learns from the extracted features by repeatedly carrying out back-propagation, modifying the weights to reduce the error. "Fine-tuning" refers to making these small adjustments to the weights, and "dl-VGG16" is the name given to the improved version of the network.
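A sketch of this resize/augment/batch step with Keras is shown below; the directory layout, augmentation settings, and validation split are assumptions, not the exact BAU-Agro pipeline.

```python
# Sketch of the resize/augment/batch pipeline with Keras (settings are assumptions).
import tensorflow as tf

IMG_SIZE = (224, 224)
BATCH_SIZE = 32

datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,          # simple augmentations to enlarge the training set
    horizontal_flip=True,
    validation_split=0.2,       # assumed split of the training images for validation
)

train_gen = datagen.flow_from_directory(
    "bau_agro/train", target_size=IMG_SIZE, batch_size=BATCH_SIZE,
    class_mode="categorical", subset="training")
val_gen = datagen.flow_from_directory(
    "bau_agro/train", target_size=IMG_SIZE, batch_size=BATCH_SIZE,
    class_mode="categorical", subset="validation")
```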

5 Transfer Learning Network

5.1 VGG-16 Architecture

In this research, the VGG16 architecture has been selected as the pre-trained model for transfer learning. In the VGG16 model [10] architecture, we have eliminated


fully connected layer FC1, so the convolutional output is linked directly to fully connected layer FC2. Secondly, the number of neurons in the remaining fully connected layers is reduced. Decreasing the number of parameters makes the features extracted by the last convolution layer more distinctive, which helps to enhance the unification outcome, and reducing the depth of the network to minimize the number of parameters (dl-VGG16) helps to counteract overfitting to a certain extent, as shown in Fig. 3.

Fig. 3. Improved VGG-16 architecture
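A hedged Keras sketch of the dl-VGG16 idea follows: the pretrained convolutional base is kept, the first large fully connected layer is dropped, and a single smaller FC layer feeds the classifier. The layer sizes shown are assumptions, not the paper's exact configuration.

```python
# Sketch of the dl-VGG16 idea in Keras (layer sizes are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 14  # BAU-Agro classes

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                       # transfer learning: freeze the pretrained base

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(1024, activation="relu"),   # single, reduced FC layer instead of FC1 + FC2(4096)
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```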

5.2 ResNet-50

ResNet-50 is a pretrained model for image classification in Convolutional Neural Networks (CNN, or ConvNet), applied to mitigate the exploding gradient and degradation problems faced while training deep neural network models. ResNet-50 has 50 layers and was trained on over a million images from 1000 categories of ImageNet. It consists of 48 convolution layers, 1 average pooling layer and 1 max pooling layer, and requires about 3.8 × 10^9 floating point operations. The sample images are fed to the model and the parameters are configured with a batch size of 32, 50 epochs, and a learning rate of 3 × 10^-2. The workflow of ResNet-50 is shown in Fig. 4.

Fig. 4. Workflow of ResNet-50 architecture.


In a CNN, every layer learns low- or high-level features while being trained for the task at hand. Instead of trying to learn the desired mapping directly, a residual block tries to learn the residual. The output function y is defined in Eq. (1) when the input and output have the same dimensions:

y = F(x, {Wj}) + x   (1)

where x is the input to the residual block, F(x, {Wj}) is the residual mapping, and Wj represents the weight layers. For input and output of different dimensions, the shortcut either performs identity mapping by padding extra zero entries for the increased dimension or matches the dimensions using a projection shortcut Ws, as in Eq. (2):

y = F(x, {Wj}) + Ws x   (2)
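The block of Eqs. (1)-(2) can be written in Keras as follows; this is a generic residual-block sketch (not ResNet-50's exact bottleneck design), with the 1 × 1 projection playing the role of Ws.

```python
# Basic residual block following Eqs. (1)-(2) (illustrative sketch).
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters, stride=1):
    shortcut = x
    f = layers.Conv2D(filters, 3, strides=stride, padding="same", activation="relu")(x)
    f = layers.Conv2D(filters, 3, padding="same")(f)               # F(x, {Wj})
    if stride != 1 or x.shape[-1] != filters:                      # dimensions differ -> Eq. (2)
        shortcut = layers.Conv2D(filters, 1, strides=stride)(x)    # projection Ws * x
    y = layers.Add()([f, shortcut])                                # y = F(x, {Wj}) + x
    return layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(224, 224, 3))
out = residual_block(inputs, filters=64, stride=2)
```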

5.3 CNN Model Concatenation Algorithm

The ResNet-50 and VGG16 convolutional neural networks have different structures, each with its own advantages and characteristics. An effective unification of the two models can alleviate the overfitting issue of a single convolutional neural network. In Eq. (3), n is the number of integrated models and Pi is the predicted value of model i; the equation represents the mean method of this ensemble evaluation:

P_avg = (1/n) Σ_{i=1}^{n} P_i   (3)

Therefore, the unification of the two models helps to improve and sustain the accuracy of agro-crop disease identification [11]. We also report the integration algorithm's mean value for performance analysis.
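A minimal sketch of the mean integration of Eq. (3) for two trained Keras classifiers; the model variable names are placeholders, not the paper's code.

```python
# Averaging the models' softmax outputs as in Eq. (3) (models are placeholders).
import numpy as np

def ensemble_predict(models, x_batch):
    preds = [m.predict(x_batch, verbose=0) for m in models]   # each: (batch, n_classes)
    p_avg = np.mean(preds, axis=0)                            # P_avg = (1/n) * sum P_i
    return np.argmax(p_avg, axis=1)                           # predicted class labels

# Usage (assumed trained models): labels = ensemble_predict([dl_vgg16, resnet50_model], images)
```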

6 Experiment and Analysis The data presented in Figure illustrates that the dl-VGG16 obtains a higher level of training accuracy during the training process at a faster pace, which is proof that the suggested network has a quicker fitting time, in Fig. 5. Taking a look at the outcomes of the training makes this very clear. When compared to the VGG16 network, the accuracy achieved by the dl-VGG16 network is significantly higher. If we use the test set as an illustration, the improved VGG16 network has a recognition accuracy of over 90%, which suggests that its fitting effect is also significantly improved. The goal of the proposed method is to further improve the recognition accuracy by combining the feature extraction advantages supplied by the dl-VGG16 network with the classification characteristics offered by the network. This will allow for a more robust combination of both sets of benefits. Enhance CNN’s ability to identify diseases that affect agro-crop leaves.

Hybrid CNN with Transfer Learning Approach

215

Fig. 5. Accuracy Loss graphical view

7 Experimental Result

In this study, different approaches have been used for image analysis in the identification process. Table 1 shows different implementations from other research. After the analysis and accuracy comparison, we can see that our proposed pre-trained model architecture clearly outperforms them. The recognition accuracy of each category of the method in this study is uniform, with a difference of 5.51%, and the recognition accuracy of each category is higher than that reported in reference [12]. In addition, our new hybrid model's accuracy is higher than that of the other models.

Table 1. Experimental analysis and baseline comparison

Model name | Model accuracy | Model loss | Parameter
I VGG16 (Wu S 2021) [12] | 90.51 | 9.49 | 1.3M
VGG-16-transfer (Yong Wu et al. 2018) [13] | 83.53% | 16.47 | 1M
VGG-19-transfer (Yong Wu et al. 2018) [13] | 84.71% | 15.29 | -
Pretrained VGG16 (Tammina S 2019) [14] | 86.50% | 13.5 | -
VGG19 (Sahili Z.A et al. 2022) [15] | 90% | 10 | -
ResNet50 (A. Sai Bharadwaj Reddy 2019) | 95.91% | - | -
ResNet50 (Sagar A et al. 2021) [16] | 0.98 | - | -
ResNet50 (our) | 99.50% | 0.50 | 1.8M
dl-VGG16 (our) | 96.02% | 3.9 | 1.7M
Concatenation Mean (our) | 95.89% | 4.11 | 1.3M


8 Conclusion

The image identification performance on this dataset shows that the approach works well, and the method and architecture framework are simpler than other high-performing algorithms or frameworks. In the future, we will extend our improved VGG16 and ResNet-50 architecture according to the experimental dataset and will try to improve the model with further baseline comparisons. Simulation results show that the proposed algorithm can accomplish agro leaf identification and classification with good network model performance. Future research will continue to explore image analysis of diseases and pests, including calculating the effective area of crop diseases and judging the severity of plant diseases and insect pests, so as to carry out orderly and effective treatment and prevent large-scale economic losses. Agriculture [17, 18] combined with artificial intelligence could strengthen future food security.

Acknowledgement. Our honorable faculty and advisor helped us to complete this research. We used the Machine Learning/Deep Learning lab at Dongseo University, South Korea, for better results and experiments. This research received no special funding.

References 1. Paymode, A.S., Malode, V.B.: Transfer learning for multi-crop leaf disease image classification using convolutional neural network VGG. Artif. Intell. Agric. 6, 23–33 (2022) 2. Thenmozhi, K., Reddy, U.S.: Crop pest classification based on deep convolutional neural network and transfer learning. Comput. Electron. Agric. 164, 104906 (2019) 3. Parraga-Alava, J., Alcivar-Cevallos, R., Morales Carrillo, J., et al.: LeLePhid: an image dataset for aphid detection and infestation severity on lemon leaves. Data 6(5), 1–7 (2021) 4. Zhu, J., Wu, A., Wang, X., Zhang, H.: Identification of grape diseases using image analysis and BP neural networks. Multimed. Tools Appl. 79(21–22), 14539–14551 (2019). https://doi. org/10.1007/s11042-018-7092-0 5. Xiao, D., Feng, J.Z., Feng, J., Lin, T., Pang, C., Ye, Y.: Classification and recognition scheme for vegetable pests based on the BOF-SVM model. Int. J. Agric. Biol. Eng. 11(3), 190–196 (2018) 6. Liu, J., Wang, X.: Early recognition of tomato gray leaf spot disease based on MobileNetv2YOLOv3 model. Plant Methods 16(1), 1–16 (2020). https://doi.org/10.1186/s13007-020-006 24-2 7. Sumalatha, G., Krishna Rao, D., Singothu, D.: Transfer learning-based plant disease detection. Comput. Biol. J. 10(03), 469–477 (2021) 8. Kabir, M.M., Ohi, A.Q., Mridha, M.F.: A multi-plant disease diagnosis method using convolutional neural network. In: Uddin, M.S., Bansal, J.C. (eds.) Computer Vision and Machine Learning in Agriculture. AIS, pp. 99–111. Springer, Singapore (2021). https://doi.org/10. 1007/978-981-33-6424-0_7 9. Jepkoech, J., Mugo, D.M., Kenduiywo, B.K., Too, E.C.: Arabica coffee leaf images dataset for coffee leaf disease detection and classification. Data Brief 36(1), 107–142 (2021). Article ID: 107142 10. Liu, Y., Zhang, X., Gao, Y., Qu, T., Shi, Y.: Improved CNN method for crop pest identification based on transfer learning. Comput. Intell. Neurosci. 2022, 8 (2022). Article ID: 9709648


11. Salehin, I., Talha, I.M., Saifuzzaman, M., Moon, N.N., Nur, F.N.: An advanced method of treating agricultural crops using image processing algorithms and image data processing systems. In: 2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA), pp. 720–724 (2020). https://doi.org/10.1109/ICCCA49541.2020.9250839
12. Wu, S.: Expression recognition method using improved VGG16 network model in robot interaction. J. Robot. 2021, 1–9 (2021)
13. Wu, Y., Qin, X., Pan, Y., Yuan, C.: Convolution neural network based transfer learning for classification of flowers. In: 2018 IEEE 3rd International Conference on Signal and Image Processing (ICSIP), pp. 562–566. IEEE (2018)
14. Tammina, S.: Transfer learning using VGG-16 with deep convolutional neural network for classifying images. Int. J. Sci. Res. Publ. (IJSRP) 9(10), 143–150 (2019)
15. Sahili, Z.A., Awad, M.: The power of transfer learning in agricultural applications: AgriNet. arXiv preprint arXiv:2207.03881 (2022)
16. Sagar, A., Jacob, D.: On using transfer learning for plant disease detection. bioRxiv (2021)
17. Angon, P.B., Salehin, I., Khan, M.M.R., Mondal, S.: Cropland mapping expansion for production forecast: rainfall, relative humidity and temperature estimation. Int. J. Eng. Manuf. (IJEM) 11(5), 25–40 (2021). https://doi.org/10.5815/ijem.2021.05.03
18. Bhagawati, K., Bhagawati, R., Jini, D.: Intelligence and its application in agriculture: techniques to deal with variations and uncertainties. Int. J. Intell. Syst. Appl. (IJISA) 8(9), 56–61 (2016). https://doi.org/10.5815/ijisa.2016.09.07

Illumination Invariant Based Face Descriptor

Shekhar Karanwal(B)

CSE Department, Graphic Era University (Deemed), Dehradun, UK, India
[email protected]

Abstract. The performance of the majority of local descriptors is not satisfactory under illumination variations. Therefore, the major significance of this research is to develop a descriptor that is robust to light variations. This work presents the Robust Binary Pattern (RBP) descriptor, built from two state-of-the-art descriptors, MRELBP-NI and ELBP. Both descriptors capture discriminant information under the influence of illumination variations, and by integrating the features of both, a more discriminant feature (RBP) is obtained in light variations. The strength of MRELBP-NI is its use of the two local statistics, median and mean, for developing the feature size. The strength of ELBP is the development of the feature size by integrating multiple features (i.e., from HELBP and VELBP). FLDA and SVMs are used for compaction and classification. Results conducted on EYB prove the ability of RBP in contrast to the individual descriptors and several literature methods. The value of RBP is further justified by comparison with numerous methods.

Keywords: Local feature · Global feature · Dimension reduction · Classification

1 Introduction Achieving discriminancy under harsh light variations is a most daunting task. Under normal light conditions local descriptors perform exceedingly well, but as light variations become severe their performance is no longer adequate. Even the most prolific local descriptor, Local Binary Pattern (LBP) [1], degrades rapidly in light variations. Besides light variations, the other challenges that further degrade performance are noise, blur, emotion, pose, occlusion and corruption. All these challenges are also classified as intrapersonal variations. In some scenarios an effective global method, when used for feature compression, will improve the accuracy. If both the local and global methods are discriminant, the results are much finer. So, considering the light-variation challenge, local and global methods should be used together. Local methods work on finer image locations such as the nose, eyes and mouth to develop the feature size; in contrast, global methods work on the full image. Since the performance of the majority of local descriptors is not satisfactory under illumination variations, the major objective is to develop a descriptor that is robust to light variations. The existing descriptors are not as impressive as they should be in light variations, and their main limitation is their incomplete methodologies. The proposed work introduces the novel descriptor Robust Binary Pattern (RBP) for light variations by utilizing the two state-of-the-art descriptors MRELBP-NI [2] and ELBP [3].


Both descriptors capture discriminant information under the influence of illumination variations, and by integrating the features of both, a more discriminant feature (RBP) is obtained in light variations. The strength of MRELBP-NI is its use of the two local statistics, median and mean, for developing the feature size. The strength of ELBP is the development of the feature size by integrating multiple features (i.e., from HELBP and VELBP). FLDA [4] and SVMs [5] are used for compaction and classification. Results conducted on the EYB dataset [6] prove the ability of RBP in contrast to the individual descriptors and several literature methods. The mathematical modeling process of the proposed framework is illustrated in Fig. 1. Road map: Sect. 2 illustrates related works, all descriptors are described in Sect. 3, results are reported in Sect. 4, discussions are given in Sect. 5, and the conclusion with future work is delivered in Sect. 6.

Fig. 1. The mathematical modeling of the proposed work (pipeline: Input image → Feature Extraction → Feature Compaction → Classification)

2 Related Works Zhang et al. [7] developed the MFPLBP method for finger vein recognition. Two shortcomings are found in LBP, namely feature unity and its microscopic limitation, and MFPLBP is introduced to eliminate them. During the partitioning process the local and global image aspects are enhanced and the noise influence is minimized. Furthermore, the fusion of multiple features is achieved to compensate for the previously tested individual descriptors. Results show that MFPLBP improves the accuracy in contrast to the other traditional methods. The major limitation of this method is that no global method is used for compaction. Luo et al. [8] proposed the Scale selective and Noise robust Extended LBP (SNELBP) for Texture Analysis (TA). Initially a Gaussian filter is deployed to transform each image into a different scale space. Then noise-discriminant histograms are derived using MRELBP. Further, a scale-invariant feature is produced by choosing the best among all the scales. Ultimately the most instructive patterns are chosen from the pre-trained dictionary by the CDFS method. Results on five datasets prove the ability of SNELBP. This method lacks multi-feature fusion ability, which is very effective in increasing discriminativity in light variations. Khanna et al. [9] proposed a novel method for Expression Recognition (ER) using STFT and LBP. STFT is utilized for acquiring frequency features and LBP is used for acquiring local features. The feature compression utilized is FDR, chi-square and threshold variance. The extracted features are fed to SVMs for learning and matching. On the JAFFE dataset, the proposed method proves its potency. However, this method does not utilize effective techniques for feature extraction. Shakoor et al. [10] developed a feature selection and mapping method of LBP for TA. CLBP develops its feature size by integrating three histograms. Although this feature is discriminant, its size is on the larger side. To minimize the feature size, some mapping concepts are introduced and the features are then mapped to the histogram. All developed methods are light and rotation invariant. Furthermore, for


selecting the robust features, a CFS method is proposed. Results on different texture datasets improve the accuracy. The main demerit of this work is that it is not as effective as it should be in light variations; furthermore, the global methods used for feature compaction are not impressive. Karanwal et al. [11] developed the ROM-LBP descriptor for face structure under light variations. In ROM-LBP, instead of taking the raw center value for thresholding, as LBP and OC-LBP do, the mean value of the radial orthogonal pixels is utilized for thresholding. This concept proves much more dominant than the other local methods under light changes on the EYB and YB datasets. The only demerit found in ROM-LBP is that a better global method could be used. Tabatabaei et al. [12] developed a noise-invariant descriptor for TA called MACCBP. For increasing the performance in noisy and noise-free conditions, micro and macro essential features are acquired by using distinct radius patterns. In addition, the magnitude and center details are also incorporated to enhance the accuracy. On different texture datasets, under both noisy and noise-free conditions, MACCBP proves its ability; however, the methodology is not as effective as it should be. The proposed work eliminates all these demerits observed in the previous works and utilizes two formidable descriptors under light variations, namely MRELBP-NI and ELBP. The combined feature is termed RBP. Furthermore, the accuracy is improved further by using the global method FLDA. The RBP descriptor proves its potency on the EYB dataset. The proposed methodology thus achieves the research objective by integrating the two effective descriptors for light variations, MRELBP-NI and ELBP; the joined feature is termed RBP, and RBP outperforms various methods on the EYB dataset. A minimal sketch of this feature-fusion and classification pipeline is given below.
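The following is a minimal, hypothetical sketch of the RBP-style pipeline described above: two descriptor histograms (standing in for MRELBP-NI and ELBP) are concatenated per image, compacted with Fisher LDA, and classified with a linear SVM. The descriptor functions, array shapes and parameters are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch: RBP = concatenation of two local descriptor histograms,
# followed by FLDA compaction and SVM classification (scikit-learn).
# extract_mrelbp_ni() and extract_elbp() are placeholder descriptor functions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

def extract_mrelbp_ni(img):
    # Placeholder: would return a 256-bin MRELBP-NI histogram.
    return np.random.rand(256)

def extract_elbp(img):
    # Placeholder: would return the concatenated HELBP + VELBP histograms.
    return np.random.rand(512)

def rbp_feature(img):
    # RBP: joined feature of the two descriptors.
    return np.concatenate([extract_mrelbp_ni(img), extract_elbp(img)])

# Dummy data standing in for cropped face images and subject labels.
images = [np.random.rand(64, 64) for _ in range(200)]
labels = np.repeat(np.arange(10), 20)

X = np.stack([rbp_feature(img) for img in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

# FLDA for compaction (at most n_classes - 1 components), then a linear SVM.
model = make_pipeline(LinearDiscriminantAnalysis(), SVC(kernel="linear"))
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```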

3 Description of Descriptors 3.1 MRELBP-NI In MRELBP-NI, the first step is the median computation in nine different regions: eight neighborhood regions and one center region. After the median computation, the 3 × 3 patch evolves. Then each neighborhood pixel is thresholded to 1 if it is greater than or equal to the mean value; otherwise, the label 0 is assigned. This produces the 8-bit pattern, which is converted into a decimal code by assigning weights. Computing the decimal codes at all image locations develops the MRELBP-NI map image, which builds a feature size of 256. Equations 1 and 2 give the MRELBP-NI concept. In Eq. 1, $P$, $R_2$, $Q_{R_2,P,3,p}$ and $\mu_{R_2,P,3}$ denote the neighborhood size, the radius, the median-filtered neighbor value and the mean, respectively; Eq. 2 computes the mean value.

$$\mathrm{MRELBP\text{-}NI}_{P,R_2,3} = \sum_{p=0}^{P-1} f\!\left(Q_{R_2,P,3,p} - \mu_{R_2,P,3}\right) 2^p, \qquad f(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \tag{1}$$
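To make the MRELBP-NI computation above concrete, here is a minimal NumPy sketch under stated assumptions: it uses a 3 × 3 median filter as the region statistic, an 8-pixel neighborhood at radius 1 on the filtered image, and thresholds each neighbor against the neighborhood mean as in Eq. 1. The window size, radius and border handling are illustrative choices, not necessarily those of the original MRELBP-NI formulation.

```python
# Minimal sketch of an MRELBP-NI-style code map (assumptions: 3x3 median
# filtering, 8 neighbors at radius 1, image borders skipped).
import numpy as np
from scipy.ndimage import median_filter

def mrelbp_ni_map(img, radius=1):
    """Return an MRELBP-NI-like code image and its 256-bin histogram."""
    med = median_filter(img.astype(float), size=3)   # median of each local region
    h, w = med.shape
    # 8 neighbor offsets (dy, dx) around the center pixel.
    offsets = [(-radius, -radius), (-radius, 0), (-radius, radius), (0, radius),
               (radius, radius), (radius, 0), (radius, -radius), (0, -radius)]
    codes = np.zeros((h, w), dtype=np.uint8)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            neigh = np.array([med[y + dy, x + dx] for dy, dx in offsets])
            mu = neigh.mean()                          # neighborhood mean (Eq. 2)
            bits = (neigh >= mu).astype(np.uint8)      # threshold f(Q - mu), Eq. 1
            codes[y, x] = np.dot(bits, 1 << np.arange(8))  # weights 2^p
    hist, _ = np.histogram(codes[radius:h - radius, radius:w - radius],
                           bins=256, range=(0, 256))
    return codes, hist

# Example on a random patch standing in for a face image.
patch = np.random.randint(0, 256, (32, 32))
_, hist = mrelbp_ni_map(patch)
print(hist.shape)  # (256,)
```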