Information Management and Machine Intelligence: Proceedings of ICIMMI 2019 [1st ed.] 9789811549359, 9789811549366

This book features selected papers presented at the International Conference on Information Management and Machine Intelligence (ICIMMI 2019).


English Pages XVI, 677 [658] Year 2021


Table of contents :
Front Matter ....Pages i-xvi
A Study of Data Hiding Using Cryptography and Steganography (Priya Mathur, Amit Kumar Gupta)....Pages 1-13
A Review on Offline Signature Verification Using Deep Convolution Neural Network (Deepak Moud, Sandeep Tuli, Rattan Pal Rana)....Pages 15-21
Designing of SAW-Based Resonator Under Variable Mass Load for Resonance Shift (Yateesh Chander, Manish Singhal)....Pages 23-30
Ablation of Hepatic Tumor Tissues with Active Elements and Cylindrical Phased Array Transducer (Sarita Zutshi Bhan, S. V. A. V. Prasad, Dinesh Javalkar)....Pages 31-43
A Brief Analysis and Comparison of DCT- and DWT-Based Image Compression Techniques (Anuj Kumar Singh, Shashi Bhushan, Sonakshi Vij)....Pages 45-55
Topic Modeling on Twitter Data and Identifying Health-Related Issues (Sandhya Avasthi)....Pages 57-64
Effort Estimation Using Hybridized Machine Learning Techniques for Evaluating Student’s Academic Performance (A. J. Singh, Mukesh Kumar)....Pages 65-75
Frequency Sweep and Width Optimization of Memos-Based Digital Logic Gates (Parvez Alam Kohri, Manish Singhal)....Pages 77-83
Performance Improvement of Heterogeneous Cluster of Big Data Using Query Optimization and MapReduce (Pankaj Dadheech, Dinesh Goyal, Sumit Srivastava, Ankit Kumar, Manish Bhardwaj)....Pages 85-100
Signaling Load Reduction Using Data Analytics in Future Heterogeneous Networks (Naveen Kumar Srinivasa Naidu, Sumit Maheshwari, R. K. Srinivasa, C. Bharathi, A. R. Hemanth Kumar)....Pages 101-109
Modelling and Simulation of Smart Safety and Alerting System for Coal Mine Using Wireless Technology (Om Prakash, Amrita Rai)....Pages 111-117
Green Algorithmic Impact of Computing on Indian Financial Market (Krishna Kumar Singh, Sachin Rohatgi)....Pages 119-126
Coarse-Grained Architecture Pursuance Investigation with Bidirectional NoC Router (Yazhinian Sougoumar, Tamilselvan Sadasivam)....Pages 127-134
Home Automation and Fault Detection (Megha Gupta, Pankaj Sharma)....Pages 135-143
Performance Evaluation of Simulated Annealing-Based Task Scheduling Algorithms (Abhishek Mishra, Kamal Sheel Mishra, Pramod Kumar Mishra)....Pages 145-152
Epileptic Seizure Onset Prediction Using EEG with Machine Learning Algorithms (Shruti Bijawat, Abhishek Dadhich)....Pages 153-161
A Review of Crop Diseases Identification Using Convolutional Neural Network (Pooja Sharma, Ayush Sogani, Ashu Sharma)....Pages 163-168
Efficiency of Different SVM Kernels in Predicting Rainfall in India (M. Kiran Kumar, J. Divya Udayan, A. Ghananand)....Pages 169-175
Smart Trash Barrel: An IoT-Based System for Smart Cities (Ruchi Goel, Sahil Aggarwal, A. Sharmila, Azim Uddin Ansari)....Pages 177-182
Iterative Parameterized Consensus Approach for Clustering and Visualization of Crime Analysis (K. Lavanya, V. Srividya, B. Sneha, Anmol Dudani)....Pages 183-197
Assistive Technology for Low or No Vision (Soumya Thankam Varghese, Maya Rathnasabapathy)....Pages 199-202
A Socio Responding Implementation Using Big Data Analytics (S. GopalaKrishnan, R. Renuga Devi, A. Prema)....Pages 203-207
An Optimised Robust Model for Big Data Security on the Cloud Environment: The Numerous User-Level Data Compaction (Jay Dave)....Pages 209-216
An Adapted Ad Hoc on Demand Routing Protocol for Better Link Stability and Routing Act in MANETs (Yatendra Mohan Sharma, Neelam Sharma, Pramendra Kumar)....Pages 217-225
An Efficient Anonymous Authentication with Privacy and Enhanced Access Control for Medical Data in WBAN (K. Mohana Bhindu, R. Aarthi, P. Yogesh)....Pages 227-235
A Study on Big Data Analytics and Its Challenges and Tool (K. Kalaiselvi)....Pages 237-243
“Real-Time Monitoring with Data Acquisition of Energy Meter Using G3-PLC Technology” (Deepak Sharma, Megha Sharma)....Pages 245-253
A Profound Analysis of Parallel Processing Algorithms for Big Image Applications (K. Vigneshwari, K. Kalaiselvi)....Pages 255-261
Breast Cancer Detection Using Supervised Machine Learning: A Comparative Analysis (Akansha Kamboj, Prashmit Tanay, Akash Sinha, Prabhat Kumar)....Pages 263-269
An Analytical Study on Importance of SLA for VM Migration Algorithm and Start-Ups in Cloud (T. Lavanya Suja, B. Booba)....Pages 271-276
In-Database Analysis of Road Safety and Prediction of Accident Severity (Sejal Chandra, Parmeet Kaur, Himanshi Sharma, Vaishnavi Varshney, Medhavani Sharma)....Pages 277-283
Identifying Expert Users on Question Answering Sites (Pradeep Kumar Roy, Ayushi Jain, Zishan Ahmad, Jyoti Prakash Singh)....Pages 285-291
SentEmojis: Sentiment Classification Using Emojis (Sangeeta Lal, Niyati Aggrawal, Anshul Jain, Ali Khan, Vatsal Tiwari, Amnpreet Kaur)....Pages 293-300
Protection of Six-Phase Transmission Line Using Bior-6.8 Wavelet Transform (Gaurav Kapoor)....Pages 301-313
Protection of Nine-Phase Transmission Line Using Demeyer Wavelet Transform (Gaurav Kapoor)....Pages 315-326
A Comparative Analysis of Benign and Malicious HTTPs Traffic (Abhay Pratap Singh, Mahendra Singh)....Pages 327-336
Comparative Study of the Seasonal Variation of SO2 Gas in Polluted Air by Using IOT with the Help of Air Sensor (Vandana Saxena, Anand Prakash Singh, Kaushal)....Pages 337-345
Fake News Detection: Tools, Techniques, and Methodologies (Deependra Bhushan, Chetan Agrawal, Himanshu Yadav)....Pages 347-357
Synthesis and Analysis of Optimal Order Butterworth Filter for Denoising ECG Signal on FPGA (Seema Nayak, Amrita Rai)....Pages 359-369
Prognosis Model of Hepatitis B Reactivation Using Decision Tree (Syed Atef, Vishal Anand, Shruthi Venkatesh, Tejaswini Katey, Kusuma Mohanchandra)....Pages 371-376
Novel Approach for Gridding of Microarray Images (D. P. Prakyath, S. A. Karthik, S. Prashanth, A. H. Vamshi Krishna, Veluguri Siddhartha)....Pages 377-385
Neighbours on Line (NoL): An Approach to Balance Skewed Datasets (Shivani Tyagi, Sangeeta Mittal, Niyati Aggrawal)....Pages 387-392
Competent of Feature Selection Methods to Classify Big Data Using Social Internet of Things (SIoT) (S. Jayasri, R. Parameswari)....Pages 393-398
Big Data Analytics: A Review and Tools Comparison (V. Dhivya)....Pages 399-406
Secured Cloud for Health Care System (K. Kalaiselvi, R. Seon Kumarathi)....Pages 407-411
Minimising Acquisition Maximising Inference—A Demonstration on Print Error Detection (Suyash Shandilya)....Pages 413-423
Data Management Techniques in Hadoop Framework for Handling Small Files: A Survey (Vijay Shankar Sharma, N. C. Barwar)....Pages 425-438
Maintaining Accuracy and Efficiency in Electronic Health Records Using Deep Learning (A. Suresh, R. Udendhran)....Pages 439-443
An Extensive Study on the Optimization Techniques and Its Impact on Non-linear Quadruple Tank Process (T. J. Harini Akshaya, V. Suresh, M. Carmel Sobia)....Pages 445-450
A Unique Approach of Optimization in the Genetic Algorithm Using Matlab (T. D. Srividya, V. Arulmozhi)....Pages 451-464
Deep Learning Architectures, Methods, and Frameworks: A Review (Anjali Bohra, Nemi Chand Barwar)....Pages 465-475
Protection of Wind Farm Integrated Double Circuit Transmission Line Using Symlet-2 Wavelet Transform (Gaurav Kapoor)....Pages 477-487
Predicting the Time Left to Earthquake Using Deep Learning Models (Vasu Eranki, Vishal Chudasama, Kishor Upla)....Pages 489-496
Fully Informed Grey Wolf Optimizer Algorithm (Priyanka Meiwal, Harish Sharma, Nirmala Sharma)....Pages 497-512
A Study to Convert Big Data from Dedicated Server to Virtual Server (G. R. Srikrishnan, S. Gopalakrishnan, G. M. Sridhar, A. Prema)....Pages 513-518
Fuzzy Logic Controller Based Solar Powered Induction Motor Drives for Water Pumping Application (Akshay Singhal, Vikas Kumar Sharma)....Pages 519-525
Identity Recognition Using Same Face in Different Context (Manish Mathuria, Nidhi Mishra, Saroj Agarwal)....Pages 527-533
Using Hybrid Segmentation Method to Diagnosis and Predict Brain Malfunction (K. Dhinakaran, R. A. Karthika)....Pages 535-544
A Blockchain-Based Access Control System for Cloud Storage (R. A. Karthika, P. Sriramya)....Pages 545-554
Cost-Effective Solution for Visually Impaired (Abhinav Sagar, S Ramani, L Ramanathan, S Rajkumar)....Pages 555-565
When Sociology Meets Next Generation Mobile Networks (Harman Jit Singh, Diljot Singh, Sukhdeep Singh, Bharat J. R. Sahu, V. Lakshmi Narasimhan)....Pages 567-581
Image Enhancement Performance of Fuzzy Filter and Wiener Filter for Statistical Distortion (Pawan Kumar Patidar, Mukesh Kataria)....Pages 583-594
A Review of Face Recognition Using Feature Optimization and Classification Techniques (Apurwa Raikwar, Jitendra Agrawal)....Pages 595-604
Rapid Eye Movement Monitoring System Using Artificial Intelligence Techniques (M. Vergin Raja Sarobin, Sherly Alphonse, Mahima Gupta, Tushar Joshi)....Pages 605-610
Analysis of Process Mining in Audit Trails of Organization (Swati Srivastava, Gaurav Srivastava, Roheet Bhatnagar)....Pages 611-618
Modern Approach for the Significance Role of Decision Support System in Solid Waste Management System (SWMS) (Narendra Sharma, Ratnesh Litoriya, Harsh Pratap Singh, Deepika Sharma)....Pages 619-628
Integration of Basic Descriptors for Image Retrieval (Vaishali Puranik, A. Sharmila)....Pages 629-634
Bitcoin Exchange Rate Price Prediction Using Machine Learning Techniques: A Review (Anirudhi Thanvi, Raghav Sharma, Bhanvi Menghani, Manish Kumar, Sunil Kumar Jangir)....Pages 635-642
A Critical Review on Security Issues in Cloud Computing (Priyanka Trikha)....Pages 643-652
Smart Traveler—For Visually Impaired People (Amrita Rai, Aryan Maurya, Akriti, Aditya Ranjan, Rishabh Gupta)....Pages 653-662
Comparative Study of Stability Based AOMDV and AOMDV Routing Protocol for MANETs (Polina Krukovich, Sunil Pathak, Narendra Singh Yadav)....Pages 663-668
IoT-Based Automatic Irrigation System Using Robotic Vehicle (Sakshi Gupta, Sharmila, Hari Mohan Rai)....Pages 669-677

Algorithms for Intelligent Systems Series Editors: Jagdish Chand Bansal · Kusum Deep · Atulya K. Nagar

Dinesh Goyal · Valentina Emilia Bălaş · Abhishek Mukherjee · Victor Hugo C. de Albuquerque · Amit Kumar Gupta   Editors

Information Management and Machine Intelligence Proceedings of ICIMMI 2019

Algorithms for Intelligent Systems Series Editors Jagdish Chand Bansal, Department of Mathematics, South Asian University, New Delhi, Delhi, India Kusum Deep, Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee, Uttarakhand, India Atulya K. Nagar, School of Mathematics, Computer Science and Engineering, Liverpool Hope University, Liverpool, UK

This book series publishes research on the analysis and development of algorithms for intelligent systems with their applications to various real world problems. It covers research related to autonomous agents, multi-agent systems, behavioral modeling, reinforcement learning, game theory, mechanism design, machine learning, meta-heuristic search, optimization, planning and scheduling, artificial neural networks, evolutionary computation, swarm intelligence and other algorithms for intelligent systems. The book series includes recent advancements, modification and applications of the artificial neural networks, evolutionary computation, swarm intelligence, artificial immune systems, fuzzy system, autonomous and multi agent systems, machine learning and other intelligent systems related areas. The material will be beneficial for the graduate students, post-graduate students as well as the researchers who want a broader view of advances in algorithms for intelligent systems. The contents will also be useful to the researchers from other fields who have no knowledge of the power of intelligent systems, e.g. the researchers in the field of bioinformatics, biochemists, mechanical and chemical engineers, economists, musicians and medical practitioners. The series publishes monographs, edited volumes, advanced textbooks and selected proceedings.

More information about this series at http://www.springer.com/series/16171

Dinesh Goyal · Valentina Emilia Bălaş · Abhishek Mukherjee · Victor Hugo C. de Albuquerque · Amit Kumar Gupta

Editors

Information Management and Machine Intelligence Proceedings of ICIMMI 2019


Editors

Dinesh Goyal
Poornima Institute of Engineering and Technology
Jaipur, Rajasthan, India

Valentina Emilia Bălaş
Department of Automatics and Applied Informatics
Aurel Vlaicu University of Arad
Arad, Romania

Abhishek Mukherjee
CISCO Technologies
Milpitas, CA, USA

Victor Hugo C. de Albuquerque
Universidade de Fortaleza
Fortaleza, Brazil

Amit Kumar Gupta
Poornima Institute of Engineering and Technology
Jaipur, Rajasthan, India

ISSN 2524-7565 ISSN 2524-7573 (electronic) Algorithms for Intelligent Systems ISBN 978-981-15-4935-9 ISBN 978-981-15-4936-6 (eBook) https://doi.org/10.1007/978-981-15-4936-6 © Springer Nature Singapore Pte Ltd. 2021 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

The First International Conference on Information Management & Machine Intelligence (ICIMMI 2019) was jointly organized by Rajasthan Technical University and Poornima Institute of Engineering and Technology, Jaipur. ICIMMI 2019 was hosted by Poornima Institute of Engineering and Technology, Jaipur, Rajasthan, India, during December 14–15, 2019. ICIMMI 2019 showcased the trends for developing intelligence in all domains of information management. It illustrated the current scenario of IT, which has taken over the complete technology market in all disciplines of engineering, whether infrastructure (Civil Engineering), machines and automobiles (Mechanical Engineering), power and energy (Electrical Engineering), communication and devices (Electronics and Communication) or IT-based applications and services (Computer Engineering and IT), especially with the emergence of AI, machine learning and deep learning. The conference attracted quality academic and research contributions from experts working on providing intelligence to their machines and systems. The objective of this conference was to provide opportunities for researchers, academicians, industry persons and students to express and communicate data, skills and proficiency in recent developments and strategies with respect to information management and machine learning. The conference:
• Focused on a large number of high-quality submissions and encouraged consultation among academic researchers, scientists, industrial engineers and students from all around the world, providing a forum for researchers to propose new technologies, share their experiences and discuss recent developments and strategies with respect to information management and machine learning.


• Provided a common platform for pioneering academic researchers, scientists, engineers and students to share their views and achievements.
• Augmented technocrats and academicians by enabling them to present their original and productive work.

There has been a recent onslaught of “intelligent” products and services on the market; however, some of these are not really that intelligent at all. It is important for customers to be able to see what is behind the scenes and, ideally, to understand how well an intelligent solution really works with their own data, not just demo data. To be able to test with real data before the decision to buy, the intelligent solution needs to be easy enough to implement; if it takes too much effort to get artificial intelligence to work, it can be impossible to try it out with real data. On the surface, an intelligent feature can sometimes appear rather mundane or obvious to a user: all the clever algorithms and mechanisms are invisible to them, and they just see something happen automatically.

ICIMMI 2019 Highlights

ICIMMI 2019 was sponsored by the TEQIP-III (RTU-ATU) Scheme under Rajasthan Technical University, Kota, Rajasthan. ICIMMI 2019 held ten sessions:
• Emerging Technologies on Data Science Analytics, Artificial Intelligence, Machine Learning and Deep Learning.
• Advance Computing and Block Chain Technology.
• Contemporary Research Trends in Neural Information Processing System.
• Impact of Cognitive Services, Systems and Experimentations in Intelligent Information Processing and Management.
• Emerging Trends in Computing Intelligence, Automation and Machine Learning for Communication and Wireless Network System.
• Modeling Machine Learning Algorithms and Applications in Artificial Intelligence.
• Emerging Challenges and Opportunities of Machine Learning and High Performance Computing for IOT and Image Processing.
• Knowledge Discovery, Integration and Transformation in Information Management.
• Advances in Computational Intelligence and Application.
• Issues in Quantum Computing, Security in Edge/Cloud in IOT Paradigm.
The International Conference on Information Management & Machine Intelligence (ICIMMI 2019) was a big success and proved to be very beneficial and informative for everyone. We received a total of 198 papers, of which ten were international and 188 national; 98 papers were selected and presented at the conference, and ten papers were registered for poster presentation.


Ten technical sessions, ten keynote talks (international and national) and one poster presentation session were conducted at ICIMMI 2019. We hope all the participants enjoyed the conference proceedings, and we wish them the best in the future.

Dinesh Goyal, Jaipur, India
Valentina Emilia Bălaş, Arad, Romania
Abhishek Mukherjee, Milpitas, USA
Victor Hugo C. de Albuquerque, Fortaleza, Brazil
Amit Kumar Gupta, Jaipur, India
ICIMMI 2019

Contents

A Study of Data Hiding Using Cryptography and Steganography . . . . . Priya Mathur and Amit Kumar Gupta

1

A Review on Offline Signature Verification Using Deep Convolution Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Deepak Moud, Sandeep Tuli, and Rattan Pal Rana

15

Designing of SAW-Based Resonator Under Variable Mass Load for Resonance Shift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yateesh Chander and Manish Singhal

23

Ablation of Hepatic Tumor Tissues with Active Elements and Cylindrical Phased Array Transducer . . . . . . . . . . . . . . . . . . . . . . . Sarita Zutshi Bhan, S. V. A. V. Prasad, and Dinesh Javalkar

31

A Brief Analysis and Comparison of DCT- and DWT-Based Image Compression Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Anuj Kumar Singh, Shashi Bhushan, and Sonakshi Vij

45

Topic Modeling on Twitter Data and Identifying Health-Related Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sandhya Avasthi

57

Effort Estimation Using Hybridized Machine Learning Techniques for Evaluating Student’s Academic Performance . . . . . . . . . . . . . . . . . . A. J. Singh and Mukesh Kumar

65

Frequency Sweep and Width Optimization of Memos-Based Digital Logic Gates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Parvez Alam Kohri and Manish Singhal

77

Performance Improvement of Heterogeneous Cluster of Big Data Using Query Optimization and MapReduce . . . . . . . . . . . . . . . . . . . . . . Pankaj Dadheech, Dinesh Goyal, Sumit Srivastava, Ankit Kumar, and Manish Bhardwaj

85


Signaling Load Reduction Using Data Analytics in Future Heterogeneous Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 Naveen Kumar Srinivasa Naidu, Sumit Maheshwari, R. K. Srinivasa, C. Bharathi, and A. R. Hemanth Kumar Modelling and Simulation of Smart Safety and Alerting System for Coal Mine Using Wireless Technology . . . . . . . . . . . . . . . . . . . . . . . 111 Om Prakash and Amrita Rai Green Algorithmic Impact of Computing on Indian Financial Market . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 Krishna Kumar Singh and Sachin Rohatgi Coarse-Grained Architecture Pursuance Investigation with Bidirectional NoC Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127 Yazhinian Sougoumar and Tamilselvan Sadasivam Home Automation and Fault Detection . . . . . . . . . . . . . . . . . . . . . . . . . 135 Megha Gupta and Pankaj Sharma Performance Evaluation of Simulated Annealing-Based Task Scheduling Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 Abhishek Mishra, Kamal Sheel Mishra, and Pramod Kumar Mishra Epileptic Seizure Onset Prediction Using EEG with Machine Learning Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 Shruti Bijawat and Abhishek Dadhich A Review of Crop Diseases Identification Using Convolutional Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 Pooja Sharma, Ayush Sogani, and Ashu Sharma Efficiency of Different SVM Kernels in Predicting Rainfall in India . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 M. Kiran Kumar, J. Divya Udayan, and A. Ghananand Smart Trash Barrel: An IoT-Based System for Smart Cities . . . . . . . . . 177 Ruchi Goel, Sahil Aggarwal, A. Sharmila, and Azim Uddin Ansari Iterative Parameterized Consensus Approach for Clustering and Visualization of Crime Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 183 K. Lavanya, V. Srividya, B. Sneha, and Anmol Dudani Assistive Technology for Low or No Vision . . . . . . . . . . . . . . . . . . . . . . 199 Soumya Thankam Varghese and Maya Rathnasabapathy A Socio Responding Implementation Using Big Data Analytics . . . . . . . 203 S. GopalaKrishnan, R. Renuga Devi, and A. Prema


An Optimised Robust Model for Big Data Security on the Cloud Environment: The Numerous User-Level Data Compaction . . . . . . . . . . 209 Jay Dave An Adapted Ad Hoc on Demand Routing Protocol for Better Link Stability and Routing Act in MANETs . . . . . . . . . . . . . . . . . . . . . . . . . . 217 Yatendra Mohan Sharma, Neelam Sharma, and Pramendra Kumar An Efficient Anonymous Authentication with Privacy and Enhanced Access Control for Medical Data in WBAN . . . . . . . . . . . . . . . . . . . . . . 227 K. Mohana Bhindu, R. Aarthi, and P. Yogesh A Study on Big Data Analytics and Its Challenges and Tool . . . . . . . . . 237 K. Kalaiselvi “Real-Time Monitoring with Data Acquisition of Energy Meter Using G3-PLC Technology” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245 Deepak Sharma and Megha Sharma A Profound Analysis of Parallel Processing Algorithms for Big Image Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255 K. Vigneshwari and K. Kalaiselvi Breast Cancer Detection Using Supervised Machine Learning: A Comparative Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263 Akansha Kamboj, Prashmit Tanay, Akash Sinha, and Prabhat Kumar An Analytical Study on Importance of SLA for VM Migration Algorithm and Start-Ups in Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271 T. Lavanya Suja and B. Booba In-Database Analysis of Road Safety and Prediction of Accident Severity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277 Sejal Chandra, Parmeet Kaur, Himanshi Sharma, Vaishnavi Varshney, and Medhavani Sharma Identifying Expert Users on Question Answering Sites . . . . . . . . . . . . . . 285 Pradeep Kumar Roy, Ayushi Jain, Zishan Ahmad, and Jyoti Prakash Singh SentEmojis: Sentiment Classification Using Emojis . . . . . . . . . . . . . . . . 293 Sangeeta Lal, Niyati Aggrawal, Anshul Jain, Ali Khan, Vatsal Tiwari, and Amnpreet Kaur Protection of Six-Phase Transmission Line Using Bior-6.8 Wavelet Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301 Gaurav Kapoor Protection of Nine-Phase Transmission Line Using Demeyer Wavelet Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315 Gaurav Kapoor


A Comparative Analysis of Benign and Malicious HTTPs Traffic . . . . . 327 Abhay Pratap Singh and Mahendra Singh Comparative Study of the Seasonal Variation of SO2 Gas in Polluted Air by Using IOT with the Help of Air Sensor . . . . . . . . . . . . . . . . . . . 337 Vandana Saxena, Anand Prakash Singh, and Kaushal Fake News Detection: Tools, Techniques, and Methodologies . . . . . . . . 347 Deependra Bhushan, Chetan Agrawal, and Himanshu Yadav Synthesis and Analysis of Optimal Order Butterworth Filter for Denoising ECG Signal on FPGA . . . . . . . . . . . . . . . . . . . . . . . . . . . 359 Seema Nayak and Amrita Rai Prognosis Model of Hepatitis B Reactivation Using Decision Tree . . . . . 371 Syed Atef, Vishal Anand, Shruthi Venkatesh, Tejaswini Katey, and Kusuma Mohanchandra Novel Approach for Gridding of Microarray Images . . . . . . . . . . . . . . . 377 D. P. Prakyath, S. A. Karthik, S. Prashanth, A. H. Vamshi Krishna, and Veluguri Siddhartha Neighbours on Line (NoL): An Approach to Balance Skewed Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387 Shivani Tyagi, Sangeeta Mittal, and Niyati Aggrawal Competent of Feature Selection Methods to Classify Big Data Using Social Internet of Things (SIoT) . . . . . . . . . . . . . . . . . . . . . . . . . . 393 S. Jayasri and R. Parameswari Big Data Analytics: A Review and Tools Comparison . . . . . . . . . . . . . . 399 V. Dhivya Secured Cloud for Health Care System . . . . . . . . . . . . . . . . . . . . . . . . . 407 K. Kalaiselvi and R. Seon Kumarathi Minimising Acquisition Maximising Inference—A Demonstration on Print Error Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413 Suyash Shandilya Data Management Techniques in Hadoop Framework for Handling Small Files: A Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425 Vijay Shankar Sharma and N. C. Barwar Maintaining Accuracy and Efficiency in Electronic Health Records Using Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439 A. Suresh and R. Udendhran An Extensive Study on the Optimization Techniques and Its Impact on Non-linear Quadruple Tank Process . . . . . . . . . . . . . . . . . . . . . . . . . 445 T. J. Harini Akshaya, V. Suresh, and M. Carmel Sobia


A Unique Approach of Optimization in the Genetic Algorithm Using Matlab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451 T. D. Srividya and V. Arulmozhi Deep Learning Architectures, Methods, and Frameworks: A Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465 Anjali Bohra and Nemi Chand Barwar Protection of Wind Farm Integrated Double Circuit Transmission Line Using Symlet-2 Wavelet Transform . . . . . . . . . . . . . . . . . . . . . . . . 477 Gaurav Kapoor Predicting the Time Left to Earthquake Using Deep Learning Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489 Vasu Eranki, Vishal Chudasama, and Kishor Upla Fully Informed Grey Wolf Optimizer Algorithm . . . . . . . . . . . . . . . . . . 497 Priyanka Meiwal, Harish Sharma, and Nirmala Sharma A Study to Convert Big Data from Dedicated Server to Virtual Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513 G. R. Srikrishnan, S. Gopalakrishnan, G. M. Sridhar, and A. Prema Fuzzy Logic Controller Based Solar Powered Induction Motor Drives for Water Pumping Application . . . . . . . . . . . . . . . . . . . . . . . . . 519 Akshay Singhal and Vikas Kumar Sharma Identity Recognition Using Same Face in Different Context . . . . . . . . . . 527 Manish Mathuria, Nidhi Mishra, and Saroj Agarwal Using Hybrid Segmentation Method to Diagnosis and Predict Brain Malfunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535 K. Dhinakaran and R. A. Karthika A Blockchain-Based Access Control System for Cloud Storage . . . . . . . 545 R. A. Karthika and P. Sriramya Cost-Effective Solution for Visually Impaired . . . . . . . . . . . . . . . . . . . . . 555 Abhinav Sagar, S Ramani, L Ramanathan, and S Rajkumar When Sociology Meets Next Generation Mobile Networks . . . . . . . . . . . 567 Harman Jit Singh, Diljot Singh, Sukhdeep Singh, Bharat J. R. Sahu, and V. Lakshmi Narasimhan Image Enhancement Performance of Fuzzy Filter and Wiener Filter for Statistical Distortion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583 Pawan Kumar Patidar and Mukesh Kataria A Review of Face Recognition Using Feature Optimization and Classification Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595 Apurwa Raikwar and Jitendra Agrawal


Rapid Eye Movement Monitoring System Using Artificial Intelligence Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605 M. Vergin Raja Sarobin, Sherly Alphonse, Mahima Gupta, and Tushar Joshi Analysis of Process Mining in Audit Trails of Organization . . . . . . . . . 611 Swati Srivastava, Gaurav Srivastava, and Roheet Bhatnagar Modern Approach for the Significance Role of Decision Support System in Solid Waste Management System (SWMS) . . . . . . . . . . . . . . 619 Narendra Sharma, Ratnesh Litoriya, Harsh Pratap Singh, and Deepika Sharma Integration of Basic Descriptors for Image Retrieval . . . . . . . . . . . . . . . 629 Vaishali Puranik and A. Sharmila Bitcoin Exchange Rate Price Prediction Using Machine Learning Techniques: A Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635 Anirudhi Thanvi, Raghav Sharma, Bhanvi Menghani, Manish Kumar, and Sunil Kumar Jangir A Critical Review on Security Issues in Cloud Computing . . . . . . . . . . 643 Priyanka Trikha Smart Traveler—For Visually Impaired People . . . . . . . . . . . . . . . . . . . 653 Amrita Rai, Aryan Maurya, Akriti, Aditya Ranjan, and Rishabh Gupta Comparative Study of Stability Based AOMDV and AOMDV Routing Protocol for MANETs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663 Polina Krukovich, Sunil Pathak, and Narendra Singh Yadav IoT-Based Automatic Irrigation System Using Robotic Vehicle . . . . . . . 669 Sakshi Gupta, Sharmila, and Hari Mohan Rai

About the Editors

Dr. Dinesh Goyal is currently working as a Professor & Director of Poornima Institute of Engineering & Technology & Director of Engineering. He has more than 18 years of research and academic experience, specializing in NAAC and other accreditation activities. His research interests include information security, image processing and cloud computing, and he has published more than 60 papers in leading national and international journals.

Dr. Valentina Emilia Bălaş is currently working as an Associate Professor at Aurel Vlaicu University of Arad. She is the head of the Arad section of the Association for Automation and Instrumentation in Romania, and a member of the Multidisciplinary Development Association – Timisoara and the General Association of Romanian Engineers, where she is Vice-president of the Arad section. She has also been a member of the IEEE Computational Intelligence Society since 2003.

Dr. Abhishek Mukherjee is currently working in the Connected Mobile Experiences (CMX) team of the Wireless Network BU, where he has been involved in developing Cisco’s indoor location solution (a.k.a. RTLS or LBS) based on the state-of-the-art hyperlocation, angle-of-arrival localization technology enabled by the world’s smartest access point: Cisco Aironet 4800. He currently has over 10 patents submitted to USPTO, and 3 more are under review at Cisco. He was Cisco’s lead representative at IEEE Location Services for Healthcare (LSH) standards.

Victor Hugo C. de Albuquerque is currently an Assistant VI Professor in the Graduate Program in Applied Informatics at the University of Fortaleza (UNIFOR). His research focuses on computer systems, mainly in the fields of applied computing, intelligent systems, visualization and interaction.

Dr. Amit Kumar Gupta is currently working as an Associate Professor of Computer Science and Engineering at Poornima Institute of Engineering & Technology. He has more than 14 years of academic and research experience,


particularly in the fields of learning, information security, cloud computing and CPU scheduling, and he has published more than 20 papers in leading national and international journals. He is also guest editor for various Springer, IGI Global, Inderscience and Bentham Science journals.

A Study of Data Hiding Using Cryptography and Steganography Priya Mathur and Amit Kumar Gupta

1 Introduction The two techniques cryptography and steganography are used to encrypt a message when it is transferring from one source to another on a network. These techniques are widely used to make data secure and confidential. A cryptography key hides information using a private key or public key so that the third person cannot access that data, whereas stenography is used to hide the cover medium such as audio, pictures, videos so that the third person is not able to use that data. These techniques have been used to make data secure, confidential, entity authentication, and basic authentication. Cryptography is the word which has been made combining two words, i.e., crypt meaning “hidden” and the graphy meaning “writing.” Cryptography is a way of protecting information and communication between sender and receiver so that the third does not access it in an unauthorized manner. In it we secure the data using a set of mathematical concepts and rule-based calculations called algorithms to change the message in a difficult way. Cryptography has four objectives and is confidentiality, integrity, non-reprisal, and authentication. Figure 1 shows the process of cryptography [1, 2]. The term steganography is formed by two combinations of two words, i.e., steganos meaning “hidden or covered” and graphy meaning “writing.” It is a technique which has been used in hiding secret data within a simple, non-secret file or message to avoid accessing third-party users, then extract secret data and add images and audio. Figure 2 is depicting the process of steganography [3–5].


Fig. 1 Process of cryptography

Fig. 2 Process of steganography

2 Cryptographic Techniques

In cryptography, the source encrypts the message using a key (public or private) to hide the data so that an unauthorized party is unable to access it. Encryption algorithms work on two principles: substitution and transposition. In substitution, each element of the plaintext is mapped to another element; in transposition, the elements of the plaintext are rearranged. Figure 3 shows the process of encryption and decryption of plaintext using keys [6, 7]. The terminologies used in cryptography are:
• Plaintext: the original message.

Fig. 3 Process of encryption and decryption using keys


• Encryption: the process of encoding the original message using techniques based on text, numbers, etc., so that an outsider is unable to read it.
• Decryption: the process of decoding the message.
• Ciphertext: the encoded message/text.
• Hash functions: used for digital signatures and message authentication; the result of a hash function is a hash code.
Cryptographic algorithms use two types of keys, symmetric and asymmetric, described as follows:
• Symmetric key: the sender and receiver use the same key for both message encryption and decryption.
• Asymmetric key: different keys are used for message encryption and decryption.
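To make the hash-function terminology concrete, the following minimal Python sketch (standard library only) computes a hash code for a message; the message strings are illustrative values, not data from this paper.

```python
import hashlib

# A hash function maps a message of any length to a fixed-size hash code.
message = b"Transfer 100 units to account 42"      # example message (assumed)
hash_code = hashlib.sha256(message).hexdigest()     # 256-bit digest as hex text
print("Hash code:", hash_code)

# Even a one-character change produces a completely different hash code,
# which is why hash codes are used for message authentication and digital signatures.
tampered = b"Transfer 900 units to account 42"
print("Tampered hash:", hashlib.sha256(tampered).hexdigest())
```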

2.1 Cryptographic Algorithms

2.1.1 DES Algorithm (Symmetric Key Algorithm)

The Data Encryption Standard (DES) is used for the encryption of electronic data transmitted from sender to receiver. It is the first and oldest widely used method for encrypting messages. DES is now considered unsafe for many applications because it uses only a 56-bit key to encrypt the plaintext, which is too small and can be cracked by brute force. DES has therefore been superseded by a more secure algorithm, the Advanced Encryption Standard (AES), and it also suffers from man-in-the-middle attacks. As shown in Fig. 4, DES is a block cipher: a 64-bit block of plaintext and the key are the input to DES, which produces 64 bits of ciphertext. The same algorithm and key are used for both encryption and decryption [8–10].

Fig. 4 Process of DES algorithm
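As a small illustration of DES as a symmetric block cipher, here is a minimal sketch assuming the third-party PyCryptodome package is installed; the key and plaintext values are placeholders chosen for the example.

```python
from Crypto.Cipher import DES
from Crypto.Util.Padding import pad, unpad

key = b"8bytekey"              # DES uses a 64-bit (8-byte) key, 56 effective bits
plaintext = b"secret message"  # example plaintext (assumption)

cipher = DES.new(key, DES.MODE_ECB)                      # operates on 64-bit blocks
ciphertext = cipher.encrypt(pad(plaintext, DES.block_size))
print("ciphertext:", ciphertext.hex())

# The same key and algorithm are used for decryption (symmetric key).
recovered = unpad(DES.new(key, DES.MODE_ECB).decrypt(ciphertext), DES.block_size)
assert recovered == plaintext
```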

2.1.2 RSA Algorithm (Asymmetric Key Algorithm)

RSA is one of the first public-key cryptographic systems and is widely used to secure data transmission. RSA was first described in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman of the Massachusetts Institute of Technology. Here the encryption key is public, while the decryption key is private and kept secret. RSA is based on two large prime numbers, and the public and private key generation is the most complex part of RSA cryptography. Two large prime numbers can be generated using the Rabin–Miller primality test algorithm. A modulus is calculated by multiplying the two primes; this number is used by both the public and private keys and provides the link between them, and its length is called the key length. The RSA algorithm follows these steps [11]:

1. Select two large prime numbers p and q.
2. Calculate n = p * q.
3. Calculate f(n) = (p − 1) * (q − 1).
4. Find a random number e satisfying 1 < e < f(n) and relatively prime to f(n), i.e., gcd(e, f(n)) = 1.
5. Calculate the number d such that d = e^(−1) mod f(n).
6. Encryption: the ciphertext is c = m^e mod n, where m is the message.
7. Decryption: the message is recovered as m = c^d mod n.
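The steps above can be traced with a toy RSA implementation in Python. The primes below are far too small to be secure and are chosen only so that the arithmetic is easy to follow.

```python
from math import gcd

# Step 1: two toy-sized primes p and q (insecure, for illustration only).
p, q = 61, 53
# Step 2: modulus n = p * q.
n = p * q                      # 3233
# Step 3: f(n) = (p - 1) * (q - 1).
f_n = (p - 1) * (q - 1)        # 3120
# Step 4: choose e with 1 < e < f(n) and gcd(e, f(n)) = 1.
e = 17
assert gcd(e, f_n) == 1
# Step 5: d = e^(-1) mod f(n)  (modular inverse, Python 3.8+).
d = pow(e, -1, f_n)
# Step 6: encryption c = m^e mod n.
m = 65                         # example message, must be < n
c = pow(m, e, n)
# Step 7: decryption m = c^d mod n.
assert pow(c, d, n) == m
print("ciphertext:", c, "recovered:", pow(c, d, n))
```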

2.1.3 AES Algorithm (Symmetric Key Algorithm)

AES is a symmetric key algorithm that means that the same key is used by both sender and receiver. This AES standard specifies the Rijndael algorithm, a symmetric block code capable of processing 128-bit blocks of data, using key sizes of 128, 192, and 256 bits. Rijndael uses entry, exit, and encryption keys. It only takes input and output of a fixed block size of 128 bits. Figure 6 shows the basic encryption of the AES algorithm [1].

Fig. 5 Process of RSA algorithm [11]

Fig. 6 Process of AES algorithm [1]
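Following the block view of Fig. 6, here is a minimal AES sketch, again assuming the third-party PyCryptodome package; the key is randomly generated and the plaintext is an illustrative value only.

```python
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)            # 128-bit secret key (192/256-bit keys also allowed)
plaintext = b"attack at dawn"         # example plaintext (assumption)

cipher = AES.new(key, AES.MODE_CBC)   # CBC mode; a random IV is generated automatically
ciphertext = cipher.encrypt(pad(plaintext, AES.block_size))

# The receiver needs the same key plus the IV to decrypt (symmetric key algorithm).
decrypter = AES.new(key, AES.MODE_CBC, iv=cipher.iv)
recovered = unpad(decrypter.decrypt(ciphertext), AES.block_size)
assert recovered == plaintext
```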

2.1.4 Comparison Among DES, AES, and RSA Algorithms (Revised After the Review)

Table 1 shows the difference among the DES, AES, and RSA algorithms.

3 Steganography Techniques

The word steganography is taken from the Greek steganos, meaning covered or secret, and graphy, meaning writing or drawing; steganography is therefore “covered writing.” The purpose of steganography is secure communication, and the hidden data must be undetectable.

Table 1 Difference among DES, AES, and RSA algorithms [12, 13, 14]

Factor | AES | DES | RSA
Developed | 2000 | 1977 | 1978
Key size | 128, 192, 256 bits | 56 bits | >1024 bits
Block size | 128 bits | 64 bits | Minimum 512 bits
Ciphering and deciphering key | Same | Same | Different
Scalability | Not scalable | Scalable, since the key size and block size can vary | Not scalable
Algorithm | Symmetric algorithm | Symmetric algorithm | Asymmetric algorithm
Encryption | Faster | Moderate | Slower
Decryption | Faster | Moderate | Slower
Power consumption | Low | Low | High
Security | Excellently secured | Not secure enough | Least secure
Deposit of keys | Needed | Needed | Needed
Inherent vulnerabilities | Brute-force attack | Brute-force, linear, and differential cryptanalysis attacks | Brute-force and oracle attacks
Key used | Same key used to encrypt and decrypt | Same key used to encrypt and decrypt | Different keys used to encrypt and decrypt
Rounds | 10/12/14 | 16 | 1
Simulation speed | Faster | Faster | Faster
Trojan horse | Not proved | No | No
Hardware and software implementation | Faster | Better in hardware than in software | Not efficient
Ciphering and deciphering algorithm | Different | Different | Same

The media used as the cover can be digital images, audio, video, text files, or other computer files, called the carrier or cover objects. The basic model of steganography includes the cover object, the secret message, an embedding algorithm, an extraction algorithm, and a stego key. Some steganographic techniques are discussed below [15–17].


3.1 LSB Steganography

This technique hides the text message in the least significant bits of the carrier: the data being sent replaces the carrier's LSBs. First, take the cover image and the text message to be hidden in it, and convert the text message to binary. Then take the LSB of each pixel of the cover image and swap it with each bit of the secret message, one by one, to obtain an image in which the data is hidden. For example, the following grid can be thought of as three pixels of a 24-bit color image, using 9 bytes of memory:
(00100111 11101001 11001000) (00100111 11001000 11101001) (11001000 00100111 11101001).
When the character A, whose binary value here is 10000001, is embedded, the following grid results:
(00100111 11101000 11001000) (00100110 11001000 11101000) (11001000 00100111 11101001).
In this case, only three bits had to be changed to successfully insert the character.
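A sketch of this LSB embedding in Python, assuming NumPy and Pillow are available; file names are placeholders, and the 32-bit length header is an assumed convention for recovering the message.

```python
import numpy as np
from PIL import Image

def embed_lsb(cover_path, message, stego_path):
    """Hide `message` in the least significant bits of a cover image."""
    pixels = np.array(Image.open(cover_path).convert("RGB"), dtype=np.uint8)
    flat = pixels.flatten()

    # Message as a bit stream, prefixed with a 32-bit length header.
    data = len(message).to_bytes(4, "big") + message.encode("utf-8")
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("message too large for this cover image")

    # Replace the LSB of each carrier byte with one message bit.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    # Save to a lossless format (e.g. PNG); lossy compression would destroy the bits.
    Image.fromarray(flat.reshape(pixels.shape)).save(stego_path)

def extract_lsb(stego_path):
    flat = np.array(Image.open(stego_path).convert("RGB"), dtype=np.uint8).flatten()
    length = int.from_bytes(np.packbits(flat[:32] & 1).tobytes(), "big")
    bits = flat[32: 32 + 8 * length] & 1
    return np.packbits(bits).tobytes().decode("utf-8")
```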

3.2 DCT Steganography

The hidden message is converted into a binary stream of “1”s and “0”s, which is embedded in the DCT domain of the cover image. The transformation converts the cover image into 8 × 8 blocks of pixels, and the DCT divides each block into high-frequency, medium-frequency, and low-frequency components. Because the high-frequency coefficients are fragile and less robust with respect to image quality, and the goal here is robustness together with high image quality, the low-frequency and medium-frequency coefficients are the most appropriate for embedding. Each selected coefficient is modified according to the corresponding bit in the message stream by a quantity K, the persistence factor: if the message bit S(i) is “1”, the quantity K is added to the image coefficient; otherwise, the same amount is subtracted.
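A rough Python sketch of this block-DCT embedding, assuming SciPy is available; the choice of mid-frequency coefficient position (3, 2) and the persistence factor K = 25 are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.fftpack import dct, idct

K = 25.0          # persistence factor (assumed value)
POS = (3, 2)      # mid-frequency coefficient used for embedding (assumed)

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def embed_bits(gray_image, bits):
    """gray_image: 2-D array with sides divisible by 8; bits: sequence of 0/1."""
    img = gray_image.astype(float).copy()
    h, w = img.shape
    blocks = [(r, c) for r in range(0, h, 8) for c in range(0, w, 8)]
    for (r, c), bit in zip(blocks, bits):
        block = dct2(img[r:r + 8, c:c + 8])
        block[POS] += K if bit == 1 else -K   # S(i) = 1 -> add K, else subtract K
        img[r:r + 8, c:c + 8] = idct2(block)
    return np.clip(img, 0, 255)
```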

3.3 DWT Steganography

A discrete wavelet transform (DWT) is a wavelet transform for which the wavelets are discretely sampled. It is one of the frequency domains in which steganography can be implemented. Because the DCT is calculated on blocks of independent pixels, a coding error causes discontinuities between blocks, which appear as blocking artifacts. This defect of the DCT is eliminated by the DWT, because the DWT is applied to the entire image. The DWT provides better energy compaction than the DCT, without blocking artifacts. The DWT divides the image into several frequency bands called sub-bands:

LL—Horizontally low pass and vertically low pass
LH—Horizontally low pass and vertically high pass
HL—Horizontally high pass and vertically low pass
HH—Horizontally high pass and vertically high pass
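A sketch of embedding in the wavelet domain, assuming the PyWavelets (pywt) package; the message bits are written into the HH (high-frequency) sub-band with an assumed additive strength, while the LL band is left untouched.

```python
import numpy as np
import pywt

ALPHA = 8.0   # embedding strength (assumed value)

def embed_dwt(gray_image, bits):
    """Single-level Haar DWT; hide one bit per HH coefficient by additive embedding."""
    LL, (LH, HL, HH) = pywt.dwt2(gray_image.astype(float), "haar")
    flat = HH.flatten()
    assert len(bits) <= flat.size, "message too large for this cover image"
    for i, bit in enumerate(bits):
        flat[i] += ALPHA if bit == 1 else -ALPHA   # nudge coefficient up or down
    HH = flat.reshape(HH.shape)
    # The low-frequency LL sub-band is not modified, as described above.
    return pywt.idwt2((LL, (LH, HL, HH)), "haar")
```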

3.4 Comparison Among LSB, DCT, and DWT Steganography Techniques (Revised After the Review)

In this section, we present a comparison between the different types of steganography techniques (Table 2) [18–24].

Table 2 Comparison among LSB, DCT, and DWT steganography techniques [25]

S. No. | Steganography technique | Cover media | Embedding technique | Advantages
1. | Least significant bit (LSB) | Image | Works by using the least significant bits of each pixel in one image to hide the most significant bits of another | This method is probably the easiest way of hiding information in an image, and it is effective
2. | Discrete cosine transform (DCT) | Image | Embeds the information by altering the transformed DCT coefficients | Generally more complex and robust
3. | Wavelet transform | Image | Works by taking many wavelets to encode a whole image; images can be compressed highly by storing the high-frequency “detail” separately from the low-frequency parts | Wavelet transformations are far better at high compression levels and thus increase the robustness of the hidden information

4 Combined Cryptography and Steganography

We can use a combination of cryptography and steganography to hide data from third parties. Each of these technologies has weaknesses when used individually, so by combining both techniques we can overcome these weaknesses

when we are transmitting data. The combined technique meets requirements such as capacity, security, robustness, and integrity that neither technique satisfies when implemented on its own. When a message is broadcast over the network, cryptography fails once the “opponent” can decipher and access the contents of the message in transit, and steganography fails once the “opponent” detects that a secret message is hidden in the steganographic medium (Fig. 7).

Fig. 7 Process of combined cryptography and steganography

The combined technique of cryptography and steganography works as follows. The steps are:
• In cryptography, the plaintext and a key are used to encrypt the data or information.
• The encrypted information becomes the ciphertext.
• The ciphertext is embedded into the cover image to produce a stego image.
• The stego image is transmitted, and the receiver recovers the ciphertext from it.
• The receiver then decrypts the ciphertext with the help of the key.
• Finally, the receiver obtains the plaintext information that he needs.

4.1 DES Technique with LSB

In this approach, the DES algorithm is used for cryptography: the plaintext is encrypted into ciphertext, which is then hidden within the cover carrier; here, text can be used as the cover carrier. The embedding is performed using LSB steganography. First, the secret data is encrypted using DES cryptography, and with the help of the key we obtain the ciphertext. The ciphertext is then converted into binary, and the least significant bits (LSBs) of the cover image are replaced with the binary ciphertext. The image is then transmitted to the receiver. A disadvantage of the LSB method is that the LSBs may change for all image pixels, in which case the information or data is lost; the technique is intolerant of noise and image compression.

4.2 AES Technique with LSB

The AES algorithm is used to encrypt the data to be transferred, converting it into ciphertext. The ciphertext is then inserted into the cover carrier; here, a 16-bit image can be used as the cover carrier, depending on which bits we want to use in it. The embedding is performed using the LSB steganography technique. First, the information or data is encrypted using AES cryptography, which gives the ciphertext, and the ciphertext is then converted into binary. For each 8 bits of data, the first three data bits are embedded in the three least significant bits of the red byte, the next three data bits in the three least significant bits of the green byte, and the last two data bits in the two least significant bits of the blue byte. The image is then transmitted to the receiver.
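A sketch of this AES-plus-LSB pipeline in Python, reusing the PyCryptodome AES call from Sect. 2.1.3 and a simple one-bit-per-byte LSB layout rather than the exact red/green/blue split above; paths, key, and the length-header convention are assumptions for illustration.

```python
import numpy as np
from PIL import Image
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad

def aes_then_lsb(secret_text, key, cover_path, stego_path):
    # 1. Cryptography: AES turns the plaintext into ciphertext (IV + encrypted bytes).
    cipher = AES.new(key, AES.MODE_CBC)
    payload = cipher.iv + cipher.encrypt(pad(secret_text.encode(), AES.block_size))

    # 2. Steganography: the ciphertext bits replace the LSBs of the cover image.
    pixels = np.array(Image.open(cover_path).convert("RGB"), dtype=np.uint8)
    flat = pixels.flatten()
    data = len(payload).to_bytes(4, "big") + payload          # 32-bit length header
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    assert bits.size <= flat.size, "cover image too small for this payload"
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    Image.fromarray(flat.reshape(pixels.shape)).save(stego_path)  # PNG (lossless)

# Example call (placeholder paths and random key):
# aes_then_lsb("meet at noon", get_random_bytes(16), "cover.png", "stego.png")
```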

4.3 AES Technique with DCT

The AES algorithm is used to encrypt the data: the ciphertext is generated from the plaintext and the key using AES encryption. The ciphertext is then embedded into the cover image using the DCT-based steganography technique. The DCT transformation is applied to the cover image so that the image is divided into high-frequency, mid-frequency, and low-frequency components. Since the high-frequency coefficients are fragile and less robust with respect to image quality, the lower-frequency and middle-frequency coefficients are used. Each selected coefficient is modified according to the corresponding bit in the message stream: if the message bit S(i) is “1”, the quantity K is added to the image coefficient; otherwise, the same quantity is subtracted from it. The main problem associated with this technique is that increasing the size of the data reduces the quality of the image.


4.4 AES Technique with DWT

The AES algorithm is used to encrypt the data: the ciphertext is generated from the plaintext and the key using AES encryption. The ciphertext is then embedded in the cover image using DWT-based steganography. The DWT is applied to the cover image so that the image is divided into four sub-bands. Since human eyes are more sensitive to the low-frequency part, we can hide the secret message in the high-frequency part without any modification to the low-frequency sub-band. The advantage of DWT steganography is that the cover can hold more data without distortion of the image.

5 Conclusion

Cryptography and steganography are two techniques that help make our data secure; they protect the data while it is being sent to the receiver. In this work, the steganography component is designed to reduce image-quality degradation while improving security. We can conclude that the AES technique combined with DWT is the better option, because this method helps to preserve the image.

References 1. Priyadarshini, P., Prashant, N., Narayan, D. G., & Meena, S. M. (2016). A comprehensive evaluation of cryptographic algorithms: DES, 3DES, AES, RSA and Blowfish. Procedia Computer Science, 78, 617–624. 2. Yogesh, K., Rajiv, M., & Harsh, S. (2011). Comparison of symmetric and asymmetric cryptography with existing vulnerabilities and countermeasures. International Journal of Computer Science and Management Studies., 11(3), 60–63. 3. Sridevi, R., Vijaya Lakshmi, P., & Sada Siva Rao, K.S. (2013)..Image Steganography combined with cryptography. International Journal of Computers & Technology. 4. Sharda, S., & Budhiraja, S. (2013). Image steganography: A review. International Journal of Emerging Technology and Advanced Engineering (IJETAE), 4(1), 707–710. 5. Raphael, J., Sundaram, V. (2011). Cryptography and Steganography—A survey. International Journal. ISSN: 2229-6093, 2(3), 626–630. [9] Altaay, A. J., et al. (2012). An Introduction to image steganography techniques. International Conference on Advanced Computer Science Applications and Technologies, 122–126. 6. Jeeva, A. L., Palanisamy, V., & Kanagaram, K. (2012). Comparative analysis of performance efficiency and security measures of some encryption algorithms. International Journal of Engineering Research and Applications, 2(3), 3033–3037.



7. Ritu, T., & Sanjay, A. (2014). Comparative study of symmetric and asymmetric cryptography techniques. International Journal of Advance Foundation and Research in Computer., 1(6), 68–76. 8. Mina, A., Kader, D. S., Abdual, H. M., & Hadhoud, M. M. (2012). Performance analysis of symmetric cryptography. pp. 1. 3. Chehal, R., & Kuldeep, S. (2012). Efficiency and security of data with symmetric encryption algorithms. International Journal of Advanced Research in Computer Science and Software Engineering, 2(8), 1. ISSN: 2277 128X. 9. Elminaam, D. S. A., Abdual Kader, H. M., & Hadhoud, M. M. (2010). Evaluating the performance of symmetric encryption algorithms. International Journal of Network Security, 10(3), 216. 10. Alanazi, H. O., Zaidan, B. B., Zaidan, A. A., Jalab, H. A., Shabbir, M., & Al-Nabhani, Y. (2010). New Comparative Study Between DES, 3DES and AES within Nine Factors. Journal of Computing., 2(3), 152–157. 11. Idrizi, F., Dalipi, F., & Rustemi, E. (2013) Analyzing the speed of combined cryptographic algorithms with secret and public key. International Journal of Engineering Research and Development, 8(2), 45. e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com. 12. Nitin, K. K., & Ashish, V. N. (2013). Comparison of various images steganography techniques. International Journal of Computer Science and Management Research, 2(1), 1213–1217. 13. Mahajan, P., & Sachdeva, A. A study of encryption algorithms AES, DES and RSA for security. Global Journal of Computer Science and Technology Network, Web & Security, 13(15) Version 1.0 Year 2013. 14. Goel, S., Rana, A., Kaur, M. (2013). Comparison of image steganography techniques. International Journal of Computers and Distributed Systems, 3(I). 15. Hussain, M., & Hussain, M. (2013). A survey of image steganography technique. International Journal of Advanced Science and Technology, 54, 113–124. 16. Kumar, L. (2012). Novel security scheme for image steganography using cryptography technique. International Journal of Advanced Research in Computer Science and Software Engineering. 2(4), 143–146. 17. Al-Barhmtoshy, H., Osman, E., Ezzaand, M. A novel security model combining cryptography and steganography. Technical report 483–490. 18. Soe, T. N., Chan, & W. W. (2011). Implementation and analysis of three steganographic approaches. IEEE Explore. ISBN: 978-1-61284-839-6. 19. Ashwin, S., Ramesh, J., Aravind Kumar, S. & Gunavathi, K. (2012). Novel and secure encoding and hiding techniques using image steganography: A survey. IEEE Explore, ISBN: 978-1-46734633-7. 20. Joseph Raphael, A., & Sundaram, V. (2010). Cryptography and steganography-a survey. International Journal of Computer and Technology Applications, 2(3), 626–630. ISSN: 2229-6093. 21. Seth, D., Ramanathan, L., Pandey, A. (2010). Security enhancement: combining cryptography and steganography. International Journal of Computer Applications (0975–8887), 9(11), 3–6. 22. Challita, K., & Farhat, H. (2011). Combining steganography and cryptography: New directions. International Journal on New Computer Architectures and Their Applications (IJNCAA), 1(1), 199–208. 23. Sitaram Prasad, M., Naganjaneyulu, S., Krishna, C. G., & Nagaraju, C. (2009). A novel information hiding technique for security by using image steganography. Journal of Theoretical and Applied Information Technology, 35–39. 24. Babu, K. R., et al. (2010). A survey on cryptography and steganography methods for information security. International Journal of Computer Applications (0975–8887), 12(2), 13–17.

A Study of Data Hiding Using Cryptography and Steganography

13

25. Madhuravani, B., Bhaskara Reddy, P., Lalith SamanthReddy, P. (2014). Steganography techniques: Study & comparative analysis. International Journal of Advanced Scientific and Technical Research, 2(4). 26. Oppliger, R. (1996). SSL and TLS: Theory and practice, ARTECH HOUSE, 2014. [5] B. Schneier, Applied cryptography, Second Edition: Protocols, Algorthms, and Source Code in C (cloth) (pp. 1–1027).

A Review on Offline Signature Verification Using Deep Convolution Neural Network Deepak Moud, Sandeep Tuli, and Rattan Pal Rana

1 Introduction

The objective of biometric technology is to identify a person based on biological or behavioural characteristics. With physiological attributes, identification and verification take place on the basis of biometric measurements such as a thumb impression, the face, a retina scan, etc. The behavioural characteristics include speech and the signature. The objective of offline signature verification is to confirm that the signer is genuinely the person he or she claims to be, i.e., to categorize the signature under investigation as valid or forged. The handwritten signature is a key behavioural attribute for verifying one's identity in the administrative, legal and financial sectors. As people are acquainted with the use of signatures, this process is widely accepted and used for verification. Three types of forgeries exist: (1) random, (2) casual and (3) skilled (simulated). In random forgeries, the imitator has no knowledge of the real user and produces his own signature instead. In simple or casual forgeries, the forger knows the name of the real user but has no idea about his or her signature; in this situation, the forger signs with the user's complete or partial name. In skilled or simulated forgeries, the forger knows both the name and the signature and imitates the user's signature; such manipulation has a higher likeness and is tougher to catch. An offline signature verification system is static: the offline signature is a digital image captured by scanning the signature after it has been produced by the user on paper or a document. It is also difficult to identify the


best feature extractor that can separate genuine signatures from simulated forged signatures, because the same person may produce different signatures at verification time. To overcome the need for a good feature extractor and to improve accuracy, convolutional neural networks have been used by many researchers in the literature.

2 Literature Review

Research on offline signature verification began around 1970, and many researchers have worked in this field since then. Initially, the emphasis was on extracting features from signature images and then using these features as input to a classifier. Plamondon and Lorette published an article in 1989 in which they reviewed and consolidated the work done in the area of automatic signature verification [1]. Impedovo and Pirlo presented a survey of automatic signature verification that includes experimental results of all research done up to 2008 and also shows the way forward for new researchers in this field; the paper contains a rich repository of about 300 research contributions [2]. Hafemann et al. surveyed recent work in the field of signature verification and provided insight into future advancements and directions; the authors detailed the hand-crafted feature descriptors used by many researchers and compared their results [3]. In the last few years, researchers have investigated many hand-crafted feature descriptors, such as geometric descriptors. Nagel and Rosenfeld wrote a paper on freehand forgeries on bank cheques, using geometrical features: the ratio of size and the slant [4]. Justino considered signature verification with different forgery types in an HMM framework and showed encouraging results for simulated manipulation using simple static and pseudo-dynamic features [5]. Oliveira et al. proposed techniques used by forensic document inspectors to identify handwriting; graphometric features—pixel density, pixel proportion, progression and slant—were used [6]. Directional descriptors such as PDFs were calculated from the gradient of the image outline using a grid, and the local shape descriptor pyramid histogram of oriented gradients (PHOG) has also been used for signature verification [7]. Malik et al. implemented speeded-up robust features (SURF), which are used to retrieve interest points in digital images [8]. Ruiz-del Solar et al. explored the scale-invariant feature transform (SIFT) technique, through which local interest points are fetched from the input image and a reference image [9]. Yılmaz and Yanıkoğlu used texture descriptors such as local binary patterns (LBPs) [10]. Hu and Chen experimented with three pseudo-dynamic parameters based on the grey-level image: the local binary pattern (LBP), the grey-level co-occurrence matrix (GLCM) and the histogram of oriented gradients (HOG); verification was done using writer-dependent support vector machines (SVMs) and the global real AdaBoost method [11].


3 Problem Statement

Identifying and designing a feature extractor for signature verification is a difficult task because it is hard to understand which characteristics best define a signature. Many researchers have made efforts to find feature descriptors—such as geometric features, graphometric features, directional features, mathematical transformations, extended shadow code, texture features and interest-point matching—that could represent a signature and could also be used to distinguish between genuine and fraudulent signatures. So far, no hand-crafted feature extractor has emerged from the literature as best suited for the verification task in terms of accuracy. Comparing experimental results, the best published result in the literature on the GPDS dataset is an equal error rate (EER) of 7% [12]. To improve the efficiency and accuracy of the system and to obtain a feature representation, the scanned image of the signature should be used directly instead of hand-crafted features. The scanned image of the signature becomes the input to a CNN, which categorizes the signature as valid or fake.

4 Methodologies

Pre-processing is necessary in all image processing applications, mainly because of the noise introduced while images are captured. In a signature verification system, variations may also come from pen thickness, size and rotation. Pre-processing is used to retrieve the image from a complex background; then noise is removed and the size is normalized. The major pre-processing requirements for a signature verification system are that the input size of each signature must be the same for the convolutional neural network, and that removal of the background, centring of the image and resizing take place during pre-processing [13]. Many experiments have been done in the past to obtain features directly from the data using a CNN. Bernardete Ribeiro, Ivo Gonçalves, Sérgio Santos and Alexander Kovacec used a massively parallel distributed neural network (NN), which was successful in obtaining a complex representation of the signature. They experimented with a neural network on the GPDS dataset. The research was able to fetch a layer-wise high-level presentation of the signature with the help of three layers with 100 units each; of the three layers, one was an input layer and two were internal layers. A two-step hybrid model was also developed to lower the misclassification rate. No classification took place in this research [14]. Khalajzadeh et al. proposed a CNN that classifies signatures without prior knowledge of a feature base. In this experiment, a multilayer perceptron (MLP) was used for the classification task and the CNN was implemented as the feature descriptor. The experiment was performed on Persian signatures gathered from 22 people; a total of 176 signatures from the 22 persons were used in training [15]. Soleimani et al. prepared a solution using deep multitask metric learning (DMML), in which a distance metric is calculated between pairs of signatures. The tests were conducted on the UTSig, MCYT-75, GPDSsynthetic and GPDS960 GraySignatures datasets with HOG and DRT features [16]. Hafemann et al.


have explored four CNN architectures to obtain a classification improvement on the GPDS dataset. The methodology was to obtain a feature representation using a writer-independent CNN, and then the resultant features were used to train a writer-dependent binary classifier to categorize a signature as fake or valid. The experiment was carried out on the GPDS-960 dataset, which contains the signatures of 881 persons; a total of 24 true signatures and 30 invalid (forged) signatures were collected from each user. The dataset was further divided into four parts for training and testing of the CNN and the SVM classifier [17]. Hafemann et al. addressed two difficulties of signature verification systems (SVS): (1) the 7% classification error and (2) finding the best features for classification. To overcome both problems, a new approach was devised in which samples of skilled forgeries are included in the feature learning. The authors trained a writer-independent CNN with genuine as well as skilled forged signatures, and then a writer-dependent SVM classifier was trained using these features to discriminate between signatures. The experiment was conducted on four popular datasets: GPDS, MCYT, CEDAR and Brazilian PUC-PR. The GPDS dataset is divided into a learning set and a verification set for the CNN; the learning set is used by the CNN for learning features, and the verification set, along with the learnt features, is used to train the SVM classifier [18]. Hafemann et al. have also proposed a variation of the previous paper [18]: a writer-independent CNN is used for representing and extracting features of the signature image, and then a writer-independent (WI) support vector machine combined with the dichotomy transformation is used for classification. The dichotomy transformation converts a multi-class classification into a two-class classifier. The experiment took place on the GPDS and Brazilian PUC-PR datasets; the dataset was divided into four parts—two sets were used by the writer-independent CNN for feature extraction and two by the writer-independent SVM classifier [19]. Yapici et al. have experimented with two convolutional neural networks for signature verification. The authors used two separate networks, (1) writer-dependent (WD) and (2) writer-independent (WI), and trained them separately for the task. The experiment was done using the publicly available GPDSsynthetic Signature dataset. The WD model was trained with 30 (15 genuine + 15 forged) signatures from a pool of 54 signatures, and the remaining 24 samples were used for testing. Similarly, for the WI model, 300 (150 genuine + 150 forged) samples from a pool of 540 samples (240 genuine + 300 forged) were used in training, and the remaining 240 were used in testing [20].
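The writer-independent feature learning followed by writer-dependent classification that recurs in these studies can be summarised with a short sketch. The snippet below is only a minimal illustration under assumed interfaces: extract_features is a hypothetical placeholder standing in for a CNN embedding, and the toy arrays are not real signature data or the setup used in the surveyed papers.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-in for a writer-independent CNN embedding.
# In the surveyed approaches the features come from a CNN trained on many writers;
# here we only fabricate a deterministic pseudo-random vector per image.
def extract_features(signature_image):
    seed = abs(hash(signature_image.tobytes())) % (2 ** 32)
    return np.random.default_rng(seed).standard_normal(2048)

# Toy pre-processed "images" (fixed size, background removed, centred).
genuine = [np.ones((150, 220)) * i for i in range(1, 6)]
others = [np.zeros((150, 220)) + i for i in range(1, 6)]

# Writer-dependent binary classifier: one user's genuine samples vs. other users' samples.
X = np.stack([extract_features(im) for im in genuine + others])
y = np.array([1] * len(genuine) + [0] * len(others))
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

query = extract_features(np.ones((150, 220)) * 3)
print("genuine" if clf.predict([query])[0] == 1 else "forged")
```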

5 Results and Discussion

Ribeiro et al. found satisfactory results: the authors were able to fetch a layer-wise high-level presentation of the signature that permits non-local generalization and comprehensibility. The experiment suggested using a GPU to cope with the huge number of parameters [14]. Khalajzadeh et al. found an average validation performance of 99.86, and the CNN gave better results than a plain neural network. It was also found that the convolutional neural network is less affected by distortions, translation, scaling, rotation and squeezing, and that a CNN can fetch features from the data directly, so pre-processing


and feature descriptors are no longer required. The CNN is robust regarding the location and scale of the signature. It is concluded that a CNN is better than a feed-forward network in terms of adaptability, robustness and efficiency. This experiment considered only random forgeries; the scope remains to identify simulated forged signatures [15]. Soleimani et al. conducted an experiment that concluded better performance of DMML in comparison with a support vector machine; in this experiment, hand-crafted features were used [16]. The results in [17] showed a huge improvement in performance, with a 2.74% equal error rate (EER) compared with the best available EER of 6.97% in the literature. It was also shown that a writer-dependent classifier performs well even with few samples, along with features generated by a writer-independent CNN. It was noted, however, that the learnt features could discriminate signatures on general appearance rather than on finer-quality detail, which means the model performed well on simple forgeries but not on skilled forgeries [17]. Hafemann et al.'s experimental setup yielded an error rate of 1.72, far lower than the lowest error rate available in the literature. It is interesting to note that with one sample the error rate was 5.74, which is lower than the error rate with 12 samples. This experiment concluded that features learnt through a writer-independent CNN are more effective than hand-engineered features. Forged signatures were used in training the CNN, and the learnt features performed better at distinguishing between forged and genuine signatures. The experiment also gave good results with few samples, and the model worked well with unseen users and thus generalized well. The weakness found was that performance was not good in all cases, which means further study is required in this context; a model that combines online and offline signature verification could be a future research trend [18]. Hafemann et al.'s results were better than other results available in past research: in the global threshold scenario, the proposed idea outperformed Hafemann et al. [18] on the Brazilian dataset. There is scope for research in feature and prototype selection in the dissimilarity space, and the adaptation of the signer-dependent classifier and the decision threshold needs to be improved; the CNN does not require hand-crafted features and classifies accurately [19]. Yapici et al.'s results yielded accuracies of 62.5% and 75% for the WI and WD models, respectively. It was concluded that the features were not enough to achieve efficiency, and more features must be used to improve accuracy [20].
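Since the surveyed results are reported as equal error rates, a small sketch of how an EER can be estimated from verification scores may be useful. The score distributions below are purely illustrative and do not correspond to any of the cited experiments.

```python
import numpy as np

def equal_error_rate(genuine_scores, forged_scores):
    """Approximate EER: the operating point where the false-acceptance rate
    on forgeries equals the false-rejection rate on genuine signatures."""
    best_gap, eer = 1.0, 0.0
    for t in np.sort(np.concatenate([genuine_scores, forged_scores])):
        far = np.mean(forged_scores >= t)   # forgeries accepted at threshold t
        frr = np.mean(genuine_scores < t)   # genuine signatures rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 200)   # toy similarity scores, higher = more genuine
forged = rng.normal(0.5, 0.15, 200)
print(f"EER ~ {equal_error_rate(genuine, forged):.3f}")
```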

6 Conclusion

This paper has reviewed the research work done in the area of signature verification. The research began around 1970, and a lot of work and experimentation has been done since. Initially, the emphasis was on classification through hand-crafted features, and it was realized that classification using hand-crafted features is a tedious task and did not produce the desired accuracy. Attention then shifted to automatic feature extraction through CNNs, and many experiments have explored them. Many experiments have been done with writer-dependent and writer-independent networks using a CNN for


feature representation. Most of the research has used a writer-dependent SVM as the classifier for signature verification. Now CNNs are used both to classify and to verify the genuineness of signatures. It is also observed that performance still needs to be improved.

7 Future Scope

Signature verification is still open for research, with scope for accuracy improvement. A combination of online and offline features could be a way forward. A pre-trained CNN can increase accuracy, and hand-crafted features can also be combined with features learnt by a CNN to improve performance.

References

1. Plamondon, R., & Lorette, G. (1989). Automatic signature verification and writer identification—The state of the art. Pattern Recognition, 22(2), 107–131.
2. Impedovo, D., & Pirlo, G. (2008). Automatic signature verification: The state of the art. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 38(5), 609–635. https://doi.org/10.1109/TSMCC.2008.923866.
3. Hafemann, L. G., Sabourin, R., & Oliveira, L. S. (2017). Offline handwritten signature verification—Literature review. IEEE.
4. Nagel, R. N., & Rosenfeld, A. (1977). Computer detection of freehand forgeries. IEEE Transactions on Computers, C-26(9), 895–905. https://doi.org/10.1109/tc.1977.1674937.
5. Justino, E. J. R., El Yacoubi, A., Bortolozzi, F., & Sabourin, R. (2000). An off-line signature verification system using HMM and graphometric features. In Fourth IAPR international workshop on document analysis systems (DAS) (pp. 211–222), Rio de.
6. Oliveira, L. S., Justino, E., Freitas, C., & Sabourin, R. (2005). The graphology applied to signature verification. In 12th conference of the international graphonomics society (pp. 286–290).
7. Zhang, B. (2010). Off-line signature verification and identification by pyramid histogram of oriented gradients. International Journal of Intelligent Computing and Cybernetics, 3(4), 611–630.
8. Malik, M., Liwicki, M., Dengel, A., Uchida, S., & Frinken, V. (2014). Automatic signature stability analysis and verification using local features. In International conference on frontiers in handwriting recognition. IEEE.
9. Ruiz-del Solar, J., Devia, C., Loncomilla, P., & Concha, F. (2008). Offline signature verification using local interest points and descriptors. In Progress in pattern recognition, image analysis and applications, number 5197. Springer.
10. Yılmaz, M. B., & Yanıkoğlu, B. (2016). Score level fusion of classifiers in off-line signature verification. Information Fusion Part B, 32, 109–119. https://doi.org/10.1016/j.inffus.2016.02.003.
11. Hu, J., & Chen, Y. (2013). Offline signature verification using real AdaBoost classifier combination of pseudo-dynamic features. In International conference on 12th document analysis and recognition (pp. 1345–1349). https://doi.org/10.1109/icdar.2013.272.


12. Vargas, J., Ferrer, M., Travieso, C., & Alonso, J. (2007). Off-line handwritten signature GPDS-960 corpus. In 9th international conference on document analysis and recognition (Vol. 2, pp. 764–768). https://doi.org/10.1109/icdar.2007.4377018.
13. Hafemann, L. G., Sabourin, R., & Oliveira, L. S. (2017). Learning features for offline handwritten signature verification using deep convolutional neural networks. arXiv:1705.05787v1 [cs.CV], 16 May 2017.
14. Ribeiro, B., Gonçalves, I., Santos, S., & Kovacec, A. (2011). Deep learning networks for off-line handwritten signature recognition. In C. S. Martin & S.-W. Kim (Eds.), Progress in pattern recognition, image analysis, computer vision, and applications (pp. 523–532). Berlin Heidelberg: Springer. https://doi.org/10.1007/978-3-642-25085-9_62.
15. Khalajzadeh, H., Mansouri, M., & Teshnehlab, M. (2012). Persian signature verification using convolutional neural networks. International Journal of Engineering Research and Technology, 1.
16. Soleimani, A., Araabi, B. N., & Fouladi, K. (2016). Deep multitask metric learning for offline signature verification. Pattern Recognition Letters, 80, 84–90. https://doi.org/10.1016/j.patrec.2016.05.023.
17. Hafemann, L. G., Sabourin, R., & Oliveira, L. S. (2016). Analyzing features learned for offline signature verification using deep CNNs. In International conference on pattern recognition (pp. 2989–2994).
18. Hafemann, L. G., Sabourin, R., & Oliveira, L. S. (2017). Learning features for offline handwritten signature verification using deep convolutional neural networks. Pattern Recognition, 70, 163–176.
19. Hafemann, L. G., Sabourin, R., & Oliveira, L. S. (2016). Writer-independent feature learning for offline signature verification using convolutional neural networks. In The 2016 international joint conference on neural networks.
20. Yapici, M. M., Tekerek, A., & Topaloglu, N. (2018). Convolutional neural network based offline signature verification application.

Designing of SAW-Based Resonator Under Variable Mass Load for Resonance Shift Yateesh Chander and Manish Singhal

1 Introduction

The electrical signal creates positive and negative polarities between the fingers of the inter-digital transducer (IDT) [1]. The alternating polarity creates tensile and compressive strain regions between the IDT fingers of the electrode by the piezoelectric effect, and the mechanical wave produced by these alternating changes in strain is called a surface acoustic wave. Figure 1 shows a SAW device with a one-port IDT [2], which is made of metal, for example gold or aluminum; an electrical signal applied to the IDT produces the SAW [1].

2 Surface Acoustic Wave (SAW)

Lord Rayleigh discovered surface acoustic waves in 1885 [1]. The operation of a surface acoustic wave device is based on an acoustic wave that propagates near the surface of a piezoelectric solid material [3]. The operation of a SAW device implies that the wave can be modified while propagating, which means that we can change the velocity of the wave. The displacement decays exponentially away from the surface of the piezoelectric material. Basically, two IDTs are placed on a piezoelectric material [2]: the input IDT, to which the signal is applied, launches the wave, and the second, output IDT receives the wave coming from the input IDT. Depending on the energy of the wave, the SAW device is classified into different modes [4] (Fig. 2).


Fig. 1 Typical acoustic wave device [1]

Fig. 2 Typical acoustic wave device [1]

2.1 Mathematical Modeling

A piezoelectric material generates an internal voltage when strained and experiences strain when an electric field is applied. The equations governing the piezoelectric behaviour of the material therefore use tensors and are given below [5]:

$$D = d_1\,T + \varepsilon^{T} E \qquad (1)$$

$$S = d_2\,E + S^{E} T \qquad (2)$$

For a material of the 4 mm and 6 mm crystal classes, the strain-charge form can be written as

$$
\begin{bmatrix} S_1\\ S_2\\ S_3\\ S_4\\ S_5\\ S_6 \end{bmatrix}
=
\begin{bmatrix}
S^{E}_{11} & S^{E}_{12} & S^{E}_{13} & 0 & 0 & 0\\
S^{E}_{21} & S^{E}_{22} & S^{E}_{23} & 0 & 0 & 0\\
S^{E}_{31} & S^{E}_{32} & S^{E}_{33} & 0 & 0 & 0\\
0 & 0 & 0 & S^{E}_{44} & 0 & 0\\
0 & 0 & 0 & 0 & S^{E}_{55} & 0\\
0 & 0 & 0 & 0 & 0 & S^{E}_{66}
\end{bmatrix}
\begin{bmatrix} T_1\\ T_2\\ T_3\\ T_4\\ T_5\\ T_6 \end{bmatrix}
+
\begin{bmatrix}
0 & 0 & d_{31}\\
0 & 0 & d_{32}\\
0 & 0 & d_{33}\\
0 & d_{24} & 0\\
d_{15} & 0 & 0\\
0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} E_1\\ E_2\\ E_3 \end{bmatrix}
$$

$$
\begin{bmatrix} D_1\\ D_2\\ D_3 \end{bmatrix}
=
\begin{bmatrix}
0 & 0 & 0 & 0 & d_{15} & 0\\
0 & 0 & 0 & d_{24} & 0 & 0\\
d_{31} & d_{32} & d_{33} & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} T_1\\ T_2\\ T_3\\ T_4\\ T_5\\ T_6 \end{bmatrix}
+
\begin{bmatrix}
\varepsilon_{11} & 0 & 0\\
0 & \varepsilon_{22} & 0\\
0 & 0 & \varepsilon_{33}
\end{bmatrix}
\begin{bmatrix} E_1\\ E_2\\ E_3 \end{bmatrix}
$$

The main objective is to find the mass load sensitivity of SAW devices. The mass load sensitivity depends on the resonance frequency shift of the SAW device, which can be defined as [6]

$$\Delta f = C_m f_0^{2} \rho_s \qquad (3)$$

where Δf is the shift in resonance frequency due to mass loading, C_m is the mass load sensitivity, f₀ is the resonance frequency without mass loading, and ρ_s is the surface mass density of the pillar. The resonance frequency shift Δf depends on two factors: the structure of the pillars (nanostructure or bulk structure) and the geometry of the pillars (width and height). So the mass load sensitivity will also depend on the structure as well as the geometry of the pillars [7].
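As a quick numerical illustration of Eq. (3), the sketch below evaluates the resonance shift for assumed values of the mass load sensitivity and surface mass density; the numbers are placeholders, not values taken from the reported simulations.

```python
# Resonance shift due to mass loading, Eq. (3): delta_f = Cm * f0**2 * rho_s.
# All numerical values below are hypothetical placeholders.
f0 = 40.0e6        # resonance frequency without mass load [Hz]
Cm = -1.0e-9       # mass load sensitivity [assumed units]
rho_s = 2.0e-4     # surface mass density of the pillar [kg/m^2]

delta_f = Cm * f0 ** 2 * rho_s
print(f"Resonance shift: {delta_f / 1e3:.1f} kHz")
```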

3 Proposed Structure

Piezoelectric material used as the substrate: quartz; electrode material used: aluminum; height of the substrate: 430 µm; pitch of the IDT: 43 µm; width of the electrode: 21.5 µm. By simulating this model using COMSOL Multiphysics, we find that a surface acoustic wave (resonance frequency f₀ = 39.96 MHz) is generated and propagates along the surface of the substrate, as shown in Figs. 3 and 4. The amount of shift in the resonance frequency f₀ (without mass load versus with mass load) is defined as the resonance shift Δf₀ (Fig. 5).


Fig. 3 Surface acoustic wave generated in SAW device without mass load

Fig. 4 SAW device with mass loading


Fig. 5 Surface acoustic wave generated in SAW device with mass load

4 Results

Resonance shift variation with the width (W) and height (H) of the mass load:

1. Variation in resonance frequency with height (H). The first maximum resonance shift occurs at a height of 11 µm. The ratio of the first maximum resonance shift to the corresponding height is known as the sensitivity. It can be observed from Fig. 6 that the resonance shift corresponding to certain heights is zero, so these heights offer zero mass loading.
2. Variation in resonance frequency with width (W) (Fig. 7).
3. Resonance shift variation with the number of nanopillars used as the mass load (Fig. 8). We use more than one nanopillar as the mass load and observe how the resonance shift varies with height.


Fig. 6 Variation in resonance shift with height of pillar

Fig. 7 Variation in resonance shift with width of pillar


Fig. 8 Nanostructured SAW device with eight nanopillar mass load

5 Conclusion and Future Work

We have designed a SAW device using the COMSOL Multiphysics tool and studied its sensing properties in two parts. In the first part, we describe how the resonance shift varies with the geometry (width and height) of the pillar, and in the second part, we explain how the resonance shift varies with the structure (dimension of the pillar, nano or micro) used. In the first part, we found that the resonance shift varies with width as well as height and takes both positive and negative values. As the sensitivity depends on the resonance frequency shift, it is desirable to have a large resonance shift and the corresponding height or width of the pillar. In the second part, we designed the structure with nano-dimensions, studied the variation of the resonance frequency shift with the height of the pillars and found that the resonance frequency shift is larger compared to the first model. So we can conclude that for nanostructures the mass load sensitivity is better. For the SAW device, we have used quartz as the piezoelectric material and SU-8 as the mass load. This work can be extended to different materials (metals or polymers) as the mass load and different piezoelectric materials as the substrate.

Acknowledgements My deepest gratitude is to my supervisors. I have been amazingly fortunate to have an advisor who gave me the freedom to explore on my own and, at the same time, the guidance to recover when my steps faltered.


References

1. Riesch, C., Keplinger, F., Reichel, E. K., & Jakoby, B. (2006). Characterizing resonating cantilevers for liquid property sensing. In SENSORS (pp. 1070–1073). Daegu: IEEE.
2. Lu, X., Mouthaan, K., & Yeo, T. S. (2016). A wideband bandpass filter with frequency selectivity controlled by SAW resonators. IEEE Transactions on Components, Packaging and Manufacturing Technology, 6(6), 897–905.
3. Psychogiou, D., & Gómez-García, R. (2018). Switched-bandwidth SAW-based bandpass filters with flat group delay. Electronics Letters, 54(7), 460–462.
4. Liu, Y., Liu, J., Wang, Y., & Lam, C. S. (2019). A novel structure to suppress transverse modes in radio frequency TC-SAW resonators and filters. IEEE Microwave and Wireless Components Letters, 29(4), 249–251.
5. Stefanescu et al. (2012). SAW GaN/Si based resonators: Modeling and experimental validation. In CAS 2012 (International Semiconductor Conference) (pp. 193–196), Sinaia.
6. Neculoiu, D., Bunea, A., Dinescu, A. M., & Farhat, L. A. (2018). Band pass filters based on GaN/Si lumped-element SAW resonators operating at frequencies above 5 GHz. IEEE Access, 6, 47587–47599.
7. Lu, D., Zheng, Y., Penirschke, A., & Jakoby, R. (2016). Humidity sensors based on photolithographically patterned PVA films deposited on SAW resonators. IEEE Sensors Journal, 16(1), 13–14.

Ablation of Hepatic Tumor Tissues with Active Elements and Cylindrical Phased Array Transducer Sarita Zutshi Bhan, S. V. A. V. Prasad, and Dinesh Javalkar

1 Introduction

The high-intensity focused ultrasound (HIFU) treatment method for treating various tumors, such as prostate cancer [7], breast cancer [8], bone metastases [9] and malignant renal tumors [10], has proved to be one of the best ways of treating solid tumors. Recent studies of HIFU treatment of liver tumors have shown promising results for this type of tumor, which has always been a challenge for researchers as well as surgeons because of the continuous movement of liver tissue due to breathing. The HIFU technique is an excellent alternative to conventional tumor treatment methods owing to its non-invasive nature and fast recovery time [11]. HIFU treatment can be delivered via one of two mechanisms, i.e., a thermal effect or a mechanical effect [12], the thermal effect being the more dominant method because the excessive heat generated at the target creates additional damage to the target tissue; however, a few studies have suggested introducing exogenous synergists so as to exploit the purely mechanical effect of HIFU by adjusting the insonication parameters, which increases treatment efficacy and also prevents unwanted heat-related side effects. In order to deliver such a treatment, the equipment required needs to be designed with a high level of accuracy. The major parts are the function generator for generating a high-frequency signal, the power amplifier for providing high power to the high-frequency ultrasound waves, and the transducer for focusing these


high-frequency, high-power ultrasound waves at the target tissue site. The function generator and the power amplifier have been designed using state-of-the-art active elements, while the transducer has been simulated for a more accurate centre frequency of 1 MHz with a reduced power of 60 W, as compared to the band of frequencies of 0.8–1.6 MHz with a power output of 70 W. The results were generated in Multisim Version 12.0 software for the function generator and power amplifier, while MATLAB software was used to generate the results for the transducer.

2 Literature Survey

2.1 Liver Tumor and Available Treatments

The various methods available for the treatment of liver tumors are liver transplant, freezing the cancer cells, injecting chemotherapy drugs, heating the cancer cells, targeted drug therapy, injecting alcohol and high-intensity focused ultrasound (HIFU). Of all these methods, HIFU is the most advanced for treating liver tumors at various stages. In this method, the tumorous cells are ablated by high-intensity ultrasound beams focused on them. It is the only non-invasive method and has a faster recovery time.

2.2 Liver Tumor and High-Intensity Focused Ultrasound

In [13], Kim et al. reported that treatment of primary liver cancer patients with HIFU along with transarterial chemoembolization (TACE) shows little damage to normal liver tissue along with a higher overall remission rate compared to treatment with TACE alone. In [14], the authors conducted HIFU ablations on 15 patients in a Phase I–IIa study; 30 HIFU ablations were created very precisely, with a precision of 1 to 2 mm within 40 s, and it was observed that the ablation size depends mainly on the transducer characteristics; however, a typical lesion size is generally found to be 1.3 mm in the transverse direction and 8 to 15 mm along the beam axis of the transducer. In [15], Ulrik Carling et al. conducted HIFU ablation of the liver in six male land swine under general anesthesia; these ablations were made using a frequency of 1.2 MHz with a power of 200 W. A patient's immunity is a major concern before planning any treatment method. Ma et al. in [16] presented the clinical observations of a study carried out on 96 patients with primary liver cancer, of whom 66 were male and 30 were female. In their work, they observed the patients for three months after giving the HIFU treatment, and their observations concluded that the symptoms of jaundice, the


pain in the abdomen, the anorexia and ascites were relieved as compared to their values observed before the treatment.

2.3 Active Elements

Operational amplifiers had been versatile building blocks until their performance was found to be inferior to that of the latest state-of-the-art current-driven active elements [17]. The current conveyor introduced by Sedra and Smith in 1968 was the initial breakthrough in the development of active elements that proved more expedient than operational amplifiers. In 1996, the first CMOS-based differential operational floating amplifier was introduced, which was further used for implementing current-mode filters [18]. Soon, more members of the family of active elements came into existence, such as the current differencing buffered amplifier (CDBA), the fully differential current conveyor (FDCCII), the voltage current controlled conveyor transconductance amplifier, the voltage differencing transconductance amplifier (VDTA), the CMOS realization of the voltage differencing buffered amplifier (VDBA), and many more [19–25].

2.4 HIFU Transducers

Righetti et al. used a HIFU transducer that was air-backed, single-element and spherical, with a centre frequency of approximately 1.5 MHz, an 80 mm radius of curvature and a 100 mm diameter, to generate the focused ultrasound field. This transducer maintained an RF power of 20 W and generated lesions of liver tumor cells by exposing them to focal intensities ranging between 750 and 1565 W cm⁻²; the sonication time was varied from 8 to 20 s [26]. Sibille et al. performed several experiments with extracorporeal HIFU on rabbit liver, creating lesions of coagulation necrosis of tumorous liver tissue [27]. Peng et al. in [28] used the lower of the two transducers of the FEP-BY02 HIFU machine (made in China), adjusting the treatment power according to the tumor dimensions and location. The transducer used was a phased-array transducer with 251 elements, built using lead zirconate titanate (PZT) material. The HIFU transducer's aperture was 37 cm in diameter with a focal length of 255 cm.


3 Experimental Configuration and Working Mechanism of HIFU Equipment

The experimental configuration used for the HIFU equipment is depicted in Fig. 1. Its main blocks are a computer system, function generator, power amplifier, power meter, matching network, HIFU transducer, phantom (of the liver), phantom cavity detector (PCD), band-pass filter, low-noise preamplifier and digitizer. The computer system, loaded with the HIFU software, sends a start pulse to the function generator to produce high-frequency pulses of 1 MHz. These high-frequency pulses are fed to the high-frequency power amplifier to produce a high-frequency signal with 60 W of power, which is measured continuously by the power meter. These high-frequency pulses are then passed through the matching network before being fed to the HIFU transducer. The HIFU transducer focuses these high-frequency pulses on the phantom at a focal length of 8.99 cm. The HIFU transducer is selected with an outer radius of 2.5 cm, an inner radius of 1 cm, a focusing depth of 10 cm, a working frequency of 1 MHz and an output power of 60 W.

Fig. 1 Experimental configuration of HIFU equipment


The high-frequency ultrasound waves hit the target, which results in the generation of reflected echo signals that are captured by the PCD, i.e., the phantom cavity detector, which is again a type of transducer that receives the echo signals from the phantom. These signals are filtered by the band-pass filter so as to eliminate the unwanted signals coming from the system. The filtered signals are further amplified using a low-noise preamplifier; in this block, the noise in the received echo signals is minimized and the strength of the signals is raised. Since these signals need to be analyzed to take further decisions on creating lesions at the target tissue in the phantom, a digitizer is used to convert the echo signals into digital signals. These digital signals are then fed to the computer system, where the preloaded software is capable of taking further decisions on the basis of the received echo pulses.
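To make the echo-filtering step concrete, the sketch below band-pass filters a simulated echo around the 1 MHz working frequency. The sampling rate, burst shape, filter order and pass band are illustrative assumptions, not the parameters of the actual PCD signal chain.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 20e6                                   # assumed digitizer sampling rate [Hz]
t = np.arange(0, 200e-6, 1 / fs)

# Toy received signal: a 1 MHz echo burst centred at 100 us, buried in noise.
echo = np.sin(2 * np.pi * 1e6 * t) * np.exp(-((t - 100e-6) ** 2) / (2 * (10e-6) ** 2))
received = echo + 0.5 * np.random.default_rng(1).standard_normal(t.size)

# Band-pass filter around the working frequency to reject out-of-band noise.
sos = butter(4, [0.8e6, 1.2e6], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, received)

print("filtered echo peaks at t =", round(t[np.argmax(np.abs(filtered))] * 1e6, 1), "us")
```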

3.1 Circuit Implementation Using Multisim Software

The main blocks of the HIFU equipment, excluding the transducer, were implemented using Multisim software. Figure 2 depicts the oscillator (as the main part of the function generator), the power amplifier, the power meter and the matching network implemented in Multisim. The oscillator and power amplifier have been implemented using the state-of-the-art active element AD846AN [29, 30].

Fig. 2 Implementation of proposed model using active element AD846AN


3.2 Circuit Implementation Using MATLAB Software

The transducer has been implemented using MATLAB software, in which the different parameters of the transducer were fed in using the HIFU Simulator toolbox. The equation coefficients for simulating the transducer in MATLAB are given below:

1. Peak pressure at the transducer face:

$$P_0 = \sqrt{\frac{2\,\rho\,c\,P}{\pi\left[(a/100)^{2} - (b/100)^{2}\right]}}$$

where ρ is the mass density of the phantom in kg/m³, c is the small-signal sound speed = 1482 m/s, P is the output power of the transducer = 60 W, a is the outer radius of the transducer = 2.5 cm, and b is the inner radius of the transducer = 1 cm (the factor of 100 converts cm to m).

2. Nonlinear coefficient:

$$N = \frac{2\pi\,P_0\,\beta\,(d/100)\,f}{\rho\,c^{3}}$$

where P₀ is the peak pressure at the transducer face in Pa, β is the nonlinear parameter = 3.5, d is the focusing depth of the transducer = 8.99 cm, and f is the frequency of the transducer = 1 MHz.

3. Linear pressure gain:

$$G = \frac{\pi\,(a/100)^{2}\,f}{c\,(d/100)}$$

The various graphs obtained after simulating the KZK equation in MATLAB are shown in Figs. 3, 4, 5, 6, 7, 8, 9 and 10. In Fig. 5, it is observed that a peak pressure of 4 MPa is reached within a short sonication period of 0.8 µs, while Fig. 6 makes clear that this peak pressure occurs at an axial distance of around 9 cm. In Fig. 7, the focal intensity with a transducer whose radius of curvature is 0.5 cm is observed to be around 250 W/cm², and Fig. 8 depicts that this intensity comes at a slightly lower axial pressure of around 3.5 MPa. The radial heating rate is observed to be more than 35 W/cm³ at the focal depth, which gives rise to an axial heating rate of more than 50 W/cm³ that gradually leads to a focal intensity of around 350 W/cm²; this heating rate becomes sufficient to create coagulative necrosis of the phantom tissue immersed in the degassed water tank, as shown in Fig. 4. This irreversible cell death leads to a lesion that removes the tumorous tissue permanently from the phantom.
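The coefficients above can be evaluated directly from the listed parameters. The sketch below assumes the reconstructed formulas as written; it is only a numerical check, not part of the HIFU Simulator toolbox itself.

```python
import math

# Medium and transducer parameters as listed in Sect. 3.2 (a, b and d in cm).
rho, c, P = 1000.0, 1482.0, 60.0      # density [kg/m^3], sound speed [m/s], power [W]
a, b = 2.5, 1.0                        # outer / inner radius of the source [cm]
beta, d, f = 3.5, 8.99, 1.0e6          # nonlinear parameter, focal depth [cm], frequency [Hz]

# Peak pressure at the transducer face [Pa]
p0 = math.sqrt(2 * rho * c * P / (math.pi * ((a / 100) ** 2 - (b / 100) ** 2)))
# Nonlinear coefficient and linear pressure gain (both dimensionless)
N = 2 * math.pi * p0 * beta * (d / 100) * f / (rho * c ** 3)
G = math.pi * (a / 100) ** 2 * f / (c * (d / 100))

print(f"P0 = {p0 / 1e6:.3f} MPa, N = {N:.3f}, G = {G:.1f}")
```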

Fig. 3 Temporal waveform (on axis) at the distance where the peak pressure occurs: waveform at z = 8.99 cm, p (MPa) versus t (µs)

4 Result and Analysis

The circuit implementation of the proposed model was done using Multisim 12.0 software and MATLAB software. The design and simulation of the high-frequency oscillator and the high-frequency, high-power amplifier have already been described in [29, 30]. The simulation results of the HIFU transducer are presented below.

4.1 Simulation of High-Frequency Ultrasound Transducer

The HIFU transducer was simulated with the help of the KZK equation given below:

$$\frac{\partial}{\partial \tau}\left(\frac{\partial p'}{\partial x} - \frac{\varepsilon}{\rho_0 c_0^{3}}\, p' \frac{\partial p'}{\partial \tau} - \frac{b}{2\rho_0 c_0^{3}} \frac{\partial^{2} p'}{\partial \tau^{2}}\right) = \frac{c_0}{2}\, \frac{1}{r}\, \frac{\partial}{\partial r}\!\left(r\, \frac{\partial p'}{\partial r}\right) \qquad (1)$$

where p′ is the acoustic pressure, ρ₀ is the medium density at rest, c₀ is the speed of sound, ε is the nonlinearity parameter, b is the dissipation coefficient of the medium, and τ = t − x/c₀ is the time in the coordinate system. The results obtained after simulating the KZK equation given in (1) are depicted in Table 1.

Fig. 4 Axial pressure amplitude of the first five harmonics

Fig. 5 Radial intensity at focus (I in W/cm² versus r in cm)

Fig. 6 Radial pressure amplitude of the first five harmonics

Fig. 7 Radial heating rate at focus (H in W/cm³ versus r in cm)

The density of the material used for the simulation is 1000 kg/m³; its absorption at a frequency of 1 MHz is set to around 0.217 dB/m, and the small-signal sound speed is maintained at 1482 m/s, which is sufficient for the high-frequency ultrasound waves to reach the target site. While the transition distance of the material is fixed at 5 cm, the focusing depth is maintained at 8 cm and the frequency of the transducer is fixed at 1 MHz, which is an improvement over past records where the frequency was varied in the range of 0.8–1.6 MHz.

Fig. 8 Axial peak positive and negative pressures (p in MPa versus z in cm)

Fig. 9 Axial heating rate (H in W/cm³ versus z in cm)

With these settings, it is observed that the peak pressure is reached after a time delay of just 0.8 µs.

Fig. 10 Axial intensity (I in W/cm² versus z in cm)

Table 1 List of all HIFU transducer parameters (calculated and observed)

Material related:
- Mass density (ρ): 1000 kg/m³
- Absorption at 1 MHz (α): 0.217 dB/m
- Small-signal sound speed (c): 1482 m/s
- Exponent of the absorption versus frequency curve (η): 2
- Material transition distance (z−): 5 cm
- Nonlinear parameter (β): 3.5

Transducer related:
- Outer radius (a): 2.5 cm
- Inner radius (b): 1 cm
- Frequency (f): 1 MHz
- Focusing depth (d): 8 cm
- Power (P): 60 W

Computational domain:
- Max radius (R): a cm
- Max axial distance (Z): 1.5 d cm
- Number of harmonics (K): 128

Observations after simulation:
- Temporal waveform (on axis) at the distance where the peak pressure occurs (p versus t): 4 MPa < p < 5 MPa at around 0.8 µs
- Radial intensity at focus (I versus r): 250 W/cm² at 0 cm
- Radial heating at focus (H versus r): 36 W/cm³ at 0 cm
- Axial heating rate (H versus z): 50 W/cm³ at < 10 cm
- Axial intensity (I versus z): 350 W/cm² at < 10 cm


5 Conclusion

Coagulative necrosis of tumorous liver tissue can be achieved more effectively when the parameters of the required HIFU transducer are maintained in more comfortable and accurate ranges instead of over a wide range, which otherwise increases treatment time because of the additional adjustments of the transducer parameters, even for the same patient. Also, the improvised HIFU equipment built with the latest active elements will be a boon in improving treatment time and thus reducing the anxiety of patients at the treatment table. This HIFU equipment gives high accuracy with repeatable results, as is visible in the results section.

6 Future Scope

HIFU equipment will be needed by future medical treatment techniques for various diseases. Diagnosing and planning treatment for patients with the help of previous case histories will reduce treatment time and can help radiologists plan the treatment procedure more accurately when they are provided with a database of such cases. Hence, the introduction of artificial intelligence and smart database maintenance in the future will undoubtedly ensure its more effective utility in the medical world.

References

1. Ter Haar, G., Sinnett, D., & Rivens, I. (1989). High-intensity focused ultrasound—A surgical technique for the treatment of discrete liver tumors. Physics in Medicine and Biology, 34(11), 1743–1750.
2. Al-Bataineh, O., Jenne, J., & Huber, P. (2012). Clinical and future applications of high intensity focused ultrasound in cancer. Cancer Treatment Reviews, 38(5), 346–353.
3. Kennedy, J. E., ter Haar, G. R., & Cranston, D. (2003). High intensity focused ultrasound: Surgery of the future? British Journal of Radiology, 76(909), 590–599.
4. Hill, C. R., & Ter Haar, G. R. (1995). Review article: High intensity focused ultrasound—potential for cancer treatment. British Journal of Radiology, 68(816), 1296–1303.
5. Jeffrey Elias, W., et al. (2013). A pilot study of focused ultrasound thalamotomy for essential tremor. The New England Journal of Medicine, 369(7), 640–648.
6. Dasgupta, S., et al. (2010). HIFU volume as function of time, as determined by MRI, histology, and computations. Journal of Biomechanical Engineering, 132, 081055.
7. Postema, A., Mischi, M., De La Rosette, J., & Wijkstra, H. Multiparametric ultrasound in the detection of prostate cancer: A systematic review. World Journal of Urology, 33, 1651–1659. www.springer.com.
8. Peek, M. C. L., & Wu, F. (2018). High intensity focused ultrasound in the treatment of breast tumours. ecancer Medical Sciences, 12, 794.
9. Hurwitz, M. D., Ghanouni, P., Kanaev, S. V., Iozeffi, D., Gianfelice, D., Fennessy, F. M., et al. Magnetic resonance-guided focused ultrasound for patients with painful bone metastases: Phase III trial results. JNCI Journal of the National Cancer Institute, 106(5), dju 082–2.
10. Nabi, G., Goodman, C., & Melzer, A. (2010). High intensity focused ultrasound treatment of small renal masses: Clinical effectiveness and technological advances. Indian Journal of Urology, 26(3), 331–337.
11. Wu, et al. (2003). A randomised clinical trial of high-intensity focused ultrasound ablation for the treatment of patients with localised breast cancer. British Journal of Cancer, 89, 2227–2233.
12. Dubinsky, T. J., Cuevas, C., Dighe, M. K., Kolokythas, O., & Hwang, J. H. (2008). High-intensity focused ultrasound: Current potential and oncologic applications. AJR, 190.
13. Kim, J., et al. (2012). Therapeutic effect of high-intensity focused ultrasound combined with transarterial chemoembolisation for hepatocellular carcinoma < 5 cm: Comparison with transarterial chemoembolisation monotherapy—preliminary observations. The British Journal of Radiology, 85, 940–946.
14. Dupre, A., et al. (2019). Evaluation of the feasibility, safety, and accuracy of an intraoperative high-intensity focused ultrasound device for treating liver metastases. Journal of Visualized Experiments, 143(e57964), 1–10.
15. Carling, U., et al. (2018). Can we ablate liver lesions close to large portal and hepatic veins with MR-guided HIFU? An experimental study in a porcine model. European Radiology.
16. Ma, B., et al. (2019). The effect of high intensity focused ultrasound on the treatment of liver cancer and patients' immunity. IOS Press, 24(1).
17. Abdalla, K. K., Bhaskar, D. R., & Senani, R. (2012). A review of the evolution of current-mode circuits and techniques and various modern analog circuit building blocks. Nature and Science, 10, 1–13.
18. Elwan, H. O., & Soliman, A. M. (1996). CMOS differential current conveyors and applications for analog VLSI. Analog Integrated Circuits and Signal Processing, 11, 35–45.
19. Jaikla, W., & Siripruchyanan, M. (2006). Current controlled CDTA (CCCDTA). IEEE, ISCIT, 48–351.
20. Prokop, R., & Musil, V. (2006). Building blocks for modern active components design. Electronics, Research Gate.
21. Kacar, F., Yesil, A., & Noori, A. (2012). New simple CMOS realization of voltage differencing buffered amplifier and its biquad filter applications. Radioengineering, 21(1), 333–339.
22. Liu, S. I., Tsao, H. W., & Wu, J. (1991). CCII-based continuous time filters with reduced gain bandwidth sensitivity. IEEE Proceedings, 138, 210–216.
23. Biolek, D., Biolková, V., & Kolka, Z. (2007). Universal current-mode OTA-C KHN biquad. International Journal of Electronics Circuits and Systems, 1, 214–217.
24. Soliman, A. M. (1998). A new filter configuration using current feedback op-amp. Microelectronics Journal, 29, 409–419.
25. Walde, N., & Ahmad, S. N. (2015). New voltage mode sinusoidal oscillator using voltage differencing transconductance amplifiers (VDTAs). Scientific Research Publishing, Circuits and Systems, 6, 173–178.
26. Righetti, R., et al. (1999). Elastographic characterization of HIFU-induced lesions in canine liver. Ultrasound in Medicine & Biology, 25(7), 1099–1113.
27. Sibille, A., et al. Extracorporeal ablation of liver tissue by high-intensity focused ultrasound. Oncology, 50, 375–379.
28. Peng, S., et al. (2016). Treatment of hepatic tumors by thermal versus mechanical effects of pulsed high intensity focused ultrasound in vivo. Physics in Medicine & Biology, 61, 6754–6769.
29. Bhan, S. Z., & Kapoor, P. (2017). Reduction of power in high frequency oscillators using active elements for focused ultrasound application. International Journal of Computer Applications (0975–8887), 173(5).
30. Zutshi, S., & Kapoor, P. (2018). Design of power amplifier for high intensity focused ultrasound using state-of-art technology. International Journal on Future Revolution in Computer Science and Communication Engineering, 4(3), 313–316. ISSN: 2454-4248.

A Brief Analysis and Comparison of DCT- and DWT-Based Image Compression Techniques Anuj Kumar Singh, Shashi Bhushan, and Sonakshi Vij

1 Introduction

Image compression is employed to reduce the image size as well as the redundancy of the image data. The amount of information required to represent the image under consideration must be reduced: compression deals with redundancy, reducing the number of bits required to represent an image by removing redundant data. Compression techniques are employed to effectively increase the performance of applications in domains like the health industry, retail stores, information security and encoding, galleries and museums, etc. Figure 1 depicts the basic features of image compression. The basic advantage offered by image compression is ease of storage, since less disk space is needed to store compressed images and pictures; this also provides media portability. The major disadvantage is that the process requires technical expertise. This disadvantage can be converted into an advantage if this technical expertise is easily available and also technologically friendly, which means that the correct image compression technique must be adopted. In order to find out which of the two techniques is better, several factors must be analyzed, which makes this quest subjective.


Fig. 1 Features of image compression

In order to fulfil this purpose, several compression techniques have been introduced, i.e., scalar/vector quantization, differential encoding, predictive image coding and transform coding. Among these, transform coding is the most effective, particularly at low bit rates [1]. Transform coding depends on the principle that pixels in an image show a certain level of correlation with their neighbouring pixels; hence, these correlations can be exploited to predict the value of a pixel from its neighbours. The primary aim of image compression remains the preservation of essential information. In the literature, mainly two types of image compression techniques exist, namely

A. Lossy image compression
B. Lossless image compression

In lossless image compression, the image is compressed and reconstructed using a numerical model that ensures the essential picture details are preserved. On the contrary, in a lossy image compression scheme, the reconstructed image contains some degradation compared to the original one. Lossless image compression has always provided the user with a good quality of compressed pictures, but it yields only lower compression ratios compared to lossy compression techniques [2, 3]. Nowadays, there are numerous online tools available for serving the purpose of image compression; these tools are either free or available on a subscription basis. Figure 2 lists the various tools that are available online for performing image compression.


Fig. 2 Online tools for performing image compression

In this paper, the authors have tried to create a comparative analysis of two very famous image compression techniques, viz. DCT and DWT, in terms of image quality, performance under noise, etc. The structural outline of this research paper is as designated below:

• Section II: Discrete Cosine Transform
• Section III: Discrete Wavelet Transform
• Section IV: DWT Applications
• Section V: Comparative Study
• Section VI: Conclusion.

2 Discrete Cosine Transform

The original process of implementing DCT image compression is shown in Fig. 3. It can be observed that the major steps involved are:

1. Transformation and encoding using cosine function calculation
2. Matrix creation using cosine model
3. Image quantization using numerical function
4. Image encoding and consequent compression.

DCT usually achieves the task of compressing images using low-level compression and offers a lossy transform.


Fig. 3 DCT compression process

DCT is majorly deployed for signal processing, image processing, etc., particularly for lossy compression, as a result of its powerful energy compaction.
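A minimal sketch of the block-based transform-quantize-reconstruct loop may help make these steps concrete. It assumes an 8 × 8 block and a single uniform quantization step; a real JPEG-style pipeline uses a full quantization table and entropy coding, which are omitted here.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2-D type-II DCT (orthonormal) applied along both axes.
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    # Inverse 2-D DCT.
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

# Toy 8x8 image block: a smooth gradient (mostly low-frequency content).
block = np.add.outer(np.arange(8), np.arange(8)).astype(float) * 10 + 100

Q = 50.0                                    # illustrative uniform quantization step
coeffs = dct2(block - 128)                  # level shift, then transform
quantized = np.round(coeffs / Q)            # most high-frequency terms become zero
reconstructed = idct2(quantized * Q) + 128  # dequantize and invert

print("non-zero coefficients:", int(np.count_nonzero(quantized)), "of 64")
print("max reconstruction error:", round(float(np.abs(block - reconstructed).max()), 2))
```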

3 Discrete Wavelet Transform

Wavelets are helpful for compressing signals. Wavelets can also be used to remove noise from an image. Wavelets are mathematical functions that transform one function representation into another. The DWT performs multi-resolution image analysis. Wavelets employ two kinds of filters:

1. A high-pass filter [1]
2. A low-pass filter [1].

This means that the original image information is effectively divided into two elements: a detail part of relatively higher frequency and an approximation part of relatively lower frequency. There are various levels at which the detailing is done:

• The level-one detail corresponds to the major horizontal details.
• The level-two detail corresponds to the major vertical details.
• The level-three detail corresponds to the major diagonal details of the image signal.
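The sub-band decomposition described above can be sketched with a single level of the Haar wavelet, the simplest DWT. The code below is a hand-rolled illustration (averaging and differencing play the roles of the low-pass and high-pass filters); production code would normally use a wavelet library, and the test image is synthetic.

```python
import numpy as np

def haar_dwt2(img):
    # One level of a 2-D Haar DWT; img must have even height and width.
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0    # low-pass along rows
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0    # high-pass along rows
    A1 = (a[0::2, :] + a[1::2, :]) / 2.0       # approximation
    H1 = (a[0::2, :] - a[1::2, :]) / 2.0       # horizontal detail
    V1 = (d[0::2, :] + d[1::2, :]) / 2.0       # vertical detail
    D1 = (d[0::2, :] - d[1::2, :]) / 2.0       # diagonal detail
    return A1, H1, V1, D1

# Synthetic 256x256 test image: smooth gradient plus a sharp vertical edge.
x = np.linspace(0.0, 1.0, 256)
img = np.add.outer(x, x) * 255.0
img[:, 128:] += 60.0

A1, H1, V1, D1 = haar_dwt2(img)
print(A1.shape, H1.shape, V1.shape, D1.shape)   # each sub-band is 128 x 128
```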


4 Motivation and DWT Applications

Applications of wavelet-based image compression techniques include:

1. Fingerprint verification.
2. Biology, for cell wall recognition, to differentiate normal from pathological membranes.
3. Polymer analysis; macromolecule analysis.
4. Computer graphics, transmission and multi-fractal analysis.
5. Quality progressive or layer progressive.
6. Resolution progressive.
7. Encoding.
8. Meta-data analysis.

Areas where wavelet-based techniques find additional applications:

1. X-ray studies and the medical field
2. The physical sciences, along with satellite applications
3. Portability of pictures from one device to another.

The authors were motivated to review compression after noticing that WhatsApp uses a compression algorithm to transfer pictures. Anyone who has used WhatsApp will have observed how quickly pictures can be shared even though they are large files. By large files, we mean high-resolution pictures typically ranging from 1–4 MB. As an example, the iPhone 4S, with an eight-megapixel camera, takes fairly large photographs: 3264 × 2448 pixels (8 MP means 8 million pixels). But when we share these pictures through WhatsApp, it compresses them, which allows for unbelievably fast sharing. An image with dimensions 3264 × 2448 (3 MB) is converted to an image with dimensions 800 × 600 (~70 KB). When compressing pictures, the aspect ratio is maintained, i.e., the resulting compressed image has the same aspect ratio as the original image. The compression quality was fifty percent, or a factor of 0.5.

5 Comparative Study

The advantage of wavelet compression is that, in contrast to JPEG, the wavelet algorithm does not divide the image into blocks but analyzes the whole image. The wavelet transform is applied to sub-images; thus, it produces no block artifacts. Wavelets have the great advantage of being able to separate the fine details in a signal. Very small wavelets can be used to isolate very fine details in a signal, whereas very large wavelets can identify coarse details. These characteristics of wavelet compression allow obtaining the best compression ratio while maintaining the quality of the images.

Wavelet-based compression offers additional robustness under transmission and decoding errors. It offers good frequency resolution at lower frequencies and good time resolution at high frequencies. Hence, it is appropriate for natural images. Denoising is one of the key applications of wavelet transforms. Owing to their inherent multi-resolution nature, wavelet-coding schemes are used for applications where scalability and tolerable degradation are important. JPEG 2000 is an image coding standard developed by JPEG that is based on DWT. Wavelets are functions generated from one function, called the mother wavelet, by scalings and translations (shifts) in the time (frequency) domain. The advantage of DWT over the Fourier transform is that it performs multi-resolution analysis of signals. As a result, it decomposes a digital signal into different sub-bands so that the lower-frequency sub-bands have finer frequency resolution compared to the high-frequency sub-bands. Its major use in compression is due to:

• easy compressed image manipulation
• multi-resolution analysis—it permits approximation of a function at different levels of resolution.

Undesirable blocking artifacts (distortion that appears in the compressed image as very large pixel blocks) affect the reconstructed images in DCT. This is not true for DWT. Figure 4 shows a snapshot of the process in which the authors select the image from their system. Figure 5 represents the original picture which needs to be compressed. Figure 6 shows the diagonal, vertical and horizontal details of the picture under consideration. Figure 7 highlights the L1 image reconstruction.

Fig. 4 Selecting target image


Fig. 5 Original image

Fig. 6 Horizontal, vertical and diagonal details (panels: Approximation A1, Horizontal Detail H1, Vertical Detail V1 and Diagonal Detail D1; axes in pixels)

Fig. 7 L1 image reconstruction (panels: Input image; 1-level reconstructed image)

Figure 8 shows the comparison between the L1 and L2 details. The L2 reconstruction is shown in Fig. 9. Finally, Fig. 10 depicts the output image as the result of DWT compression (Table 1).

6 Conclusion

In this paper, we have carried out a comparative study of DCT- and wavelet-based transformation for still images. According to our analysis, wavelet transformation techniques provide better-quality images than DCT in terms of time scale and offer much better multi-resolution capability. In our comparisons, we focused on factors like the quantizer and the entropy encoder to obtain better image compression.

Fig. 8 L1 versus L2 details (panels: Approximation A1/A2, Horizontal Detail H1/H2, Vertical Detail V1/V2 and Diagonal Detail D1/D2; axes in pixels)

Fig. 9 L2 reconstructed image (panels: Input image; 2-level reconstructed image)

Fig. 10 Output image for DWT


Table 1 Comparison between wavelets and other compression techniques

Parameter                                DCT        DWT
Performance under decoding errors        Moderate   High
Performance under transmission errors    Moderate   High
HVS matching                             Low        Higher
Frequency resolution characteristics     Low        Higher
Computation costs                        Low        High
Time taken to fully compress             Low        High
Quality of compressed images             Low        High
Performance in case of noise             Moderate   Good

References

1. Hnesh, A. M. G., & Demirel, H. (2016). DWT-DCT-SVD based hybrid lossy image compression technique. In IEEE IPAS'16: International Image Processing Applications and Systems Conference.
2. Agarwal, N., & Sharma, H. (2013). An efficient pixel-shuffling based approach to simultaneously perform image compression, encryption and steganography. IJCSMC, 2(5), 376–385.
3. Villasenor, J. D., Belzer, B., & Liao, J. (1995). Wavelet filter evaluation for image compression. IEEE Transactions on Image Processing, 4(8), 1053–1060.
4. Calderbank, A. R., Daubechies, I., Sweldens, W., & Yeo, B. L. (1998). Wavelet transforms that map integers to integers. Applied and Computational Harmonic Analysis, 5(3), 332–369.
5. Chao, H., Fisher, P., & Hua, Z. (1997). An approach to integer reversible wavelet transformations for lossless image compression. Technical report, Infinop Inc., 1997. http://www.infinopcom/infinop/html/whitepaper.html.
6. Gomes, J., & Velho, L. (1997). Image processing for computer graphics. Springer Science & Business Media.
7. Salomon, D. (2004). Data compression: The complete reference. Springer Science & Business Media.
8. Stern, A., & Javidi, B. (2006). Three dimensional sensing, visualization, and processing using integral imaging. In Proceedings IEEE, special issue on 3-D technologies for imaging and display (vol. 94, no. 3, pp. 591–607); Yeom, S., Stern, A., & Javidi, B. (2004). Compression of 3-D color integral images. Opt. Express, 12, 1632–1642; Shortt, A., Naughton, T. J., & Javidi, B. (2006). Compression of digital holograms of three-dimensional objects using wavelets. Opt. Express, 14, 2625–2630.
9. Starosolski, R. (2015). Application of reversible denoising and lifting steps to DWT in lossless JPEG 2000 for improved bitrates. Signal Processing: Image Communication, 39, 249–263, Elsevier.
10. Rathod, M., & Khanapuri, J. (2017). Comparative study of transform domain methods for image resolution enhancement of satellite image. IEEE Xplore.

Topic Modeling on Twitter Data and Identifying Health-Related Issues Sandhya Avasthi

1 Introduction

Social media platforms have become an integral part of people's lives in the last decade [1], and text mining has become an important technique to analyze users' conversations over such platforms. Several kinds of research have utilized social media for the analysis of real-world events and for understanding ongoing trends, e.g., news events, natural disasters, user sentiments and political opinions [2]. The most widely used form of social media is Twitter, with millions of users posting "tweets" every day [3]. Such online tweets can be easily accessed by streaming tools. These social network portals offer new options for data acquisition and research. The services let users create a profile, define a list of users with whom they want to connect, and view connections. Twitter provides a portal where users can create and exchange content with a larger audience [3–5]. Twitter operates in real time through small and simple messages known as "tweets". The tweets of a user are available to all of his "followers", i.e., others who subscribe to that user's profile. There are 269 million users of Twitter worldwide as of August 2019, of whom 37% are between the ages of 18 and 29. The most commonly analyzed disease in social media is influenza [6]. Researchers have found the topic of influenza in data from Twitter [7] by applying methods like supervised classification, social network analysis and linear regression. Researchers have been extensively using social media to study dental problems [8, 9], cardiac arrest, cholera, mood and mental health, and alcohol, tobacco and drug use. Twitter is valuable in that it provides real-time data, unlike surveys that can take weeks or years to provide relevant information. Sometimes users share information that they do not tell their doctors, so in this case Twitter becomes a source of new information.


According to the World Health Organization (WHO), India is home to 12% of the world's smokers. Approximately 10 million people die each year due to tobacco use in India [10]. India is the second-largest tobacco consumer in the world and so faces a heavy burden of tobacco-related life-threatening disease [11]. The problem of effectively identifying health-related topics from Twitter's large collected text data is presented in this paper. Past studies have focused on identifying topics on the basis of a high frequency of terms in text documents. Such topic modeling faces difficulties in detecting low-frequency topics [7, 12]. They mainly focus on the frequency distribution of words to generate topic models. Use of a traditional topic model creates problems because public health-related topics use such words less frequently. The main focus in this research is on the following questions:

• How to use topic modeling to effectively identify public health topics?
• Which topics are discussed among users related to tobacco use?
• Which topics are discussed among users related to alcohol use?
• How are tobacco use and alcohol use correlated?

A topic is defined as a distribution over a definite set of vocabulary. The goal of topic modeling is to automatically discover the topics from a collection of documents. Here, topic modeling is performed on test data on tobacco and alcohol use in India and the world. The use of tobacco among people is a major cause of concern, specifically the use of cigarettes and other local versions. Using latent Dirichlet allocation (LDA) for topic modeling on the collected dataset gives popular or trending topics. The main problem in topic modeling is to find a hidden structure or theme that represents the collection well. It is a very simple way to provide algorithmic answers to manage, annotate and organize large collections of texts or documents. The following section covers the steps for sampling and analysis of tweets. Trending terms and topics are analyzed from the Twitter dataset as well as from the subset of tweets generated according to tobacco- and alcohol-related terms; the trending topics are highlighted along with the limitations of the methods used. In the last section, the main findings and results are discussed, as well as future directions.

2 Methodology

The method of collecting tweets and sampling is introduced here; also, the technique of analyzing tweets through topic modeling is discussed. Initially, a large dataset extracted through the Twitter API is modeled to find health-related problems. In the next step, a smaller subset based on "tobacco" and "alcohol" is generated. Topic modeling is done on both datasets.


2.1 Data Collection

The Twitter dataset was collected from different time periods and also from different regions. The data collection was done using the streaming API, as Twitter data is freely available to users having a Twitter account. The selection of tweets was based on matching against selected keywords. In the experiment, roughly 250 health keywords are considered, and this selection is made by identifying words matching a collection of health-related tweets. The tweets were collected with the help of R packages like twitteR [13] and rtweet [14], which provide functions for tweet scraping and for saving tweets in different formats. By using the Twitter search API "geocode" parameter, one can retrieve the most recent tweets from a specific area or from states within a country [3]. The "geocode" parameter needs values for latitude, longitude and radius length. The tweets were gathered from multiple regions spread across states. To do topic modeling, the dataset was preprocessed by removing foreign words and symbols and replacing all links within the dataset by the term "link". In total, 50,000 messages from various users were collected through this process. More than 15,000 key phrases were collected from two Web sites and are related to symptoms, illnesses and treatments. Also, the words "sick" and "doctor" were added. The importance of these key phrases lies in the data filtering steps. Many selected words come from Web sites that consumers visit, because their language is more likely to match the informal style of language used in an application like Twitter.
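The paper performs collection and cleaning in R; purely as an illustration, the following Python sketch shows the kind of preprocessing described above, replacing links by the token "link" and stripping symbols and non-English characters. The sample tweet and the regular expressions are assumptions, not the authors' actual filters.

import re

def preprocess(tweet: str) -> str:
    # Replace every URL with the literal token "link", as described above
    tweet = re.sub(r'https?://\S+|www\.\S+', 'link', tweet)
    # Drop user handles and keep only the hashtag word, not the hash sign
    tweet = re.sub(r'@\w+', '', tweet).replace('#', ' ')
    # Keep only ASCII letters and spaces, which removes symbols, digits and
    # foreign-script tokens
    tweet = re.sub(r'[^A-Za-z\s]', ' ', tweet)
    return re.sub(r'\s+', ' ', tweet).strip().lower()

print(preprocess("Feeling sick :( see my doctor tmrw https://t.co/xyz #flu"))
# -> "feeling sick see my doctor tmrw link flu"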

2.2 Latent Dirichlet Allocation (LDA)

The most extensively used topic modeling method is latent Dirichlet allocation (LDA) [3, 15]. Here, LDA is used to perform topic modeling on the extracted Twitter dataset. Being one of the most used topic modeling algorithms, LDA learns a set of topics from words that are likely to occur together in documents [16]. A single topic is represented by a multinomial distribution over words. From the observed documents and the words in each document, the model recovers the hidden topic structure and creates per-document topic distributions. The per-document topic distribution is represented by P(topic|document), and the per-topic word distributions are P(word|topic). LDA is a Bayesian model governed by Dirichlet distributions with hyperparameters α and β. Each word in the text corpus is independent given the parameters. Topic models are known as unsupervised models because they self-organize words into clusters according to topics and associate documents with those topics. In the experiments, a variant of LDA is used which is based on additional properties like common and non-topical words. Modeling in this way gives less noisy topics. Under this assumption, each word is generated using the standard LDA model with probability p; with probability 1 − p the word comes from the background distribution [17, 18].
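As a small illustration of per-topic word distributions, the sketch below fits LDA to a toy corpus using scikit-learn; it is not the configuration used in the paper (which goes up to 100 topics over more than 200 iterations), and the documents, the topic count and the priors are assumptions.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny stand-in corpus; in the paper each document is a preprocessed tweet
docs = [
    "quit smoking cigarettes cough lungs doctor",
    "beer whisky party drunk hangover morning",
    "hookah bar friends smoke weekend",
    "gym workout diet weight loss running",
    "chest pain blood pressure heart doctor hospital",
    "drink alcohol liver damage health risk",
]

# Bag-of-words counts; stop words are removed as in the described pipeline
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

# LDA: doc_topic_prior / topic_word_prior play the roles of alpha and beta
lda = LatentDirichletAllocation(n_components=3, max_iter=200, random_state=0)
lda.fit(X)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")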


2.3 Topic Aspect Model

In this model, a document is partitioned into a mixture of topics that can be well described by word distributions [19, 20]. Each topic and the words in it are related in one way or the other. One can define an "aspect" as some property that is inherent to the document, such as a theme or perspective. In this model (TAM), the topics in a document are all affected by the aspect in the same way. For example, a health-related paper might have both a computational aspect and a health aspect: "deep learning" or "neural network" gives the computational aspect, while "cancer" or "lung cancer" gives the health aspect. In this model, five variables are chosen for each token: a word (W), a topic (Z), an aspect (y), a level (l) and a route (x).

2.4 Ailment Topic Aspect Model

The Ailment Topic Aspect Model (ATAM) is used to discover the diversity of health topics being discussed on Twitter, which may correlate with survey data [21, 22]. First, LDA-based analysis discovered health-related topics around ailments, but other topics along with them too. For example, some topic clusters correspond to terms relating to symptoms that could be associated with many diseases and medical conditions. Consider the example sentence "just couldn't make it, sitting in pain and fever at home trying to read" [23]. The sentence contains two words relevant to the ailment "flu", one of which, "fever", is a symptom. The tweet also contains other words which are not about the ailment, like "at home" and "read" [24, 25]. ATAM explicitly labels each tweet with an ailment category and differentiates ailment words from other topics and non-topical words [26]. In ATAM, each tweet t belongs to a category of ailment, say at = i, with probability pi. Each word token n in tweet t is associated with two observed variables: the word type and a label that we call the "aspect". The aspect indicates whether a word is a symptom word, a treatment word or anything else, like a general word. The labels are provided as input, and the first dataset is labeled using key phrases selected according to the need.

3 Discussions and Results

3.1 Comprehensive Twitter Dataset Analysis

The dataset was extracted from Twitter using keywords such as tobacco and alcohol. The analysis of the dataset was performed using LDA and the Topic Aspect Model to produce topics for the document collections. By applying the LDA method, frequently occurring words that share a common connection were identified.


Fig. 1 Common ailment and non-ailment key phrases [3]. Available at https://journals.plos.org/plosone/article/figure?id=10.1371/journal.pone.0103408.g002

The LDA method also identified some additional terms within the area of the topics that may not be directly related to them but are relevant to the analysis. The LDA was configured to generate 100 topic distributions for single words, after removing stop words. We started at 10 topics and increased the number of topics by 10. After ten such repetitions, the model provided topics that contained unigrams and n-grams showing partial and complete cohesion. To check the accuracy, the LDA was run for more than 200 iterations. After applying the LDA model on the comprehensive dataset, the results were collected and analyzed. The model produced various health-related terms and some ongoing trends related to tobacco and alcohol use. Several health-related themes like physical activity, healthcare, consultancy, weight loss, obesity, heart attack and blood pressure were identified (Figs. 1 and 2).

3.2 Analysis of Tobacco and Alcohol Subset

The terms related to tobacco and alcohol could be "smoking", "cigar", "pipe", "hookah", "bidi", "panmasala", "beer", "drinking", "toxic", "intoxication" and "drink".


Fig. 2 Word cloud of tweets on tobacco and alcohol

Fig. 3 Top 10 words in LDA topic model for k = 20

To build the subset from the main dataset, such keywords are taken into consideration, and the query includes these words. The selection of such words depends on their relationship to tobacco and alcohol use. The term "hookah" is also included in the analysis because college students have recently grown inclined toward hookah, which has given rise to new hookah bar joints in the country [11]. The topic models were generated for k = 10 and k = 20; in Fig. 3, the first six topics and the ten most frequent terms/words are shown.

4 Conclusion

The results show that the health-related topics and ailments discovered by applying topic models are significant. These topics and health-related information are identified without the involvement of a person or survey data. The topic models used are effective in discovering trends and novel diseases. It is observed that implementing LDA on a large dataset identifies only a few health topics. The Twitter analysis gives public health researchers a better understanding of public health status and helps in solving health problems quickly. The test topics tobacco and alcohol were relevant terms in Twitter data collected over a specific period of time. Through topic modeling like LDA, the core issues relating to tobacco and mindset were determined.


Also, irrelevant or unwanted tweets can be removed, or tweets posted as health status can be isolated. The experiments were carried out on Twitter data, and Twitter came out as a potential source for understanding health-related topics, e.g., tobacco and alcohol. On the other hand, the results do not provide enough insight into short-term events like disease outbreaks. The use of a topic model like LDA in the experiments proves its utility in extracting valuable topics from large-scale text datasets. The method automates the process of removing irrelevant information and focuses only on keywords, but it requires users to provide terms while querying and preparing the relevant subset. In continuation of this research work, the overall process of topic modeling can be automated so that less user involvement is required.

References 1. Jordan, S., Hovet, S., Fung, I., Liang, H., Fu, K. W., & Tse, Z. (2019). Using Twitter for public health surveillance from monitoring and prediction to public response. Data, 4(1), 6. 2. Stieglitz, S., Mirbabaie, M., Ross, B., & Neuberger, C. (2018). Social media analytics–challenges in topic discovery, data collection, and data preparation. International Journal of Information Management, 39, 156–168. 3. Paul, M. J., & Dredze, M. (2014). Discovering health topics in social media using topic models. PLoS ONE, 9(8), e103408. 4. Prier, K. W., Smith, M. S., Giraud-Carrier, C., & Hanson, C. L. (2011). Identifying healthrelated topics on twitter. In International conference on social computing, behavioral-cultural modeling, and prediction (pp. 18–25). Berlin, Heidelberg: Springer. 5. Beykikhoshk, A., Arandjelovi´c, O., Phung, D., Venkatesh, S., & Caelli, T. (2015). Using Twitter to learn about the autism community. Social Network Analysis and Mining, 5(1), 22. 6. Culotta, A. (2010). Towards detecting influenza epidemics by analyzing Twitter messages. In Proceedings of the first workshop on social media analytics (pp. 115–122). acm. 7. Culotta, A. (2013). Lightweight methods to estimate influenza rates and alcohol sales volume from Twitter messages. Language resources and evaluation, 47(1), 217–238. 8. Kalyanam, J., Katsuki, T., Lanckriet, G. R., & Mackey, T. K. (2017). Exploring trends of nonmedical use of prescription drugs and polydrug abuse in the Twitter sphere using unsupervised machine learning. Addictive Behaviors, 65, 289–295. 9. Bosley, J. C., Zhao, N. W., Hill, S., Shofer, F. S., Asch, D. A., Becker, L. B., et al. (2013). Decoding twitter: Surveillance and trends for cardiac arrest and resuscitation communication. Resuscitation, 84(2), 206–212. 10. Mohan, P., Lando, H. A., & Panneer, S. (2018). Assessment of tobacco consumption and control in India. Indian Journal of Clinical Medicine, 9, 1179916118759289. 11. Nazar, G. P., Chang, K. C., Srivastava, S., Pearce, N., Karan, A., & Millett, C. (2019). Impact of India’s National Tobacco Control Programme on bidi and cigarette consumption: A differencein-differences analysis. Tobacco control. 12. Paul, M. J., Sarker, A., Brownstein, J. S., Nikfarjam, A., Scotch, M., Smith, K. L., & Gonzalez, G. (2016). Social media mining for public health monitoring and surveillance. In Biocomputing 2016: Proceedings of the pacific symposium (pp. 468–479). 13. Gentry, J. (2015). twitteR: R Based Twitter Client. R package version 1.1.9. https://CRAN.Rproject.org/package=twitteR. 14. Kearney, M. W. (2019). rtweet: collecting twitter data. R package version 0.6.9 Retrieved from https://cran.r-project.org/package=rtweet.


15. Blei, D. M., Ng, A. Y., Jordan, & M. I. (2003). Latent dirichlet allocation. Journal of Machine Learning Research 3, 993–1022. 16. Jelodar, H., Wang, Y., Yuan, C., Feng, X., Jiang, X., Li, Y., & Zhao, L. (2019). Latent Dirichlet Allocation (LDA) and Topic modeling: Models, applications, a survey. Multimedia Tools and Applications, 78(11), 15169–15211. 17. Chemudugunta, C., Smyth, P., & Steyvers, M. (2007). Modeling general and specific aspects of documents with a probabilistic topic model. In Advances in Neural Information Processing Systems (pp. 241–248). 18. Paul, M. J. (2012). Mixed membership Markov models for unsupervised conversation modeling. In Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning (pp. 94–104). Association for Computational Linguistics. 19. Paul, M., & Girju, R. (2010). A two-dimensional topic-aspect model for discovering multifaceted topics. In Twenty-fourth AAAI conference on artificial intelligence. 20. Parker, J., Wei, Y., Yates, A., Frieder, O., & Goharian, N. (2013). A framework for detecting public health trends with twitter. In Proceedings of the 2013 IEEE/ACM international conference on advances in social networks analysis and mining (pp. 556–563). ACM. 21. Twitter API documentation. http://dev.twitter.com/doc. 22. Boyd, D. M., & Ellison, N. B. (2007). Social network sites: Definition, history, and scholarship. Journal of computer-mediated Communication, 13(1), 210–230. 23. Chew, C. (2010). Pandemics in the age of twitter: A content analysis of the 2009 h1n1 outbreak (Doctoral dissertation). 24. Hoang, T. A., & Lim, E. P. (2017). Modeling topics and behavior of microbloggers: An integrated approach. ACM Transactions on Intelligent Systems and Technology (TIST), 8(3), 44. 25. Yang, S. H., Kolcz, A., Schlaikjer, A., & Gupta, P. (2014). Large-scale high-precision topic modeling on twitter. In Proceedings of the 20th ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1907–1916). ACM. 26. Wagner, C., Singer, P., Posch, L., & Strohmaier, M. (2013). The wisdom of the audience: An empirical study of social semantics in twitter streams. In Extended semantic web conference (pp. 502–516). Berlin, Heidelberg: Springer.

Effort Estimation Using Hybridized Machine Learning Techniques for Evaluating Student’s Academic Performance A. J. Singh and Mukesh Kumar

1 Introduction

Machine learning (ML) is an application area of Artificial Intelligence used to develop system programs that automatically learn from the input given by the user and produce a result. Nowadays, different companies are using this technology to automate their working [1]. ML algorithms are further divided into different categories, like supervised and unsupervised algorithms. In a supervised ML algorithm, the input data given to train the program is already labelled with some class. On the other hand, in unsupervised ML algorithms, the classes of the input data provided to train the algorithm are not known in advance [2]. Artificial intelligence (AI) refers to software technologies that make a computer act and think like a human being. It is an extension of computer programming that can execute responsibilities that usually require human knowledge. Data Mining (DM) is another technology in the above categories, but it limits itself to analysis and to finding hidden information in a given dataset. It uses ML algorithms to perform the analysis and draw a conclusion. DM application areas vary from telecommunication, marketing, production and hospitality to the medical and education sectors [8]. The study of Data Mining in the education application area is recognised as Educational Data Mining (EDM). In EDM, we analyse the student dataset, which is collected from different sources, to predict the student result, placement, dropout and the student's progress in academics. Predicting student academic performance (SAP) is essential for any organisation to compete with others in the same market [9].


Effort estimation means predicting the number of work units required to perform a particular assignment based on an understanding of similar projects and other project features that are assumed to be associated with the effort. The functions of the software application are the input, and the effort is what we want to predict [10]. The process is used to predict the effort (work units) required to perform a particular task, based on knowledge of similar projects and other project features related to the effort. It is essential for the organisation, quality and success of any software application development. The commonly used categories of effort estimation are expert estimation, algorithmic estimation and machine learning [11]. In this contribution, different machine learning algorithms are compared, and it is discussed which algorithm is more suitable in which situation.

2 Literature Survey

Many effort estimation models have been developed in recent years and have been surveyed. The work done by different researchers is discussed here. Malhotra and Jain presented a paper titled "Software Effort Prediction using Statistical and Machine Learning Methods" [2]. In this paper, the authors estimate and compare different machine learning algorithms like Linear Regression, Artificial Neural Network, Decision Tree, Support Vector Machine and Bagging techniques on a software project dataset. In their work, they used a dataset taken from 499 different software projects. Initially, the dataset contained 19 features, but after pre-processing only ten features were selected using a feature selection algorithm (the CFS algorithm). In their results, they found the estimation of the Decision Tree algorithm to be very good compared to the other machine learning algorithms taken into consideration. Bhatnagar and Ghose et al. presented a paper titled "Comparing Soft Computing Techniques for Early Stage Software Development Effort Estimation" [1]. In this paper, the authors implemented a Neural Network (NN) algorithm and an FIS approach to estimate the effort. They compared a Linear Regression Neural Network with Fuzzy Logic and found that the Fuzzy Logic approach gave better performance for effort estimation. M. Sadiq, A. Ali, S. U. Ullah et al. presented a paper titled "Prediction of Software Project Effort Using Linear Regression Model" [3]. In this paper, the authors implemented a Linear Regression (LR) algorithm for estimating the software project effort. The authors further explained the importance of the software's function point count before estimating the total effort. In the study, the value of the MMRE was found to be 0.1356. Saini and Khalid presented a paper titled "Empirical Evaluation of machine learning techniques for software effort estimation" [4]. The authors implemented Decision Tree, Multi-layer Perceptron, Decision Table, Bagging and Radial Basis Networks to estimate the total effort required to develop a software project. Seref and Barisci presented a paper titled "Software Effort Estimation Using Multi-layer Perceptron (MLP) and Adaptive Neuro-Fuzzy Inference System (ANFIS)" [5]. In this paper, the authors implemented Multi-layer Perceptron and Adaptive Neuro-Fuzzy Inference System algorithms for estimating effort.


They used the NASA and Desharnais datasets and analysed them for Mean Relative Magnitude Error and Percentage Relative Error. After implementation, they found that ANFIS gave better results than MLP. Boetticher et al. presented a paper titled "An Assessment of Metric Contribution in the Construction of a Neural Network-Based Effort Estimator" [6]. In this paper, the authors ran 33,000 different Neural Network experiments on data collected from different corporate domains. The trials assessed the contribution of different metrics to programming effort. This research produced a cross-validation rate of 73.26%, using pred(30). Hodgkinson and Garratt presented a paper titled "A Neurofuzzy Cost Estimator" [7]. In this paper, the authors implemented a neuro-fuzzy machine learning algorithm to predict the total cost of a project. They compared the implemented algorithm with ML techniques like least-squares multiple Linear Regression and Neural Network algorithms. In the last few years, a lot of research has been done, but little attention has been paid to finding the total effort required to analyse student academic performance [12]. So in this paper, we try to improve the performance of the machine learning algorithms to find the overall effort needed to review student academic performance.

3 Data Processing Phase

There are many software tools available for estimating the total effort required for project development using machine learning techniques. These effort estimation tools include WEKA, MATLAB, Orange, RapidMiner, etc. [13]. In this paper, we use the MATLAB tool for the implementation of ML techniques for predicting the total effort required for project development. MATLAB is used to solve a large range of problems using techniques such as Classification, Clustering and Neural Networks. Figure 1 shows the MATLAB interface while uploading the dataset for pre-processing [14, 15]. Evaluating the academic performance of students is crucial to check for the possibilities of improvement in academics. Here, we propose a computerised solution for the performance evaluation of students using ML algorithms. A threshold-based segmentation is employed to complete the evaluation procedure over the MATLAB simulation tool. The performance of machine learning is evaluated by accuracy and mean square error.

4 Proposed Algorithm Implementation

The proposed algorithm architecture is a combination of Neural Networks and the Support Vector Machine. A hybrid classification mechanism has been designed which utilises both the structure of the Neural Network and the Support Vector Machine.


Fig. 1 MATLAB interface while uploading a dataset

First of all, the Neural Network is applied; for all Target Labels that remain non-matched through the Neural Network, the Support Vector Machine algorithm is used. The pseudo-code is as follows.

Pseudo-Code for Hybrid Algorithm:

1. [r, c] = size(gr1ele);                  // Group 1 elements
2. [r1, c1] = size(gr2ele);                // Group 2 elements
3. group = [];                             // Target set group
4. cnt = 1;
5. for i = 1 : r
6.   group(cnt) = 1;                       // Initialization of target label
7.   trainingdata(i,1) = gr1ele(i,1);      // Preparing the training data for group 1
8.   trainingdata(i,2) = gr1ele(i,2);      // Training data for group 2
9.   cnt++;                                // Increment in counter
10. foreach i groupelement
11.   group(cnt) = 2;
12.   trainingdata(cnt,1) = gr2ele(i,1);
13.   trainingdata(cnt,2) = gr2ele(i,2);
14.   cnt++;                               // Counter increment
15. End For
16. net = newff(trainingdata', group, 20);       // Initializing neural network
17. net.trainParam.epochs = 100 − 1000;          // Propagating iterations
18. net = train(net, trainingdata', group);      // Training
19. res = sim(net, trainingdata');               // Simulating
20. diff = res − group;
21. abe = (diff);
22. nonzero = [];
23. nzcount = 1;
24. for zx = 1 : numel(abe)
25.   if abe(zx) ~= 0
26.     nonzero(nzcount) = zx;
27.     nzcount = nzcount + 1;
28.   end
29. end
30. trainingdatasvmnew = trainingdata(nonzero);
31. groupnew = group(nonzero);
32. figure(1)
33. svmstruct = svmtrain(trainingdatasvmnew, groupnew, 'showplot', true);
34. res = svmclassify(svmstruct, trainingdatasvmnew, 'showplot', true);
35. group = groupnew;
36. end
37. end

Figure 2 shows the Hybrid structure of the proposed work. In this research, a Hybrid form of neural with Support Vector Machine algorithm has been used to train the system.
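The two-stage idea of the pseudo-code, train a neural network first and hand the records it fails to match to an SVM, can be sketched outside MATLAB as well. The following is a minimal scikit-learn illustration with synthetic data; the layer size, kernel and dataset are assumptions, and it is not the authors' implementation.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic two-class "student record" data standing in for gr1ele/gr2ele
X, y = make_classification(n_samples=600, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

# Stage 1: neural network trained over the full training set
net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
net.fit(X, y)
pred = net.predict(X)

# Stage 2: records the network could not match (diff != 0 in the pseudo-code)
# are handed to an SVM, which is trained only on those records
mismatch = np.flatnonzero(pred != y)
if mismatch.size and np.unique(y[mismatch]).size > 1:
    svm = SVC(kernel="linear").fit(X[mismatch], y[mismatch])
    print("SVM handles", mismatch.size, "records missed by the network")
else:
    print("network already matches (almost) all records")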

Fig. 2 Implementation of Hybrid algorithm using MATLAB


5 Result and Discussion

Below are the implementation results of the above steps, which were implemented in the MATLAB tool. Student data with different numbers of records are taken as input to the algorithm, and then the effort estimation is calculated. Table 1 gives the results.

Table 1 and Fig. 3 depict the evaluation of effort estimation for the Neural Network (NN), Support Vector Machine (SVM), Hybrid (SVM-NN), Naïve Bayes (NB) and Random Forest (RF) algorithms. The X-axis in Fig. 3 shows the total number of supplied student records, whereas the Y-axis gives the values obtained for each algorithm considered. The average value of effort estimation by the Neural Network is 39.14, by the Support Vector Machine 44.61, by the Hybrid (SVM-NN) 30.3, by Naïve Bayes 50.67 and by Random Forest 51.87.

Figure 4 and Table 2 demonstrate the comparison of the Neural Network and the Hybrid (SVM-NN) as well. The X-axis shows the count of neurons, and the Y-axis gives the values obtained after the evaluation. It can be seen that the estimated effort in the case of the Neural Network is higher than for the Hybrid (SVM-NN) algorithm. The average value of estimated effort for the Neural Network is 24.62, and for the Hybrid (SVM-NN) it is 17.44.

Figure 5 and Table 3 demonstrate the examination of the linear kernel for the Support Vector Machine algorithm. The X-axis in Fig. 5 shows the total number of supplied student records. The Y-axis gives the values obtained for the Support Vector Machine and Hybrid (SVM-NN) algorithms. It can be seen that the value for the Support Vector Machine alone is higher than for the Hybrid (SVM-NN) algorithm. The average value for the Support Vector Machine algorithm is 44.68, whereas the value in the case of the Hybrid (SVM-NN) algorithm is 25.09.

Figure 6 and Table 4 demonstrate the examination of the polynomial kernel for the Support Vector Machine algorithm. The X-axis in Fig. 6 shows the total number of supplied student records. The Y-axis gives the values obtained for the Support Vector Machine and Hybrid (SVM-NN) algorithms. It can be seen that the value for the Support Vector Machine alone is higher than for the Hybrid (SVM-NN) algorithm. The average value for the Support Vector Machine algorithm is 42.31, whereas the value in the case of the Hybrid (SVM-NN) algorithm is 24.03.

Table 1 Comparison of effort estimation by different machine learning algorithms

Student records    ANN      SVM      SVM-NN   NB       RF
100                23.56    29.334   16.11    36.336   36.7854
200                31.25    35.698   23.75    42.125   43.14
300                39.336   42.856   29.665   51.145   52.221
400                41.256   46.667   32.145   53.332   55.69
500                48.339   53.715   39.418   57.896   59.325
600                51.148   59.413   41.259   63.21    64.112


Fig. 3 Effort estimation evaluation of different machine learning algorithms

Fig. 4 Estimated effort for neural network and hybrid algorithm


Table 2 Estimated effort for Neural Network and SVM-NN

Neuron count    NN       SVM-NN
10              26.11    19.156
12              25.114   18.114
15              25.1     17.269
20              24.124   17.102
25              24.103   16.936
30              23.221   16.105

Fig. 5 Kernel type linear for Support Vector Machine algorithm

Table 3 Kernel type linear for Support Vector Machine algorithm

Student records    SVM      SVM-NN
100                29.996   18.698
200                36.214   21.112
300                42.265   23.365
400                47.114   28.145
500                53.145   29.145
600                59.362   30.1145


Fig. 6 Kernel type polynomial for support vector machine algorithm

Table 4 Kernel type polynomial for support vector machine algorithm

Student records    SVM      SVM-NN
100                25.145   17.116
200                35.654   20.2038
300                41.256   22.512
400                45.339   27.154
500                52.14    28.001

6 Conclusion

The purpose of this work is to find the most robust student features and methods of studying data that help us estimate a student's academic performance. This research contributes to identifying a variety of data acquisition algorithms for research and active student analysis. In this study, a machine learning process for evaluation has been offered, which helps to save time and to work with accurate data. Machine learning does not stop at just evaluating data; it makes its results better and faster over time than traditional processes. Machine learning makes the assessment process better and quicker, and also allows feedback to be obtained from the analysis. The Neural Network, SVM, Hybrid (SVM-NN), Random Forest and Naïve Bayes algorithms have been used for classification. The evaluation has been done based on effort estimation. The average value of effort estimation by the Neural Network is 39.14, by SVM 44.61 and by the Hybrid (SVM-NN) 30.3.


The average effort estimation by Naïve Bayes is 50.67 and by Random Forest 51.87. For the linear kernel, the average value for SVM is 44.68, whereas the value in the case of the Hybrid SVM is 25.09. For the polynomial kernel, the average value for SVM is 42.31, whereas the value in the case of the Hybrid SVM is 24.03.

Future Research: The future scope of this research work lies in applying subspace clustering techniques to high-dimensional student datasets. Usually, students' educational performance data are divided into two parts, sparse data and dense data. The proposed technique does not support sparse student academic performance data, and there is a need to improve the efficiency of the system in the future.

Acknowledgements I am grateful to my guide Dr. A. J. Singh for all the help and valuable suggestions provided during the study.

References 1. Bhatnagar, R., & Ghose, M. K. (2012). Comparing soft computing techniques for early stage software development effort estimation. International Journal of Software Engineering & Applications (IJSEA), 3(2). 2. Malhotra, R. (2011). Software effort prediction using statistical and machine learning methods. (IJACSA) International Journal of Advanced Computer Science and Applications, 2(1). 3. Sadiq, M., Ali, A., Ullah, S. U., Khan, S., & Alam, Q. (2013). International Journal of Information and Electronics Engineering, 3(3). 4. Saini, N., & Khalid, B. (2014). Empirical evaluation of machine learning techniques for software effort estimation. Journal of Computer Engineering (IOSR-JCE). e-ISSN: 2278-0661. 5. Seref, B., & Barisci, N. (2014). Software effort estimation using multilayer perceptron and adaptive neuro-fuzzy inference system. International Journal of Innovation, Management and Technology, 5(5). 6. Boetticher, G. (2001). An assessment of metric contribution in the construction of a neural network-based effort estimator. Second International Workshop on Soft Computing Applied to Soft. Engineering. 7. Hodgkinson, A. C., & Garratt, P. W. (1999). A neuro fuzzy cost estimator. In Proceeding 3rd International Conference Software Engineering and Applications (SAE) (pp. 401–406). 8. He, Y., Tan, H., Luo, W., Feng, S., & Fan, J. (2014). MR-DBSCAN: A scalable MapReducebased DBSCAN algorithm for heavily skewed data. Frontiers of Computer Science, 8(1), 83–99. 9. Dudik, J. M., Kurosu, A., Coyle, J. L., & Sejdi´c, E. (2015). A comparative analysis of DBSCAN, K-means and quadratic variation algorithms for automatic identification of swallows from swallowing accelerometry signals. Computers in Biology and Medicine, 59, 10–18. 10. Cordova, I., & Moh, T. S. (2015). DBSCAN on resilient distributed datasets. In 2015 international conference on high-performance computing & simulation (HPCS) (pp. 531–540). IEEE. 11. Rathore, N. S., Singh, V. P., & Kumar, B. (2018). Controller design for DOHA water treatment plant using grey wolf optimization. Journal of Intelligent and Fuzzy Systems, 35(5), 5329–5336. 12. Singh, V. P., Prakash, T., Rathore, N. S., Chauhan, D. P. S., & Singh, S. P. (2016). Multilevel thresholding with membrane computing inspired TLBO. International Journal on Artificial Intelligence Tools, 25(6), 1650030.


13. Nanda, G., Dua, M., & Singla, K. (2016). A hindi question answering system using machine learning approach. In 2016 international conference on computational techniques in information and communication technologies (ICCTICT) (pp. 311–314). IEEE. 14. Dua, M., Kumar, S., & Virk, Z. S. (2013). Hindi language graphical user interface to database management system. In 2013 12th international conference on machine learning and applications (vol. 2, pp. 555–559). IEEE. 15. Devi, M., & Dua, M. (2017). ADANS: An agriculture domain question answering system using ontologies. In 2017 International Conference on Computing, Communication and Automation (ICCCA) (pp. 122–127). IEEE.

Frequency Sweep and Width Optimization of Memos-Based Digital Logic Gates Parvez Alam Kohri and Manish Singhal

1 Introduction

The memristor is known as the fourth fundamental circuit element. Earlier, there existed three circuit elements: the resistor, the capacitor and the inductor. In 1971, Professor Leon Chua gave the relationship among all four cornerstones of a network system, i.e., current (i), charge (q), voltage (v) and flux (F). Furthermore, he found conclusive evidence of such symmetry and suggested the memristor as the missing element between flux and charge. The relationship between the fundamental elements is shown in Fig. 1.

2 Ease of Use

There are various memristor models. Based on experimental data, the existence of a threshold voltage, rather than a threshold current, was found in practical memristive devices (MDs). It is observed that the VTEAM model is computationally efficient and its error is less than 1.5% in terms of relative RMS error [1]. This model is an extension of the TEAM model [2] and couples the numerous benefits of a threshold voltage rather than a threshold current. The model assumes asymmetric and nonlinear switching behavior.

A. Mathematical modeling of the Voltage Threshold Adaptive Memristor (VTEAM) model: The objective of this work is to design one-bit logic gates (NAND, NOR) using the VTEAM model, which is a voltage threshold-based memristive model. This model has the highest accuracy with respect to the other available models.


Fig. 1 Relation between four cornerstones of network system [3]

To carry out this work, Cadence Virtuoso has been used. Based on experimental data, the existence of a threshold voltage rather than a threshold current was found in practical MDs, and the relative RMS error of the VTEAM model lies below 1.5% [1]. Two types of equations characterize memristor behavior:

• a state-dependent Ohm's law;
• an equation defining the internal state variable.

dW/dt = F(W, V)                                                          (1)

V(t) = M(q) · i(t)                                                       (2)

where the derivative of the internal state variable is a function of both the applied current or voltage V(t) and the internal state variable W, and the state-dependent Ohm's law depends on the memristance M(q):

M(q) = R_ON + ((R_OFF − R_ON)/(W_OFF − W_ON)) · (W − W_ON)               (3)

On substituting Eq. (3) in Eq. (2), we get:

V(t) = [R_ON + ((R_OFF − R_ON)/(W_OFF − W_ON)) · (W − W_ON)] · i(t)      (4)


where the memristance M(q) is completely dependent on the bounds (W_ON and W_OFF) of the state variable (which describes the movement of oxygen vacancies) and on R_ON and R_OFF, the ON and OFF memristances corresponding to W_ON and W_OFF. The derivative of the state variable is obtained by multiplying two functions, one depending on the internal state variable and the other depending on the memristive voltage:

dW(t)/dt = K_OFF · (V(t)/V_OFF − 1)^α_OFF · F_OFF(W),   0 < V_OFF < V
dW(t)/dt = 0,                                           V_ON < V < V_OFF
dW(t)/dt = K_ON · (V(t)/V_ON − 1)^α_ON · F_ON(W),       V < V_ON < 0      (5)

The above equation gives the derivative of the internal state variable. It contains several fitting parameters: K_OFF and K_ON, which have positive and negative values, respectively, and α_ON and α_OFF, which are constants. In the above equation, V_ON and V_OFF indicate that a memristor with a bipolar switching mechanism has two threshold voltages of equal magnitude and opposite polarity. The functions F_ON(W) and F_OFF(W) are the window functions which bound the internal state variable between W_ON and W_OFF. Figure 2 shows the pinched hysteresis curve between current and voltage. The I–V curve is drawn for a sinusoidal input with amplitude 1 V and frequency 50 MHz, and the fitting parameters are D = 3 nm, ROFF = 50, RON = 1 k, K_ON = −10 nm/s, K_OFF = 5e−4 nm/s, V_OFF = 0.3 V, V_ON = −0.3 V, which indicates that the ON switching is faster than the OFF switching.

Fig. 2 I–V curve for VTEAM model
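A minimal numerical sketch of how Eqs. (2)–(5) combine is given below. It is not the Verilog-A model used by the authors: the window functions are replaced by hard clipping of the state variable, and R_OFF (taken as 50 kΩ) and the α exponents (taken as 3) are assumptions, so the output is not calibrated against Fig. 2.

import numpy as np

# Parameter values follow the ones quoted above where given; the rest are assumed
D = 3e-9                     # device thickness (m)
w_on, w_off = 0.0, D         # bounds of the state variable
R_on, R_off = 1e3, 50e3      # ohm (R_off assumed)
k_on, k_off = -10e-9, 5e-13  # m/s
a_on = a_off = 3             # alpha exponents (assumed)
v_on, v_off = -0.3, 0.3      # threshold voltages (V)

f, amp = 50e6, 1.0           # 50 MHz sinusoid, 1 V amplitude
dt = 1.0 / (2000 * f)
t = np.arange(0.0, 3.0 / f, dt)

w, M_trace = w_on, []
for vn in amp * np.sin(2 * np.pi * f * t):
    # Equation (5): the state moves only when the voltage exceeds a threshold
    if vn > v_off:
        dw = k_off * (vn / v_off - 1.0) ** a_off
    elif vn < v_on:
        dw = k_on * (vn / v_on - 1.0) ** a_on
    else:
        dw = 0.0
    w = min(max(w + dw * dt, w_on), w_off)
    # Equations (3) and (4): state-dependent Ohm's law
    M = R_on + (R_off - R_on) * (w - w_on) / (w_off - w_on)
    M_trace.append(M)

print("memristance range: %.0f to %.0f ohm" % (min(M_trace), max(M_trace)))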


3 Proposed Structure

• NAND Gate Implemented by Memristor: MeMOS-based schematics of the NAND gate are shown in Fig. 3, where A and B are inputs to the two different memristors M1 and M2, respectively. Outputs of these memristors are connected to the common node, which is further connected to the MOS-based inverter circuit. OUT is the output of the NAND gate.

• NOR Gate Implemented by Memristor: MeMOS-based schematics of the NOR gate are shown in Fig. 4, where A and B are inputs to the two different memristors M1 and M2, respectively. Outputs of these memristors are connected to the common node, which is further connected to the MOS-based inverter circuit.

Fig. 3 NAND gate schematics

Fig. 4 NOR gate schematics


Fig. 5 NAND gate result

4 Results

On applying logic 1, i.e., a positive voltage, to memristor M1 and logic 0, i.e., a negative voltage, to M2, the change in the distribution of oxygen vacancies, which consist of positive ions, causes the resistance of M1 to increase and the resistance of M2 to decrease. This results in the output at the common node becoming logic 0. This logic 0 is applied to the inverter circuit made up of an NMOS and a PMOS. The NMOS turns OFF while the PMOS is ON. As a result, the output is the DC voltage, which is treated as logic 1. Figure 5 shows the NAND gate simulation result.

For the NOR gate, on applying logic 1, i.e., a positive voltage, to memristor M1 and logic 0, i.e., a negative voltage, to M2, the change in the distribution of oxygen vacancies, which consist of positive ions, causes the resistance of M1 to decrease and the resistance of M2 to increase. This results in the output at the common node becoming logic 1. This logic 1 is applied to the inverter circuit made up of an NMOS and a PMOS. The NMOS turns ON while the PMOS is OFF. As a result, the output is logic 0. Figure 6 shows the NOR gate simulation result.

5 Conclusion and Future Work

In this work, we designed the basic logic gates NAND and NOR using the VTEAM model in Cadence Virtuoso. Earlier, the TEAM, linear ion drift and Simmons tunnel barrier models were used; these models use a current-controlled mechanism. In the VTEAM model, a threshold voltage is required to describe the characteristics of the physical behavior accurately. The accuracy of this model was determined by using a Verilog-A model, and it was further verified by simulating in Cadence Virtuoso and comparing with the experimental results.


Fig. 6 NOR gate result

This shows that a MeMOS design implemented using Pt/TiO2−x/TiO2/Pt nanolayers is a better alternative to achieve low die area and high packing density. In the future, by making use of these logic gates, different circuits can be redesigned using memristors. In this way, we can obtain devices having low cost and low power consumption. In the future, boot-free computers, flash memory, DRAM and hard drives can be designed using the memristor.

Acknowledgements My deepest gratitude is to my Supervisor, Mr. Manish Singhal, Department of Electronics & Communication Engineering, Poornima College of Engineering, Jaipur. I have been amazingly fortunate to have an advisor who gave me the freedom to explore on my own and at the same time the guidance to recover when my steps faltered. He taught me how to question thoughts and express ideas. His patience and support helped me overcome many crisis situations and finish this dissertation. I hope that one day I will become as good an advisor to my students as he has been to me.

References

1. Biolek, Z., Biolek, D., & Biolkova, V. (2009). SPICE model of memristor with nonlinear dopant drift. Radio Engineering, 18(2), 210–214.
2. Strukov, D. B., Snider, G. S., Stewart, D. R., & Williams, R. S. (2008). The missing memristor found. Nature, 453(7191), 80–83.
3. Adhikari, S., Sah, M., Kim, H., & Chua, L. O. (2013). Three fingerprints of memristor. In IEEE Transactions on Circuits and Systems I: Regular Papers (in press).
4. Jo, S. H., Chang, T., Ebong, I., Bhadviya, B. B., Mazumder, P., & Lu, W. (2010). Nanoscale memristor device as synapse in neuromorphic systems. Nano Letters, 10(4), 1297–1301.
5. Chua, L. O. (1971). Memristor—the missing circuit element. IEEE Transactions on Circuit Theory, 18, 507–519.
6. Pickett, M. D., Strukov, D. B., Borghetti, J. L., Yang, J. J., Snider, G. S., Stewart, D. R., et al. (2009). Switching dynamics in titanium dioxide memristive devices. Journal of Applied Physics, 106(7), 074508.


7. Kvatinsky, S., Friedman, E. G., Kolodny, A., & Weiser, U. C. (2013). TEAM: Threshold adaptive memristor model. IEEE Transactions on Circuits and Systems I: Regular Papers, 60(1), 211–221.
8. Kvatinsky, S., et al. (2014). MAGIC: Memristor-aided logic. IEEE Transactions on Circuits and Systems II: Express Briefs, 61(11), 1–5.
9. Prodromakis, T., Peh, B. P., Papavassiliou, C., & Toumazou, C. (2011). A versatile memristor model with non-linear dopant kinetics. IEEE Transactions on Electron Devices, 58(9), 3099–3105.
10. Kvatinsky, S., et al. (2014). Memristor-based material implication (IMPLY) logic: Design principles and methodologies. IEEE Transactions on VLSI, 22(10), 2054–2066.

Performance Improvement of Heterogeneous Cluster of Big Data Using Query Optimization and MapReduce Pankaj Dadheech, Dinesh Goyal, Sumit Srivastava, Ankit Kumar, and Manish Bhardwaj

1 Introduction

Hadoop is scalable and capable of managing substantial data volumes with homogeneous clusters, since the data movement and processing capacities of the servers remain the same all around the network. In the case of heterogeneous clusters, each node comprises a host with different storage speeds and processing capabilities. Processing tasks are then moved from slow-end nodes to high-end nodes. This strategy works well when the amount of information to be processed is small or the job load is low, but it fails to improve the rate of job processing on Hadoop heterogeneous clusters when processing involves huge volumes of data. Improving the hardware is another option, where processors can be upgraded, but this proves quite expensive. Another solution for this problem is to improve the performance of these heterogeneous clusters using different analytical techniques to analyze structured and unstructured information [1].


The MapReduce algorithm divided the data classification activities and assigned those jobs to different computers on the system. When every task was computed and finished processing, the results were gathered and presented as the final outcome. The MapReduce algorithm was shown to be very effective, as large data sets could now be readily processed, and in a reduced time period. In 2005, Mike Cafarella and Doug Cutting, with additional members, employed the MapReduce algorithm and laid the base of Hadoop. It is able to deal with large data sets, as in big data, and provide insights and meaning from those data statistically. The data generated today can be in any kind of format, be it a simple text message, an audio clip, a video or even a file with customer information. Broadly speaking, data may be user generated or machine generated, in structured form where data is organized to offer some significance, or unstructured, in which information is in raw format. An analytical survey shows that 33% of digitally generated information can be useful for different situations, but today only 0.5% of information is used to seriously analyze the situation. Because of the limited analytical capabilities of the existing algorithmic approaches, a very major chunk of information insights and meanings has been missing. The usage of MapReduce in data warehousing systems implies that analytic queries alone are not sufficient for the data outburst experienced now. A quick look at information generation and classification reveals that the rate of data movement on/off the hard drive is slow compared to the growth of online data. Though the capacities of hard drives are also advancing day by day, processor speeds and functionality have growth rates comparable to those of hard drives. The major bottleneck of this data throughput mismatch occurs due to the movement of information on and off the hard disk drive [2].
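The map/shuffle/reduce division of work can be illustrated with a single-process word-count sketch; this is only a toy illustration of the programming model, not Hadoop itself, and the sample text chunks are invented.

from collections import defaultdict
from itertools import chain

# Map phase: each "node" turns its chunk of text into (key, 1) pairs
def map_chunk(chunk):
    return [(word.lower(), 1) for word in chunk.split()]

# Shuffle: group intermediate pairs by key
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: combine each key's values into the final count
def reduce_groups(groups):
    return {key: sum(values) for key, values in groups.items()}

chunks = ["big data needs big clusters", "hadoop clusters process big data"]
pairs = chain.from_iterable(map_chunk(c) for c in chunks)  # parallel in Hadoop
print(reduce_groups(shuffle(pairs)))
# {'big': 3, 'data': 2, 'needs': 1, 'clusters': 2, 'hadoop': 1, 'process': 1}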


2 Literature Review

Another implementation of a good clustering algorithm is presented by the authors of [4], who use K-means and TF-IDF on MapReduce for clustering distributed documents. They observe that, as data accumulates rapidly over the Internet, the number of documents to be processed has started causing overheads for Web servers, and they propose a new algorithm to cluster the data, taken in the form of documents. They introduce a preprocessing phase in which the TF-IDF weights of the documents are calculated for further processing. The experiments conducted reveal that the precision of clustering improves by incorporating the K-means and TF-IDF algorithms [5]; thousands of documents can be processed concurrently at high pace and within the stipulated time period. As earlier works of different authors make evident, big data requires a lot of mining over large data sets to find the right keyword, and this comes at a cost many organizations have to pay, because finding a difficult pattern in large sets is already a tedious task. This problem is raised as a concern in the paper by Rong [6]. The algorithm implemented there is introduced within the MapReduce framework to map into memory, as and when required, the data items from big data. The authors accessed and analyzed Web logs spanning simple databases and reached the conclusion that traditional databases do not offer a good solution for managing and accessing such data, because the data cannot reside on a single machine and data movement over long distances is not feasible. The cost of computing also increases manifold when the data items in a cluster reach peta- or zettabytes. Many developers and researchers have therefore shifted their focus to developing algorithms in the MapReduce framework, as it supports a wide variety of options and can contend with the real-world scenarios that earlier implementations lacked [7, 8]. According to the authors of [9], deep neural networks (DNN) are a good way to represent the complexity of speech recognition systems. They use singular value decomposition (SVD) to maintain and restructure the data model for learning in complex speech recognition systems; the weights of each sub-point are collected and SVD is applied to them. The paper gives a general idea of how the complexity of speech recognition systems can be managed by fine tuning, modelling the sparseness, and training the recovery systems so that recognition can be performed with faster calculations. The expenses of big data are discussed by [10], who explains how testing tools and the hardware and software counterparts of Hadoop and big data can help organizations discover new ways to use the data and monetize it well. He also advocates finding the performance bottlenecks and identifying solutions and features to improve them in further work. Another analysis, by the author of [11], suggests that clustering can really help large organizations process unlabelled data and marshal all their resources effectively. The method used consists of filtering,


uploading, and then applying K-means to find effective results. Document clustering can be improved by using methods that distinctly divide, filter, or prune documents and then hold them in the Hadoop store. The subsequent steps of organizing them into clusters help the documents to be utilized to the fullest and reduce the data management work. The authors incorporate the Davies–Bouldin index, which measures the quality of the clusters generated after the Hadoop implementation step. An application is designed around this architecture: once a document is fed into the system, it is cleansed of less important words using the Porter Stemmer, and TF-IDF is then applied to calculate the weighted frequencies [12, 13]. The clustering algorithm used here, K-means, requires numerical inputs; therefore, the weights of important words are given as input to the K-means algorithm after the document intermediary is uploaded to the Hadoop ecosystem. The subsequent steps are then used to check the quality of the clusters, where closeness within a cluster indicates similarity of documents and distance between clusters indicates their dissimilarity [14, 15]. One notable observation is that the larger the data set, the more effective the implementation becomes, which suits MapReduce frameworks that work well in highly complex environments. Cluster quality is measured with the Davies–Bouldin metric, which provides a statistical analysis of the document clustering performed so far: the closer the value is to 0, the better the cluster quality, while larger values indicate poor clusters. In this way, large organizations can more easily manage cluster quality and the impact of clustering on distributed documents [16, 17]. The preprocessing steps help in extracting what is needed from a large body of documents. Document clustering is one of the main headaches for companies that have to go through large data sets. As big data continues to grow on the Internet, it is becoming more complex and expensive to keep data processing manageable. Such preprocessing clusters are therefore highly required so that the problem can be solved and managed efficiently without introducing extra costs for storage, computation, and other resources. The intent of clustering documents has attracted many eminent researchers; one such research group has studied and designed an algorithm for text clustering [18].
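To make the TF-IDF/K-means pipeline described above concrete, the following short Python sketch (an illustration using scikit-learn, not the implementation from the cited works; the toy documents and the choice of three clusters are assumptions) combines TF-IDF weighting, K-means clustering and the Davies–Bouldin score:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

# Toy document collection standing in for the crawled/web-log documents.
documents = [
    "cheap mobile cover online",
    "buy mobile cover red",
    "laptop bag 15 inch price",
    "leather laptop bag offer",
    "running shoes size 9",
    "sports running shoes discount",
]

# TF-IDF turns documents into the numerical weights that K-means needs.
tfidf = TfidfVectorizer(stop_words="english")
weights = tfidf.fit_transform(documents)

# Cluster the weighted documents; k=3 is an assumption for this toy data.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(weights)

# Lower Davies-Bouldin values indicate better-separated clusters.
print("Cluster labels:", labels.tolist())
print("Davies-Bouldin index:", davies_bouldin_score(weights.toarray(), labels))
```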

3 Proposed Work

3.1 Methodology

In this work, we enhance the performance of heterogeneous clusters on Hadoop by applying a set of policies that improve the input/output queries and the routing of the algorithm onto the heterogeneous clusters, so that query processing is readily attached to the ideal portion of the implementation in minimal time and without raising the cost at the computing level [19]. This work covers the scenario in which we process complex information through clustering, and we handle the clustering process through query improvisation.

3.1.1 Solving Problems Through Iteration and Phasing

The information employed here for the experiments consists of Web site log files, which are in the form of unstructured data. This is a good way to establish the efficacy of the procedures, and it is closely associated with the real-world environment, where information is recorded in exactly such a condition and may arrive in any shape or form [16, 20]. The log files capture the query submission process of an e-commerce site that must take care of countless search keywords, which necessitates clustering the data items and supplying precise results in very short periods of time. In this paper, the query processing system for MapReduce is broken into successive pragmatic stages such as filtering, clustering, and enhancing the query for faster execution. The first stage involves iterative actions to clean the query and discover the dependencies among queries, so that once a query has been executed, the task tracker does not need to recalculate its dependencies. To lessen the query execution time and enhance the efficiency of the clusters, the queries are improved by clustering them into comparable groups, employing TF-IDF, calculating weights, building the similarity matrix, and finally submitting the query for execution [8]. The procedure used here to minimize cost and supply output in a comparatively small timeframe is represented in Fig. 1.

Fig. 1 Phasing methodology for heterogeneous clusters: query improvisation → clustering → load distribution process → applying MapReduce → calculating TF-IDF → query scheduling

3.1.2 Query Improvisation

Query improvisation is a procedure through which we strengthen the performance of clustering [6, 21]. When queries are brought into the parser and semantic analyzer, the dependencies among those queries are always computed. However, the moment the parsed query is sent to Hadoop's MapReduce, the dependencies that were calculated for the Hive query are lost in the translation. Once the dependencies are calculated, they are used for semantic extraction in the HiveQL processor. In the second phase, we can use these dependencies, such as logical predicates and input tables, so that the dependencies among distinct queries remain closely attached to each other during the transition [9, 22]. When the semantic gap between Hive and Hadoop is bridged by these intermediary measures, they can readily be used for clustering equivalent queries at the query level, thereby improving query job performance in small steps (Fig. 2).

Fig. 2 Query improvisation process (query improvisation split into Tasks 1–3 with aggregation, joining and sort; input of Task 1 → intermediary → output of values)

The query improvisation process uses the following properties:

1. Cascade of select (σ): a select with conjunctive conditions is equivalent to a cascade of selects upon selects:
   σc1 ∧ c2 ∧ … ∧ cn(R) ≡ σc1(σc2(…(σcn(R))…))

2. Commutativity of select: the select operation is commutative:
   σc1(σc2(R)) ≡ σc2(σc1(R))

3. Cascade of project (π): in a cascade of project operations, all but the last (outermost) projection can be ignored:
   πList1(πList2(…(πListn(R))…)) ≡ πList1(R)

4. Commuting select with project: given a projection list A1, A2, …, An, if the selection condition c involves only these attributes, the two operations can be commuted:
   πA1, A2, …, An(σc(R)) ≡ σc(πA1, A2, …, An(R))

5. Commutativity of join and Cartesian product:
   R ⋈ S ≡ S ⋈ R and R × S ≡ S × R

6. Commuting select with join (or Cartesian product):
   a. If all of the attributes in the select condition c are in relation R, then σc(R ⋈ S) ≡ (σc(R)) ⋈ S.
   b. If the condition c is composed of conditions c1 and c2, where c1 contains only attributes of R and c2 contains only attributes of S, then σc(R ⋈ S) ≡ (σc1(R)) ⋈ (σc2(S)).

7. Commutativity of set operations (∪, ∩, −): union and intersection are commutative, but the difference operation is not:
   R ∪ S ≡ S ∪ R, R ∩ S ≡ S ∩ R, R − S ≠ S − R

8. Associativity of ⋈, ×, ∪ and ∩: each of these operations is individually associative. Let θ be any one of these operators; then:
   (R θ S) θ T ≡ R θ (S θ T)

9. Commuting select with set operations (∪, ∩, −): let θ be any one of the three set operators; then:
   σc(R θ S) ≡ (σc(R)) θ (σc(S))

10. Commuting project with union: project and union operations can be commuted:
    πList(R ∪ S) ≡ (πList(R)) ∪ (πList(S))

3.1.3 Applying Clustering in Initial Phases

A lot of applications and systems use clustering to separate similar and dissimilar data items. Clustering of the data sets here helps to focus on the items that are useful and discard items that are generally not useful [23]. Looking at a log file and applying clustering to it, we get clusters of user-related data that categorize the behaviour behind item-buying selections. The interactions of a user with an e-commerce Web site can easily identify good buying patterns. When this single-user data is collected from the multi-million users of the Web site and analyzed, the results give a great deal of insight into consumer behaviour, selection, buying options, and fields that need improvement. The log files used here represent the unstructured data category and record data from different fronts, a representation of heterogeneous Hadoop clusters [24, 25]. In this iterative clustering model, when the data is captured from the user end it is optimized to separate its static and dynamic parts in the clustering process itself, where the static parts are the decisions and the dynamic parts are variants such as timestamps, invoice numbers, delimiters, etc. This could be done purely through iteration, but applying a standard clustering algorithm with suitable changes improves the efficiency as well.
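As an illustration of this cleansing step, the following minimal Python sketch (not from the original work; the regular expressions and the sample log line are assumptions) strips typical dynamic fields such as timestamps and invoice numbers from raw log lines before clustering:

```python
import re

# Patterns for dynamic parts that vary between otherwise identical log entries
# (timestamps, invoice numbers, delimiters); the exact formats are assumptions.
DYNAMIC_PATTERNS = [
    re.compile(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}"),  # ISO-like timestamps
    re.compile(r"INV-\d+"),                                  # invoice numbers
    re.compile(r"\s{2,}|\t|;"),                              # delimiters
]

def strip_dynamic_parts(line: str) -> str:
    """Keep the static (decision) part of a log line, drop dynamic tokens."""
    for pattern in DYNAMIC_PATTERNS:
        line = pattern.sub(" ", line)
    return " ".join(line.split()).lower()

if __name__ == "__main__":
    raw = "2019-07-01 10:22:31 INV-88213 user searched 'mobile cover';  added to cart"
    print(strip_dynamic_parts(raw))  # -> "user searched 'mobile cover' added to cart"
```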

3.1.4 Proposed Approach Algorithm

Step 1: Read the log transition.
Step 2: Store the transition value in Qi, where i = 1, 2, 3, …, n and n equals the number of log transitions.
Step 3: Store each query requested by a client in an array via the getQuery() method, IQ = i-th requested query, where i is the number of the query request. The queries are stored in an array to create clusters, using the method table.put(null, array, parameter1, parameter2, …, parametern); the array is converted to objects by Qi = Table.getQuery().
Step 4: Merge the pair of most similar queries (qi, qj) that do not cover the same queries. If Qi is not in Ci, store the frequency of the item and increase its value.
Step 5: Compute the similarity matrix if Qi is in Ci.
Step 6: If Qi is not in Ci, compute a new cluster: IQ = New(Ci).
Step 7: Go to Step 3.
Step 8: Go to Step 2.
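A minimal Python sketch of this iterative clustering loop follows; it is only an interpretation of the steps above, not the authors' implementation, and the token-based similarity measure, the 0.5 threshold and the sample queries are assumptions:

```python
from collections import defaultdict

def jaccard(a: set, b: set) -> float:
    """Simple token-overlap similarity between two queries (an assumed measure)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_queries(queries, threshold=0.5):
    """Iteratively assign each incoming query to the most similar existing
    cluster, or open a new cluster when no cluster is similar enough."""
    clusters = []                    # list of (representative token set, members)
    frequency = defaultdict(int)     # how often each cluster is hit
    for q in queries:
        tokens = set(q.lower().split())
        best, best_sim = None, 0.0
        for idx, (rep, _members) in enumerate(clusters):
            sim = jaccard(tokens, rep)
            if sim > best_sim:
                best, best_sim = idx, sim
        if best is not None and best_sim >= threshold:
            clusters[best][1].append(q)
            frequency[best] += 1
        else:
            clusters.append((tokens, [q]))
    return clusters, frequency

if __name__ == "__main__":
    log_queries = ["buy mobile cover", "mobile cover red", "laptop bag 15 inch"]
    for rep, members in cluster_queries(log_queries)[0]:
        print(sorted(rep), "->", members)
```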


Table 1 System specification

Nodes        RAM (in GB)   Processors   Memory (in GB)
NameNode     2             1            30
DataNode 1   1             1            30
DataNode 2   1             1            30
JobTracker   2             1            30

4 Experimental Setup

4.1 System Specifications

The system deployed here to test the proposed work contains data nodes, name nodes, job trackers, and task trackers on virtual machines [26, 27]. The machines have different capacities and hardware configurations. The nodes are created in such a manner that each node has its own specialty and tries to resemble a real-world environment (Table 1).

4.2 Load Distribution to High- and Slow-End Clusters

Once the data is initially sorted, it can be forwarded to the clusters for processing one by one. Here comes the tricky part, because in practice Hadoop does not run on the same type of hardware architecture underneath [28]. Some nodes operate as high-end clusters with good processing speeds, while others are slow-end clusters with lower speeds than their high-end counterparts. When a job or a couple of jobs is fed into the Hadoop ecosystem, they are sent to both types of nodes depending on availability. When a high-end cluster finishes processing a job, the data waiting to be processed is sent to the high-end systems. This data movement introduces costs for processing, data storage, and security as well.
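To illustrate this availability-driven distribution, the following Python sketch (illustrative only, not the Hadoop scheduler; the node speeds, job sizes and greedy earliest-free-node policy are assumptions) assigns waiting jobs to whichever node frees up first, so high-end nodes naturally absorb more of the load:

```python
import heapq

def distribute(jobs, node_speeds):
    """Greedy distribution: each job goes to the node that becomes free earliest.
    jobs: list of job sizes (arbitrary work units); node_speeds: units per second."""
    # Heap of (time_node_becomes_free, node_id)
    free_at = [(0.0, node) for node in node_speeds]
    heapq.heapify(free_at)
    assignment = []
    for size in jobs:
        t, node = heapq.heappop(free_at)          # earliest available node
        finish = t + size / node_speeds[node]     # faster nodes finish sooner
        assignment.append((size, node, finish))
        heapq.heappush(free_at, (finish, node))
    return assignment

if __name__ == "__main__":
    speeds = {"high-end-1": 4.0, "high-end-2": 4.0, "slow-end-1": 1.0}
    for size, node, finish in distribute([8, 8, 4, 4, 2, 2], speeds):
        print(f"job of size {size} -> {node} (finishes at t={finish:.1f})")
```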

4.3 Applying MapReduce

As demonstrated in the previous sections, once the system requirements, the job processing on the clusters, and the queries to implement have been decided, it becomes a whole lot easier to deal with unstructured data formats [29, 30]. The cleansed, clustered, and diversified data items can now be sent to MapReduce to extract the key-value pairs for further processing. In MapReduce, the applications can be used to classify unstructured documents into a set of meaningful terms which can provide value to the clusters. The MapReduce algorithm can help in retrieving important information from a pool of data, which cannot be achieved manually.

Implementation of MapReduce:

Map:
Step 1: Prepare the Map() input. Input unstructured data file: (log_file.txt).
Step 2: Run the Map() code. Map() is run exactly once for each K1 key value, generating output organized by key values K2. Function that divides the data set: (log_file_intermediate); split the file (log_filetranstation.txt): (lft1) + (lft2) + (lft3) + … + (lftn).
Step 3: Collect the output: (log_file_output; lfIdt, 1).

Reduce (C is the count of the individual terms):
Step 1: Input file, which was the output of the map function: (log_file_output; lfIdt, [c]).
Step 2: Run the Reduce() code. Function to summarize the intermediate terms and give a final value to all the detected valid terms in the module: S = Sum(c).
Step 3: Provide the output summing up the file values: (log_part, countId, S).
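The map and reduce steps listed above can be mirrored in miniature by the following self-contained Python sketch (an illustration of the described flow rather than the authors' Hadoop job; the sample log lines and whitespace-based term extraction are assumptions):

```python
from collections import defaultdict

def map_phase(log_lines):
    """Map: emit (term, 1) for every term in every log line."""
    for line in log_lines:
        for term in line.lower().split():
            yield term, 1

def reduce_phase(pairs):
    """Reduce: sum the counts c of each intermediate term, S = Sum(c)."""
    totals = defaultdict(int)
    for term, count in pairs:
        totals[term] += count
    return dict(totals)

if __name__ == "__main__":
    log_file = [
        "user search mobile cover",
        "user add mobile cover to cart",
        "user search laptop bag",
    ]
    counts = reduce_phase(map_phase(log_file))
    for term, total in sorted(counts.items(), key=lambda kv: -kv[1]):
        print(term, total)
```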

5 Results and Discussion

5.1 Results

The following results were established by applying the proposed methodology in the system. The heterogeneous Hadoop clusters were used to test the handling of big data sets, measuring the I/O throughput, the average rate of execution, the standard deviations and, finally, the test execution time.


Fig. 3 Comparison of different Hadoop schedulers

5.1.1 Comparison of Productivity Among Hadoop Schedulers

We compare the most efficient schedulers used in Hadoop, i.e., FIFO, HFS, and HCS. The following graphs establish that some schedulers can overtake other schedulers used in HDFS and its internal mechanisms to share a file or job over the several default systems so that they are executed and stored reliably (Fig. 3). The figure shows a trade-off among the various schedulers which may lead to optimized performance of systems in HDFS for clustering big data. The measured response time results from the data loads, which were actually the logs of an e-commerce Web site, and from the time taken by the schedulers in mapping and reducing them, storing them on the DFS, and extracting and managing them. It also establishes that, for the mounting amount of data that is typical of big data sets, Hadoop can optimize the processing rather than loading and unloading data frequently to get better insights about the data. This also shows that once data is accepted by a node, the average time to consume and process it may vary depending upon the type of scheduler being used.

5.1.2 Comparative Study of FIFO/HCS/HFS

See Table 2; Figs. 4, 5 and 6.

5.2 Performance Evaluation

The clusters created for this work are operational and derived results as formulated. The average I/O rate of data movement, the standard deviations, and the time taken to execute the loaded files prove that the formulated method is working as expected.


Table 2 Efficient job scheduling for MapReduce clusters [16]

Scheduling algorithm   Data size   Previous methodology (execution time)   Proposed methodology (execution time)
FIFO                   500 MB      4020                                    3958
                       1 GB        7700                                    7156.356
                       2 GB        9900                                    9165
                       5 GB        14,500                                  11,130
                       10 GB       Not considered                          12,780
HFS                    500 MB      3950                                    3925
                       1 GB        7200                                    7065
                       2 GB        9200                                    9085
                       5 GB        13,500                                  10,980
                       10 GB       –                                       12,400
HCS                    500 MB      3900                                    3875
                       1 GB        7000                                    6998
                       2 GB        9000                                    8912
                       5 GB        12,000                                  10,260
                       10 GB       –                                       12,610

Fig. 4 Previous versus proposed methodology


Fig. 5 Performance analysis of FIFO/HCS/HFS with query improvisation

Fig. 6 Test execution time by schedulers



5.3 Schedulers at Work

5.3.1 First In First Out (FIFO)

The scheduler used here is FIFO, which schedules the resources for the big data sets in the heterogeneous clusters while TestDFSIO is run concurrently for benchmarking.

5.3.2 Hadoop Fair Scheduler

The test results stem from running the fair scheduler algorithm on the big data sets for heterogeneous clusters in the Hadoop environment.

6 Conclusion

The conducted experimentation establishes that if a well-defined practice is followed for handling different use-case scenarios, the amount spent on computing can be lowered considerably while still benefiting from distributed programs for rapid execution. The major points resulting from this work are as follows. Calculating query dependencies: this cost can be lowered by keeping an index or query dependency table, which not only cuts back time but also reduces the total computation significantly; the processor can then be used to process an increasing amount of data on the distributed nodes of Hadoop, and the heterogeneous clusters can benefit simply by taking in the relevant jobs and shedding the insignificant ones. Job scheduling mechanics: the default scheduler of Hadoop and MapReduce is the FIFO (First In First Out) scheduler, which processes jobs in the order in which they arrive. However, in a dynamic big data setting where a considerable amount of data is to be processed, taking in everything as it comes can become unfruitful and a waste of resources, because to profit from such large chunks of data they must ultimately be sorted. Therefore, the tasks that process this information should be scheduled so that high-performance clusters handle the big data sets while slow-end clusters assist with the remaining processing. The experiments conducted under this work produced quite encouraging results, among them the selection of schedulers to schedule tasks, the placement of data in the similarity matrix, clustering before scheduling queries and, moreover, iterative mapping and reducing that binds the inner dependencies together to avoid query stalling and long execution times.


7 Limitation

Big data sets raise many concerns for computing and data storage. Since a major part of big data analytics deals with the dynamic nature of data, data is deleted, manipulated, and retrieved frequently. This ad hoc processing involves streaming data in and out of the storage systems as required, but it also introduces a large amount of additional processing.

References 1. Liu, Z. (2015). Efficient storage design and query scheduling for improving big data retrieval and analytics, Dissertation, Auburn University, Alabama. 2. Zongben, X., & Shi, Y. (2015). Exploring big data analysis: Fundamental scientific problems. Springer Annals of Data Science, 2(4), 363–372. 3. Tinetti, F. G., Real, I., Jaramillo, R., & Barry, D. (2015). Hadoop scalability and performance testing in heterogeneous clusters. In The proceedings of the 2015 international conference on parallel and distributed processing techniques and applications (PDPTA-2015), Part of WORLDCOMP’15 (pp. 441–446). 4. Wan, J., Yu, W., & Xu, X. (2009). Design and implement of distributed document clustering based on MapReduce, ISBN 978-952-5726-07-7, 2009. 5. Kamtekar, K., & Jain, R.. (2015). Performance modeling of big data (pp. 1–9). Washington University in St. Louis. 6. Das, T. K., & Mohan Kumar, P. (2013). Big data analytics: A framework for unstructured data analysis. International Journal of Engineering and Technology (IJET), 5(1), 153–156. ISSN: 0975-4024. 7. Liu, F. H., Liou, Y. R., Lo, H. F., Chang, K. C., & Lee, W. T. (2014). The comprehensive performance rating for hadoop clusters on cloud computing platform. International Journal of Information and Electronics Engineering, 4(6), 480–484. 8. Rong, Z., & De Knijf, J. (2013). Direct out-of-memory distributed parallel frequent pattern mining, ACM, BigMine’13. In Proceedings of the 2nd international workshop on big data, streams and heterogeneous source mining: Algorithms, systems, programming models and applications (pp. 55–62). ISBN: 978-1-4503-2324-6, https://doi.org/10.1145/2501221. 2501229. 9. Li, B., & Guoyong, Y. (2012). Improvement of TF-IDF algorithm based on Hadoop framework. In The 2nd international conference on computer application and system modeling (pp. 0391– 0393), Paris, France: Atlantis Press. 10. Kamtekar, K. (2015). Performance modeling of big data, May 2015. 11. Jagtap, A. (2015). Categorization of the documents using K-Means and MapReduce. International Journal of Innovative Research in Science, Engineering and Technology, ISSN: 2319-8753, 2015. 12. Das, T. K., & Kumar, P. M. (2013). BIG data analytics: A framework for unstructured data analysis. International Journal of Engineering and Technology (IJET), 5(1), 153–156. ISSN: 0975-4024. 13. Novacescu, F. (2013). Big data in high performance scientific computing. In International Journal of Analele Universit˘a¸tii “Eftimie Murgu (vol. 1, pp. 207–216). “Eftimie Murgu” University of Resita, ANUL XX, NR. 14. Rao, B. T., Sridevi, N. V., Reddy, V. K., & Reddy, L. S. S. (2011). Performance issues of heterogeneous Hadoop clusters in cloud computing. Global Journal of Computer Science and Technology, XI(VIII).


15. Xie, J., Yin, S., Ruan, X., Ding, Z., Tian, Y., Majors, J., Manzanares, A., & Qin, X. (2010). Improving MapReduce performance through data placement in heterogeneous Hadoop clusters. In Proceedings of the 19th international heterogeneity in computing workshop (pp. 1–9), Atlanta, Georgia. 16. Liu, J., et al. (2015). An efficient job scheduling for MapReduce clusters. International Journal of Future Generation Communication and Networking, 8(2), 391–398. 17. Fahad, A., Alshatri, N., Tari, Z., Alamri, A., Khalil, I., Zomaya, A. Y., Foufou, S., & Bouras, A. (2014). A survey of clustering algorithms for big data: Taxonomy and empirical analysis. IEEE Transactions on Emerging Topics in Computing, 2(3), 267–279. Digital Object Identifier https://doi.org/10.1109/tetc.2014.2330519. 18. MonaP. EMC Corporation (2014). Virtualizing Hadoop in large-scale infrastructures. 19. Aggarwal, C., & Han, J. (2014). An introduction to frequent pattern mining. In Frequent pattern mining, Springer. ISBN 978-3-319-07820-5 (Print) 978-3-319-07821-2 (Online), https://doi. org/10.1007/978-3-319-07821-2. 20. Victor, G. S., Antonia, P., & Spyros, S. (2014). CSMR: A scalable algorithm for text clustering with cosine similarity and MapReduce. In IFIP international conference on artificial intelligence applications and innovations, AIAI 2014: Artificial intelligence applications and innovations (pp. 211–220), AICT 437. 21. Novacescu, F. (2013). Big data in high performance scientific computing. EFTIMIE MURGU RESITA, ANUL XX, NR. 1, (pp 207–216). ISSN 1453–7397. 22. Xue, J., Li, J., & Gong, Y. (2013). Restructuring of deep neural network acoustic models with singular value decomposition (pp. 2365–2369), ISCA, INTERSPEECH. 23. Akdere, M., Cetintemel, U., Riondato, M., Upfal, E., & Zdonik, S. B. (2012). Learning based query performance modeling and prediction. In IEEE 28th international conference on data engineering (pp. 390–401). 24. Thirumala Rao, B., Sridevi, N. V., Krishna Reddy, V., & Reddy, L. S. S. (2011). Performance issues of heterogeneous Hadoop clusters in cloud computing. Global Journal of Computer Science and Technology, XI(VIII). 25. Kumar, A., Goyal, D., Dadheech, P. (2018). A novel framework for performance optimization of routing protocol in VANET network. Journal of Advanced Research in Dynamical & Control Systems, 10(02), 2110–2121. ISSN: 1943-023X. 26. Dadheech, P., Goyal, D., Srivastava, S., & Kumar, A. (2018). A scalable data processing using Hadoop & MapReduce for big data. Journal of Advanced Research in Dynamical & Control Systems, 10, (02), 2099–2109. ISSN: 1943-023X. 27. Dadheech, P., Goyal, D., Srivastava, S., & Choudhary, C. M. (2018). An efficient approach for big data processing using spatial boolean queries. Journal of Statistics and Management Systems (JSMS), 21(4), 583–591. 28. Dadheech, P., Kumar, A., Choudhary, C., Beniwal, M. K., Dogiwal, S. R., & Agarwal, B. (2019). An enhanced 4-way technique using cookies for robust authentication process in wireless network. Journal of Statistics and Management Systems, 22(4), 773–782. https://doi.org/10. 1080/09720510.2019.1609557. 29. Kumar, A., Dadheech, P., Singh, V., Raja, L., & Poonia, R. C. (2019). An enhanced quantum key distribution protocol for security authentication. Journal of Discrete Mathematical Sciences and Cryptography, 22(4), 499–507. https://doi.org/10.1080/09720529.2019.1637154. 30. Kumar, A., Dadheech, P., Singh, V., Poonia, R. C., & Raja, L. (2019). An improved quantum key distribution protocol for verification. 
Journal of Discrete Mathematical Sciences and Cryptography, 22(4), 491–498. https://doi.org/10.1080/09720529.2019.1637153.

Signaling Load Reduction Using Data Analytics in Future Heterogeneous Networks Naveen Kumar Srinivasa Naidu, Sumit Maheshwari, R. K. Srinivasa, C. Bharathi, and A. R. Hemanth Kumar

1 Introduction The rapid development in the field of telecommunication has pushed the boundaries and expectations of networks far beyond their current capacity. The capacity limits are partially met by introducing multiple radio and device types, making the network truly heterogeneous in nature [1, 2]. With the advent of at least a few variations of 5G, the telco industry is progressing toward a network which should be able to provide sustainable high data rates, deterministic low latencies and high reliability, to support demanding applications in an end-to-end system. Achieving these objectives simultaneously is non-trivial and therefore calls for a systematic study of the various components involved in this end-to-end orchestration [3]. First, the reason for the core network latency is its inability to handle the unexpected high-density traffic flows due to delayed information about the connecting devices. Second, the inter-network delays are owing to the handshakes and negotiations between the parties such as network entities (NEs) involved. And finally, the user equipment (UE) to access latency is introduced by continuous data requests as


well as control message signaling. As is evident, in all these delays signaling messages bring in a crucial overhead, and efforts are consistently being made to reduce the signaling procedures in future mobile networks [4–7].

1.1 Paging Procedure

The paging procedure is used by the core network to inform an idle-mode device about terminating services like packet data, circuit switched fallback (CSFB) or SMS [8–11]. Figure 1 depicts the area covered under the three levels of the paging procedure, considering the Tracking Area Index (TAI), Tracking Area List (TAL) and Border Tracking Area List (BTAL). On receiving the paging request message, the user equipment (UE) establishes the Radio Resource Control (RRC) connection to send the respective non-access stratum (NAS) message [12, 13]. In order to increase the paging success rate and use radio resources efficiently, the Mobility Management Entity (MME) can be configured to support different levels of paging (1, 2 and 3), each covering a broader area than the previous one. However, this mechanism comes at the cost of overloading base stations that are not relevant to the device. This causes unnecessary signaling load on the base station, increased computation at the base station, reduced resources (radio bearers) for device traffic and increased computing load on the core network.

Fig. 1 Area covered under the three levels of paging procedure considering TAI, TAL and BTAL level paging configured by a telecom operator in MME


In order to meet the current architectural and procedural requirements, this approach eventually demands additional network resources such as advanced hardware and higher backhaul bandwidth.

1.2 5G Requirements

Future mobile networks under consideration, such as 5G, involve heterogeneous networks with at least ten times more devices per unit area than the current state-of-the-art cases [14]. Therefore, using the existing paging strategies in future networks will further increase the network signaling load and worsen their load handling capabilities. In order to address this problem, this paper proposes a paging reduction technique based on data analytics. Using realistic mobility traces, a probabilistic model for predicting the device location based on its past mobility pattern is presented. This approach narrows the device location search to a best-predicted base station region. To further optimize the paging success, we suggest three-level paging. During our study, we found that, based on a prediction algorithm with about 80% prediction accuracy, we can achieve at least 90% reduction in the paging message load.

Paper Organization. This paper is organized as follows. In Sect. 2, the network topology and scenario are introduced with an emphasis on the current best practice for paging. Section 3 details our proposed paging method. Section 4 discusses the probabilistic model based on the data analytics algorithm. Section 5 provides the simulation results. Finally, Sect. 6 concludes the paper with a highlight on our future work.

2 Network Topology and Scenario

In order to meet the requirement of predicting user mobility, a new component, data analytics, is introduced in this section. Further, a top-level view of the paging levels is also discussed.

2.1 Network Topology

Figure 2 depicts the network architecture based on [2]. The external connectivity to the IP Multimedia Subsystem (IMS) or the Internet is through the System Architecture Evolution Gateway or the User Plane Function (SAE-GW/UPF). A new node, "data analytics", intended for computation and modelling, is introduced; it can inter-work with the 4G/5G signaling node MME/AMF (Access and Mobility Management Function) to get the predicted device location. A query-response-based API mechanism is used to periodically obtain the information at the MME/AMF.


Fig. 2 Network diagram

The query message requests the UE and its last known eNB, and the response message returns the UE-ID along with the eNB lists. For consistency with the current LTE architecture, the term eNB is used throughout this paper; it is functionally synonymous with the 5G gNB.

2.2 Conventional Paging Configuration The conventional paging configuration follows a strict hierarchy where TAI sends the paging request to all eNBs in the TAI, TAL sends the paging request to all eNBs in all the tracking areas (TAs) from TAL, and BTAL sends the paging request to all the eNBs in all the TAs from BTAL and TAL. The sequence of paging is first at TAI, second at TAL and finally, third at BTAL. The broadcast paging request is sent to all the eNBs connected through a single MME/AMF.

3 Proposed Paging Method

For every level of paging, it is possible to configure the paging area per service, which can reduce the paging load on the network. When the Serving Gateway (S-GW) receives an incoming packet for a device which is in idle mode, the S-GW notifies the MME with a "downlink data notification" message. On receiving the notification, the MME checks the UE status. If the UE status is unknown in the MME, the MME sends a response message to the S-GW with cause value "context not found" [3]. If the device status is known in the MME, then the MME can find the last reported tracking area from the UE context information. The MME can send the paging request to the eNBs based on the paging area (TAI/TAL/BTAL) configuration in the MME. The MME sends a paging request to all the eNBs defined in the first paging area. If there is no response from the UE in the defined wait time, then the MME sends a paging request to all eNBs in the second paging area and waits for the response. If there is no response from the


UE again, then the MME sends a paging request to all the eNBs in the third paging area. If the MME does not receive a paging response from the device after all levels of paging requests, then the MME sends a failure indication message to the S-GW with cause value "device not responding". If the UE receives a paging request, it establishes the RRC connection and responds back to the MME with a "service request" NAS message. On receiving the service request message, the MME initiates the S1-AP context setup procedure and establishes the bearer. Once the S1-U link is established, the buffered downlink data at the S-GW is sent to the device. In our proposed approach, whenever there is an incoming page, i.e., a "Downlink Data Notification," the MME queries the data analytics server to retrieve the device location. The server then returns three eNB lists to the MME:

• Optimistic: one eNB
• Top: 3 high-probability eNBs
• Worst: eNBs throughout the history of the UE.

Based on the eNB lists received from the data analytics server, the MME sends a paging request to all the eNBs defined in the optimistic eNB paging area. If there is no response from the device in the pre-defined wait time, then the MME sends a paging request to all eNBs in the top eNB paging area and waits for the response. Lastly, if there is no response from the device even in this case, then the MME sends a paging request to the eNBs in the worst eNB paging area. The data analytics server, as shown in Fig. 3, has internal modules to parse the incoming data, perform data mining and train the probabilistic model. Furthermore, it contains front-end modules to share the predicted result. The server continuously learns the device location and updates the DB. Stale data is automatically removed after a fixed period, as it becomes obsolete for determining the UE location.

Fig. 3 Modules of the data analytics server
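The three-level escalation described above can be sketched in Python as follows (an illustrative simulation only; the data-analytics query, the paging and wait-time functions, and the sample eNB lists are stand-ins, not the MME implementation):

```python
def page_with_prediction(ue_id, analytics, send_page, wait_for_response):
    """Three-level paging driven by the predicted eNB lists.
    analytics(ue_id) -> dict with 'optimistic', 'top', 'worst' eNB lists;
    send_page(enb_list, ue_id) pages the given eNBs;
    wait_for_response(ue_id) -> True if the UE answered within the wait time."""
    enb_lists = analytics(ue_id)
    sent = 0
    for level in ("optimistic", "top", "worst"):
        enbs = enb_lists[level]
        send_page(enbs, ue_id)
        sent += len(enbs)
        if wait_for_response(ue_id):
            return True, sent          # UE found at this paging level
    return False, sent                 # report "device not responding"

if __name__ == "__main__":
    # Toy stand-ins: the UE is camped on eNB 12 and answers when eNB 12 is paged.
    lists = {"optimistic": [7], "top": [7, 12, 31], "worst": list(range(1, 40))}
    paged = []
    ok, total = page_with_prediction(
        "ue-1",
        analytics=lambda ue: lists,
        send_page=lambda enbs, ue: paged.extend(enbs),
        wait_for_response=lambda ue: 12 in paged,
    )
    print("found:", ok, "paging messages sent:", total)
```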


4 Data Analytics Algorithm

4.1 The Probability Model

An algorithm based on a probabilistic model is used to maintain the probability of each eNB to which the device might be connected at a given point of time in the last 24 h. The model can be described as follows. Let E denote the number of different eNBs present in the dataset, and let P_t^i, for all i ∈ [1, E], be the probability matrix for each timestamp t of the day, sampled every second. The (i, t) entry of P_t^i represents the probability of the device being connected to the ith eNB ID at the given timestamp t. Missing data is obtained using the probabilistic time-averaging method explained as follows. If t1 and t2 (t2 − t1 > 1) are points of time present in the original dataset, and i1 and i2 are the eNB IDs assigned to the UE at those times, respectively, then at an unknown timestamp t, t1 < t < t2, the probability is assigned as shown in Eqs. 1 and 2. This simple yet effective approach is chosen to save compute cycles of the data analytics server.

P_t^{i1} = (t − t1) / (t2 − t1)    (1)

P_t^{i2} = (t2 − t) / (t2 − t1)    (2)

For any given day j and eNB i, D_j^i denotes the time series of the probability vector, as shown in Eq. 3.

D_j^i = [P_0^i, P_1^i, …, P_n^i]    (3)

Here, n ∈ [1, 86400] indexes the per-second samples of the day. For N days of data, the final probability model (matrix) D is the average of the probability matrices of all the days and is obtained as D = (1/N) Σ_{j=1}^{N} D_j. For our simulation, we use N = 30 so as to use a month's worth of data. The model is retrained for each new incoming day of data as shown in Eq. 4.

D_new = (D × N + D_{N+1}) / (N + 1)    (4)

Here D_{N+1} is the probability matrix for the current day. This technique allows us to dynamically inject data into the current model.
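A small NumPy sketch of this model is given below (illustrative only; it follows Eqs. 1–2 as stated, while the array shapes, the toy observations and the 1/(N + 1) normalisation in the update are assumptions made for the sketch):

```python
import numpy as np

SECONDS_PER_DAY = 86_400

def day_matrix(observations, num_enbs):
    """Build one day's probability matrix P of shape (num_enbs, 86400) from sparse
    observations [(timestamp_s, enb_index), ...], filling gaps with the
    time-averaging rule of Eqs. (1)-(2)."""
    P = np.zeros((num_enbs, SECONDS_PER_DAY))
    obs = sorted(observations)
    for (t1, i1), (t2, i2) in zip(obs, obs[1:]):
        P[i1, t1] = 1.0
        for t in range(t1 + 1, t2):                   # unknown timestamps
            P[i1, t] = (t - t1) / (t2 - t1)           # Eq. (1)
            P[i2, t] = (t2 - t) / (t2 - t1)           # Eq. (2)
    if obs:
        P[obs[-1][1], obs[-1][0]] = 1.0
    return P

def update_model(D, N, D_new_day):
    """Running-average update of the model when day N+1 arrives (Eq. 4)."""
    return (D * N + D_new_day) / (N + 1), N + 1

def predict_top_k(D, t, k=3):
    """Return the k eNB indices with the highest probability at timestamp t."""
    return np.argsort(D[:, t])[::-1][:k]

if __name__ == "__main__":
    day1 = day_matrix([(100, 0), (400, 2)], num_enbs=4)
    day2 = day_matrix([(100, 0), (400, 1)], num_enbs=4)
    D, N = day1, 1
    D, N = update_model(D, N, day2)
    print("Top eNBs at t=250:", predict_top_k(D, 250))
```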


5 Simulations and Results

The simulation is run in three steps: data collection, building the network topology and predicting the UE location for paging. During the data collection phase, the UE locations and the respective attached eNBs are retrieved at each time instant. The network topology is framed based on base station deployment statistics, in which base stations are on average about a kilometre from one another. The eNB list prediction is carried out in decreasing order of the probability with which the UE may be connected at the given point of time. This algorithm uses the historical data of the device to assign a probability to each eNB for a specific timestamp.

5.1 Device Mobility Prediction Performance

Figure 4 shows the actual versus predicted UE mobility. It is observed that the prediction results are highly correlated with the actual mobility data. The paging prediction accuracy for five randomly chosen days is presented in Table 1. It can be observed that the accuracy lies within the bounds of 88–94%, which implies that there is not much deviation and, therefore, the method is reliable. Figure 5 shows the number of paging signals at the first, second and third levels of paging. The conventional method generates paging messages for each event without prediction of the UE location. With the suggested approach, there is a 90% gain as compared with the conventional approach. The paging load at level 1 of the suggested approach is more than at level 2, and so on; this is because it is less likely to find a UE in a single predicted eNB location.

Fig. 4 Actual versus predicted UE mobility


Table 1 Paging prediction accuracy

Day     Top 1   Top 3   Top N
Day 1   89.2    91.6    93.1
Day 2   87.1    92.1    93.9
Day 3   89.5    90.6    93.2
Day 4   89.2    89.9    94
Day 5   88.1    91.4    93.3

Fig. 5 Paging request comparison

With the reduction in the backhaul and base station signaling, the saved resources can be used for device traffic, so the device can benefit from a high quality of service and an enhanced user experience.

6 Conclusion

In this paper, a simple, effective and accurate UE prediction method is proposed for future mobile networks which can reduce the control signaling load. Realistic mobility traces are used for simulation with a probabilistic model. A gain of about 90% reduction in the paging signals is shown for a network designed using the eNB and UE data. A data analytics module is proposed which works with the mobility management entity to obtain real-time information. The load on the network MME/AMF due to the additional transactions required with the data analytics is analyzed; however, it is shown that with the reduction in paging signaling, this load can be offset. Overall, this paper contributes toward the performance improvement of location management strategies in current and future cellular networks. For future work, the model will be extended to accept paging from multiple radio devices and multiple UE mobility models.

Acknowledgements The authors would like to thank Visvesvaraya Technology University (VTU), Belgaum, and WINLAB, Rutgers University, for providing essential academic freedom for this research.


References 1. Li, Q. C., Niu, H., Papathanassiou, A. T., & Wu, G. (2014). 5G network capacity: Key elements and technologies. IEEE Vehicular Technology Magazine, 9(1), 71–78. 2. Demestichas, P., Georgakopoulos, A., Karvounas, D., Tsagkaris, K., Stavroulaki, V., Lu, J., et al. (2013). 5G on the horizon: Key challenges for the radio-access network. IEEE Vehicular Technology Magazine, 8(3), 47–53. 3. Andrews, J. G., Buzzi, S., Choi, W., Hanly, S. V., Lozano, A., Soong, A. C., & Zhang, J. C. (2014). What will 5G be? IEEE Journal on Selected Areas in Communications, 32(6), 1065–1082. 4. Chen, M., Qian, Y., Hao, Y., Li, Y., & Song, J. (2018). Data-driven computing and caching in 5G networks: Architecture and delay analysis. IEEE Wireless Communications, 25(1), 70–75. 5. Maheshwari, S., Raychaudhuri, D., Seskar, I., & Bronzino, F. (2018). Scalability and performance evaluation of edge cloud systems for latency constrained applications. In 2018 IEEE/ACM Symposium on Edge Computing (SEC). IEEE. 6. Kahn, C. L., & Viswanathan, H. (2017). Access independent signaling and control. U.S. Patent No. 9,693,382. 7. Maheshwari, S., Mahapatra, S., Kumar, C. S., & Vasu, K. (2013). A joint parametric prediction model for wireless internet traffic using hidden Markov model. Wireless Networks, 19(6), 1171–1185. 8. 3GPP TS 36.413, Release 15, version 15.2.0. Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access Network (E-UTRAN); S1 Application Protocol (S1AP). 9. Di Taranto, R., Muppirisetty, S., Raulefs, R., Slock, D., Svensson, T., & Wymeersch, H. (2014). Location-aware communications for 5G networks: How location information can improve scalability, latency, and robustness of 5G. IEEE Signal Processing Magazine, 31(6), 102–112. 10. 3GPP TS 23.002, Release 9, version 9.1.0, Network architecture, 2009. 11. 3GPP TS 23.401, Release 8, version 8.6.0, General Packet Radio Services (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access, 2009. 12. 3GPP TS 24.301, Release 12, version 12.1.0, Technical Specification Group Core Network and Terminals; Non-Access-Stratum (NAS) protocol for Evolved Packet System (EPS), 2013. 13. Vasu, K., Maheshwari, S., Mahapatra, S., & Kumar, C. S. (2011). QoS aware fuzzy rule based vertical handoff decision algorithm for wireless heterogeneous networks. In 2011 National Conference on Communications (NCC). IEEE. 14. Hong, W., Baek, K. H., Lee, Y., Kim, Y., & Ko, S. T. (2014). Study and prototyping of practically large-scale mmWave antenna systems for 5G cellular devices. IEEE Communications Magazine, 52(9), 63–69.

Modelling and Simulation of Smart Safety and Alerting System for Coal Mine Using Wireless Technology Om Prakash and Amrita Rai

1 Introduction

Nowadays, mine observing and alerting systems are replacing the traditional coal mine monitoring systems that use wired networks. With the continuous increase in the exploited areas and in the depth of coal mines, observation and monitoring play an important role in coal and stone mines [1–4]. Many researchers have proposed different types of monitoring and alerting systems based on wireless technology, such as ZigBee and GSM [1], ZigBee and CAN bus [2], an integrated GPRS monitoring system [3], and a one-station-one-network module [4]. Multitasking is an important part of mine observation and alerting systems, and hence multisensor systems have recently been attracting considerable attention, especially those with diversified complexity [5] and deep learning GIS-based systems [6]. Ramesh et al. presented an embedded monitoring and alerting system for coal mines using IoT with GSM [7]. The data processing unit of any observation and alerting system consists of microcontrollers or microprocessors that collect all the data from the sensor network and transmit it over wireless units [1–8]. Most systems use the wireless technologies ZigBee and GSM for transmission of signals from the affected area to the alerting unit and security systems [9, 10]. Owing to rigorous evaluation and the demands of mine safety, further systems have been developed using ZigBee and RS-485 interfaces, WSNs, gas sensors and other wireless technologies, which are more helpful than the existing wired monitoring, sensing, processing and alerting systems [11–14].


Underground mining proves to be a hazardous undertaking as far as the safety and health of workers are concerned. These hazards are due to the different practices used for mining different minerals. This type of mining carries a more complex hazard than open-pit mining because of the problems of ventilation and the potential for collapse. However, the use of heavy machinery and the methods employed during excavation result in safety hazards in all types of mining, which has led to significant changes and enhancements of the safety level both in opencast and in underground mining [11–15]. Coal is one of the most essential fossil energy sources around the globe, and it is likewise a basic raw material for the metallurgical and chemical industries. For over a century, Indian coal mines have been supplying over 60% of the energy required by the country for domestic and industrial uses. The safety status of Indian coal mines has improved over the years, yet the current situation remains difficult for everybody associated with the business, especially because of the high fatality and serious injury rates in Indian coal mines. Although the fatality and serious injury rates show decreasing trends, the decrease has not been substantial when reviewed over the previous two decades. Further, over the past 100 years, Indian coal mines have seen several changes in their safety and well-being measures; these interventions are based on the recommendations of safety conferences aimed at lessening fatalities and serious injuries [5–16]. Despite the improving safety situation, India's coal mining industry remains one of the world's deadliest. Because of poor coal geological conditions, a low level of information, small-scale coal mines, and other reasons, the coal mining death rate is still higher than in the world's major coal-producing nations. Gas, flooding, dust, and other incidents are still the serious issues in Indian coal mines.

In this work, a prototype is developed using XBee, Arduino and the X-CTU software. X-CTU is used to assign the XBees as transmitter and receiver or as coordinator and router; a router can also be utilized as an end device if required. Wireless communication is established between end devices and routers, and the information is sent to the coordinator using the IEEE 802.15.4 standard. Based on the information, the coordinator takes appropriate action. On the basis of the prototype developed in this paper, smart safety and alerting systems are proposed to assist in monitoring, controlling and providing security over the mining environment. The ZigBee wireless technology is accurate, efficient, and advantageous for real-time tracking. Therefore, the principal objective of this paper is to design an intelligent real-time observation system so that several leaked hazardous mine gases can be recognized and protective actions can be formulated accordingly. The research investigations are carried out with the following objectives: first, to detect toxic gases and the temperature level within the mining environment efficiently; next, to establish the wireless sensor network interfacing and the communication protocol between the sensors and ZigBee; and finally, to design a more reliable, accurate, and cost-effective real-time monitoring system with safety alerts.


1.1 Wireless Sensor Network

Sometimes a wired network cannot be established or is not suitable; in such cases ZigBee, as a WSN technology, can penetrate walls [17–19]. Improvements in wireless communication technology have made it possible to lay out the communication network by keeping the communicating nodes at the required spots and switching transmission between them. With its great usefulness and essential features, a wireless network is necessary in coal mines where it is not possible to establish a wired network. Also, in a hazardous environment, in cave-ins or in an explosion, the wiring may get damaged, making the whole system useless. So, to avoid the drawbacks of a wired network, a wireless network is used. With a large number of operating nodes, and even through solid objects such as walls, the most suitable technology in mines is the WSN.

2 Proposed System Design

The block diagram representation of the proposed system is shown in Fig. 1. It depicts the details of the required hardware and software with the wireless interfacing. The key to controlling coal mine accidents is prediction of an outburst, achieved by deploying sensors and microcontrollers that raise an alarm before the atmosphere reaches a critical level. Continuous monitoring is necessary, which in turn requires a useful and accurate sensing system. Several techniques have been adopted to sense the presence of poisonous gases, and among them the use of semiconductor-type gas sensors is very useful; these sensors can be mounted in the coal mine area. The monitoring system mainly consists of two units: a sensor unit and a monitoring unit. The sensor unit contains two parts, a display unit and a transmitter unit. The display unit consists of the coordinator, and the transmitter unit consists of a router and the sensors.

Fig. 1 Block diagram of the smart safety and monitoring system (power supply, harmful gas and temperature sensors, XBee Pro S2B module with ZigBee USB interfacing, Arduino UNO (ATmega328) with fault-condition and health indications, LED and buzzer)

To test the design, a real-time monitoring system is developed using a wireless sensor network, and an artificial mining environment is simulated inside the laboratory. The system consists of the following components: an Arduino board (model Arduino UNO) as the microcontroller, an XBee (model XBee Pro S2B) as the router, a methane sensor (model MQ-4) for gas sensing, an LM35 temperature sensor for temperature measurement, and a buzzer for alarming.

3 Description of the Experimental Setup and Results

In this section, the experimental setup of the developed system is described in detail. For real operation the system is divided into three parts: first, the sensor network that senses the hazards, known as the end devices, mounted inside the mine at different risky places; second, the routers, which communicate the generated signals from the end devices to the server or security room using wireless communication; and finally, the alerting system in the security room, known as the coordinator, for indication and resolution of any kind of hazard. The proposed system consists of both hardware and software implementations for performing these tasks. The smart monitoring and alerting system is composed of the Arduino Uno embedded processor, the ZigBee communication module, sensors, LEDs and a battery. The router module of the system is designed using the Arduino Uno board and ZigBee, as shown in the experimental setup of Fig. 2a, b, and it transmits data from the end devices to the server unit. The end device plays a vital role in the proposed system for sensing and monitoring the hazards. It is a combination of sensors mounted at different places inside the mine for continuous monitoring, with a threshold set for each hazardous gas and for temperature. Figure 3 shows the experimental setup of the end device with the sensors and the ZigBee module for detection and transmission of hazardous situations. The server or security room is set up with all the processing and display devices, such as LEDs for detection, a GPS module for position monitoring, an LCD for display and a buzzer for alerting the security staff. Figure 4 shows the practical implementation of the temperature sensor and LED display.

Fig. 2 a Front view of designed router, b back view of designed router


Fig. 3 Design of experimental setup of end device with sensors

Fig. 4 Display and detection of high or low temperature using sensor and LED

Through the programming, a bearable temperature can be set as the threshold; when the temperature rises beyond the threshold, the red LED blinks as an indication of danger. Similarly, Fig. 5 shows the experimental setup for the display and detection of harmful gases using different sensors for the different gases, together with the ZigBee module and LEDs. If any gas sensor detects a toxic gas inside the mine, it sends an indication to the server room and the red LED blinks; if nothing is detected, the green LED blinks.

Fig. 5 Display and detection of harmful gases using sensor and LED
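The threshold-based decision logic described above can be illustrated with the short Python sketch below (a laboratory-style simulation of the coordinator-side logic only; the threshold values and the reading format are assumptions, and the actual prototype implements this on the Arduino/XBee hardware):

```python
# Assumed alert thresholds for the simulated mine environment.
TEMPERATURE_LIMIT_C = 45.0     # LM35 reading above which the red LED/buzzer trips
METHANE_LIMIT_PPM = 1000.0     # MQ-4 reading above which the gas alarm trips

def evaluate(reading):
    """Return the indications the coordinator should raise for one reading.
    reading: dict like {'node': 'end-device-1', 'temp_c': 39.5, 'ch4_ppm': 250}."""
    alerts = []
    if reading["temp_c"] > TEMPERATURE_LIMIT_C:
        alerts.append("RED LED + buzzer: high temperature")
    if reading["ch4_ppm"] > METHANE_LIMIT_PPM:
        alerts.append("RED LED + buzzer: toxic/explosive gas detected")
    if not alerts:
        alerts.append("GREEN LED: normal")
    return alerts

if __name__ == "__main__":
    for r in [{"node": "end-device-1", "temp_c": 39.5, "ch4_ppm": 250},
              {"node": "end-device-2", "temp_c": 52.0, "ch4_ppm": 1800}]:
        print(r["node"], "->", "; ".join(evaluate(r)))
```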


The above prototype has been tested and verified in the laboratory; actual implementation across a large mine area will be evaluated in future work. The proposed wireless network technology is suitable for small-area mining.

4 Conclusion In a mining environment, wired systems restrict safety assurance and communication capabilities. The existing wired mine security and observation systems can therefore be efficiently replaced by the smart safety and alerting system proposed in this paper, and its further improvement for practical application can help reduce mine disasters. The proposed system is practically implemented as a prototype for small-area monitoring and alerting, which helps in developing a full implementation of the system in underground mining. The system has a coordinator that indicates, through an alarm and different LEDs, when a sensor value crosses its threshold, demonstrating practical feasibility. The given structure can also be extended with ZigBee wireless image transmission capability in the future, which will improve the scalability of the mining environment. The experimental results of the proposed system show satisfactory performance, and it yields higher output value and considerable economic benefit. The future scope of the system can be extended to monitoring other safety issues such as dust, vibration, and fire by using additional sensor networks, and it will also help in the surveillance of different mining processes such as subsidence, water leakage, and leakage current flow in some areas of the mine walls.

References

1. Boddu, R., Balanagu, P., & Suresh Babu, N. (2012). ZigBee based mine safety monitoring system with GSM. International Journal of Computer & Communication Technology, 3(5).
2. Asesh Kumar, T., & Sambasiva Rao, K. (2013). Integrated mine safety monitoring and alerting system using ZigBee & Can Bus. IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE), 8(3), 82–87.
3. Mu, L., & Ji, Y. (2012). Integrated coal mine safety monitoring system. In W. Zhang (Ed.), Software engineering and knowledge engineering: Theory and practice. Advances in intelligent and soft computing (Vol. 162). Berlin: Springer.
4. Tian, J., Wang, H., Zhang, L., Zhu, P., Hu, Y., & Song, S. (2018). Key technologies of comprehensive monitoring of safety production in networked coal mine. In H. Yuan, J. Geng, C. Liu, F. Bian, & T. Surapunt (Eds.), Geo-Spatial Knowledge and Intelligence, GSKI 2017. Communications in Computer and Information Science (Vol. 848). Singapore: Springer.
5. Li, Z., & Gongxun, Y. (2007). Research of data processing in mine safety monitoring system based on multisensory information fusion. In The Eighth International Conference on Electronic Measurement and Instruments. IEEE.
6. Shen, J., Wang, H., Meng, Z., & Huo, P. (2010). Management information system design of safe coal-production based on C/S and B/S. In 2010 2nd IEEE International Conference on Information Management and Engineering. IEEE.


7. Ramesh, V., Gokulakrishnan, V. J., Saravanan, R., & Manimaran, R. (2018). Integrated mine safety alerting system using IOT with GSM. IJSRST, 5, 200–206.
8. Nutter, R. S. (1983). Hazard evaluation methodology for computer-controlled mine monitoring/control systems. IEEE Transactions on Industry Applications, IA-19(3), 445–449.
9. Deokar, S. R., & Wakode, J. S. (2017). Coal mine safety monitoring and alerting system. International Research Journal of Engineering and Technology (IRJET), 04(03).
10. Khanapure, S., & Sayyadajij, D. (2013). Coal mines monitoring and security system. International Journal of Advances in Science Engineering and Technology, 1(2).
11. Raju, P., Pratap, G., & Karthick, C. (2017). Assessment of can based coal mine safety monitoring and controlled automation. Research Journal of Pharmaceutical, Biological and Chemical Sciences, 8(3), 1298.
12. Joshi, H. C., & Das, S. (2017). Design and simulation of smart helmet for coal miners using ZigBee technology. International Journal on Emerging Technologies (Special Issue NCETST-2017), 8(1), 196–200.
13. Katara, A., Dandele, A., Chare, A., & Bhandarware, A. (2015). ZigBee based intelligent system for coal mines. In 2015 Fifth International Conference on Communication Systems and Network Technologies. IEEE. https://doi.org/10.1109/csnt.2015.142.
14. Maity, T., Das, P. S., & Mukherjee, M. (2012). A wireless surveillance and safety system for mine workers based on ZigBee. In 1st International Conference on Recent Advances in Information Technology RAIT-2012. IEEE.
15. Wu, Y., & Feng, G. (2014). The study on coal mine monitoring using the Bluetooth wireless transmission system. In 2014 IEEE Workshop on Electronics, Computer and Applications (pp. 1016–1018).
16. Gowrishankaran, G., & He, C. (2017). Productivity, safety and regulation in underground coal mining: Evidence from disasters and fatalities. Arizona Education.
17. Martín, H., Bernardos, A. M., Bergesio, L., & Tarrío, P. (2009). Analysis of key aspects to manage wireless sensor networks in ambient assisted living environments. In International Symposium on Applied Sciences in Biomedical and Communication Technologies (pp. 1–8).
18. Ransom, S., Pfisterer, D., & Fischer, S. (2008). Comprehensible security synthesis for wireless sensor networks. In Proceedings of the 3rd International Workshop on Middleware for Sensor Networks (pp. 19–24), Leuven, Belgium.
19. Alemdar, H., & Ersoy, C. (2010). Wireless sensor networks for healthcare: A survey. Computer Networks: The International Journal of Computer and Telecommunications Networking, 54(15), 2688–2710.

Green Algorithmic Impact of Computing on Indian Financial Market Krishna Kumar Singh and Sachin Rohatgi

1 Introduction In the era of big data analytics, researchers are moving toward analyzing the multidimensional behavior of systems. Dealing with data sets in this environment is one of the most challenging tasks. Because of these challenges, researchers keep innovating new methodologies; the Hadoop ecosystem and the software built upon it are the most useful tools for handling such data, but they demand more space as well as time. Time has now become the most crucial factor in the success of a technology, as people demand all services in less time. In view of these expectations, the authors introduced a green computing model applicable to financial market computing, using the Indian stock market as a case study. In previously published work, the authors introduced algorithms based on the data sets used by different agencies in the financial market through several papers: a green database management system for the intermediaries of the Indian stock market [1], a green referential database management system for the Indian stock market [2], a green database for the stock market as a case study of the Indian stock market [3], a score-based financial forecasting method incorporating different sources of information flow into an integrative river model [4], and a cloud testing and authentication model in financial market big data analytics [5]. In this paper, the authors calculate the green impact of these algorithms in the financial market by considering the time complexities of the earlier proposed algorithms. After this complexity analysis, the authors simulate the algorithms in R and find that there is no adverse impact on the chart pattern: the charts are the same before and after implementation of the algorithms on the financial data set.

K. K. Singh (B), Symbiosis International (Deemed University), Lavale, Pune, Maharashtra, India, e-mail: [email protected]; S. Rohatgi, Amity University, Sector 125, Noida, India, e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021, D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_12

2 Green Impact Calculation of Green Financial Algorithms Algorithmic complexity depends upon many factors of an algorithm, for example, its length, execution time, and input size. In all these complexity discussions, space is not an issue, as it is available in ample quantity and its cost is reducing drastically toward negligible pricing. Time is the most prominent concern today. Usually, a numerical function T(n) (time versus the input length n) is used as the notation for complexity, and computational complexity classifies algorithms according to their performance with the help of asymptotic notations. On the basis of six different algorithms, the authors tried to reduce the space, cost, and time of the databases used in stock markets. Although this process reduced the storage space of the stock market database and established a direct relationship with time and cost, the time calculation is the most crucial for these algorithms: it is always wiser to increase algorithmic efficiency than machine efficiency. In this section, the authors calculate the time complexity and the number of runs at each step of all the defined algorithms. In this method, C is treated as a constant factor and n is the number of times an activity is performed; when there is no value, it is written as NILL. Time complexity has been calculated with the traditional method of complexity calculation. The calculation of time complexity for the financial market algorithms is discussed below: Time complexity of Algorithm 1 [1] (Integer value)

Time complexity of Algorithm – 2 [2] (Referential value)

Time complexity of Algorithm 3 [3] (Old database)


Time complexity of Algorithm - 4 [3] (New database)

Time complexity of Algorithm 5 [4] (Score = Impact factors)

Time complexity of Algorithm 6 [4] (Forecasting with score-based values)

The actual time complexity of the above algorithms depends upon the data structure of the database in which the data is saved, or the structure followed by the host organization. The factors affecting time, cost, space, etc., are the type of data structure used to store the data (e.g., linked list, array) and the type of searching methodology used (e.g., linear, binary). For the sake of calculation as well as realistic prediction of the time complexity, in this case the researchers took an array as the data structure and linear search as the searching methodology. In the above tables, the authors calculated the amortized time of each step of the individual algorithms, but the total time taken by a single algorithm is the sum of all steps in that specific algorithm. So, in the next part of the paper, the authors calculated the total time taken by each algorithm individually. In the era of machine learning, users are concerned not only with an individual algorithm but with the time taken by the whole machine. Each machine runs with the help of many algorithms operating in coordination with each other, and for the machine all algorithms are equally important. So, to calculate the grand total time complexity of the machine, the authors calculated the amortized behavior of the system. Below is the amortized complexity calculation on the basis of the above assumptions.
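As a toy illustration of the stated assumption (array storage with linear search), the following Python snippet counts the comparisons performed by a lookup, showing why each search step contributes O(n) to the step totals discussed above; it is not one of the paper's algorithms.

```python
# Count the comparisons made by a linear search over an array of records.
# Best case: 1 comparison; worst case: n comparisons, so T(n) grows linearly.

def linear_search(records, key):
    comparisons = 0
    for index, record in enumerate(records):
        comparisons += 1
        if record == key:
            return index, comparisons
    return -1, comparisons

records = list(range(10_000))
_, cost_best = linear_search(records, 0)        # 1 comparison
_, cost_worst = linear_search(records, 9_999)   # n comparisons
print(cost_best, cost_worst)                    # 1 10000
```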


The time complexities of all the algorithms appear to be in line with each other, i.e., linear functions, but the exact timings depend upon the dimensions of the matrix.


3 Simulation Results of Green Computing Model with R in Indian Stock Market Below are the simulation results on the State Bank of India (SBI) data from the National Stock Exchange (NSE). SBI prices are considered from 01/01/1996 to 02/09/2017. The data was imported into R, and charts were drawn for both data sets using the Plotly library. Before implementation of the algorithm, the number of data points is 5323 (number of rows in the table) × 7 (number of fields taken into consideration) = 37,261, and after implementation of the algorithm it is 2663 (number of rows) × 7 (number of fields) = 18,641. The table considers seven fields: Date, Open, High, Low, Close, Volume, and Adj. Close. We found that both sets of charts are the same and applicable for forecasting the financial market. Below are the simulation results (before and after). In Fig. 1, the authors derived different charts with the help of the green financial database on the Indian financial market data and found that the charts are the same in both cases; that is, despite less data, there is no change in the financial charts and results. After analyzing the results of the new algorithms, the authors conclude that green algorithms are equally applicable in the financial market, as they show excellent results on the trial data of the Indian stock market. An efficient market like India is among the most volatile markets of the developing economies of the globe. The results of the algorithms are very encouraging: the system is more efficient without compromising its essence. The results show that data from these algorithms can very easily be integrated with big data technologies and will provide accurate results.
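A rough Python analogue of this experiment is sketched below (the paper itself used R with the Plotly library). The file name, the column headers, and the row-halving step are placeholders standing in for the green database algorithms defined in the cited papers; the point is only to show how the before/after charts and data-point counts can be produced.

```python
# Illustrative before/after comparison of OHLC data; the reduction rule here is
# an assumption standing in for the paper's green database algorithms.
import pandas as pd
import plotly.graph_objects as go

df = pd.read_csv("sbi_nse.csv")                # assumed columns: Date, Open, High, Low, Close, Volume, Adj Close
reduced = df.iloc[::2].reset_index(drop=True)  # stand-in reduction: keep every second row

def candlestick(frame, title):
    fig = go.Figure(go.Candlestick(x=frame["Date"], open=frame["Open"],
                                   high=frame["High"], low=frame["Low"],
                                   close=frame["Close"]))
    fig.update_layout(title=title)
    return fig

candlestick(df, "Before green computing model").show()
candlestick(reduced, "After green computing model").show()
print(df.shape[0] * 7, reduced.shape[0] * 7)   # data-point counts before and after
```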

Fig. 1 Comparison charts before and after implementation of the algorithm (bar, candlestick, OHLC, pie, time series, and 3D charts, each shown before and after the green computing model)

References

1. Singh, K. K., Dimri, P., & Singh, J. N. (2014). Green data base management system for the intermediaries of Indian stock market. In 2014 Conference on IT in Business, Industry and Government (CSIBIG) (pp. 1–5), Indore. https://doi.org/10.1109/csibig.2014.7056996. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056996&isnumber=7056912.


2. Singh, K. K., Dimri, P., & Chakraborty, S. (2014). Green referential data base management system for Indian stock market. International Journal of Computer Application, 89(3), 8–11. ISBN: 973-93-80880-18-3. http://doi.org/10.5120/15480-4197. ORCID URL: http://orcid.org/0000-0003-3849-5945.
3. Singh, K. K., Dimri, P., & Rawat, M. (2014). Green data base for stock market: A case study of Indian stock market. IEEE Xplore digital library (pp. 848–853). ISBN: 978-1-4799-4236-7. http://doi.org/10.1109/CONFLUENCE.2014.6949306. Scopus URL: http://ieeexplore.ieee.org/document/6949306/.
4. Singh, K. K., & Dimri, P. (2016). Score based financial forecasting method by incorporating different sources of information flow into integrative river model. IEEE Xplore digital library (pp. 694–697). ISBN: 978-1-4673-8202-1. IEEE URL: http://ieeexplore.ieee.org/document/7508205/.
5. Singh, K. K., Dimri, P., & Rohatgi, S. (2016). Cloud testing and authentication model in financial market Big Data analytics. IEEE Xplore digital library (pp. 242–245). ISBN: 978-1-5090-3543-4. IEEE URL: https://ieeexplore.ieee.org/document/7894528.

Coarse-Grained Architecture Pursuance Investigation with Bidirectional NoC Router Yazhinian Sougoumar and Tamilselvan Sadasivam

1 Introduction System on a chip (SoC) is the design methodology currently used by VLSI designers, based on extensive IP core reuse. Cores alone do not make up a system on chip; it also has to include an interconnection architecture and interfaces to peripheral devices [1]. Usually, the interconnection architecture is based on dedicated wires or shared busses. Dedicated wires are effective only for systems with a small number of cores, since the number of wires in the system increases dramatically as the number of cores grows; therefore, dedicated wires have poor reusability and flexibility. A shared bus is a set of wires common to several cores. This scheme is reusable and more scalable when compared with dedicated wires. On the other hand, busses allow only one communication transaction at a time; thus, all cores share the same communication bandwidth, and scalability is limited to a few dozen IP cores [2]. Using separate busses interconnected by bridges, or hierarchical bus architectures, may reduce some of these constraints, since different busses may accommodate different protocols and bandwidth needs and also increase communication parallelism. Nonetheless, scalability remains a problem for hierarchical bus architectures. A network on chip (NoC) appears as a probably better solution to implement future on-chip interconnection architectures. In the most commonly found organization, a NoC is a collection of interconnected switches, with the IP cores connected to them. NoCs present better bandwidth, scalability, and performance than shared busses. Routers are responsible for (i) receiving incoming packets; (ii) storing packets; (iii) routing these packets to a given output port; and (iv) sending packets to other switches. To achieve these tasks, four main components compose a crossbar switch [3]: a router, to define a path between input and output channels (function i); buffers, which are temporary storage devices for intermediate data (function ii); an arbiter, to grant access to a given port when multiple input requests arrive in parallel (function iii); and a flow control module, to regulate the data transfer to the next switch (function iv). The architecture and dataflow control change the design of the NoC arbiter considerably [4]. The arbitration should assure fairness, avoid starvation in scheduling, and offer high speed. The NoC's switches should offer high throughput and a cost-effective contention resolution technique when several packets from different input channels vie for the same output channel. A fast arbiter is one of the most dominant factors for high-speed NoC switches [5]. For these reasons, analysis of arbiter speed is of considerable significance in the design of networks-on-chip (NoC). The NoC router is the heart of the on-chip network and undertakes the critical assignment of coordinating the data flow. The network router operation revolves around two fundamental parts: (a) the associated control logic and (b) the data path. The data path contains a number of input and output channels to facilitate packet switching and traversal [6]. Usually, 5 × 5 input and output routers are used in a NoC. Out of the five ports, four ports are in the cardinal directions (North, South, East, and West) and one port is connected to the local processing element (PE). As in any other network, the router is the vital component in the design of the message backbone of a NoC. In a packet-switched network, the working principle of the router is to transmit an incoming packet to the destination port if it is directly connected to it, or to transmit the packet to another router connected to it [7]. It is significant that the design of a NoC router should be as simple as possible, because implementation cost increases with an increase in the design complexity of a router. The design of the router mainly consists of five parts: 1. Buffer, 2. Arbiter, 3. Crossbar, 4. Routing logic, and 5. Channel control logic. In this paper, we have designed a bidirectional router with and without contention by introducing a virtual channel allocator, a switch allocator, and a round-robin arbitration scheme. The organization of this paper is as follows: an introduction to system on chip (SoC) and network on chip (NoC) is presented in Sect. 1; the design of the unidirectional router is presented in Sect. 2; the proposed bidirectional router is presented in Sect. 3; the results of the unidirectional and bidirectional routers are compared and discussed in Sect. 4; and the conclusion of this research work is presented in Sect. 5.

Y. Sougoumar (B) · T. Sadasivam, Department of Electronics and Communication Engineering, Pondicherry 605014, India, e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021, D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_13

2 Existing System A unidirectional router performs the routing operation in a single direction. The main drawbacks of this routing logic are path failures, deadlock, and livelock. The 5 × 5 round-robin arbiter consists of a number of OR gates, AND gates, and D flip-flops. The conventional unidirectional router structure is shown in Fig. 1; it consists of a round-robin arbiter, first-in first-out (FIFO) buffers, and crossbar switches. The arbiter is used to grant the data based on priority [8]. Higher-priority data will be routed


Fig. 1 Block diagram of conventional unidirectional router

first. A FIFO buffer is used to hold the data for a short time; it acts as a temporary storage device. These FIFO buffers are used on both the input and output channel sides. Crossbar switches are used to transfer the data coming from the arbiter. Channel control logic is incorporated in this router to send the control signals to the crossbar switches. For example, input channel A can route data through the corresponding output channel A only and cannot route data via output channel B; hence, it is called a unidirectional router.
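For clarity, the round-robin arbitration used by this router can be modelled behaviourally as follows (a Python sketch rather than the Verilog HDL used in the paper): one request is granted per cycle, and the search for the next grant starts just after the previously served port, which provides the fairness mentioned above.

```python
# Behavioral sketch of round-robin arbitration: rotate priority past the port
# that was granted most recently so every requester is eventually served.

def round_robin_grant(requests, last_grant):
    """requests: list of booleans, one per input port; last_grant: index of the
    previously granted port. Returns the granted port index, or None."""
    n = len(requests)
    for offset in range(1, n + 1):
        port = (last_grant + offset) % n
        if requests[port]:
            return port
    return None

# Five ports contending over several cycles
last = 4
for cycle, reqs in enumerate([[1, 1, 0, 0, 1], [1, 1, 0, 0, 1], [0, 0, 1, 0, 0]]):
    grant = round_robin_grant([bool(r) for r in reqs], last)
    if grant is not None:
        last = grant
    print(f"cycle {cycle}: grant -> port {grant}")   # ports 0, 1, 2 in turn
```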

3 Proposed Architecture The network-on-chip (NoC) router plays an important role in system-on-chip (SoC)-based applications. Routing is not easy to perform inside an SoC: an SoC integrates many IP blocks within a single integrated circuit, and each integrated circuit contains millions of transistors, so efficient routing logic needs to be designed because the information is transferred through it. A NoC router consists of the following components: network interconnects (NI), crossbar switches, arbiters, buffers, and routing logic. Unidirectional and bidirectional routers are the two types of router mostly used in NoC architectures. The unidirectional router operates in a single direction; it does not communicate on both sides, and information travels in only one direction. The drawbacks of this routing logic are path failure, deadlock, and livelock. The main aim of the routing logic is to create the path between the source and the destination while preventing deadlock, livelock, and starvation. Deadlock is a cyclic dependency among nodes requiring access to a collection of resources. Livelock is the circulation of packets through the network without ever making any progress toward their destinations. Starvation occurs when a packet requesting a buffer waits indefinitely because the output channel is allocated to another packet [9]. A routing algorithm can be classified by three criteria: (a) where the routing decisions are taken, (b) how the path is defined, and (c) the path length. The unidirectional router includes a round-robin arbiter, first-in first-out (FIFO) buffers, and crossbar switches. The arbiter is used to access the data


based on priority: higher-priority data is routed first in the architecture. A buffer is a temporary storage device; a FIFO buffer is used to store packets or data temporarily, and both the input and output channels use buffers. Data is transferred using crossbar switches, which receive their control signals from the router's channel control logic. Contention is one of the issues handled in the routing logic: contention is nothing but competition for resources when two or more nodes try to transmit a message on the same channel at the same time [10]. To avoid the contention situation, a bidirectional network-on-chip router is introduced. The proposed bidirectional router consists of in/out ports, static RAM, a round-robin arbiter, routing logic, and a channel control module. The arbiter chooses one output from a number of inputs based on the logic used; the arbiter present in the crossbar switch has the same number of inputs and outputs. Data, request, and destination are the three quantities that come from the crossbar input: the input data is routed to the output side, and the output port address is present in the destination field. If two or more inputs send requests at the same time, the round-robin arbiter allows only one request at a time. This may cause data loss; to avoid it, a FIFO buffer and SRAM memory are used, and the stored data is transferred in the next clock cycle. This process is called a contention-free crossbar. Static RAM and dynamic RAM are the two types of memory element used in the routing logic. A DRAM cell consists of a transistor and a capacitor and needs periodic refresh to compensate for the leakage of the capacitor. SRAM contains more transistors, so it consumes more area, but it reduces the leakage power, and the additional transistors increase the reading capability. It provides a higher speed of operation when compared with DRAM (Fig. 2). The proposed bidirectional router is designed using SRAM memory to speed up the router and avoid unwanted leakage power. A virtual channel allocator and a switch allocator are used to control the channels: the virtual channel allocator virtually changes the direction of the corresponding channel, and the switch allocator removes path failures. Three methods are used to transfer data in the bidirectional NoC router: first, all input and output channels can act as master or slave [10]; second, data from an input channel is routed through the same output channel, which eliminates path failures; and third, all input data can be transferred through all output ports, which removes the livelock and deadlock problems. The proposed bidirectional NoC router provides smaller area and higher frequency than the conventional unidirectional router; the area and delay are reduced by the efficient routing logic introduced in the proposed bidirectional router.
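The contention-free behaviour described here can be illustrated with a very simplified software model: when several inputs ask for the same output in a cycle, one flit is delivered and the others stay buffered and are retried in the next cycle. This is only an illustration; in the paper the mechanism is implemented in hardware with FIFO buffers and SRAM.

```python
# Simplified model of a contention-free crossbar: losing requests are held in a
# per-output queue and served in a later cycle instead of being dropped.
from collections import deque

def route_cycle(pending):
    """pending maps output port -> deque of (input_port, flit) waiting for it.
    One flit is delivered per output port per cycle; the rest keep waiting."""
    delivered = []
    for out_port, queue in pending.items():
        if queue:
            in_port, flit = queue.popleft()
            delivered.append((in_port, out_port, flit))
    return delivered

pending = {2: deque([(0, "A"), (3, "B")]), 4: deque([(1, "C")])}
print(route_cycle(pending))  # cycle 1: flits A and C delivered, B waits
print(route_cycle(pending))  # cycle 2: flit B delivered from the buffer
```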

4 Results The design of a bidirectional NoC router with and without contention is the main goal of this research work. The proposed router is designed using Verilog HDL. Performance


Fig. 2 Architecture of bidirectional NoC router

investigation is carried out between the unidirectional and bidirectional NoC routers. The simulation process is done with ModelSim 6.3c, and the synthesis process is checked with Xilinx 10.1. The contention-free bidirectional NoC router is designed using static random-access memory (SRAM) to hold data until the next clock cycle, since SRAM offers high speed and low leakage power. Crossbar switches with a round-robin arbitration scheme are used to eliminate the path failures that occur. The simulation result of the conventional unidirectional NoC router is shown in Fig. 3. The proposed bidirectional NoC router is designed and verified through the simulation process; its simulation result is shown in Fig. 4. An active-high reset is used in the router, i.e., whenever the reset is high, all the outputs are zeros.


Fig. 3 Simulation result of unidirectional NoC router

Fig. 4 Simulation result of proposed bidirectional NoC router

All the outputs are generated when the reset is low. The data on input channel 1 is 20, on input channel 2 is 40, on input channel 3 is 60, on input channel 4 is 100, and on input channel 5 is 120. The valid-data signal goes high when valid information arrives from an input channel. Table 1 clearly shows the analysis results of the conventional and bidirectional routings: the delay of the bidirectional routing is reduced compared with the conventional unidirectional routing, and the device utilization of the bidirectional routing is also reduced (Fig. 5).


Table 1 Comparison between conventional and bidirectional routings

Parameters noticed             Conventional routing    Bidirectional routing    % Reduction
Number of slice flip-flops     1579                    1269                     19
Number of 4-input LUTs         1050                    868                      17
Number of occupied slices      1204                    981                      18
Minimum time period (ns)       13.684                  8.501                    37
Maximum frequency (MHz)        73.076                  117.636                  -

Fig. 5 Comparison graph of conventional and bidirectional routing technique

5 Conclusion In this paper, an area-efficient and high-speed bidirectional NoC router with and without contention is proposed and compared with the conventional unidirectional router. The simulation results show that the proposed bidirectional NoC router occupies a smaller chip area and has a lower delay than the conventional unidirectional NoC router. The designed router is suitable for coarse-grained architectures, whose delay can be reduced by the bidirectional routing method. It effectively reduces the delay occurring in the architecture, offering a 37% delay reduction compared with the conventional routing method, along with a 17% reduction in LUT count and an 18% reduction in the slices used in the architecture. In the future, the proposed NoC router can be used in system-on-chip applications for an efficient on-chip routing process.


References

1. Lamba, A. K., Student, M., & Assistant, B. B. S. (2014). Performance analyses of speculative virtual channel router for network-on-chip. European Scientific Journal, 3, 246–251.
2. Tran, A. T., & Baas, B. M. (2014). Achieving high-performance on-chip networks with shared-buffer routers. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 22(6), 1391–1403.
3. Cota, E., Frantz, A. P., Kastensmidt, F. L., Carro, L., & Cassel, M. (2007). Crosstalk- and SEU-aware networks on chips. IEEE Design and Test of Computers, 24, 340–350.
4. Arjunan, A., & Manilal, K. (2013). Noise tolerant and faster on chip communication using Binoc model. International Journal of Modern Engineering Research (IJMER), 3(5), 3188–3195.
5. Khodwe, A., & Bhoyar, C. N. (2013). Efficient FPGA based bidirectional network on chip router through virtual channel regulator. International Journal of Advances in Engineering Sciences, 3(3), 82–87.
6. Pote, B., Nitnaware, V. N., & Limaye, S. S. (2011). Optimized design of 2D mesh NOC router using custom SRAM and common buffer utilization. International Journal of VLSI Design and Communication Systems (VLSICS), 2(4), 179–191.
7. Cedric, K., Camel, T., Fabrice, M., & Abbas, D. (2014). Smart reliable network-on-chip. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 22, 242–255. https://doi.org/10.1109/tvlsi.2013.2240324.
8. Chang, E. J., Hsin, H. K., Lin, S. Y., & Wu, A. Y. (2014). Path-congestion-aware adaptive routing with a contention prediction scheme for network-on-chip systems. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 33(1), 113–126.
9. Chavan, M. K., & Yogeshwary, B. H. (2014). Reusability of test bench of UVM for bidirectional router and AXI bus. International Journal of Computational Engineering Research (IJCER), 4(4), 2250–3005.
10. Wanjari, M. M., & Kshirsagar, R. V. (2014). Implementation of buffer for network on chip router. International Journal of Technological Exploration and Learning (IJTEL), 3(1), 362–365.

Home Automation and Fault Detection Megha Gupta and Pankaj Sharma

1 Introduction Today, over two billion people around the world use the internet for browsing the web, sending and receiving emails, accessing multimedia content and services, playing games, using social networking applications, and many other tasks. From the old vision of 'a world where things can automatically communicate with computers and with each other, providing services for the benefit of humankind', IoT describes different types of technologies and disciplines that reach out into the real world to act on physical objects. IoT is represented as a connection of objects such as phones, TVs, and sensors to the internet, where the devices are joined and create new kinds of communication between people and things. Any person can connect to anything from anywhere at any time, and these connections extend and work over advanced dynamic networks. This technology can be applied to generate new ideas and develop many areas that make homes smart and comfortable and enhance the standard of life. The Internet of Things builds on three pillars, associated with the abilities of smart objects:

1. First pillar: it can be identified.
2. Second pillar: it can communicate.
3. Third pillar: it can interact.

The three characteristics of IoT are:

1. Anything communicates: smart things have the ability to communicate wirelessly among themselves and form networks of interconnected objects.
2. Anything is identifiable: smart things are identified with a digital name, and relationships among things can be specified within the digital domain whenever a physical interconnection cannot be established.
3. Anything is interactable: smart things can interact with the local environment by using sensing and actuation capabilities whenever present.

M. Gupta (B) · P. Sharma, Computer Science, Poornima Institute of Engineering and Technology, Jaipur, India, e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021, D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_14

2 Smart Home System In everyday life, situations arise where it is troublesome to control the house appliances: when nobody is available at home, when the user is far away from home, or when the user leaves home forgetting to switch off some appliances, which results in needless wastage of power and may also result in accidents. Sometimes, one may also need to observe the status of home appliances while staying far from home. In all of the above cases, the presence of the user is necessary to observe and control the appliances, which is not possible all the time. This shortcoming can be eliminated by connecting the home appliances to the user via some medium. Connectivity can be established with the help of GSM, the internet, Bluetooth, or ZigBee. Connecting devices via the internet is reliable, so that the remote user can monitor and control home appliances from anywhere and at any time around the world. This increases the comfort of the user by connecting the user with the appliances at home: to monitor the status of the home appliances through a mobile app, to control the appliances from any corner of the world, to understand the power consumption of each appliance, and to estimate the tariff well in advance.

2.1 Automation System The appliances are classified according to the nature of their operation/control. Appliances like a geyser or a toaster have to be switched ON/OFF at specific time intervals, and for efficient utilization the device needs to be switched ON and OFF suitably. An RTC-based system can perform this control precisely, which extends the appliance's life and saves power. Once a match takes place between the loaded time and the real time, the controller activates the appliance, and similarly, once the period is over, the controller turns OFF the appliance. The appliances are therefore controlled as per the time schedule defined by the user. Some appliances need to work only during human presence. In this proposed work, human movements are detected using a PIR sensor and the necessary automation is carried out. The required intensity in the room can be established using a smart lamp. The lamp should turn ON and


Fig. 1 Smart lamp control using LDR

OFF only during human presence, which is implemented using a motion-sensor-based lighting system. The system's performance is further improved by switching the lamps on only when there is not enough light, so that the light does not activate during the daytime; this is done by including an LDR in the system. The specified ambient intensity is set by varying the brightness of the lamp using PWM techniques, which helps in energy saving. By correct positioning of the LDR, the light intensity of the room can be maintained, as shown in Fig. 1.

2.2 Monitoring System After going a few meters away from home, the user may have doubts regarding the status of the appliances at home. In such cases, returning home and checking their status is not troublesome, but once the distance extends to a few miles, returning becomes tedious. In case of an emergency, the user needs to come back, which disrupts the user's routine; alternatively, the appliance may be left as it is, which can lead to severe damage to the device in the case of a motor or geyser. These drawbacks are overcome by remote observation of the house appliances. The PIR sensors installed at precise points in the house sense the location of the user, feeding a location awareness system that makes use of a floor-mapping algorithm for a single user; the intended usage of the PIR sensor is to observe human presence. The light-dependent resistor (LDR) placed at appropriate locations determines the light intensity at that location and sends the value to the system for further interpretation. Therefore, the LDR enhances the


feature cited above by making use of sensors to observe the surrounding factors and adjust them to the values desired by the user. Smart plugs are associated with this work; they enhance the ability to schedule the devices as well as to control the power consumption of lamps such as LED lamps. The device status is monitored periodically, and when the user sends a request, the status of the appliance is reported. The status monitoring of the appliances can be realized with the help of the flow chart shown in Fig. 2. When the device is in standby mode, as soon as the PIR sensor detects human presence, the controller calculates the room luminance value and compares it with the prefixed value. Based on the result, the lamp is either switched ON with the given brightness or switched OFF, and the cycle continues.

Fig. 2 Monitoring flow chart


2.3 Algorithm

1. IF room temperature >= 25 °C THEN fan ON (the speed of the fan increases according to the temperature) ELSE continue sensing. END IF
2. IF PIR detects human presence AND the room light level (LDR) is below the preset value THEN tube light ON ELSE continue sensing. END IF
3. IF gas value (MQ5) >= 1000 THEN alarm start ELSE continue sensing. END IF
4. IF the current sensor does not detect current THEN alarm start ELSE continue sensing. END IF
5. IF the door sensor loses the line-of-sight connection for 20 s THEN alarm start ELSE continue sensing. END IF
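A compact software rendering of the rules above is sketched below; the function interface and the lux threshold used for rule 2 are assumptions for illustration (the paper does not state a numeric light threshold), while the other thresholds follow the rules as written.

```python
# Rule-based decision step corresponding to the algorithm above. The argument
# names are hypothetical; the 300-lux figure for rule 2 is an assumed value.

def decide(temp_c, light_lux, occupied, gas_mq5, current_ok, door_link_lost_s):
    actions = {}
    actions["fan_on"] = temp_c >= 25                          # rule 1 (fan speed scales with temperature)
    actions["tube_light_on"] = occupied and light_lux < 300   # rule 2 (assumed PIR + LDR condition)
    actions["alarm"] = (gas_mq5 >= 1000                       # rule 3: gas leak
                        or not current_ok                     # rule 4: no current sensed
                        or door_link_lost_s >= 20)            # rule 5: door sensor link lost
    return actions

print(decide(temp_c=28, light_lux=120, occupied=True,
             gas_mq5=200, current_ok=True, door_link_lost_s=0))
```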


3 Controlling and Fault Detection System 3.1 Control System Remote controlling of the appliances can be performed: once the user sends a command, depending on the command received, the appliances are switched ON and OFF accordingly. Sometimes a discrepancy arises where no device has been connected or there is a fault in the existing device; in such a case, an open circuit results. With the help of a current sensing mechanism, fault detection is performed. This can be implemented by sensing the current flowing to the appliance with the help of a current sensor, as shown in Fig. 3. This helps in saving further energy as well as protecting the appliances from any damage. When the ambient light intensity increases, the brightness of the lamp is adjusted, and hence the power consumption is reduced. The simulation result giving the relationship between light intensity and power consumption of the LED light is depicted in Fig. 4.

Fig. 3 Block diagram depicting the working of current sensor

Fig. 4 Simulation showing the relation between power consumed and Lux


Table 1 Power and Lux comparison between LED using LDR and fluorescent lamp

Desired Lux (lm)   Obtained Lux using LED (lm)   Power consumed by LEDs (W)   Obtained Lux using tube light (lm)   Power consumed by tube light (W)
1057               1057                          1.951                        3450                                 36
1420               1420                          2.82                         3450                                 36
1700               1700                          4.2                          3450                                 36
2100               2100                          5.41                         3450                                 36
2600               2600                          7.08                         3450                                 36

A user-friendly environment is created at home with the help of an LCD and an input device that let the user enter the start time and the period for which a device has to remain switched ON; the same can be provided with the help of the mobile application at the user end. An LDR-based adaptive lighting system is employed to save power by varying the PWM for LED lamps instead of using fluorescent lamps with fixed power consumption. The results tabulated in Table 1 were obtained by placing the LEDs at a distance of about two feet from the fluorescent lamp, with both facing each other; about sixty LEDs were used in the setup. The power consumed by the LEDs at 2600 lm is about 7 W, whereas the power consumed by the fluorescent lamp is 36 W; this saves power at a rate of about 33 W. The results illustrating the difference in power consumption between the LED lamp and the fluorescent lamp, determined with the help of the test setup, are shown in Fig. 5. When the light is used for a period of 8 h/day, a total of 264 Wh/day is saved. On average, 7.92 kWh of energy is saved in a month, which costs about Rs. 45/month; for a year, this sums up to about Rs. 540. Hence, through the implementation of the LDR along with LEDs, energy consumption can be handled efficiently.

Fig. 5 Results showing the comparison between LED and fluorescent lamps
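The savings arithmetic quoted above can be reproduced directly; note that the per-unit tariff is not stated in the paper and is only implied by the quoted Rs. 45/month figure.

```python
# Reproducing the reported energy-saving arithmetic.
saving_w = 33                               # power saved while the light is on (W), as quoted
hours_per_day = 8
daily_wh = saving_w * hours_per_day         # 264 Wh/day
monthly_kwh = daily_wh * 30 / 1000          # 7.92 kWh/month
yearly_cost = 45 * 12                       # Rs. 540/year at the quoted Rs. 45/month
print(daily_wh, monthly_kwh, yearly_cost)   # 264 7.92 540
```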


Fig. 6 Proposed architecture

3.2 Proposed Design for Fault Detection

• To detect any fault in a device and to know the environmental conditions, we use sensors for light, temperature, humidity, vibration, current, etc.
• All the devices work according to the person's demand.
• A laptop or computer continuously takes the data from the sensors and manages the devices according to the sensor values.
• During the analysis of the sensor data, if it finds any problem or fault in the system, it sends feedback to the cloud server.
• The user can change the settings according to the requirement and see the device's functionality and how it is working. We build an application where the user, technician, supplier, and vendor can register; all of them have to enter details regarding their service, service time, etc.
• When the data set is ready, we apply mining using the cloud server.
• For any problem, it emails or sends an SMS to the technician and also sends these message details to the owner.
• By using the cloud server, we can connect a wide range of users, and it supports multi-user functionality.
• Many users connect to the cloud server via desktop, laptop, or Android devices (Fig. 6).

4 Future Work The proposed work can further be improved in the future by developing an application that comprises a Speech Recognition System that mitigates the need for physical contact between the user and the smartphone.


5 Conclusion In this proposed work, we have established a smart system for controlling home appliances. Smart devices are connected using the internet, thereby increasing the dependability of the product. We also examined the contribution of each solution towards improving the efficiency and effectiveness of consumers' lifestyles as well as of society in general. Efficiency in managing power has been improved by turning OFF the appliances during unnecessary times, and accidents are avoided in case of any malfunction of a device. A prototype of the proposed smart home system has also been built, and a practical experiment was conducted to demonstrate that the developed prototype works well and that the planned smart home system provides excellent performance of the appliances and considerable energy saving. Automation of the appliances with the help of RTCs, LDRs, and PIR sensors is also carried out. We also define a system that not only monitors environmental conditions but also acts according to the person's requirements. It sends a message to the user by SMS, mail, or audio message when any type of error is detected in a device, and if the user wants, it can also send a message to the supplier. By using this system, we can eliminate human interaction and manage a low-cost, versatile smart home that regulates and resolves its errors while saving energy.

Performance Evaluation of Simulated Annealing-Based Task Scheduling Algorithms Abhishek Mishra, Kamal Sheel Mishra, and Pramod Kumar Mishra

1 Introduction Software can be divided into modules (modules are called tasks in this paper). The software modules may have dependencies between them. The dependencies between the modules can be represented as a task graph [5, 21, 22]. The task graph can be represented as a weighted directed acyclic graph (DAG). Each vertex of the DAG has a weight that is representative of the computation time of the task. Each (directed) edge of the DAG has a weight that is representative of the communication time between the tasks (a directed edge from task T1 to task T2 implies that T2 can start its execution only after getting some data from T1). Software running on a single processor system may take more time. Such type of software is usually run on a parallel processor system to save time. The task scheduling problem is: given a task graph and a system (single or parallel processor system), minimize some objective function. Some examples of objective functions are the total execution time of the software (it is called SL in this paper) and the energy consumed by the system during software execution. The complexity of task scheduling algorithms varies from problem to problem. Some task scheduling problems are in P [9], while others are NP-complete [10].

A. Mishra (B), Department of Computer Science and Information Systems, Birla Institute of Technology and Science Pilani, Pilani 333031, India, e-mail: [email protected]
K. S. Mishra, Department of Computer Sciences, School of Management Sciences, Varanasi 221011, India, e-mail: [email protected]
P. K. Mishra, Department of Computer Science, Department of Science and Technology, Center for Interdisciplinary Mathematical Sciences, Banaras Hindu University, Varanasi 221005, India, e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021, D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_15


For NP-complete task scheduling problems, the usual approach is to look for some approximation algorithm, heuristic algorithm, or randomized algorithm [11]. SA is a type of randomized heuristic that is generally used to solve NP-hard optimization problems [12, 18]. SA is inspired by annealing used in metallurgy, in which a metal is heated to a high temperature and then cooled slowly to give it a crystalline structure. This is done to ensure that the metal reaches a low-energy state. As an analogy to annealing, in the case of an optimization problem (assuming that the problem is a minimization problem), the objective function is taken as the energy. The system is started in an initial state with some initial energy. Then, the state is changed, which also changes the energy of the system. If the energy is decreased, the new state is taken as the current state of the system. If the energy is increased, then the new state is taken as the current state with a probability equal to the exponential of the negative of the change in energy divided by a factor (e^(−ΔE/(kT))), where T can be taken as the analog of temperature in annealing. Then the same process is repeated, but with a lower temperature. The work done in this paper is an evaluation of the performance of SA-based task scheduling algorithms. First, various parameters of SA are varied to observe how they affect the SL. The parameters that are varied are initial temperature, number of iterations, initial clustering, and cooling schedule. Then, one SA-based task scheduling algorithm is selected and compared with other task scheduling algorithms. The algorithms selected for comparison are CPPS [13], DSC [24], EZ [19], and LC [7]. Random task graphs are used for comparison [14].

2 A Review of Some Task Scheduling Heuristics There are many papers that compare various task scheduling algorithms. Jin et al. [6] compare the task scheduling algorithms for LU decomposition and the Gauss– Jordan elimination task graphs. The algorithms used in the comparison are min–min, chaining, A*, genetic algorithms, simulated annealing, tabu search, HLFET, ISH, and DSH with task duplication. Mishra et al. [14] perform a benchmarking study of dynamic priority-based task scheduling algorithms for peer set, random, systolic array, Gaussian elimination, divide and conquer, FFT, and small random task graphs with optimal solutions. The algorithms used in the benchmarking are CPPS, DCCL, RDCC, DSC, EZ, and LC. Arora [1] has done a performance evaluation of four sets of task scheduling algorithms: bounded number of processors (BNP), unbounded number of clusters (UNC), task duplication based (TDB)-algorithms, and arbitrary processor network (APN) algorithms. The TY algorithm and DSH algorithm are used in the comparison.


Singh et al. [20] have measured the effect of increasing the number of tasks and processors on the performance of BNP scheduling algorithms. The parameters used in the comparison are makespan, speed up, processor utilization, and scheduled length ratio. de Carvalho et al. [2] propose a genetic algorithm (GA)-based task scheduling algorithm and compare it with other algorithms. GA-based algorithms are inspired by the theory of natural selection in genetics. Mishra and Tripathi [16] have done a performance evaluation of task scheduling algorithms in the presence of faults. They have considered three types of faults. In the first type of fault, there can be delays due to fault in processing nodes (computation fault). In the second type of fault, there can be delays due to fault in communication links (communication fault). In the third type of fault, there can be delays due to both faults in processing nodes as well as a fault in communication links (computation and communication fault). This work is done for random task graphs. They have extended this work in [17] to also consider the case of special types of task graphs like systolic array task graphs, Gaussian elimination task graphs, divide and conquer task graphs, and FFT task graphs. Vidyarthi et al. [23] have considered the case of multiple task allocation in distributed computing systems (DCS). They also propose an algorithm that does not require prior knowledge of execution times of tasks. They have used A* and GA algorithms. Mishra and Tripathi [14] propose the CPPS algorithm. CPPS is a dynamic priority algorithm. The priority is defined as a function of cluster pairs. For calculating the priority between two clusters, first, the edge costs are added between the clusters, and then, the vertex costs of the two clusters are subtracted. CPPS algorithm has time complexity O(VE(V + E)). Yang and Gerasoulis [24] have proposed the DSC algorithm that is based on the concept of critical path (CP). A CP in a graph gives a lower bound on SL. For this reason, they call it the dominant sequence (DS). DSC has time complexity O((V + E)log(V )). Sarkar [19] has proposed the EZ algorithm that is based on the concept of edge zeroing. In edge zeroing, the clusters that are connected by a high-cost edge are merged. EZ has time complexity O(E(V + E)). Kim and Browne [7] have proposed the LC algorithm that is based on the concept of linear clustering. In linear clustering, the independent tasks are always put on different clusters. LC has time complexity O(V (V + E)). Mishra and Trivedi [15] have done a benchmark evaluation of nature-inspired metaheuristic task scheduling algorithms. The algorithms compared are simulated annealing (SA), genetic algorithm (GA), differential evolution (DE), particle swarm optimization (PSO), bat algorithm (BA), cuckoo search (CS), and firefly algorithm (FA).


3 A Generic SA-Based Task Scheduling Algorithm

The algorithm above is a generic SA-based task scheduling algorithm. SA takes as input the task graph (T ), initial temperature (τ 0 ), and the number of iterations (I). In line 01, an initial clustering of the tasks is created by using the function InitialClustering. In line 02, cluster min is used to store the clustering that gives the minimum SL. In line 03, the SL is computed corresponding to the initial schedule using the function EvaluateTime. EvaluateTime has time complexity O((V + E)log(V + E)) [16]. In line 04, τ is initialized with the initial temperature. In line 05, cur is used to store the current SL. The for loop from line 06 to 16 is repeated I times. In line 07, a next cluster is created using the function Next. In line 08, the SL of the new cluster is evaluated. In line 09, a random number between 0 and 1 (both inclusive) is generated using the function Random. In lines 10 and 11, two cases are possible: The new SL is not greater than the current SL, or the new SL is greater than the current SL. In the first case, the new clustering is taken as the current clustering, and in the second case, the new clustering is taken as the current clustering with probability eˆ((cur − time)/(kτ )). In lines 12 to 14, if the new SL is also less than or equal to the minimum SL, then the minimum clustering is updated as well as the minimum SL. In line 15, if the new SL is greater than the current SL, and it is not chosen in line 10 to update it as the current clustering, then the original clustering is saved using the function Previous. In line 16, the temperature is updated according to a cooling schedule specified by


the function CoolingSchedule. In line 17, the minimum SL is returned as well as the corresponding clustering. The SA algorithm has time complexity O(I(V + E)log(V + E)). The SA algorithm can be fully specified by five parameters: initial temperature (τ_0), number of iterations (I), initial clustering (the function InitialClustering), cooling schedule (the function CoolingSchedule), and how the next clustering is generated (the function Next). In this paper, the Next function is fixed as a randomized function that randomly selects a task and puts it into a randomly selected cluster. The constant k is also fixed (in the probability value e^((cur − time)/(kτ))) as 1. In this paper, the four parameters are varied one by one while keeping the other parameters fixed to study the effect of varying a parameter on the performance of SA. Various SA algorithms are named as SAαβγδ, where α will specify τ_0, β will specify I, γ will specify InitialClustering, and δ will specify CoolingSchedule. The three values of τ_0 are taken as 1 (α = 1), 2 (α = 2), and 5 (α = 5). α = 1 specifies that when SL is increased (cur − time = −1), it is accepted with probability e^(−1) = 36.79%. α = 2 specifies that when SL is increased (cur − time = −1), it is accepted with probability e^(−1/2) = 60.65%. α = 5 specifies that when SL is increased (cur − time = −1), it is accepted with probability e^(−1/5) = 81.87%. The three values of I are taken as V (β = 1), V^2 (β = 2), and V^3 (β = 3). The three InitialClustering functions used are: allocate all tasks on a single cluster (γ = s), allocate each task on a separate cluster (γ = n), and random clustering (γ = r). The three CoolingSchedule functions are taken as follows: δ = a is in arithmetic progression (AP): τ_j = τ_0(1 − j/I); δ = g is in geometric progression (GP): τ_j = τ_0(0.999)^j; and δ = h is in harmonic progression (HP): τ_j = τ_0/j.
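Since the pseudocode listing itself is not reproduced here, the following Python sketch reconstructs the SA procedure from the line-by-line description above; EvaluateTime, Next, and InitialClustering are left as caller-supplied functions, and k = 1 with the AP cooling schedule is shown as one concrete choice rather than the only one the paper uses.

```python
# Reconstruction of the described SA loop. The three callables stand in for the
# paper's routines; k = 1 and the AP cooling schedule match settings stated above.
import math
import random

def simulated_annealing(task_graph, tau0, iterations,
                        initial_clustering, evaluate_time, next_clustering, k=1.0):
    cluster = initial_clustering(task_graph)
    cluster_min = cluster
    cur = evaluate_time(task_graph, cluster)        # current schedule length (SL)
    sl_min = cur
    tau = tau0
    for j in range(1, iterations + 1):
        candidate = next_clustering(task_graph, cluster)
        time = evaluate_time(task_graph, candidate)
        # Accept when not worse; otherwise accept with probability e^((cur - time)/(k*tau)).
        if time <= cur or random.random() <= math.exp((cur - time) / (k * tau)):
            cluster, cur = candidate, time
            if cur <= sl_min:                       # track the best clustering seen
                cluster_min, sl_min = cluster, cur
        tau = tau0 * (1 - j / iterations)           # AP cooling schedule: tau_j = tau_0(1 - j/I)
    return cluster_min, sl_min
```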

4 Experimental Results

Thirty benchmark random task graphs, each having 50 vertices, are taken from Davidovic and Crainic [3] and Davidovic [4], labeled t50_i_j.td for various values of i and j. The four algorithms selected for comparison are CPPS [13], DSC [24], EZ [19], and LC [7]. CPPS is a recent algorithm; DSC, EZ, and LC are well-known algorithms. All four algorithms are benchmarked by Mishra et al. [14]. The performance evaluation is done for the normalized schedule length (NSL) and the number of processors used. NSL is defined as SL divided by the length of the CP [8]. SA13sa is showing a 0.51% improvement over SA23sa and a 2.22% improvement over SA53sa. Decreasing the initial temperature results in decreased NSL. SA13sa is showing a 5.25% improvement over SA23sa and a 20.05% improvement



over SA53sa. Decreasing the initial temperature results in the decreased number of processors used. SA13sa is showing a 15.99% improvement over SA11sa and a 4.85% improvement over SA12sa. Increasing the number of iterations results in decreased NSL. SA11sa is showing a 23.84% improvement over SA12sa and a 10.04% improvement over SA13sa. Decreasing the number of iterations results in the decreased number of processors used. SA13sa is showing a 2.11% improvement over SA13na and a 2.82% improvement over SA13ra. Single clustering gives the least NSL. SA13sa is showing a 19.25% improvement over SA13na and an 18.85% improvement over SA13ra. Single clustering gives the least number of processors used. SA13sa is showing a 1.09% improvement over SA13sg and a 0.89% improvement over SA13sh. AP cooling schedule gives the least NSL. SA13sh is showing a 2.03% improvement over SA13sa and a 3.64% improvement over SA13sg. HP cooling schedule gives the least number of processors used. SA13sa is showing a 0.27% improvement over CPPS, a 0.91% improvement over DSC, a 16.12% improvement over EZ, and a 4.78% improvement over LC. SA13sa is giving the least NSL. SA13sh is showing a 24.29% improvement over CPPS. DSC is showing a 3.84% improvement over SA13sh. EZ is showing an 81.95% improvement over SA13sh. LC is showing a 57.86% improvement over SA13sh. EZ is giving the least number of processors used.

5 Conclusion

The performance evaluation of SA-based task scheduling algorithms is done by varying the four parameters: initial temperature, number of iterations, initial clustering, and cooling schedule. The SA-based task scheduling algorithms are also compared with some other task scheduling algorithms: CPPS, DSC, EZ, and LC. From the experimental results, it can be observed that a lower initial temperature gives better results for NSL as well as for the number of processors used. A larger number of iterations gives better results for NSL, but a smaller number of iterations gives better results for the number of processors used. The initial clustering that puts all tasks on a single cluster gives better results for NSL as well as for the number of processors used. The cooling schedule in AP gives better results for NSL, whereas the one in HP gives better results for the number of processors used. As compared with the other algorithms, the SA algorithm (SA13sa) gives the best results for NSL. But if the number of processors used is to be minimized, then the EZ algorithm should be used, as it gives the best results for the number of processors used. The following table summarizes the results:

| Parameter            | NSL               | Number of processors used |
|----------------------|-------------------|---------------------------|
| Initial temperature  | 1                 | 1                         |
| Number of iterations | V^3               | V                         |
| Initial clustering   | Single clustering | Single clustering         |
| Cooling schedule     | AP                | HP                        |
| Overall              | SA13sa            | EZ                        |

For future work, some sophisticated SA-based task scheduling algorithms can be proposed, and their performance can be evaluated on some real task graphs like systolic array task graphs, Gaussian elimination task graphs, divide and conquer task graphs, and FFT task graphs [14].

References 1. Arora, N. (2012). Analysis and performance comparison of algorithms for scheduling directed task graphs to parallel processors. International Journal of Emerging trends in Engineering and Development, 4, 793–802. 2. de Carvalho, R. M., Lima, R. M. F., & de Oliveira, A. L. I. (2011). An efficient algorithm for static task scheduling in parallel applications. In: IEEE International Conference on Systems, Man, and Cybernetics (SMC) (pp. 2313–2318). 3. Davidovic, T., & Crainic, T. G. (2006). Benchmark-problem instances for static scheduling of task graphs with communication delays on homogeneous multiprocessor systems. Computers & Operations Research, 33, 2155–2177. 4. Davidovic, T. Benchmark task graphs. http://www.mi.sanu.ac.rs/~tanjad/sched_results.htm. 5. Drozdowski, M. (2009). Scheduling for parallel processing. Berlin: Springer. 6. Jin, S., Schiavone, G., & Turgut, D. (2008). A performance study of multiprocessor task scheduling algorithms. The Journal of Supercomputing, 43, 77–97. 7. Kim, S. J., & Browne, J. C. (1988). A general approach to mapping of parallel computation upon multiprocessor architectures. In: Proceedings of 1988 International Conference on Parallel Processing (vol. 3, pp. 1–8). 8. Kwok, Y. K., & Ahmad, I. (1999). Benchmarking and comparison of the task graph scheduling algorithms. Journal of Parallel and Distributed Computing, 59, 381–422. 9. Mishra, A., & Tripathi, A. K. (2014). Energy efficient voltage scheduling for multi-core processors with software controlled dynamic voltage scaling. Applied Mathematical Modelling, 38, 3456–3466. 10. Mishra, A., & Tripathi, A. K. (2015). Complexity of a problem of energy efficient real-time task scheduling on a multicore processor. Complexity, 21(1), 259–267. 11. Mishra, A., & Tripathi, A. K. (2014). A Monte Carlo algorithm for real time task scheduling on multi-core processors with software controlled dynamic voltage scaling. Applied Mathematical Modelling, 38, 1929–1947. 12. Mishra, A., & Mishra, P. K. (2016). A randomized scheduling algorithm for multiprocessor environments using local search. Parallel Processing Letters, 26, 1650002. 13. Mishra, A., & Tripathi, A. K. (2011). An extension of edge zeroing heuristic for scheduling precedence constrained task graphs on parallel systems using cluster dependent priority scheme. Journal of Information and Computing Science, 6, 83–96. 14. Mishra, P. K., Mishra, A., Mishra, K. S., & Tripathi, A. K. (2012). Benchmarking the clustering algorithms for multiprocessor environments using dynamic priority of modules. Applied Mathematical Modelling, 36, 6243–6263.



15. Mishra, A., & Trivedi, P. (2019). Benchmarking the contention aware nature inspired metaheuristic task scheduling algorithms. Cluster Computing. https://doi.org/10.1007/s10586-01902943-z. 16. Mishra, K. S., & Tripathi, A. K. (2013). Task scheduling of a distributed computing software in the presence of faults. International Journal of Computer Applications, 72, 1–9. 17. Mishra, K. S., & Tripathi, A. K. (2014). Task scheduling of special types of distributed software in the presence of communication and computation faults. International Journal of Engineering and Computer Science, 3, 8752–8764. 18. Orsila, H., Salminen, E., & Hamalainen, T. (2013). recommendations for using simulated annealing in task mapping. Design Automation for Embedded Systems, 17, 53–85. 19. Sarkar, V. (1989). Partitioning and scheduling parallel programs for multiprocessors. Research Monographs in Parallel and Distributed Computing. Cambridge: MIT Press. 20. Singh, N., Kaur, G., Kaur, P., & Singh, G. (2012). Analytical performance comparison of BNP scheduling algorithms. Global Journal of Computer Science and Technology, 12, 11–24. 21. Sinnen, O. (2007). Task scheduling for parallel systems. Hoboken: Wiley. 22. Sriram, S., & Bhattacharyya, S. S.: (2009). Embedded multiprocessors: Scheduling and synchronization (2nd ed.). Boca Raton: CRC Press. 23. Vidyarthi, D. P., Sarkar, B. K., Tripathi, A. K., & Yang, L. T. (2009). Allocation of multiple tasks in DCS. Scheduling in distributed computing systems (pp. 1–94). 24. Yang, T., & Gerasoulis, A. (1991). A fast static scheduling algorithm for DAGs on an unbounded number of processors. In: Proceedings of the 1991 ACM/IEEE Conference on Supercomputing (pp. 633–642).

Epileptic Seizure Onset Prediction Using EEG with Machine Learning Algorithms Shruti Bijawat and Abhishek Dadhich

1 Introduction

Epilepsy is one of the serious neurological disorders and can affect people of all ages. In India, more than 10 million persons are suffering from epilepsy. It is also a physical condition, as the body of the person having a seizure may be affected. As one definition says: "Epilepsy is not just one condition, but is a group of many epilepsies with one thing in common: a tendency to have seizures that start in the brain" [1]. Seizures are also known as "fits" or "attacks" in layman's terms. A seizure occurs when there is a sudden interruption in the way the brain normally works. In between these seizures, the functioning of the brain is normal. Our brain is the center of the nervous system and one of the most complex organs of our body. The skull protects the brain, which is wrapped in three layers called meninges. A liquid called cerebrospinal fluid lies in between these sheets; it cushions the brain and fills the space inside it. The brain consists of three areas: the hindbrain, midbrain, and forebrain. One of the biggest and most important parts of the brain lies in the forebrain and is called the cerebral cortex or cerebrum. The cerebrum is divided into two halves, the right and left hemispheres. Each hemisphere is divided into four areas called lobes, known as the frontal, temporal, parietal, and occipital lobes. Each lobe has a different set of functions. The middle part of the brain is part of the temporal lobe called the





hippocampus. This section of the brain is responsible for learning and forming memories. If the hippocampus is damaged, it may cause epilepsy in some people. Generally, the diagnosis of epilepsy is done by analyzing the detailed history, performing neuro-imaging, and by EEG. The EEG signals are able to identify interictal (between seizures) and ictal (during seizure) epileptiform abnormalities [2]. When a patient has a seizure, there is a sudden surge in neural discharge, resulting in increased disparities in the EEG signals. Thus, the epileptic EEG signals generated are more chaotic and show more variation compared to normal EEG signals. The visual and manual assessment of these recordings is time consuming and is a complicated process. This research paper focuses on early methods adopted for the detection and prediction of clinical seizures at the edge of onset, so that proper treatment can be given to the patient on time, and on studying machine learning techniques for the same [3].

2 Literature Review

Epilepsy is characterized by a continuous series of epileptic seizures. The electric disturbances of neurons that occur during epileptic seizures result in abnormal behavior, sensations, and emotions [1]. A lot of work has been done in this direction to find a concrete solution to this problem. The historical work in this field suffers from a lack of data and of advanced algorithms that could predict the symptoms of epilepsy accurately and on time. Vigilone et al. (1972) first tried to characterize EEG signals using quantitative pattern recognition techniques. Rogowski et al. [4] carried out their research on 12 patients; the bandwidth of the recording system was kept at 100 Hz, and the signals were low-pass filtered before sampling. An autoregressive model was adopted by them for data analysis [4]. Siegel et al. [5] used 4 subjects; here, the EEG spectrum was computed by Welch's algorithm, together with a cross-validation procedure [5]. Baumgartner et al. [6] provide some unique information about the change in regional blood flow that occurs during seizure onset. The authors conducted their study on two subjects suffering from temporal lobe epilepsy, using MRI, continuous video EEG monitoring, and CBF monitoring of the patients [6]. Alkan et al. [7] used multiple signal classification, periodogram, and autoregressive methods to obtain power spectra in patients. The power spectral densities obtained from these methods are then used as input to a classification algorithm such as a multilayer perceptron neural network [7]. Bao et al. [8] used the dataset from Bonn University, extracted features from it, applied neural network classification, and made predictions about epilepsy. Three major types of features are extracted in this research: power spectral features, which describe the energy distribution in the frequency domain; the fractal dimension, which defines the fractal properties; and the Hjorth parameters, which describe the chaotic behavior of the EEG signal. Apart from this, the mean and standard deviation are also calculated, representing the statistics of the amplitude [8]. Panda et al. [9] used the discrete wavelet transform to preprocess the EEG signals. The authors extracted features such as entropy,



energy, and standard deviation, on the basis of which classification using support vector machines was done [9]. Gandhi et al. [10] collected data on six subjects from Sir Ganga Ram Hospital, New Delhi. The authors applied the discrete wavelet packet transform, which extracts features from non-stationary signals. A discrete harmony search with a modified differential operator is used in this research for selecting optimal features, which are then fed into a probabilistic neural network classifier to obtain high accuracy [10]. Moghim and Corne [11] describe advanced seizure prediction via pre-ictal relabeling; this algorithm tries to detect seizures prior to their occurrence. For analysis purposes, features based on signal energy, DWT, and nonlinear dynamics were considered. Finally, after feature selection, they are fed into a multiclass SVM for further classification [11]. Fergus et al. [12] presented a comparative study of various classifiers such as the linear discriminant classifier (LDC), uncorrelated normal density classifier (UDC), k-class nearest neighbor classifier (KNNC), and support vector classifier (SVC). The authors found that the KNNC classifier produced the best results, with 93% sensitivity, 94% specificity, 98% AUC, and a 6% global error [12]. Future studies focus on deep learning techniques such as deep neural networks for the prediction of seizures. These techniques are giving very good results as well, since they can evaluate large datasets with heavy computation in parallel, which yields more accurate results [2, 13].

3 Various Techniques of Seizure Detection and Prediction

3.1 Seizure Detection

A seizure detection system should be able to find the presence or absence of ongoing seizures. All such algorithms have two main steps: the first is feature extraction from the data, and the second is applying model-based criteria to the features to find the presence of a seizure; this second step is also called classification. There are various methods through which epileptic seizures can be detected, and some of them are discussed below:

1. Electroencephalography (EEG): An EEG is a "snapshot" of the brain at the moment of recording [2]. It is one of the most reliable and accepted sources of information for collecting data on epilepsy. The recorded EEG signals are non-stationary and nonlinear; they are highly complex and difficult to interpret visually.
2. Electrocardiography (ECG): Heart rate disturbances may be caused by epileptic seizures. There is a trend of tachycardia during the ictal phase which returns to normal in the post-ictal phase. Thus, the change in heart rate can be considered as extra clinical information for finding epileptic discharges.



3. Accelerometry: These are devices used to measure changes in velocity and direction. The use of these sensors to detect epilepsy is a novel approach; such systems are used to detect motor seizures.
4. Video detection system: Video monitoring is yet another technique used for epilepsy detection. Such systems work on the paths of moving objects over space.

3.2 Seizure Prediction

Seizure prediction has greater advantages than seizure detection. Such devices or tools are useful in improving outcomes and reducing accidents, finally allowing early treatment for seizure patients. The prediction system must be able to identify any abnormal changes in EEG signals within minutes or hours prior to the onset of seizures [3]. For example, one study states that EEG signals show notable changes 7 h prior to the onset of an epileptic seizure. Other work shows that continuously monitoring ECG along with EEG signals can also produce warning signals of a seizure in the inter-ictal state. Some of the seizure prediction techniques, along with the features extracted and the algorithms used, are listed in Table 1.

Table 1 Seizure prediction techniques

| Technique | Features extracted | Algorithms used |
|-----------|--------------------|-----------------|
| EEG | Permutation entropy; Kolmogorov entropy; Correlation dimension; Relative wavelength energy | a. Recurrent neural networks, b. Probabilistic neural networks, c. SVM, d. Fuzzy logic system |
| Accelerometry and electrodermal activity | Sweat secretion during seizures; Use of wearable devices for detecting accelerometry and electrodermal activity | a. ANN, b. SVM |
| ECG | Heart rate (weak relation with tachycardia in the peri-ictal phase) | a. Any supervised learning algorithm |

4 Methodology Adopted

Once the features are extracted from EEG signals, they are fed into a classification algorithm. For this purpose, the most commonly used tool is MATLAB, which is used by most researchers for EEG signal analysis and classification. The proposed work uses a setup based on Python, which is an open-source language widely used for data analytics worldwide. It has simple syntax and rich libraries. This research makes use of a Python module, PyEEG, which is specially designed for



feature extraction from EEG signals [14–16]. The dataset is from Children's Hospital Boston; it consists of 686 EEG recordings from pediatric subjects. The sampling rate at which the data were collected was 173.61 Hz. This dataset consists of three sets: Set A, Set B, and Set C. Set A refers to the healthy dataset, Set B refers to the inter-ictal (transition between healthy and seizure) dataset, while Set C contains ictal (seizure) data. Figure 1 plots the three datasets, where the spikes during seizures are clearly seen. In Fig. 1, the green line shows the data sample collected during an ongoing seizure. The spikes generated in the figure show an abnormal discharge of neurons which creates major disturbances. The blue line shows the inter-ictal state, which is the onset of a seizure. It is clearly seen that the spikes shown by the blue line are smaller than those shown by the green line but larger than those shown by the orange line, which shows the sample of a healthy patient.

Fig. 1 Graphical representation of healthy, inter-ictal and ictal data (Source: PyEEG documentation)

4.1 Data Preprocessing

PyEEG is a Python module which contains functions to prepare data for feature extraction. In this module, EEG series are represented as Python lists or as numpy arrays. PyEEG cannot load or read files; for this purpose, other tools such as EDFBrowser or EEGLab can be used. The dataset used in this work was therefore already preprocessed and ready to use in the form of text files. The dataset used here has a total of 100 channels and 4,097 data points per channel.
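As a concrete illustration of this setup, the snippet below loads one of the plain-text recordings into a NumPy array; the file name Z001.txt is only a hypothetical example of a single-channel text file of the kind described above.

```python
import numpy as np

# Hypothetical file name; each text file holds one channel of 4,097 samples.
signal = np.loadtxt("Z001.txt")

fs = 173.61                      # sampling rate reported for the dataset (Hz)
duration = len(signal) / fs      # roughly 23.6 s per segment
print(signal.shape, duration)
```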



4.2 Feature Extraction

PyEEG extracts various features which are the basis of EEG signal analysis. In this work, the features extracted are the following (a code sketch follows the list):

1. Detrended Fluctuation Analysis: DFA is used for EEG signal analysis. Instead of looking at the entire signal, DFA is often performed on filtered signals. This method has been useful in revealing the extent of long-range correlations in time series.
2. Higuchi Fractal Dimension: HFD is a useful mathematical tool that goes beyond the classic Fourier analysis (fast Fourier transform, FFT). The FFT works under the hypothesis of stationary signals, while EEG signals are non-stationary. The fluctuations of neuronal activity display the so-called power-law dependence of their frequencies.
3. Fisher Information: It quantifies the shape of a probability distribution, being low for flat densities and high for narrow ones. Fisher information is also very sensitive to local discontinuities in the density.
4. Petrosian Fractal Dimension: The fractal dimension is an important feature of a signal that may carry information about its geometric shape at different scales. PFD is one such measure to extract this information from EEG signals.
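A rough sketch of extracting these four features with PyEEG is shown below; the function names and arguments (e.g., Kmax, Tau, DE) follow the PyEEG documentation as recalled here, so they should be checked against the installed version, and the parameter values are illustrative assumptions.

```python
import numpy as np
import pyeeg

def extract_features(signal, kmax=5, tau=4, de=10):
    """Return the four features described above for one EEG segment."""
    f1 = pyeeg.dfa(signal)                    # detrended fluctuation analysis
    f2 = pyeeg.hfd(signal, kmax)              # Higuchi fractal dimension
    f3 = pyeeg.fisher_info(signal, tau, de)   # Fisher information
    f4 = pyeeg.pfd(signal)                    # Petrosian fractal dimension
    return np.array([f1, f2, f3, f4])

segment = np.loadtxt("Z001.txt")              # hypothetical single-channel text file
print(extract_features(segment))
```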

4.3 Classification

Classification is a technique which divides the extracted EEG signals into three categories: healthy, transition, and seizure, with class labels 1, 0, and −1, respectively; this is shown in Table 2. Table 2 shows the top four rows of the dataset belonging to Set B. Here, F1, F2, F3, and F4 are the values of the features extracted, and the class value shows that each point belongs to the transition class, i.e., movement from a normal state toward a seizure. Similar tables exist for Set A and Set C, which have class values 1 and −1, respectively. This work has used several machine learning algorithms for the classification. The extracted values from the above step are fed into machine learning algorithms such as nearest neighbors, linear SVM, random forest, and Naïve Bayes. The proposed work also reports the accuracy of each algorithm [17–19].

Table 2 Values of the features extracted and the class to which each data point belongs

|   | F1       | F2       | F3       | F4       | Class |
|---|----------|----------|----------|----------|-------|
| 0 | 0.770598 | 0.103797 | 0.066248 | 0.598450 | 0.0   |
| 1 | 0.832769 | 0.120673 | 0.063004 | 0.592081 | 0.0   |
| 2 | 0.840151 | 0.168617 | 0.059796 | 0.583278 | 0.0   |
| 3 | 0.966122 | 0.181981 | 0.116347 | 0.580750 | 0.0   |



1. K-nearest neighbors: It is an important classification algorithm belonging to the domain of supervised learning. It stores all the available cases and then classifies new cases based on a similarity measure.
2. Linear SVM: It is an extremely fast ML algorithm used for solving multi-class classification problems on large datasets. It has a linearly scalable routine, meaning that it builds the SVM model in CPU time that scales linearly with the size of the training dataset.
3. Random forest: It is also a supervised learning algorithm. It creates multiple decision trees and then combines them to obtain accurate and consistent predictions.
4. Naïve Bayes algorithm: It works on the concept of Bayes' theorem. It predicts membership probabilities for each class, such as the probability that a given record or data point belongs to a particular class.
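A minimal scikit-learn sketch of this comparison is given below. The feature matrix X and labels y would normally come from the PyEEG features extracted above; here they are filled with random placeholder data so the snippet runs on its own, and the hyper-parameter values are illustrative rather than the ones used by the authors.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Placeholder data; in the paper these come from the extracted EEG features.
X = np.random.rand(300, 4)
y = np.random.choice([1, 0, -1], size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "Nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "Linear SVM": SVC(kernel="linear"),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Naive Bayes": GaussianNB(),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: {100 * acc:.0f}%")
```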

5 Findings and Discussions

To increase the accuracy rate of the algorithms, this work employs multiple features extracted from the EEG signal. These features are used as input to four different machine learning algorithms to obtain a comparative analysis of their accuracy. The results obtained are shown in Table 3. Linear support vector machines are a strong tool for machine learning, but in our case they do not give better results; the model here gives an accuracy of only 29%. Linear SVM is designed as a method for two-class classification, meaning it can handle data with only two class outputs. In this work, the output has three classes, so the choice of linear SVM fails to produce a better output. KNN, on the other hand, is a very simple and intuitive algorithm that adapts well to multi-class problems without any extra effort. Similarly, the random forest algorithm can handle high-dimensional spaces very well and produces good results with multiple categorical features. Table 3 clearly indicates how KNN and random forest produce a high accuracy of 84% on the given dataset.

Table 3 Accuracy of each algorithm for seizure prediction (in percent)

| Name of algorithm | Accuracy (%) |
|-------------------|--------------|
| Nearest neighbors | 84           |
| Linear SVM        | 29           |
| Random forest     | 84           |
| Naïve Bayes       | 69           |



6 Conclusion

Epilepsy is one of the least explored and least understood neurological disorders. In India alone, more than 10 million people are suffering from epilepsy [1]. Epileptic seizures are frequently unannounced, which increases the risk of injury to the sufferer and, in some cases, leads to death. If the EEG readings of the patient can be analyzed and predicted well, these catastrophic results could be avoided to some extent [12]. There is a crucial requirement to develop fast seizure prediction algorithms which work well with large datasets. This work classifies the data into three categories: healthy, transition, and seizure. The work also presents a comparative analysis of different machine learning algorithms. The results clearly show how the KNN and random forest algorithms achieve higher accuracy than the SVM and Naïve Bayes classifiers.

7 Future Scope

A lot of scope is left for the future by adding more features for analysis and by using other classification algorithms to predict seizures more accurately. In addition to this, a closed-loop system should be developed so that a seizure can be predicted at, or before, its onset, together with a mechanism by which the information is passed to the nearest health care organization and to the caretaker of the patient, so that catastrophic results can be avoided at the patient's end.

References 1. Dixit, A. B., Banerjee, J., Chandra, P. S., & Tripathi, M. (2017). Recent advances in epilepsy research in India. Neurology India, 65(7), 83. 2. Acharya, U. R., Oh, S. L., Hagiwara, Y., Tan, J. H., & Adeli, H. (2018). Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals. Computers in Biology and Medicine, 100, 270–278. 3. Ramgopal, S., Thome-Souza, S., Jackson, M., Kadish, N. E., Fernández, I. S., Klehm, J., et al. (2014). Seizure detection, seizure prediction, and closed-loop warning systems in epilepsy. Epilepsy & Behavior, 37, 291–307. 4. Rogowski, Z., Gath, I., & Bental, E. (1981). On the prediction of epileptic seizures. Biological Cybernetics, 42(1), 9–15. 5. Siegel, A., Grady, C. L., & Mirsky, A. F. (1982). Prediction of spike-wave bursts in absence epilepsy by EEG power-spectrum signals. Epilepsia, 23(1), 47–60. 6. Baumgartner, C., Serles, W., Leutmezer, F., Pataraia, E., Aull, S., Czech, T., et al. (1998). Preictal SPECT in temporal lobe epilepsy: Regional cerebral blood flow is increased prior to electroencephalography-seizure onset. Journal of Nuclear Medicine, 39(6), 978–982. 7. Alkan, A., Koklukaya, E., & Subasi, A. (2005). Automatic seizure detection in EEG using logistic regression and artificial neural network. Journal of Neuroscience Methods, 148(2), 167–176.



8. Bao, F. S., Lie, D. Y. C., & Zhang, Y. (2008). A new approach to automated epileptic diagnosis using EEG and probabilistic neural network. In 2008 20th IEEE International Conference on Tools with Artificial Intelligence (Vol. 2, pp. 482–486). IEEE. 9. Panda, R., Khobragade, P. S., Jambhule, P. D., Jengthe, S. N., Pal, P. R., & Gandhi, T. K. (2010). Classification of EEG signal using wavelet transform and support vector machine for epileptic seizure diction. In 2010 International Conference on Systems in Medicine and Biology (pp. 405–408). IEEE. 10. Gandhi, T. K., Chakraborty, P., Roy, G. G., & Panigrahi, B. K. (2012). Discrete harmony search based expert model for epileptic seizure detection in electroencephalography. Expert Systems with Applications, 39(4), 4055–4062. 11. Moghim, N., & Corne, D. W. (2014). Predicting epileptic seizures in advance. PloS one, 9(6). 12. Fergus, P., Hignett, D., Hussain, A., Al-Jumeily, D., & Abdel-Aziz, K. (2015). Automatic epileptic seizure detection using scalp EEG and advanced artificial intelligence techniques. ˙ BioMed Research International, 2015. 13. Thodoroff, P., Pineau, J., & Lim, A. (2016). Learning robust features using deep learning for automatic seizure detection. In Machine Learning for Healthcare Conference (pp. 178–190). 14. Bao, F. S., Liu, X., & Zhang, C. (2011). PyEEG: An open source python module for EEG/MEG feature extraction. Computational Intelligence and Neuroscience, 2011. 15. Liu, N. H., Chiang, C. Y., & Chu, H. C. (2013). Recognizing the degree of human attention using EEG signals from mobile sensors. Sensors, 13(8), 10273–10286. 16. Gajic, D., Djurovic, Z., Di Gennaro, S., & Gustafsson, F. (2014). Classification of EEG signals for detection of epileptic seizures based on wavelets and statistical pattern recognition. Biomedical Engineering: Applications, Basis and Communications, 26(02), 1450021. 17. Teixeira, C. A., Direito, B., Bandarabadi, M., Le Van Quyen, M., Valderrama, M., Schelter, B., et al. (2014). Epileptic seizure predictors based on computational intelligence techniques: A comparative study with 278 patients. Computer Methods and Programs in Biomedicine, 114(3), 324–336. 18. Fergus, P., Hussain, A., Hignett, D., Al-Jumeily, D., Abdel-Aziz, K., & Hamdan, H. (2016). A machine learning system for automated whole-brain seizure detection. Applied Computing and Informatics, 12(1), 70–89. 19. Hosseini, M. P., Hajisami, A., & Pompili, D. (2016). Real-time epileptic seizure detection from ˙ eeg signals via random subspace ensemble learning. In 2016 IEEE International Conference on Autonomic Computing (ICAC) (pp. 209–218). IEEE.

A Review of Crop Diseases Identification Using Convolutional Neural Network Pooja Sharma, Ayush Sogani, and Ashu Sharma

1 Introduction

Crop diseases are a major cause of production and economic loss in the agriculture sector. As the demand for food is continuously increasing, protection of crops against diseases is a basic need today. Plants are affected by many diseases caused by bacteria, mildew, viruses, and insects, which can affect overall plant health [1]. The conventional method to tackle these diseases is the use of pesticides, but heavier use increases the total cost of production and affects the quality of food. So it is important to have an accurate estimation of disease incidence, its severity, and its overall impact. In a number of ways, early detection of diseases in plants may avoid numerous monetary losses and makes management easier through appropriate planning to increase productivity [2]. Automating the identification of plant diseases from the plants' appearance and symptoms can help farmers as well as experts. Advancements in computer vision techniques, including digital image processing, are used to classify diseases into various classes. A range of machine learning techniques has already been explored, such as artificial neural networks and SVM; these are used along with image preprocessing and feature extraction. The conventional approach for image classification in machine learning is based on hand-crafted features; to provide full automation in plant disease diagnosis, deep




learning is widely used. Deep learning offers many algorithms that can model high-level abstractions of image data through the many processing layers of a convolutional neural network. Deep learning techniques are very effective at feature learning, automatically extracting features from raw data [3], and they can solve various complex problems [4].

2 Literature Review

Convolutional neural networks have made great progress in object recognition and image classification in past years. Mohanty et al. [5] presented a method using a publicly available dataset and implemented deep learning models for a smartphone application on a larger scale [5]. Singh et al. [6] proposed a multilayer convolutional neural network (MCNN) to classify diseased mango leaves [6]. Zhang et al. [7] compared their proposed method with ML algorithms such as support vector machine, KNN, and backpropagation neural network on a cherry leaf disease classification problem and presented a fully automatic classification method using a convolutional neural network (CNN) with GoogLeNet [7]. Manso et al. [8] discussed the classification of coffee leaf miner and coffee leaf rust with the help of a smartphone [8]. Amara et al. [9] offered an approach using a deep learning model to classify diseases in banana [9]. Sladojevic et al. [10] proposed a model that can classify 13 different kinds of plant diseases from leaves and has the capacity to distinguish plant leaves from their surroundings using a deep neural network [10]. Alfarisy et al. [11] developed their own dataset of 4,511 images from search engines and smartphones for the automatic identification of pests and diseases in paddy production in Indonesia [11]. Yu et al. [12] presented a technique for identifying apple leaf diseases using a region-of-interest-aware deep CNN [12]. Barbedo [13] presented a study checking the impact and effectiveness of deep learning techniques by varying the size and diversity of several datasets; the study included an image dataset of 12 plant varieties, each with its own characteristics in terms of number of samples, diseases, and variety of conditions, using deep CNNs [13]. Cap et al. [14] proposed a deep learning approach using a leaf localization technique on on-site wide-angle images [14]. Chandra [15] designed a mobile app for plant disease classification [15]. Ramcharan et al. [16] applied transfer learning to train a deep convolutional neural network on a dataset of cassava images collected from Tanzania to classify diseases and pest damage [16]. Fuentes et al. [17] addressed tomato plant disease and pest recognition, tackling the difficulty of false positives and class imbalance by applying a Refinement Filter Bank framework [17]. Lin et al. [18] proposed a semantic segmentation model derived from convolutional neural networks (CNN) to segment powdery mildew on cucumber leaf images at the pixel level [18].



Rançon et al. [19] evaluated SIFT-encoded and deep learning features for the classification and detection of Esca disease in Bordeaux vineyards [19]. Itakura et al. [20] used fluorescence spectroscopy with a convolutional neural network (CNN) to estimate the Brix/acid ratio [20].

3 Methodology

With advancements in technology such as computationally capable graphics processing unit (GPU) devices, deep learning, based on the convolutional neural network model, has made remarkable progress in applications. The architecture is arranged with multiple processing layers through which the specific output is extracted from datasets of various application fields [3, 4]. A CNN has four different kinds of layers: convolution, max-pooling, fully connected, and output layers. Many CNN models exist, such as GoogLeNet, ResNet, AlexNet, and VGG. These models vary in terms of depth, the kind of (nonlinear) functions used, and the number of units. They have a variety of parameters that can be adjusted, such as the dropout rate and the learning rate, and they are used for solving many classification and object recognition problems [3, 4]. Mohanty et al. learned from scratch using the PlantVillage dataset and then applied transfer learning from the ImageNet dataset with the AlexNet and GoogLeNet deep learning architectures [5]. Zhang et al. compared transfer learning techniques, i.e., training the CNN from an already trained GoogLeNet using the ImageNet dataset, with machine learning methods, i.e., backpropagation (BP) neural network, SVM, and k-nearest neighbor (KNN) [7]. A smartphone application was developed to determine the type of coffee leaf disease and the fraction of the injured area; for classification, an artificial neural network trained with backpropagation and an extreme learning machine were used [8]. The LeNet architecture was used to classify the banana dataset [9]. Caffe, a deep learning framework, was used to distinguish 13 different types of crop diseases and to differentiate leaves from the plant surroundings [10]. A novel ROI sub-network was used to divide input images into leaf area, background, and spot area, and was combined with a VGG sub-network to be trained [12]. An application was developed for plant disease identification that uses a dataset including plant species, diseases, and image capture conditions for the CNN [13]. A model was developed to classify an input image as "fully leaf," "not fully leaf," or "none leaf"; fully leaf images are accepted and the others are rejected [14]. The already trained Inception v3 CNN model was used to recognize the cassava images [16]. A Refinement Filter Bank framework was used to classify tomato-related plant infections and identify pests [17]. To solve the problem of accurately segmenting powdery mildew on leaves, a U-net architecture-based convolutional neural network model was proposed [18].
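As an illustration of the transfer-learning setup described above, the sketch below fine-tunes only a new classification head on top of a pretrained CNN for a leaf-image dataset. It uses Keras with InceptionV3 purely as an example (the surveyed papers use several architectures), and the directory name plant_dataset/ and the hyper-parameters are assumptions, not values from the surveyed work.

```python
import tensorflow as tf

IMG_SIZE = (299, 299)  # InceptionV3 input size

# Hypothetical directory layout: plant_dataset/<class_name>/<image>.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "plant_dataset", image_size=IMG_SIZE, batch_size=32)
num_classes = len(train_ds.class_names)

# Pretrained convolutional base (ImageNet weights), frozen for feature extraction.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # map [0, 255] to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```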



4 Results and Discussion

Using transfer learning with the AlexNet and GoogLeNet architectures, the results show that GoogLeNet consistently performs better than AlexNet, and transfer learning gives better results than training from scratch. The deep models perform best on the colored version of the dataset [5]. When CNN was compared with machine learning techniques (i.e., SVM, KNN, and BP), CNN showed better precision, with a testing accuracy of 99.6% and a 'healthy' classification accuracy of 100% [7]. The LeNet architecture used as the CNN to identify the image dataset shows effective performance under challenging conditions of illumination, background, size, pose, and resolution [9]. Fine-tuning did not show major changes in the overall accuracy, but the augmentation process had a greater influence in achieving adequate results [10]. The proposed model was able to classify 13 classes of paddy pests with 87% accuracy [11]. The ROI-aware DCNN showed better performance than modern methods: TL methods, the MDFEP method, FVE with SIFT, and a DCNN-based bilinear model [12]. The study investigates the effect of the size and variety of datasets on deep learning models applied with deep CNNs [13]. Leaf regions were detected with a high accuracy of 78% [14]. A mobile app was designed to help farmers check the health of plants and identify diseases [15]. For image recognition with transfer learning using a convolutional neural network, Inception v3 showed good results in automated cassava disease detection [16]. The Refinement Filter Bank framework obtained a classification rate of approximately 96% in the complex identification of tomato diseases and pest recognition; the system dealt with false positives due to the bounding box generator and with class imbalance [17]. The U-net architecture for the semantic segmentation task shows better accuracy for powdery mildew segmentation than other existing algorithms, but this method has more computational complexity, so it cannot be deployed on a portable device [18]. Leaf-scale classification was compared between image processing techniques and CNN; using transfer learning, the CNN gave better results when applying a deep MobileNet [19].

5 Conclusion

This paper conducts a survey of plant disease identification through deep learning approaches. In total, 15 papers have been reviewed, focusing on various aspects such as the data source, data preprocessing, techniques for data augmentation, technical details, and performance metrics. The paper also compares deep learning models with other models.



With deep learning methods, computer vision has performed better in resolving many problems involving plant leaves, including pattern recognition, classification, and feature extraction. Larger datasets and data augmentation techniques give more accuracy in the classification of plant diseases. Many smartphone applications have been developed for the automatic diagnosis of diseases, which can help farmers and experts make quick decisions. The finding of the survey is that deep learning models give the best results compared to other popular image processing techniques. Transfer learning in deep learning models gives better results than learning an end-to-end convolutional neural network from scratch. When the images used for training are examined on a set of images taken under different conditions, the accuracy of the model is greatly reduced. Finally, it is worth noting that the purpose presented here is not to replace existing solutions for diagnosis, but rather to complement them. Laboratory tests are ultimately more reliable than diagnosis based on visual symptoms, and initial-phase diagnosis is often challenging through visual inspection.

6 Future Scope

There should be a real-world application to identify diseases as they appear on the plants themselves. As deep learning models require large image datasets, new techniques of image data collection should be employed. With the help of a smartphone application, image data may be supplemented with location and time information for additional improvements in accuracy. A future model should also detect disease severity to allow timely treatment. The future product plan is to provide a common platform that farmers can use as a forum to exchange information and experience and to support peer-to-peer learning. The future scope also includes weather forecasts and news feeds depending upon the user's interests. This project was started with the aim of empowering farmers and eliminating dependence on external resources, and that is only possible through continuously improving the product with changes in requirements and usage patterns. As various techniques are used for the analysis of different plant disease issues, farmers need a single-point solution for all problems.

References 1. Sankaran, S., Mishra, A., Ehsani, R., & Davis, C. (2010). A review of advanced techniques for detecting plant diseases. Computers and Electronics in Agriculture, 72, 1–13. https://doi.org/ 10.1016/j.compag.2010.02.007. 2. Johannes, A., Picon, A., Alvarez-Gila, A., Echazarra, J., Rodriguez-Vaamonde, S., Navajas, A. D., & Ortiz-Barredo, A. (2017). Automatic plant disease diagnosis using mobile capture devices applied on a wheat use case. Computers and Electronics in Agriculture, 138, 200–209.



3. Ding, J., Chen, B., Liu, H., & Huang, M. (2016). Convolutional neural network with data augmentation for SAR target recognition. IEEE Geoscience and Remote Sensing Letters, 13(3), 364–368. https://doi.org/10.1109/LGRS.2015.2513754. 4. Volpi, M., & Tuia, D. (2017). Dense semantic labeling of subdecimeter resolution images with convolutional neural networks. IEEE Transactions on Geoscience and Remote Sensing, 55(2), 881–893. https://doi.org/10.1109/TGRS.2016.2616585. 5. Mohanty, S. P., Hughes, D. P., & Salathé, M. (2016). Using deep learning for image-based plant disease detection. Frontiers in Plant Science, 7, 1419. 6. Singh, U. P., Chouhan, S. S., Jain, S., & Jain, S. (2019). Multilayer convolution neural network for the classification of mango leaves infected by anthracnose disease. IEEE Access, 7, 43721– 43729. 7. Zhang, K., Zhang, L., & Wu, Q. (2019). Identification of cherry leaf disease infected by Podosphaera pannosa via convolutional neural network. International Journal of Agricultural and Environmental Information Systems (IJAEIS), 10(2), 98–110. 8. Manso, G. L., Knidel, H., Krohling, R. A., & Ventura, J. A. (2019). A smartphone application to detection and classification of coffee leaf miner and coffee leaf rust. arXiv preprint arXiv: 1904.00742. 9. Amara, J., Bouaziz, B., & Algergawy, A. (2017). A deep learning-based approach for banana leaf diseases classification. In BTW (Workshops). 10. Sladojevic, S., Arsenovic, M., Anderla, A., Culibrk, D., & Stefanovic, D. (2016). Deep neural networks based recognition of plant diseases by leaf image classification. Computational Intelligence and Neuroscience, 2016. 11. Alfarisy, A. A., Chen, Q., & Guo, M. (2018). Deep learning based classification for paddy pests & diseases recognition. In Proceedings of 2018 International Conference on Mathematics and Artificial Intelligence. ACM. 12. Yu, H.-J., & Son, C.-H. (2019). Apple leaf disease identification through region-of-interestaware deep convolutional neural network. arXiv preprint arXiv:1903.10356. 13. Barbedo, J. G. A. (2018). Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. Computers and Electronics in Agriculture, 153, 46–53. 14. Cap, H. Q., Suwa, K., Fujita, E., Kagiwada, S., Uga, H., & Iyatomi, H. (2018). A deep learning approach for on-site plant leaf detection. In 2018 IEEE 14th International Colloquium on Signal Processing & Its Applications (CSPA). IEEE. 15. Chandra, A. (2019). Diagnosing the health of a plant in a click. In Proceedings of ICoRD 2019 (Vol. 1). https://doi.org/10.1007/978-981-13-5974-3_52. 16. Ramcharan, A., Baranowski, K., McCloskey, P., Ahmed, B., Legg, J., & Hughes, D. P. (2017). Deep learning for image-based cassava disease detection. Frontiers in Plant Science, 8, 1852. 17. Fuentes, A. F., Yoon, S., Lee, J., & Park, D. S. (2018). High-performance deep neural networkbased tomato plant diseases and pests diagnosis system with refinement filter bank. Frontiers in Plant Science, 9, 1162. 18. Lin, K., Gong, L., Huang, Y., Liu, C., & Pan, J. (2019). Deep learning-based segmentation and quantification of cucumber powdery mildew using convolutional neural network. Frontiers in Plant Science, 10, 155. 19. Rançon, F., Bombrun, L., Keresztes, B., & Germain, C. (2019). Comparison of SIFT encoded and deep learning features for the classification and detection of Esca disease in bordeaux vineyards. Remote Sensing, 11(1), 1. 20. Itakura, K., Saito, Y., Suzuki, T., Kondo, N., & Hosoi, F. 
(2019). Estimation of citrus maturity with fluorescence spectroscopy using deep learning. Horticulturae, 5(1), 2.

Efficiency of Different SVM Kernels in Predicting Rainfall in India M. Kiran Kumar, J. Divya Udayan, and A. Ghananand

1 Introduction

Recently, the prediction of weather conditions has become a very difficult task, and it is one of the key problems for researchers and academicians [1–4]. Most Indians are directly or indirectly dependent on agriculture; some are directly engaged in farming, and others are involved in doing business with these goods. India has the capacity to produce food grains that can make a big difference to the Indian economy [5]. This production is primarily controlled by rainfall, and predicting rainfall patterns has been one of the chief issues for Indian farmers, as India has a wide variety of seasons and in each season one can grow entirely different crops. The crop seasons in India and in two other countries, Pakistan and Bangladesh, are categorized into three main seasons: rabi, kharif, and zaid (or zayad). The terms originate from Arabic, where rabi means spring, kharif means autumn, and zaid means summer. The kharif season runs from July to October, during the southwest monsoon; rabi runs from October to March; and zaid runs from March to July, mainly during summer. Each season has its own crops: rabi includes wheat, oats, onion, tomato, potato, peas, barley, linseed, mustard oil seeds, and masoor; kharif includes rice, sorghum, groundnut, jowar, soya bean, bajra, jute, maize, cotton, hemp, tobacco, ragi (a millet), and arhar; and zaid includes sugar cane, cucumber, rapeseed, sunflower, rice, cotton, oilseeds, watermelon, and muskmelon [6]. Support vector machines can be used to predict rainfall patterns, based on the concept of dependent





and independent variables. For example, if the amount of rainfall in the last five seasons is an independent variable, then the rainfall amount in the upcoming season will be a dependent variable. The next section presents related research works. Section 3 gives an overview of the support vector machine and background knowledge about various classifiers. Section 4 discusses the proposed method, and Sect. 5 presents the results. Section 6 summarizes the work, and references are given in the final section.

2 Related Works

Zaw and Naing [7], in "Modeling of Rainfall Prediction over Myanmar Using Polynomial Regression," explained how polynomial regression can be used to predict rainfall patterns; they used the method of least squares, combining all possible combinations of the columns and producing the output when the value reaches a certain threshold. Deeprasertkul and Praikan [8], in "A rainfall forecasting estimation using image processing technology," used image processing on satellite images to predict the rainfall pattern; the type of cloud is decided by K-means clustering. Mohapatra et al. [9], in "Rainfall prediction based on 100 years of meteorological data," used multiple techniques to find patterns in rainfall, including regression over the mined data; applying ensemble learning models to the data makes the predictions more accurate. Ashraf Awan and Maqbool [10], in "Application of Artificial Neural Networks for monsoon rainfall prediction," list the use of artificial neural networks in predicting the monsoon; in this approach, they used the LVQ supervised competitive neural network algorithm for pattern classification. Various techniques like linear regression, autoregression, multilayer perceptron (MLP), and radial basis function networks have been applied to estimate atmospheric factors like temperature, wind speed, rainfall, and meteorological pollution [11–17] (Fig. 1).

3 Overview of the SVM

In machine learning, support vector machines are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training examples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.



Fig. 1 Overall monthly rainfall in India

In addition to performing linear classification, SVMs can efficiently perform a nonlinear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces [10] (Figs. 2 and 3). The choice of kernel can help configure the SVR to avoid an excess of calculation in high-dimensional space. We considered three types of kernel, all of which are widely used in SVR:

Linear Kernel: K(x, y) = ⟨x, y⟩ + k = x^T y + k

172

M. Kiran Kumar et al.

Fig. 3 Types of SVM classifiers

Polynomial Kernel:  d K (x, y) = (γ x, y + k)d = γ x T y + k Radial Basis Function: K (x, y) = e−||x−y||

2

In the SVR model, γ is the kernel coefficient and d is the degree of the polynomial kernel function.

Efficiency of Different SVM Kernels in Predicting …

173

4 Proposed Methodology The approach used in this paper has illustrated in the below-mentioned diagram: Data Selection: The first observable step in any data mining procedure is data selection. For this work, we collected rainfall data form https://data.gov.in Web site. Data Processing: The data thus collected shall have many erroneous entries like missing values, duplicate values. The same is first cleaned for removing data anomalies. In this work, disappeared values in dataset are substituted with the modes, means based on existing available data. Extraction and Selection of Feature: Feature selection happens to be an important step in a supervised learning process that shall be used in this paper. SVM Approach Selection: In this paper, we proposed SVM classification method to classify the rainfall datasets. Training and Testing: The obtained model from the previous method is validated against the test set. In this paper experienced the datasets by using different support vector machine kernel. Three different kernels are compared on the basis of accuracy, training time, and prediction time.

5 Analysis of the Kernel Support vector machine classification accomplished its task effectively. The support vector machine classifiers have been tested. The accuracy score is calculated by calculating the standard deviation about the mean of all the values predicted by the different SVM kernels (“rbf,” “linear,” and “polynomial”). Table 1 shows the rainfall (in inches) and temperature (in celsius). Table 2 shows the results of different SVM kernels. Polynomial kernel gives the best result for the accuracy than other kernels. The different measuring attributes play a critical part in giving exact rainfall prediction. We observed that polynomial produces the best results with an accuracy of 83.014% shows maximum values in recall, F-Measure, and ROC as compared with other SVM kernels.

6 Conclusion The predicting of rainfall is a very essential factor in terms of water resource management, agriculture human life. As rainfall is a nonlinear in nature, values are not constant, so statistical model yields poor inaccuracy in the result. In this paper, the investigation of different kernels of support vector machine is presented to predict the

174

M. Kiran Kumar et al.

Table 1 Rainfall (in inches) and temperature (in celsius) Index

Rainfall (mm)

Temp (°C)

1

50

27

2

38

23

3

32

24

4

44

26

5

31

21

6

50

26

7

36

26

8

36

29

9

31

22

10

39

29

11

37

25

12

47

28

13

34

26

14

35

22

15

48

24

16

45

29

17

31

30

18

46

24

19

41

29

20

30

23

Table 2 Efficiency of different SVM kernels S.No SVM kernel

Precision Recall F-measure ROC MAE RMSE RAE

Accuracy

1

Radial bias function

0.83

0.84

0.832

0.81

0.20

0.32

49.13 77.115

2

Linear

0.82

0.85

0.841

0.79

0.19

0.38

52.17 79.317

3

Polynomial 0.873

0.89

0.891

0.85

0.201 0.31

58.69 83.014

rainfall. Therefore, from the above findings it can be concluded that the polynomial kernel is most efficient in predicting the rainfall from the annual temperature.

References 1. Data.gov.in. 2. Radhika, Y., & Shashi, M. (2009). Atmospheric temperature prediction using support vector machines. International Journal of Computer Theory and Engineering, 1(1), 55.



3. Riordan, D., & Hansen, B. K. (2002). A fuzzy case-based system for weather prediction. Engineering Intelligent Systems for Electrical Engineering and Communications, 10(3), 139– 146. 4. Guhathakurtha, P. (2006). Long-range monsoon rainfall prediction of 2005 for the districts and sub-division kerala with artificial neural network. Current Science, 90(6), 25. 5. Madhusudhan, L. (2015). Agriculture role on indian economy. Business and Economics Journal, 6, 176. https://doi.org/10.4172/2151-6219.1000176. 6. Harsha, K. S., Thirumalai, C., Deepak, M. L., & Krishna, K.C. Heuristic prediction of rainfall using machine learning techniques. 7. Zaw, W. T., & Naing, T. T. (2009). Modeling of rainfall prediction over Myanmar using polynomial regression. In 2009 International Conference on Computer Engineering and Technology (Vol. 1, pp. 316–320). IEEE. 8. Deeprasertkul, P., & Praikan, W. (2016). A rainfall forecasting estimation using image processing technology. In 2016 International Conference on Information and Communication Technology Convergence (ICTC) (pp. 371–376). IEEE. 9. Mohapatra, S. K., Upadhyay, A., & Gola, C. (2017). Rainfall prediction based on 100 years of meteorological data (pp. 162–166). https://doi.org/10.1109/ic3tsn.2017.8284469. 10. Ben-Hur, A., & Weston, J. (2010). A user’s guide to support vector machines. Methods in Molecular Biology (Clifton, N.J.) (Vol. 609, pp. 223–239). https://doi.org/10.1007/978-1-60327-2414_13. 11. Min, J. H., & Lee, Y. (2005). Bankruptcy prediction using support vector machine with optimal choice of kernel function parameters. Expert Systems with Applications, 28, 603–614. 12. Mohandes, M. A., Halawani, T. O., Rehman, S., & Ahmed Hussain, A. (2004). Support vector machines for wind speed prediction. Renewable Energy, 29, 939–947. 13. Pal, N. R., Pal, S., Das, J., & Majumdar, K. (2003). SOFM-MLP: a hybrid neural network for atmospheric temperature prediction. IEEE Transactions on Geoscience and Remote Sensing, 41(12), 2783–2791. 14. Yu, P. S., Chen, S. T., & Chang, I. F. (2006). Support vector regression for real-time flood stage forecasting. Journal of Hydrology, 328(3–4), 704–716. 15. Osowski, S., & Garanty, K. (2007). Forecasting of daily meteorological pollution using wavelets and support vector machine. Engineering Applications of Artificial Intelligence, 20, 745–755. 16. Lu, W.-Z., & Wang, W.-J. (2005). Potential assessment of the support vector machine method in forecasting ambient air pollutant trends. Chemosphere, 59, 693–701. 17. Mohd, R. (2018). Comparative study of rainfall prediction modeling techniques (A case study on Srinagar, J&K, India). Asian Journal of Computer Science and Technology, 7(3), 13–19. ISSN: 2249-0701.

Smart Trash Barrel: An IoT-Based System for Smart Cities Ruchi Goel, Sahil Aggarwal, A. Sharmila, and Azim Uddin Ansari

1 Introduction Due to the tremendous increase in population, people's day-to-day activities, together with the rising demand for industrial goods, have increased the level of garbage production in every country. Improper management of garbage collection and decomposition leads to diseases such as dengue and malaria, which in turn cause a large loss of human life. In most developing countries no proper process is followed to collect and dump garbage, which keeps their mortality rate higher than that of developed countries. In developing countries, garbage is collected by municipal corporation vans and dumped outside the city. Sometimes the municipal corporation does not get information about where garbage has accumulated or from where it has to be picked up, which leads to environmental pollution as well as to various hazardous diseases. In today's era, when everything is becoming digital, or we can say smart, garbage detection and collection should also be made smart. For this purpose, we can build a system that generates an alarm and sends a signal to the municipal corporation to come and collect the garbage.

R. Goel (B) · S. Aggarwal · A. Sharmila · A. U. Ansari Krishna Engineering College, Ghaziabad, India e-mail: [email protected] S. Aggarwal e-mail: [email protected] A. Sharmila e-mail: [email protected] A. U. Ansari e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_19


2 Literature Survey Much work has already been done in the field of garbage management. Earlier, garbage collection, dissemination and recycling were handled manually, but many systems have now emerged to automate the process. An IoT-based work sensed the garbage bin status and sent an alarm to the truck driver to get the bin emptied; it also provided the shortest path to reach the detected location [1]. Ghorpade-Aher [2] proposed a robotic technique that senses whether the bin is full; when the garbage reaches a threshold level, a robot moves the bin to the required place, empties it, and returns it to its original position. Balamurugan et al. [3] proposed an automated solution that sends an alert message to the concerned department when the bin is completely filled. In this system, ultrasonic sensors detect the level of garbage in the bin and send the information to a server via Bluetooth; the server uses an Arduino to process the received information and sends an alert message to the concerned department. Lokuliyana et al. [4] proposed a Raspberry Pi-based system that uses four sensors to sense whether a bin is full. If the bin is full, it is locked so that no more garbage can be dumped into it, and the system sends an alarm signal to the authority to collect the garbage. Jajoo [5] proposed a smart bin in which the fill level is measured against a threshold value; when the dustbin reaches the assigned threshold, it automatically sends an alarm signal to the authority using Wi-Fi. Muruganandam et al. [6] proposed a system in which an ultrasonic sensor detects whether the bin is full and a rain sensor detects rainfall; if either condition holds, the bin door is closed and a notification is sent to the concerned authority via SPI Ethernet. The same work also uses an infrared sensor to detect whether any object is thrown outside the bin; if so, a buzzer is turned on and a notification is again sent via SPI Ethernet. Nehete et al. [7] proposed a system whose sensor senses several fill levels of the bin (empty, 25, 50, 70 and 80% filled); when the bin is 70% full, it generates a message via a GSM module. It is also connected to an LCD device from which the concerned authorities can check the status of the various bins on their screen. Chaware et al. [8] proposed an IoT-based smart garbage bin management system with a sensor that detects when the bin is filled to a threshold value; a message is then sent to the concerned person via a GSM module. It uses two LEDs: a red one when the bin is completely full, so that no one puts more garbage into it, and a green one when the bin is empty, so that garbage can be put in.


Ibrahim [9] proposed a web server-based model that uses several sensors: a level sensor to check the level of garbage in the bin and a tilt sensor to check whether the bin is in its actual position or has fallen or tilted. If the bin is tilted or filled, a microcontroller sends a message to the authority on their web server via Wi-Fi. The work also covers route planning, which gives the route to reach the bin that has to be attended to or emptied. Velladurai [10] proposed a hardware system that detects gas emission from the garbage bin. If the emission exceeds a threshold value beyond which it causes health issues in humans, a buzzer is turned on and an alert message is sent to the concerned person. The gas levels are sensed by the respective gas sensors, and the sensed analog signal is converted into digital form via an ADC and displayed on an LCD screen in the concerned department, so that the bins and their gas emission levels can be monitored in real time. Poddar et al. [11] proposed a system that reports the bin status in real time. For this purpose, they added a real-time clock that senses the percentage to which the bin is filled and sends notifications to a real-time system; when the waste percentage exceeds a threshold value, a notification is sent to the concerned department to clean the bin. Joshi et al. [12] proposed a system for efficient monitoring of the garbage bins situated in an area. In a given geography, some bins require quick and immediate response, some require very little monitoring, and some require an intermediate amount; their system therefore supports a wide range of monitoring conditions such as critical state, less critical state, places requiring immediate action and places not requiring immediate action. This makes a smart bin system more effective, as it becomes clear where to focus more and where to focus less. Bharadwaj et al. [13] proposed a system that uses LoRa technology to establish communication between a garbage bin and the person concerned with cleaning it. They developed a mobile app in which a garbage collector logs in with his credentials and is shown the red-marked bins that have to be cleaned, along with the shortest route to reach each bin using GPS; when a bin is cleaned, its red mark turns green in real time. Kumar et al. [14] proposed a system that sends an alert to the municipal authorities when the garbage bin is filled and monitors the whole process in real time with proper verification using RFID. In this process the truck activities can be monitored at any time, and it can also be checked whether the whole process is proceeding at the correct pace.


3 Proposed System The smart trash barrel management system uses the following devices and sensors:
(1) Ultrasonic sensor: senses the level of waste material in the trash barrel; if it exceeds the threshold value, a message is sent to the concerned department.
(2) Rain sensor: checks for water or raindrops falling on the lid; if rain is detected, the lid closes automatically so that the waste material does not rot due to water.
(3) Vacuum pump: sucks out excess air, creating more space in the bin.
(4) Status indicator: indicates the status of the barrel; red means the barrel is full, green means it is ready for use.
(5) Raspberry Pi: the controller board that connects the whole system to the outside world.
(6) Air purifier unit: purifies the harmful gases released from the garbage.
(7) Solar panel and battery unit: the power unit, consisting of a solar panel and a battery.
Figure 1 shows the flowchart of the proposed model; a simplified sketch of the control logic is given below.
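The following is a hedged sketch of the decision logic just described, not the authors' firmware: sensor access is abstracted behind stub functions, and the 80% fill threshold, polling interval and alert mechanism are assumptions made only for illustration.

```python
# Toy sketch of the barrel-monitoring loop; sensor reads are stubbed out.
import random
import time

FILL_THRESHOLD = 80  # assumed percentage at which the authority is notified

def read_fill_level():      # stub: would read the ultrasonic sensor via GPIO
    return random.randint(0, 100)

def rain_detected():        # stub: would read the rain sensor on the lid
    return random.random() < 0.1

def send_alert(level):      # stub: would push a message and coordinates to the authority
    print("ALERT: barrel %d%% full" % level)

def monitor_barrel(cycles=3):
    for _ in range(cycles):
        level = read_fill_level()
        if rain_detected():
            print("closing lid to keep the waste dry")
        status = "red (full)" if level >= FILL_THRESHOLD else "green (usable)"
        print("level=%d%%  indicator=%s" % (level, status))
        if level >= FILL_THRESHOLD:
            send_alert(level)
        time.sleep(1)       # a real deployment would poll far less often

monitor_barrel()
```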

4 Conclusion In our proposed work, we form a smart trash barrel system that generates a message to the concerned authority when the garbage bin is full and also sends the bin's coordinates so that the garbage can be collected. The work also neutralizes the hazardous fumes emitted by the waste material and protects the waste from being spoiled by rain through the rain sensor. In addition, an extra technique is used to squeeze excess air out of the dumped waste so that more material can be put in the bin, making the whole system more effective. In this way, waste material can be disposed of in a very effective manner, which is a big challenge in today's world.


Fig. 1 Flowchart of smart trash barrel management system

References

1. Chaudhari, S. S. (2018). Solid waste collection as a service using IoT—Solution for smart cities. In 2018 International Conference on Smart City Emerging Technologies (pp. 1–5).
2. Ghorpade-Aher, J. (2018). Smart dustbin: An efficient garbage management approach for a healthy society. In 2018 International Conference on Information, Communication, Engineering and Technology (pp. 1–4).
3. Balamurugan, C. C. N., Shyamala, S. C., Kunjan, S., & Vishwanth, M. (2017). Smart waste management system. International Journal for Scientific Development Research, 1(9), 223–230.
4. Lokuliyana, P. G., Saranya, L., Rajeshwari, P., Priyadharshini, M., & Praveen Kumar, S. S. (2018). Garbage management system for smart city using IoT. International Journal of Pure and Applied Mathematics, 118(20), 597–601.
5. Jajoo, P. (2018). Smart garbage management system. In 2018 International Conference on Smart City Emerging Technologies (pp. 1–6).
6. Muruganandam, S., Ganapathy, V., & Balaji, R. (2018). Efficient IoT based smart bin for clean environment. In 2018 International Conference on Communication and Signal Processing (pp. 715–720).
7. Nehete, P., Jangam, D., Barne, N., Bhoite, P., & Jadhav, S. (2018). Garbage management using Internet of Things. In 2018 Second International Conference on Electronic Communication and Aerospace Technology (ICECA) (pp. 1454–1458).
8. Chaware, P. D. S. M., Dighe, S., Joshi, A., Bajare, N., & Korke, R. (2017). Smart garbage monitoring system using Internet of Things (IoT). International Journal of Innovative Research in Electrical, Electronics, Instrumentation and Control Engineering, 5(1), 74–77.
9. Ibrahim, M. (2017). Arduino-based smart garbage monitoring system (pp. 28–32).
10. Velladurai, V. S. (2017). And garbage alerting system for smart city (pp. 6–9).
11. Poddar, H., Paul, R., Mukherjee, S., & Bhattacharyya, B. (2018). Design of smart bin for smarter cities. In 2017 Innovations in Power and Advanced Computing Technologies (i-PACT 2017) (Vol. 2017, pp. 1–6).
12. Joshi, J., et al. (2017). Cloud computing based smart garbage monitoring system. In 2016 3rd International Conference on Electronic Design (ICED 2016) (pp. 70–75).
13. Bharadwaj, A. S., Rego, R., & Chowdhury, A. (2017). IoT based solid waste management system: A conceptual approach with an architectural solution as a smart city application. In 2016 IEEE Annual India Conference (INDICON 2016).
14. Kumar, N. S., Vuayalakshmi, B., Prarthana, R. J., & Shankar, A. (2017). IOT based smart garbage alert system using Arduino UNO. In IEEE Region 10 Annual International Conference Proceedings/TENCON (pp. 1028–1034).

Iterative Parameterized Consensus Approach for Clustering and Visualization of Crime Analysis K. Lavanya, V. Srividya, B. Sneha, and Anmol Dudani

1 Introduction Crime rate is a measure of the crimes that have occurred and is used to assess how effectively offenses are being controlled; the impact of policy on the risk of crime victimization is also studied. In Atlanta, the most populous city of the US state of Georgia, the crime rate is greater than the national median value and has been a major issue for the city since the mid-twentieth century. Some of the crime categories observed in Atlanta are kidnapping, fraud, robbery, vehicle theft, trespassing and missing persons. From Fig. 1 the inference can be drawn that larceny and assault are the major crimes reported in the city. Using this data on the crimes committed and their frequency, we can identify safer areas suitable for business or residence.

Fig. 1 Police release 2017–2018 crime rates in Atlanta

ATAC is a software package used for crime analysis that provides facilities for entering, manipulating and analyzing data; it additionally allows time-oriented analysis and provides a mechanism for spotting possible patterns in crime. CrimeStat is spatial statistics software that works together with GIS software to allow users to perform a crime survey using incident locations [1]. RCAGIS (Regional Crime Analysis Geographic Information System) is software for crime analysis that is used by various agencies [2]. CART is a strong, easy-to-use tool that uses massive databases to locate patterns and relationships using the decision tree algorithm. Linear regression is a helpful technique for locating the relationship between two continuous variables, one a predictor or independent variable and the other a response or dependent variable; this technique can be used in crime rate analysis to find the average crime rate based on the type of crime [3]. The Decision Stump operator is employed for generating a decision tree with only a single split, and the resultant tree is often used for classifying crimes. Random forest is a machine learning algorithm that produces accurate results and is one of the most used algorithms due to its ease of use [4]. In this paper, we employ clustering to divide Atlanta into zones. Clustering can be defined as the process of dividing the data or population into a number of sets such that the data points in a group are more similar to the other data points in the same group than to those in other groups; in simple words, the goal is to isolate sets with similar traits by assigning them into clusters. The paper describes the crime rates through the concept of consensus clustering, for which three clustering methods are used, namely K-means clustering, Gaussian mixture model clustering and bisecting K-means clustering.

K. Lavanya (B) · V. Srividya · B. Sneha · A. Dudani School of Computer Science and Engineering (SCOPE), Vellore Institute of Technology, Vellore, Tamil Nadu 632014, India e-mail: [email protected] V. Srividya e-mail: [email protected] B. Sneha e-mail: [email protected] A. Dudani e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_20


2 Literature Survey Sukhanov et al. have analyzed data fragments that forms the component of a cluster framework to propose a replacement dissimilarity measure on information—fragments and build a consensus function. This function helps in dealing with massive clustering issues without any compromise on the accuracy. Their work shows the implementation of consensus clustering on various datasets giving the results having high performance when compared to the existing techniques [5]. Li et al. propose a novel clustering rule, named Greedy optimization of K-means-based consensus clustering (GKCC). Inspired by the well-known greedy K-means that aim to unravel the sensitivity of K-means initialization, GKCC seamlessly combines greedy K-means and KCC along, achieves the merits inherited by GKCC and overcomes the drawbacks of the precursors. Moreover, a 59-sampling strategy is conducted to produce high-quality basic partitions and accelerate the algorithmic speed. In-depth experiments conducted on 36 benchmark datasets demonstrate the significant benefits of GKCC over KCC and KCC++ in terms of the target function values and standard deviations and external cluster validity [6]. Parekh et al. present a completely unique visualization and cluster technique known as Consensus Similarity Mapping (CSM). These techniques are used for integration of M-dimensional radiological information present. This method computes gives an ensemble of stable cluster results obtained after performing multiple runs of the K-means algorithm. It used the cluster stability index (CSI) to identify and analyze the stable configurations present in dataset which is essential in forming the ensemble. It is found that the performance of CSM works well on well-known artificial datasets and also in multi-parametric magnetic resonance imaging (MRI) information [7]. da Silva et al. perform a study on how different cluster algorithms manufacture totally different types of clusters and their relations. They assess the likelihood function to merge the individual cluster into a cluster which is found to be unique when compared to what the original algorithms produce. The most significant contribution of their work may is a new algorithmic program that merges previously generated clustering supported by must-link constraint rules designed from agreements among components discovered from such clustering. Experimental results indicate that the approach will merge the characteristics of the initial algorithm used thereby not deviating from the results and also producing output by finding hidden information [8]. Najar et al. study on unsupervised approach of data clustering. In their work, the performance measurements of Gaussian-based mixture models for data clustering namely: Gaussian mixture model (GMM), Bounded Gaussian mixture model (BGMM), Generalized Gaussian mixture model (GGMM) and Bounded Generalized Gaussian mixture model (BGGMM) are considered. The main objective of their study is to analyze the selection of the component model making it the critical step in mixture decomposition. Results show the close clustering accuracy between different models studied and concluding finite generalized Gaussian mixture model as the best for M-dimensional data [9]. Nagpal and Mann’s research work does not include many recent clustering techniques. Despite, it only deals with density-based clustering algorithms, like DBSCAN and DENCLUE. They have also


discussed the merits and demerits of density-based clustering algorithms along with their challenges. Much of the earlier research interest was in classifying the algorithms used in the field of statistics and applying them to specific traditional databases.

3 Background Study 3.1 Consensus Clustering Consensus clustering (CC) is an approach that depends on multiple iterations of the chosen clustering technique on the dataset. This provides metrics to assess the stability of clusters under our parameter choices (i.e., K and linkage), together with a data visualization element in the form of a heat map. The main reason for recommending CC over a simple clustering approach in pattern recognition is that CC can identify the pattern with greater accuracy, since it undergoes several iterations. Two summary statistics are calculated that help in verifying the stability of a selected cluster and the significance of certain observations within a cluster. First, the cluster consensus m(k) computes the average consensus value over every pair of observations in cluster k:

m(k) = \frac{1}{N_k (N_k - 1)/2} \sum_{i, j \in I_k,\; i < j} M(i, j)

Next, the item consensus m_i(k) focuses on a selected item (observation) and computes the average consensus rate of that item with all the other items in its cluster:

m_i(k) = \frac{1}{N_k - \mathbf{1}\{e_i \in I_k\}} \sum_{j \in I_k,\; j \neq i} M(i, j)

Using these summary statistics, one can rank clusters by their consensus or determine observations that are central in their cluster.
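The toy sketch below illustrates how these two statistics could be computed from a consensus matrix M (where M[i][j] is the fraction of runs in which items i and j were clustered together) and a final label vector. The matrix and labels are fabricated for the example and are not taken from the paper's data.

```python
# Cluster consensus m(k) and item consensus m_i(k) on a toy consensus matrix.
import numpy as np

M = np.array([[1.0, 0.9, 0.8, 0.1],
              [0.9, 1.0, 0.7, 0.2],
              [0.8, 0.7, 1.0, 0.1],
              [0.1, 0.2, 0.1, 1.0]])
labels = np.array([0, 0, 0, 1])            # toy consensus clustering result

def cluster_consensus(M, labels, k):
    idx = np.where(labels == k)[0]
    n = len(idx)
    if n < 2:
        return 1.0
    pairs = [(i, j) for a, i in enumerate(idx) for j in idx[a + 1:]]
    return sum(M[i, j] for i, j in pairs) / (n * (n - 1) / 2)

def item_consensus(M, labels, i, k):
    members = np.where(labels == k)[0]
    others = [j for j in members if j != i]
    denom = len(members) - (1 if labels[i] == k else 0)
    return sum(M[i, j] for j in others) / denom if denom else 0.0

print(cluster_consensus(M, labels, 0))     # m(k) for cluster 0
print(item_consensus(M, labels, 0, 0))     # m_i(k) for item 0 within cluster 0
```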

3.2 K-Means Clustering K-means clustering is a type of unsupervised learning paradigm that can be employed when the data is not labeled. The objective of this algorithm is to find out groups


within data for a fixed value of K. The data points can be clustered using various distance metrics and feature similarity. The centroids of the K clusters are the result and are used to label the data points of the training data set. The algorithm aims at minimizing an objective function known as the squared-error function, given by

J(V) = \sum_{i=1}^{c} \sum_{j=1}^{c_i} \lVert x_i - v_j \rVert^2

Here, ||x_i − v_j|| is the Euclidean distance between x_i and v_j, c_i is the number of data points in the ith cluster, and c is the number of cluster centres.
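A minimal sketch of this step with scikit-learn is given below; the synthetic coordinate array stands in for the incident locations, and the choice of K = 5 is an assumption for illustration, not the value used in the paper.

```python
# Hedged sketch: K-means over stand-in incident coordinates with scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

coords = np.random.rand(200, 2)              # stand-in for (latitude, longitude) pairs
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(coords)
print(kmeans.cluster_centers_)               # centroids v_j of the K clusters
print(kmeans.inertia_)                       # value of the squared-error objective J(V)
```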

3.3 Bisecting K-Means Clustering Bisecting K-means is a combination of K-means and hierarchical clustering. The process begins with a single cluster and bisects the data until the desired number of clusters is obtained. The basic procedure for locating K clusters is:
(1) Select a cluster to split.
(2) Find two sub-groups using the basic K-means rule (the bisecting step).
(3) Repeat step 2 for ITER iterations and choose the split that yields the clustering with the highest overall similarity.
Repeat steps 1–3 until the required number of clusters is reached. A compact sketch of this procedure is given below.
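The following is a simplified sketch of bisecting K-means built on scikit-learn's two-way K-means, and is an assumption-laden illustration rather than the paper's implementation: the cluster to split is chosen by largest within-cluster SSE and the ITER re-trial step is folded into a single split.

```python
# Compact bisecting K-means sketch using repeated 2-way KMeans splits.
import numpy as np
from sklearn.cluster import KMeans

def bisecting_kmeans(X, k):
    clusters = [np.arange(len(X))]                        # start with one cluster
    while len(clusters) < k:
        sse = [((X[idx] - X[idx].mean(axis=0)) ** 2).sum() for idx in clusters]
        target = clusters.pop(int(np.argmax(sse)))        # split the worst cluster
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[target])
        clusters += [target[labels == 0], target[labels == 1]]
    out = np.empty(len(X), dtype=int)
    for c, idx in enumerate(clusters):
        out[idx] = c
    return out

X = np.random.rand(300, 2)
print(np.bincount(bisecting_kmeans(X, 4)))                # sizes of the 4 clusters
```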

3.4 Gaussian Mixture Model Based Clustering Gaussian mixture models (GMMs) are an extension of the K-means algorithm, but they can also be a robust tool for estimation beyond simple clustering. A Gaussian mixture model attempts to find a mixture of multi-dimensional probability distributions that best fits the input dataset. In this approach, every cluster is described by its centroid (mean), its covariance, and the size of the cluster (weight). To identify clusters, we fit a collection of K Gaussians to the data and then estimate the Gaussian distribution parameters, such as the mean and variance for every group, together with each cluster's weight. Once the parameters are learnt, we measure, for every data point, the probability of the point belonging to each of the clusters [10]. Mathematically, we can define a Gaussian mixture model as a mixture of K Gaussian distributions,


which means it’s a weighted average of K Gaussian distribution. Thus, we will write data distribution as

p(x) =

k 

   πk N x|μk , k

k=1

Here, N (.x|mu_k,sigma_k.) denote cluster in data.with mean. mu_k.,weight.pi_k. and covariance. epsilon_k.
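A short sketch of GMM-based clustering with scikit-learn is shown below; the synthetic feature matrix and the choice of three components are illustrative assumptions only.

```python
# Sketch: fit K Gaussians and read off the soft membership probabilities.
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.random.rand(200, 2)                       # stand-in for the crime features
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(X)
print(gmm.weights_)                               # pi_k  (cluster weights)
print(gmm.means_)                                 # mu_k  (cluster means)
print(gmm.predict_proba(X[:5]))                   # probability of each point per cluster
```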

4 Methodology Clustering is an important discovery technique in the field of data processing and analysis. We have employed consensus clustering as our area of experimentation. A combination of R and Cypher statements is used: the Cypher statements load the data into the Neo4j platform with the help of its REST API, while the ability of R to preprocess the data and support the clustering process is exploited through the RStudio IDE for R [11]. Neo4j's platform [11] is used to traverse and analyze the networks holding the useful information [12].
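For readers who do not work from R, the sketch below shows the same idea of loading a crime record into Neo4j with a Cypher statement, using the official Python driver. The connection URI, credentials and node properties are assumptions for illustration and do not come from the paper.

```python
# Minimal sketch of loading one crime record into Neo4j via a Cypher MERGE.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
record = {"id": "170010001", "category": "LARCENY", "lat": 33.749, "lon": -84.388}

with driver.session() as session:
    session.run(
        "MERGE (c:Crime {incident_num: $id}) "
        "SET c.category = $category, c.latitude = $lat, c.longitude = $lon",
        **record)
driver.close()
```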

4.1 Dataset The paper investigates the crime rate by processing the crime data set of Atlanta from 2007 to 2018. The crime data set has fifteen zones, each with different types of crime. The dataset has been taken from GitHub because it is secure and static and offers a convenient way to manage not just a single website but over 200 individual open data and API projects. The data originally contained around 12 fields; for our study we remove the irrelevant fields and finalize nine fields, the description of which is given in Table 1.

4.2 Consensus Clustering Consensus clustering is an iteration-based algorithm [13] that uses sub-samples which contribute to 90% of the data set. This algorithm provides the matrix to assess the sturdiness of each cluster (M) and the parameter choices given in Table 4.1. The paper follows the given below consensus clustering algorithm:

Iterative Parameterized Consensus Approach for Clustering …

189

Table 1 Notation representation

Attributes    Description
IncidntNum    Primary key for every crime
Category      Represents different types of crimes
Descript      Provides the details for the specific crime
DayOfWeek     The day of the week of a particular crime
Date          The date of a particular crime
Time          The time of a particular crime (HH:mm)
Resolution    The status of the prosecution
Address       The location of the offender
Location      The location specifying latitude and longitude of the crime

Algorithm: Consensus Clustering
ConsensusClustering(D, Resample, H, P, A, C)
Input:
D is the input data set in matrix format M_i = [X_0, ..., X_19, Y_0, ..., Y_19]
Resample is the re-sampling scheme used for extracting a subset from the data set
H is the number of resamples
P is the percentage of rows extracted each time in the sub-sampling procedure
A is the consensus clustering algorithm
Output:
C is the updated set of clusters, C = {C_1, ..., C_k}, k ∈ N
Step 1: for each M_i do
Step 2: initialize empty connectivity matrices
Step 3: set createNewCluster = True
Step 4: for 1 ≤ h ≤ H do: perform Resample on D and assign the result to D(h)
Step 5: group the elements of D(h) into C clusters using algorithm A
Step 6: build a connectivity matrix based on A's results for C
end for
Step 7: using the connectivity matrices, build a consensus matrix M_{x,y} for C
end for each
Step 8: return C(M_{x,y})
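The sketch below illustrates Steps 4–7 of the algorithm: repeatedly sub-sample the data, cluster each sub-sample, and accumulate a consensus matrix whose entries give the fraction of co-sampled runs in which two items fell in the same cluster. K-means is used here as the base algorithm A, and the parameter values (H = 50, P = 0.9) are assumptions for the example.

```python
# Consensus matrix from repeated sub-sampling and re-clustering (toy sketch).
import numpy as np
from sklearn.cluster import KMeans

def consensus_matrix(X, n_clusters=3, H=50, P=0.9, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    together = np.zeros((n, n))      # times i and j were clustered together
    sampled = np.zeros((n, n))       # times i and j appeared in the same resample
    for _ in range(H):
        idx = rng.choice(n, size=int(P * n), replace=False)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X[idx])
        for a in range(len(idx)):
            for b in range(a + 1, len(idx)):
                sampled[idx[a], idx[b]] += 1
                if labels[a] == labels[b]:
                    together[idx[a], idx[b]] += 1
    M = np.divide(together, sampled, out=np.zeros_like(together), where=sampled > 0)
    return np.maximum(M, M.T)        # symmetrise the upper-triangular counts

X = np.random.rand(60, 2)
print(consensus_matrix(X).round(2))
```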

5 Results and Discussion Odd patterns and summary statistics can be revealed by histograms and heat maps. The clusters obtained by the K-means, bisecting K-means and Gaussian mixture algorithms, together with their detailed statistics, are represented as heat maps below. The Atlanta maps discussed in this section are constructed with the Neo4j plugin of RStudio, using spatial indexing on the attribute Location. The matrices obtained by consensus clustering visualize the items as rows and columns, with values ranging from 0 to 1 depicting clustering capability; the consensus matrices are illustrated with a dendrogram atop the heat map.

5.1 K-Means K-means is a clustering technique that achieves its objective by finding the K positions of the clusters that minimize the distance from the data points to the chosen cluster centroids [14]. For all three clustering mechanisms discussed, the heat map shows the intensity of crime based on the hour of theft and the day of the week: the darker region denotes a high crime rate during that period, the lighter region a relatively low intensity of crime, and the mixed region the average crime rate. In the graphical representation, the data points in the clusters are drawn as circles, each coloured orange, yellow or green, where orange represents the most crime-affected area and green the safest area (Fig. 2, Graph 1).
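As an aside, the day-of-week by hour intensity grid behind such heat maps can be formed with a simple pivot; the sketch below uses synthetic counts and the column names of Table 1, so it illustrates the idea rather than reproducing the paper's figures.

```python
# Form a DayOfWeek x Hour crime-count grid; darker heat-map cells = larger counts.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "DayOfWeek": rng.choice(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"], 500),
    "Time": ["%02d:00" % h for h in rng.integers(0, 24, 500)],
})
df["Hour"] = df["Time"].str.slice(0, 2).astype(int)
grid = df.pivot_table(index="DayOfWeek", columns="Hour", aggfunc="size", fill_value=0)
print(grid)
```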

5.2 Gaussian Mixture Model In this approach, we describe each cluster by its centroid (mean), covariance, and the size of the cluster (Weight). Here rather than identifying clusters by “nearest” centroids, we fit a set of K Gaussians to the data. We estimate distribution parameters of the Gaussian function such as the variance and mean for individual clusters along with the weight of a cluster. After learning the parameters for each data point calculate the clustering probability (Fig. 3: Graph 2).


Fig. 2 K-means heat map for crimes

Graph 1 K-means map

5.3 Bisecting K-Means In Bisecting K-means technique the centroids are first initialized using approximation methods. Then K-means is performed iteratively with the aim of Bisecting the data points. This process continues for a certain number of n trials. The best cluster is thus chosen to have the minimum total SSE in one of the trials. After this, the Bisecting process continues with this cluster until we obtain the desired number of clusters (Fig. 4: Graph 3).


Fig. 3 Gaussian mixture model heat map for crimes

Graph 2 Gaussian mixture model graph

5.4 Consensus Clustering The three clusterings obtained by K-means, the Gaussian mixture model and bisecting K-means are combined, masking their individual drawbacks, to produce the consensus clustering. The heat map shown in Fig. 5 illustrates the consensus clusters categorized year-wise and describes the intensity of the crime rate based on the day of the week and the


Fig. 4 Bisecting K-means Heat map for crimes

Graph 3 Bisecting K-means graph


Fig. 5 Consensus clustering heatmap for crimes

hour of theft. The darker region denotes a high crime rate during that period and the lighter region a relatively low intensity of crime, while the mixed region denotes the average crime rate. In Graph 4 the data points in each cluster are drawn as circles, coloured orange, yellow or green, where orange marks the most crime-affected areas and green the safest ones.

5.5 Comparative Analysis Graph 5 clearly shows that, for the dataset used, consensus clustering covers ~100% of the data points, bisecting K-means covers ~90%, the Gaussian mixture covers ~75%, and K-means covers less than ~65% of the data points. From this, we can conclude that consensus clustering achieved the best result in dividing our dataset into zones, with maximum coverage of the individual data points (Graph 5).


Graph 4 Consensus clustering map

Graph 5 Performance evaluation comparison

6 Conclusion Crime exists across the world in numerous forms. These crimes have different statistical trends which change in the course of time. Crime Analysis is done to analyze crime patterns around the globe and provide preventive measures. In this paper, the implementation of consensus clustering, a technique to represent the consensus obtained across multiple runs of the chosen clustering algorithm is considered. In order to analyze the steadiness of discovered clusters with reference to sampling


variability, crime analysis is taken as a sample application, implemented in R with the Neo4j library, to indicate how consensus clustering may be used with the K-means, bisecting K-means and Gaussian mixture clustering algorithms. For the dataset depicting crimes in Atlanta city taken in the application, different heat maps and a comparative scatter plot are drawn for the above-mentioned clustering algorithms based on the percentage of data points considered for clustering. From the results, we conclude that consensus clustering yields the best result, considering all the data points. Consensus clustering can, in addition, be used to determine the optimal number of clusters for a dataset and clustering algorithm. In the future, consensus clustering may be applied to any clustering algorithm and is helpful for comparing different algorithms on the same dataset.

References

1. Li, X., & Liu, H. (2018). Greedy optimization for K-means-based consensus clustering. Tsinghua Science and Technology, 23(2), 184–194.
2. Parekh, V. S., & Jacobs, M. A. (2016). A multidimensional data visualization and clustering method: Consensus similarity mapping. In 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI) (pp. 420–423). IEEE.
3. da Silva, G. R., & Albertini, M. K. (2017). Using multiple clustering algorithms to generate constraint rules and create consensus clusters. In 2017 Brazilian Conference on Intelligent Systems (BRACIS) (pp. 312–317). IEEE.
4. Najar, F., Bourouis, S., Bouguila, N., & Belghith, S. (2017). A comparison between different Gaussian-based mixture models. In 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA) (pp. 704–708). IEEE.
5. Levine, N. (2017). CrimeStat: A spatial statistical program for the analysis of crime incidents. Encyclopedia of GIS, 381–388.
6. United States Department of Justice, Criminal Division Geographic Information Systems Staff, & Baltimore County Police Department. (2002). Regional Crime Analysis Geographic Information System (RCAGIS). Ann Arbor, MI. https://doi.org/10.3886/ICPSR03372.v1.
7. Khurana, S. (2017). Linear regression with example. http://web.archive.org/web/20190305153704/https://towardsdatascience.com/linear-regression-with-example-8daf6205bd49?gi=a9657096fcca.
8. Soo, K. (2016). Random forest tutorial: Predicting crime in San Francisco. http://web.archive.org/web/20190305154355/https://algobeans.com/2016/08/25/random-forest-tutorial/.
9. Sukhanov, S., Gupta, V., Debes, C., & Zoubir, A. M. (2017). Consensus clustering on data fragments. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4631–4635). IEEE.
10. Gupta, T. (2018). Gaussian mixture model. http://web.archive.org/web/20190305154014/https://www.geeksforgeeks.org/gaussian-mixture-model/.
11. Swetha, G. (2015). Crime data investigation and visualization using R. International Journal of Emerging Technology and Innovative Engineering, 1(5).
12. Neo4j. (2019). GraphGist: City of London crime analysis. http://web.archive.org/web/20190305154146/https://neo4j.com/graphgist/city-of-london-crime-analysis.
13. Fernando, L. (2018). Consensus clustering: A robust clustering method with application for song playlists. http://web.archive.org/web/20190305153523/towardsdatascience.com/consensus-clustering-f5d25c98eaf2.
14. Michael, J. (2018). Understanding K-means clustering in machine learning. http://web.archive.org/web/20190305153900/https://towardsdatascience.com/understanding-k-means-clustering-in-machine-learning-6a6e67336aa1?gi=e839a6e6aecc.

Assistive Technology for Low or No Vision Soumya Thankam Varghese and Maya Rathnasabapathy

1 Introduction Vision loss affects almost all our day-to-day activities and lowers the quality of life. Recent advances in information and communication technologies, along with medical intervention, have paved the way for the development of innovative wearable and non-wearable technologies. ICT broadens the dimensions of our life at a rapid speed, and the adoption rate is moving in parallel. The increased emphasis on digitalization in India demands a lot from us and has made ICT an integral part of our life. The most revolutionized area is education, and especially education for children who are differently abled: education has become more inclusive, and children no longer have to stay away from opportunities for any reason. The three basic challenges faced by children with difficulty in the visual processing of information are accessibility issues, mobility, and meaningful experience, as determined by Lowenfeld [1]. The purpose of assistive technologies is simply to help them overcome or manage life against these challenges. Without doubt, we can say that the implementation of assistive technology makes people equally competent. Research on assistive technologies has usually focused on navigation and mobility issues, while recent advances such as printing and communication technologies make life simpler and more dynamic [2]. The scope of assistive technologies ranges from the physical properties of vision to the psychological factors for independent living. An assistive technology device can be any product which can modify or improve the abilities of differently abled people. The term encompasses a wide variety of tasks such as evaluation of needs, selecting, designing, fitting, customizing, adapting, applying, retaining, repairing, and technical assistance [3].

S. T. Varghese (B) · M. Rathnasabapathy Vellore Institute of Technology, Chennai, Tamil Nadu, India e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_21

The integration of all


these technologies has important applications in general as well as special education. Assistive technologies had a very deep impact on human life is non-debatable fact now [4]. Hersh and Johnson [5] came with a much acceptable definition that it helps people with disabilities to bridge the gap between their wishes and the availability of social infrastructure for fulfilling those wishes. Higgins et al. [6] showed that computerbased assessment is relatively good for reading comprehension scores compared to manual scores. Assistive technology is making the impossible happen by helping people to accomplish what more difficult for them. The Talking Tactile Tablet (assistive technology with speech output) has helped students to improve mathematical performance especially in the areas of geometry [7]. The reviews have detailed about a wide range of AT devices like audio, tactile, adapted visual, digital, and even non-digital.

2 Education and Art 2.1 NVDA in Education Modified visual assistive devices include screen magnifier, large monitor, modified keyboards, and screen readers. NVDA is an open-source screen reader that enables people with visual impairment to read and comprehend the content of the web. The requirements of people with visual impairment varies on the basis of the nature and extent of impairment. Screen reading softwares are of different types and of course, included under the category of assistive technologies. To name a few, we have selfvoicing programmes, graphical, command–line, and even cloud-based. But NVDA is free open-source software started by [8]. It is freely available and comfortable for English speaking people. It works through intercepting every input and output and presenting it in an audio or braille way. NVDA is classified into different subsystems like core loop, ad-ons manager, event handler and output handler, etc. The graphical part is powered by wxPython. The use of NVDA helps the students to work with websites that are more visual in nature. NVDA helps the student to browse through different fonts, colors, and graphics appearing on the website. Children can easily browse through the contents and find out what they really looking for. The factor which makes it more challenging is the lack of training for how to do the search. Especially when the alignment of the letters or paragraphs differs for graphical presentations it may create confusion for children. Ad on, pop-up, and auto-playing videos are some other factors deciding the ease of usage on the web. If children can get enough knowledge to identify these factors it will make the task easier for them. They really need guidance for easy navigation without the mentioned problems.


2.2 NVDA and Challenges Augusto and Schroeder emphasized the fact that a lack of information on assistive technologies can hamper our development towards information technology. NVDA is demanding much attention because of its user-friendly nature. It is easily instructional for the service provider and I turn makes the leaner to practice the guidelines easily. NVDA made the computers freely accessible to people who are with low or no vision. It has a synthetic voice that reads as the cursor moves over it. It creates a way for the 285 million people who are blind or vision impaired to get along with the mainstream society and to provide their contributions for holistic growth of humanity. The tools of assistive technologies made their life fruitful. This screen reader helps the people to produce written content also without many difficulties. The portable nature makes it more convenient to use. The technical problems with browsers and processors are also often reported in the past. The new features of NVDA make things easier for learners. There are features for changing the shape of braille cursor and also new translation tables are added. Automatic spell errors are suggestions are working cool for the leaner to give inputs easily. The delay or failure to process the updated content of the web is another challenging area with NVDA [9]. The new features of NVDA tried to overcome many of the problems reported by people with low or no vision [10]. People are expecting more techniques for writing large paragraphs without messing up with the fond, graphics, or even the alignment of the paper. Individualized needs and customizing the technologies towards these needs are the priority areas for researchers now. Apart from that content adaptation and ways for modification in user-friendly ways is also specific areas needed to be addressed. These modifications will take screen readers in the forwarding direction for sure.

3 Conclusion The technology has been able to replace the century’s long usage of cane stick or bamboo pole and people who are having difficulties in visual processing are more independent and confident. Technologies help them to broaden their horizon or to get equalized with others in society [11]. The manual braille writers are not replaced by the screen readers and the task of reading made education a more enjoyable activity. Review of literature shows an interesting trend that the usage of assistive technology among people with visual impairment is reported or investigated in connection with education rather than independent living measures. When a person becomes literate he or she has enough capacity to design the strategies for their independent mode of living. Training related to types and usage of assistive technologies should be added to the curriculum. Children will be more confident and this, in turn, influences the learning process to a greater extent. When it becomes an essential component of their


learning process naturally people who are working for the production or delivery of assistive technologies can gather more inputs about the usability, accessibility, and availability issues. Moreover, the urgency to improve the quality of web interaction through personalizing the software has more evident these days.

References

1. Lowenfeld, B. (1973). The visually handicapped child in school. New York, NY: John Day.
2. Terven, J. R., Salas, J., & Raducanu, B. (2014). New opportunities for computer vision-based assistive technology systems for the visually impaired. Computer, 47, 52–58.
3. Sah, P. (2013). European Academic Research, 1, 2268–2280.
4. Cooper, H. L., & Nichols, S. K. (2007). Technology and early braille literacy: Using the Mountbatten Pro Brailler in primary-grade classrooms. Journal of Visual Impairment & Blindness, 101, 22–31.
5. Hersh, M. A., & Johnson, M. A. (2008). On modelling assistive technology systems—Part I: Modelling framework. Technology and Disability, 20, 193–215.
6. Boone, R., & Higgins, K. (2007). The role of instructional design in assistive technology research and development. Reading Research Quarterly, 42, 135–140.
7. Landau, S., Russell, M., Gourgey, K., Erin, J. N., & Cowan, J. (2003). Use of the talking tactile tablet in mathematics testing. Journal of Visual Impairment & Blindness, 97, 85–96.
8. Jindal, S., Singh, M., & Singh, N. Screen reading software for Indian users: A challenge.
9. Kelly, B., Nevile, L., Fanou, S., Ellison, R., & Herrod, L. (2009). From web accessibility to web adaptability. Disability and Rehabilitation: Assistive Technology, 4, 212–226.
10. Gerber, E. (2003). The benefits of and barriers to computer use for individuals who are visually impaired. Journal of Visual Impairment and Blindness, 97, 536–550.
11. Augusto, C., & Schroeder, P. (1995). Ensuring equal access to information for people who are blind or visually impaired. Journal of Visual Impairment and Blindness, 89, 9–13.

A Socio Responding Implementation Using Big Data Analytics S. GopalaKrishnan, R. Renuga Devi, and A. Prema

1 Introduction Social sentiments can be understood through the brand monitoring that helps to run a business. Information can be retrieved for processing by observing online conversations and other services rendered to collect feedback. Based on this feedback, the text is categorized as positive or negative, high or low. Recent advancements in data analytics have improved the methods for handling large data using different techniques, and many creative tools, be it artificial intelligence or deep learning, are used to improve the effectiveness of data analytics. The technique used may differ, but the ultimate goal is to arrive at an algorithm that produces a sentiment table to identify the usage of social media by kids. This is eventually done by reading the history of each and every login the users make. The observed texts are captured based on the following steps: 1. users' responses towards the site used, obtained through feedback; 2. the reactions and intentions of the users based on their concerns and way of website usage. Based on these conversations, with different combinations of the analysis performed using the NLP algorithm, millions of brand conversations can be analyzed to test human-level accuracy.

S. GopalaKrishnan (B) · R. Renuga Devi · A. Prema Department of Computer Science, School of Computing Sciences, VISTAS, Chennai, India e-mail: [email protected] R. Renuga Devi e-mail: [email protected] A. Prema e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_22

The text feedback obtained is analyzed with text classifiers; sentiment analysis is one of the best tools to analyze the incoming


messages and gives the respective positive, negative or neutral results. This acts as the basic building block of the entire sentimental analysis network (SAN).

2 Methodology The analysis can be extended for a correct output by performing an intent analysis, where the user's opinion is taken as poll feedback and analyzed depending on the answers collected through related opinions, news, suggestions, appreciations, complaints or queries.

Analyzing intent of textual data. To evaluate actions, contextual semantic search (CSS) is applied. It takes sample copies of messages as input, chooses closely related ones and then filters them for further processing. A keyword is chosen and the relevant words are matched to get the result using the search algorithm presented at the end of this research work. The basic idea is an AI technique that converts each word into a point in a hyperspace; the distance between these points is calculated to identify the similarity of the received text. Data samples can be obtained from various internet sources, for example 30,000 or more comments from Facebook, and similarly from Twitter and news article reviews, for the next level of text classification. If a value is one it denotes a positive comment, and if it is zero a negative one; with these values the usage level of a website by a user can be evaluated. Taking this as an example, the process is designed to evaluate the usage of the internet by kids. After testing on all the sample resources, the algorithm is derived such that dependency parsing is applied to test the sentiments, which helps to find the overall sentiment. A sentiment score is obtained as follows (a toy sketch is given after the list):
(i) Count the number of positive and negative words; positive and negative values are evaluated depending on the positive and negative terms used in a sentence.
(ii) If a negative is found with more than two possibilities, the site is considered illegal to watch.
(iii) All the scores are multiplied by weights; the weight is calculated from the polarity of the text, which is negative, positive or neutral. The score is a precise numerical representation obtained by evaluating the received characters after conversion using a mathematical operator. Though it is available for all types of text in documents, named entities, themes and queries, this research mainly concerns the text usage of users under 18, to identify the right or wrong usage of words by users of the estimated age.
(iv) The total sentiment score is obtained by adding the positive and negative scores together (Fig. 1).
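The toy sketch below walks through steps (i)–(iv) with a tiny hand-made lexicon. The word lists, weights and threshold are illustrative assumptions only and are not the authors' actual scoring rules.

```python
# Toy sentiment score: count positive/negative words, weight them, and sum.
POSITIVE = {"good", "great", "safe", "useful"}
NEGATIVE = {"bad", "illegal", "unsafe", "harmful"}

def sentiment_score(text, pos_weight=1.0, neg_weight=-1.0):
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)       # step (i): positive count
    neg = sum(w in NEGATIVE for w in words)       # step (i): negative count
    flagged = neg > 2                             # step (ii): too many negatives
    score = pos * pos_weight + neg * neg_weight   # steps (iii)-(iv): weighted total
    return score, flagged

print(sentiment_score("this site is good and useful"))
print(sentiment_score("bad illegal unsafe harmful content"))
```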


Fig. 1 Sentiment analysis

This is possible by scoring up the negation, sentiment analysis, text classification and analytics. With the implementation of all these as major criteria as the algorithm will be designed to get the identification of the target fixed.

3 Existing and Proposed Algorithm Sentiment analysis algorithm is the existing algorithm so far it is used to evaluate the emotions by points and it cannot analyze large volume of data in single shot but the proposed algorithm can analyze large volume of data as it is an application of NLP. In other words, it is also known as emotion extraction or opinion mining, a useful field of research in text mining. Positive, Negative or Neutral is classified by the polarity of the text. The method that is used to find the positive or negative is the ‘bag of words’ method, where the ‘Good’ are positive and the ‘Bad’ are negative. For smileys and indicators like:) are positive and :(are negative. Finally, a deep study of NLP is required to analyze syntax and pragmatics level (Fig. 2). In the existing algorithm context of marketing is focused using Radian6 which is a combination of both text and emoticons. In this proposed research work, the analysis is focused for users who are less than 18 years. This proposed algorithm is determined to obtain the effectiveness of messages and the level of its observation. This proposed algorithm uses sentiment analysis to identify various tasks like subjectivity, detection, sentiment classification, aspect and feature term extraction. This research work presents the survey of main approaches used for sentiment classification.

4 Conclusion Analyzing sentiments is a useful technique in the current scenario, which helps to forecast trends and to gather public opinion. The basic steps have been designed and literature review is performed periodically to give a sketch to the design of helping the society from getting into unwanted sites. Since nowadays students are much

Fig. 2 Step by step process of sentiment analysis: Data Collection → Data Preprocessing → Bag of words → Algorithm → Training Model → Test Set

interacting with the internet, they need to be watched. At the end of this research work, all steps are taken to develop this as an App for parents, so that they can monitor easily from their mobile phones. This is the world of internet and also the world of cyber-crimes, more internet more misuse, so this is the right time to save the kids from watching or using unwanted sites. Therefore, parents should analyze the internet usage of children by monitoring them frequently, to help the young minds to grow without cyber corruption.

References

1. Tsytsarau, M., & Palpanas, T. (2012). Survey on mining subjective data on the web. Data Mining and Knowledge Discovery, 24, 478–514.
2. Wilson, T., Wiebe, J., & Hoffman, P. (2005). Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of HLT/EMNLP.
3. Liu, B. (2012). Sentiment analysis and opinion mining. In Synthesis lectures on human language technologies.
4. Yu, L. C., Wu, J. L., Chang, P. C., & Chu, H. S. (2013). Using a contextual entropy model to expand emotion words and their intensity for the sentiment classification of stock market news. Knowledge-Based Systems, 41, 89–97.
5. Hagenau, M., Liebmann, M., & Neumann, D. (2013). Automated news reading: Stock price prediction based on financial news using context-capturing features. Decision Support Systems.
6. Xu, T., Peng, Q., & Cheng, Y. (2012). Identifying the semantic orientation of terms using S-HAL for sentiment analysis. Knowledge-Based Systems, 35, 279–289.
7. Maks, I., & Vossen, P. (2012). A lexicon model for deep sentiment analysis and opinion mining applications. Decision Support Systems, 53, 680–688.
8. Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2, 1–135.
9. Cambria, E., Schuller, B., Xia, Y., & Havasi, C. (2013). New avenues in opinion mining and sentiment analysis. IEEE Intelligent Systems, 28, 15–21.
10. Feldman, R. (2013). Techniques and applications for sentiment analysis. Communications of the ACM, 56, 82–89.

An Optimised Robust Model for Big Data Security on the Cloud Environment: The Numerous User-Level Data Compaction Jay Dave

1 Introduction Cloud computing is known as the platform to provide the services to its uses on the computation basis. Cloud computing provides mainly software as a service (SaaS), infrastructure as a service (IaaS), and platform as a service (PaaS). The consumer of the cloud may opt for any of the services as mentioned above for the specified duration and pay for services utilised by them in the specified time duration. To provide the space for storing the end user’s critical data on the cloud environment is the boom in the market. Various cloud service providers can provide the amount of space required by users to meet their data needs. Ample precautions need to be taken by the cloud service provider before enabling their potential customers to store the massive amount of data [1] on the cloud. Various ways of data security [2] in the cloud environment are discussed in the paper like cumulus [3] and fade version [4]. On the other end, the V-GRT model architecture [5] relies on the intervention from the trusted third party, which is also widely known as a security vendor. The security vendor is responsible for the authorisation of the cloud service consumers. Furthermore, the SecCloud model [6] introduces the auditing process, where the data tags are generated before the data transmission and after data storage on the cloud environment which helps the effective conduction of auditing process from the perspective of data integrity. There also exists the two-level data security model [7] which does not only emphasise on multiple levels of encryption mechanism but also it utilises the biometric features for the security purpose. Hence, the higher level of security is desired to assure that the appropriate user with the desired privilege is having access to cloud storage. The mentioned method will assure that the security

J. Dave (B) Department of CE, Indus University, Ahmadabad, Gujarat, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_23


to the data will be provided in an optimised way, and to such an extent that no attacker can ever predict the original data. As per the need of the user, any cloudlet generated could be hosted on a private cloud, a public cloud or a hybrid cloud. The most important thing about the private cloud is that its resources are not open for access by every user, so it is evident that its security mechanism is quite robust compared to the public cloud, whose resource pool is shared openly. The public cloud, by contrast, is less secure because it provides open access to all of its users. The middle way is the hybrid cloud, which works partially like a private cloud for specific operations and like a public cloud for certain other kinds of operations. One more type, the community cloud, is also seeing a significant amount of growth because of its multitenant nature. The security of data in the cloud [4] environment is much desired in recent times because of the type and behaviour of the infrastructure. A great deal of work has been carried out to provide data security [17] to the public cloud, since the resources provided by the cloud are virtualised.

2 Working Methodology of the Multi-level Data Security Model

The multi-level data security model starts by collecting the data that is to be secured; the collected data is often scattered and colossal in size. As soon as the data is received it therefore needs to be cleansed. One of the most crucial activities at this stage is removing redundant entries from the structured data files, a process commonly referred to as deduplication [8]. To cope with the substantial quantity of data, a lossless compression technique such as adaptive Huffman coding [9] is applied to the data set. When a segment is very large, compression is often the best way to reduce its size, so that the encryption step becomes less cumbersome and the data can pass quickly through the steganography [10] stage. The system then divides the received data into chunks, whose size varies with the input data size. The user is also given the choice of the encryption algorithm, and of how many times it is applied, to obtain highly secured data. The prime reason for segmenting the input is to keep the whole process manageable while dealing with very large structured files: encrypting chunks is far less cumbersome than encrypting a single large file. Segmentation also helps the steganography [10] process, since the size of the cover image plays a vital role.


If a file is so large that its encrypted version cannot be stored behind a single image, working on chunks makes it possible to hide the data behind several images in such a way that an attacker cannot sense any drastic change in the size of the images used. Data is also far more likely to be attacked or stolen while it is in transit than while it is stored on the cloud, so several security layers are enforced at the client side. A hybrid approach that combines two algorithms, DES and RSA, was shown in an earlier model [11]. Taking this hybrid approach further, the multi-level data security model offers multiple rounds of encryption using any of AES [12], DES [13], 3DES [12], FERNET, MULTIFERNET, Blowfish [14, 15], or ARC4. With any of these algorithms a user may secure the segments of data; users are free to choose the algorithm that best matches their own needs and concerns [9], or the system can suggest the most suitable one. A further layer of security is added with steganography [10]; specifically, LSB steganography [16, 17] is used, which modifies some of the least significant bits of an image so that the image carries the data. Ample care must be taken before this phase, because its success depends on the type of images chosen, the number of images needed, their quality, and their capacity to store the data; if any of these parameters is chosen badly, the output of the system will be poor. The final output of the system is an image: the data the user wanted to secure is turned into an image after duplicates are removed, segments are extracted, and the encryption algorithm of the user's choice is applied. The image is then sent over the network to the public cloud. Dropbox is used here to showcase the implementation, as it provides cloud storage at no significant cost. A key property of the model is that only images are stored on the public cloud, and the cloud service provider maintains their replicas. An attacker trying to steal data from an image can never know which algorithm was used to secure it, and the compression algorithm adds a further layer of security and optimisation, so there is no practical way for the attacker to steal the data (Fig. 1). The diagram illustrates the functioning of the multi-level data security model, which lets clients use a portal or application on their devices to store data on the public cloud after strong security has been enforced on it. As shown in the diagram, clients can process their data by choosing the algorithm of their choice and supplying the images used for steganography; users may also choose which images are used to prepare the final output before it is sent to the cloud. It may happen that the data never reaches the cloud because of an attack on the image, in which case the entire hidden payload is lost.


Fig. 1 Workflow of the multi-level data security model

Users must therefore wait for an acknowledgement from the cloud side before discarding the local copy of the data. After acknowledgement, the cloud creates a replica of the data, so even in the case of an attack the data can still be made available smoothly.
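The client-side pipeline described above can be sketched in a few lines of Python. The fragment below is a minimal illustration, not the author's implementation: the helper names (deduplicate, chunk, embed_lsb), the fixed chunk size, and the use of the cryptography package's Fernet as the selectable encryption layer are all assumptions made for the example.

```python
from cryptography.fernet import Fernet  # one of the selectable symmetric encryption layers

def deduplicate(records):
    """Remove duplicate entries while preserving order (the cleansing step)."""
    seen, unique = set(), []
    for r in records:
        if r not in seen:
            seen.add(r)
            unique.append(r)
    return unique

def chunk(data: bytes, size: int = 4096):
    """Split the cleansed data into fixed-size chunks (the size is illustrative)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def embed_lsb(cover: bytearray, payload: bytes) -> bytearray:
    """Hide the payload bits in the least significant bit of each cover byte."""
    bits = [(byte >> shift) & 1 for byte in payload for shift in range(7, -1, -1)]
    assert len(bits) <= len(cover), "cover image too small for this chunk"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return stego

# One possible pipeline: deduplicate -> chunk -> encrypt each chunk -> hide in image bytes.
key = Fernet.generate_key()
records = deduplicate(["row1", "row2", "row1", "row3"])
payload = "\n".join(records).encode()
for piece in chunk(payload, size=16):
    token = Fernet(key).encrypt(piece)       # user-selected encryption layer
    cover = bytearray(len(token) * 8)        # stand-in for the pixel bytes of a cover image
    stego = embed_lsb(cover, token)          # LSB steganography step
```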

3 The Implementation of the Client-Side Multi-level Data Security Model

An open-source platform such as Python is most suitable for implementing the model. Legacy cryptographic algorithms are commonly used to secure data; here an attempt is made to utilise more recent algorithms such as ARC4 and FERNET. ARC4 is a stream cipher based on sharing a key among the parties. Because the key must be shared between the participating parties, it is vulnerable to attack, so the security of the algorithm is not especially high; the key size may vary from 40 to 128 bits, and the cipher derives its keystream from successive permutations of an internal state. The FERNET algorithm, in contrast, establishes secret communication with the help of a key that must be kept secret. A variation called triple FERNET is also popular for securing data at a higher rate: it uses threading, which boosts the entire encoding process, and it has shown a significant impact on both the security and the speed of the process.


Fig. 2 Sequential implementation of the multi-level data security model

The complete work has been carried out on the Python platform in a Linux environment. The diagram below illustrates the sequence in which the processes described above occur; as indicated there, any of the encryption algorithms listed may be used to enforce security on the data (Fig. 2).
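As a rough illustration of how two of these algorithms can be layered (and of the timing experiment reported in the next section), the following sketch encrypts a buffer first with ARC4 and then with FERNET and measures the combined time. It is a simplified stand-in rather than the paper's code; the use of the PyCryptodome ARC4 module, the key values, and the 1.12 MB buffer size are assumptions.

```python
import time
from Crypto.Cipher import ARC4            # PyCryptodome stream cipher
from cryptography.fernet import Fernet

def layered_encrypt(data: bytes, arc4_key: bytes, fernet_key: bytes) -> bytes:
    """Apply ARC4 first, then FERNET on top, as one example of multi-level encryption."""
    inner = ARC4.new(arc4_key).encrypt(data)
    return Fernet(fernet_key).encrypt(inner)

data = b"x" * 1_120_000                   # roughly the 1.12 MB test file used in Sect. 4
arc4_key = b"0123456789abcdef"            # illustrative 128-bit key
fernet_key = Fernet.generate_key()

start = time.perf_counter()
token = layered_encrypt(data, arc4_key, fernet_key)
print(f"two-layer encryption took {(time.perf_counter() - start) * 1000:.1f} ms")
```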

4 Result and Discussion

Figure 3 illustrates the time required for the chosen algorithms to enforce multiple layers of data security; the time measured for each of them was lower than that of the existing system [7]. Figure 4 shows the timing of the various security algorithms in the cloud environment: AES, DES, and ARC4 take less than 100 ms, with MULTIFERNET and FERNET close behind on the slightly higher side, while the full FERNET process takes about 500 ms and 3DES is the slowest at about 3500 ms. The results reveal that the performance of the multi-level data security model is adequate to beat the existing systems; the model not only provides robust data security but does so very efficiently.



Fig. 3 Measuring the time it takes to enforce multiple layers of encryption

Fig. 4 Performance for 1.12 MB file size with 128 block size

It has been observed that the ARC4 and FERNET algorithms deliver significant throughput, so it made sense to apply them consecutively to a 1.12 MB file in the Linux environment. Applying FERNET over data already encrypted with ARC4 takes approximately 500 ms, whereas using MULTIFERNET to obtain the multi-level secured data takes only about 300 ms. It is therefore a wise option to use multiple algorithms to encrypt the data, but the algorithms must be selected carefully so that they do not degrade the overall performance of the security model. The sole reason for choosing ARC4 together with FERNET or MULTIFERNET is the time they take to encrypt; similarly, a user may select Blowfish along with AES or DES.


If, on the other hand, the 3DES algorithm is used in combination with any other algorithm, then, as indicated in Fig. 4, 3DES alone already takes approximately 4000 ms, which makes it the least suitable choice for enforcing multiple layers of security (Figs. 3 and 4).

5 Conclusion

The system uses the least significant bit (LSB) steganography technique to hide the encrypted data behind images. It has been observed that, with the exception of 3DES, all of the algorithms perform adequately over big data in the cloud environment. The general limitation of cloud technology, its dependence on robust and stable connectivity, applies to this model as well. There is also scope for extending the work to unstructured or semi-structured data, and for enforcing high-level data security on different types of clouds.

References 1. Niu, X., & Zhao, Y. (2019). Research on Big Data platform security based on Cloud Computing. In J. Li et al. (Eds.) SPNCE 2019, LNICST 284 (pp. 38–45). 2. Kumari, M., & Nath, R. (2015). Security concerns and countermeasures in cloud computing paradigm. In 2015 Fifth International Conference on Advanced Computing & Communication Technologies. 3. Gedawy, H., Tariq, S., & Mtibaa, A. (2016). Khaled Harras School of Computer Science, Carnegie Mellon University. Cumulus: Distributed and Flexible Computing Testbed for Edge Cloud Computational Offloading. published in IEEE. 4. Tang, Y., Lee, P. P., Lui, J. C., & Perlman R. (2010) The Chinese University of Hong Kong. FADE: Secure Overlay Cloud Storage with File Assured Deletion. In S. Jajodia, & J. Zhou (Eds.) SecureComm 2010, LNICST 50, pp. 380–397. 5. Thamizhselvan, M., Raghuraman, R., Gershon Manoj, S., & Victer Paul, P. Data security model for Cloud Computing using V—GRT methodology. Published in IEEE. 6. Li, J., Li, J., Xie, D., & Cai, Z. Secure auditing and deduplicating data in Cloud. Published in IEEE. 7. Malakooti, M. V., & Mansourzadeh, N. (2015). A two level-security model for cloud computing based on the biometric features and multi-level encryption. In The Proceedings of the International Conference on Digital Information Processing, Data Mining, and Wireless Communications, Dubai, UAE. 8. Aman, M. A., & Cetinkaya, E. K. Towards Cloud security improvement with encryption intensity selection. Published in IEEE. 9. Nandi, U., & Mandal, J. K. (2012). Size adaptive region based huffman compression technique. IEEE. 10. Zeeshan, M., Ullah, S., Anayat, S., Hussain, R. G., & Nasir, N. (2017). A review study on unique way of information hiding:Steganography. International Journal on Data Science and Technology, 3(5), 45–51. 11. Khan, S. S., & Tuteja, R. R. (2015). Security in Cloud Computing using Cryptographic Algorithms. International Journal of Innovative Research in Computer and Communication Engineering, 3(1). (An ISO 3297: 2007 Certified Organization).


12. Singh, G., & Kinger, S. (2013). Integrating AES, DES, and 3-DES Encryption algorithms for enhanced data security. International Journal of Scientific and Engineering Research, 4(7). 13. Jain, N. & Kaur, G. (2012) Implementing DES algorithm in cloud for data security. VSRD International Journal of CS & IT, 2(4), 316–321. 14. Devi, G., & Pramod Kumar, M. (2012). Cloud Computing: A CRM service based on a separate encryption and decryption using Blowfish algorithm. International Journal of Computer Trends and Technology, 3(4), 592–596. ISSN: 2231-2803. 15. Dave, J., & Raiyani, A. (2016). The security perusal of big data in cloud computing environment. In Proceedings of RK University’s First International Conference on Research & Entrepreneurship, January, 5 and 6 2016. 16. Pant, V. K., Prakash, J., & Asthana, A. (2015). Three Step Data Security Model for Cloud Computing based on RSA and Steganography techniques. In 2015 International Conference on Green Computing and Internet of Things (ICGCloT). 17. Pant, V. K., & Saurabh, M. A. (2015). Cloud security issues, challenges and their optimal solutions. International Journal of Engineering Research & Management Technology, 2(3). ISSN: 2348-4039.

An Adapted Ad Hoc on Demand Routing Protocol for Better Link Stability and Routing Act in MANETs Yatendra Mohan Sharma, Neelam Sharma, and Pramendra Kumar

1 Introduction

MANETs were designed to overcome the limitations of existing communication schemes by dynamically setting up a path between a source and the relevant destinations with the help of independent mobile nodes [1] (Fig. 1). Given the unrestricted mobility of the nodes, their limited power, and the way such a network is formed, implementing an efficient routing scheme remains a challenging open problem in this field. To improve routing QoS in MANETs, this paper offers an improved version of the AODV routing protocol, which has been analysed in NS2 under different parameters. Experimental results illustrate that the routing scheme offered in this paper outperforms the existing routing schemes for MANETs.

2 Literature Survey

The AODV protocol was proposed in 1999 by Perkins and Royer [2] to set up a path only on demand from the network hops, and since then various investigators have explored its characteristics and weaknesses relative to other available routing schemes [3–7]. Several studies show that AODV incurs high overheads and consumes considerable energy in large networks because of its flooding process, which creates the need for modification.

Y. M. Sharma (B) Biff and Bright College of Engineering and Technology, Dudu, Jaipur, Rajasthan, India e-mail: [email protected] N. Sharma Banasthali Vidyapith, Vanasthali, Rajasthan, India P. Kumar Rajasthan Institute of Engineering and Technology, Jaipur, Rajasthan, India © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_24


Fig. 1 Structural design of mobile Ad hoc network

Table 1 Simulation parameters of experiment 1

Parameters               Values
Simulation area          6400 m × 6400 m
Maximum packet           50
Number of nodes          10, 20, 30, 40, 50
Initial nodes power (J)  50
Simulation time          2060 s

To reduce the issues associated with the AODV routing procedure, a large number of researchers have made different attempts: some focus on reducing the number of active hops and control packets in the network [8–10], some concentrate on selecting only energy-efficient hops to form an optimum path between communicating hops [11–15], and some use the concept of multipath routing [16]. Beyond the work discussed above, many other efforts have been made to reduce the routing issues of MANETs, and it is not possible to describe every published effort in this work. Although the presented approaches achieve competent results compared with the traditional AODV routing algorithm, no single routing approach is yet capable of outperforming the others in every MANET scenario; the need therefore remains to develop new practices or to optimise the existing routing procedures.

3 Proposed Approach

To overcome the inadequacies of the existing routing algorithms and to improve routing QoS, the proposed scheme revises the route setup and link protection methods of the traditional AODV routing algorithm by controlling the number of RREQ packets that are broadcast.


It also utilises a two-hop link discovery process to find a fit path between the communicating nodes. Figure 2 shows the route setup ladder of the intended approach. At the initial level, the process examines the links of the message-originating hop and its directly associated hops to verify whether the destination hop is present.

Fig. 2 Route setup ladder of proposed routing scheme


If the process finds a link to the destination hop, it sets up the communication path; otherwise, it adds the IDs of the message originator and its directly linked hops to the packet header and broadcasts the RREQ packet to the neighbours of the associated nodes. Each hop that receives the RREQ inspects the packet header and skips the hops whose entries already exist in it; the receiving nodes verify the link information of their next hops and, if any intermediate node finds a link to the destination node, forward an RREP packet back to the message originator. Otherwise they add their own hop IDs to the packet header and broadcast the RREQ packet only to those hops whose entries do not yet exist in the header. This process efficiently shrinks the number of broadcast RREQ packets and the resulting overheads, with low consumption of node energy. Additionally, to improve the routing performance, the intended approach constantly scans the active route at each node and confirms the direct accessibility of the forward or destination hop up to two hops away. If any intermediate hop obtains a link to a hop two steps forward or to the destination, the approach quickly updates the route information and forwards an updated RREP packet to the message generator hop, as sketched below. This process significantly reduces packet loss, preserves link connectivity, and extends the network lifetime.
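The two-hop rule can be illustrated with a small, self-contained sketch. The following Python fragment is an illustrative reconstruction rather than the authors' NS2 implementation; the Node structure, the function name, and the return values are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    neighbours: set = field(default_factory=set)   # ids of directly linked hops

def two_hop_route(nodes, source_id, dest_id):
    """Sketch of the route-setup ladder: look for the destination within one or
    two hops, otherwise decide which hops the RREQ should be rebroadcast to."""
    src = nodes[source_id]
    if dest_id in src.neighbours:                    # destination is a direct neighbour
        return [source_id, dest_id], set()
    for nid in src.neighbours:                        # destination is two hops away
        if dest_id in nodes[nid].neighbours:
            return [source_id, nid, dest_id], set()
    # No route yet: record the originator and its direct neighbours in the RREQ
    # header and rebroadcast only to hops not already listed there.
    header = {source_id} | src.neighbours
    targets = set()
    for nid in src.neighbours:
        targets |= nodes[nid].neighbours - header
    return None, targets

# Tiny chain topology 0-1-2-3: node 3 is not within two hops of node 0.
nodes = {0: Node(0, {1}), 1: Node(1, {0, 2}), 2: Node(2, {1, 3}), 3: Node(3, {2})}
route, rebroadcast_to = two_hop_route(nodes, 0, 3)
print(route, rebroadcast_to)   # None, {2}
```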

4 Simulation Results and Discussion

To estimate the effectiveness of the proposed scheme, it has been simulated alongside the traditional AODV routing procedure with the same parameters in two different network scenarios. The results have been examined with the well-known performance evaluation metrics of throughput, PDR, NRL, packet drops, and energy consumption ratio (Figs. 3, 4, 5, 6 and 7) (Table 1).

Fig. 3 Average throughput


Fig. 4 Packet delivery ratio

Fig. 5 Routing overheads

The experimental results demonstrate the effectiveness of the proposed approach over the traditional AODV routing scheme; to further prove its suitability, it has also been evaluated with the different network parameters used in [15] (Figs. 8, 9, and 10; Table 2). All of the comparative results in the figures confirm that the proposed approach considerably improves the routing QoS, attaining better throughput and PDR with lower overheads, packet drops, and energy consumption at the network nodes.


Fig. 6 Packet drop ratio

Fig. 7 Energy consumption (Joules)

5 Conclusion

Based on amendments to the traditional AODV routing algorithm, this paper has proposed a new routing scheme for MANETs. To maximise routing QoS, the intended method uses a two-hop link exploration of the destination hop, which significantly reduces the number of forwarded RREQ packets and prevents frequent packet and link losses.


Fig. 8 Average throughput

Fig. 9 Packet delivery ratio

Experimental results show that, in comparison with the classical AODV routing process, the presented approach clearly outperforms it on QoS factors such as throughput, PDR, routing overheads, and hop energy consumption.


Fig. 10 Packet drop ratio

Table 2 Simulation parameters of experiment 2

Parameters               Values
Simulation area          2000 × 2000 m, 3000 × 3000 m, 4000 × 4000 m
Number of nodes          100, 225, 400
Node speed               10 m/s
Pause time               0 s
Packet size              64
Initial nodes power (J)  100
Simulation time          200 s

References 1. Kumaravel, A., Chandrasekaran, M. (2014). A Complete Study On Power Aware Routing protocol for mobile Adhoc Network. IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE), 71–75. 2. Perkins, C., & Royer, E. (1999). Ad hoc on-demand distance vector routing. In Proceedings on IEEE WMCSA, pp. 90–100. 3. Al-Maashri, A., & Ould-Khaoua, M. (2006). Performance analysis of MANET routing protocols in presence of self-similar traffic. Department of Electrical and Computer Engineering Sultan Qaboos Universit, Sultanate of Oman, pp. 801–807, IEEE, Oman. 4. Royer, E.M., & Perkins, C.E. (2000). An implementation study of the AODV routing protocol. In Wireless Communications and Networking Conference, pp. 1003–1008, IEEE. 5. Aggarwal, R (2018). QoS based simulation analysis of EAODV routing protocol for improving energy consumption in Manet. In International Conference on Intelligent Circuits and Systems, pp. 246–250, IEEE. 6. Shaf, A., Ali, T, Draz, U., Yasin, S. (2018) Energy based performance analysis of AODV routing protocol under TCP and UDP environments. EAI Endorsed Transactions on Energy


Web and Information Technologies, 5(17), 1–6. 7. Mousami Vanjale, M. S., Chitode, J. S., Gaikwad, S. (2018). Residual battery capacity based routing protocol for extending lifetime of mobile Ad Hoc network. In International Conference On Advances in Communication and Computing Technology (ICACCT), pp. 445–450, IEEE. 8. Jambli, M. N., Wan Mohd Shuhaimi, W. B., Lenando, H., Abdullah, J., Mohamad Suhaili, S. (2015). Enhancement of Aodv routing protocol in MASNETs. In 9th International Conference on IT in Asia (CITA), pp. 1–6, IEEE. 9. Bhagyalakshmi, Dogra, A. K. (2018). Q-AODV: A flood control Ad-Hoc on demand distance vector routing protocol. In First International Conference on Secure Cyber Computing and Communication (ICSCCC), pp. 294–299, IEEE. 10. Ashwini H. K., Vyshali Rao, K.P., Vyshali Rao, K.P. (2018). CM-AODV: An efficient usage of network bandwidth in AODV protocol. In International Conference on Design Innovations for 3Cs Compute Communicate Control, pp. 111–114, IEEE. 11. Malek, A. G., Chunlin, L. I., Yang, Z., Naji Hasan, A. H., Zhang, X. (2012). Improved the energy of Ad Hoc on-demand distance vector routing protocol. In International Conference on Future Computer Supported Education, ERI Procedia, vol. 2, pp. 355–361, Elsevier. 12. Sridhar, S., Baskaran, R., & Chandrasekar, P. (2013). Energy supported AODV (EN-AODV) for QoS routing in MANET. In 2nd International Conference on Integrated Information, Procedia—Social and Behavioral Sciences, vol. 73, pp. 294–301. Elsevier. 13. Riaz, M. K., Yangyu, F., & Akhtar, I. (2019). Energy aware path selection based efficient AODV for MANETs. In 16th International Bhurban Conference on Applied Sciences & Technology (IBCAST), Islamabad, Pakistan, pp. 1040–1045, IEEE (2019). 14. Riaz, M. K., Yangyu, F., & Akhtar, I. (2019). Energy aware path selection based efficient AODV for MANETs. In 16th International Bhurban Conference on Applied Sciences & Technology (IBCAST), pp. 1040–1045, IEEE. 15. Yoshihiro, T., Kitamura, Y., Pauly, A. K., Tachibana, A., & Hasegawa, T. (2018). A new hybrid approach for scalable table-driven routing in MANETs. Wireless Communications and Networking Conference (WCNC), 1–6, IEEE. 16. Jhajj, H., Datla, R., Wang, N. (2019). Design and implementation of an efficient multipath AODV routing algorithm for MANETs. In 9th Annual Computing and Communication Workshop and Conference (CCWC), pp. 527–531, IEEE.

An Efficient Anonymous Authentication with Privacy and Enhanced Access Control for Medical Data in WBAN K. Mohana Bhindu, R. Aarthi, and P. Yogesh

1 Introduction

Wireless Body Area Networks (WBAN) play a vital role in modern medical systems due to their ability to gather real-time biomedical data through smart medical sensors in or around the patient's body [7, 8]. WBANs and intelligent healthcare systems improve the patient's quality of care without disturbing their comfort, but they are easily prone to modification, data breaches, eavesdropping, and other attacks by hackers and intruders. The data collected or communicated in WBANs is highly sensitive and significant, as it forms the foundation of medical diagnostics. To enhance security, an Anonymous Authentication (AA) scheme for WBANs is used in this paper to provide not only verification, validation, and privacy preservation but also to guarantee secrecy, integrity, and non-repudiation based on a shared key.

2 Related Works

Jinyuan Sun et al. [2] have proposed a scheme built on bilinear pairing operations over elliptic curves, based on anonymous credentials and a pseudorandom number generator.

K. Mohana Bhindu (B) · R. Aarthi · P. Yogesh Department of Information Science and Technology, College of Engineering, Anna University, Chennai, India e-mail: [email protected] R. Aarthi e-mail: [email protected] P. Yogesh e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_25


B. Tiwari et al. [5] have proposed a scheme which uses a physiological value, generated at the patient's body sensor and collected and transmitted by a PDA on the patient's body, as the input for key generation. Oscar Garcia et al. [1] have developed a novel deployment model for wireless sensor networks for ubiquitous healthcare based on the ideas of patient area networks and medical sensor networks, and suggested a complete and effective security framework. Rihab Boussada et al. [6] have surveyed several privacy-preserving approaches, most of which focus only on content-oriented privacy and omit the contextual privacy aspects. Wassim Drira et al. [3] have presented a hybrid authentication and key establishment scheme which combines symmetric cryptography and identity-based cryptography; however, the system is not efficient at hashing because of the 8-bit microcontroller used, and it is time consuming and complex in nature. Lin Yao et al. [4] have suggested a novel data aggregation scheme for Body Sensor Networks (BSNs) based on information hiding, termed the Sensitive Data Aggregation Scheme (SDAS). Based on this survey, we apply Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm to provide effective security for WBANs.

3 System Architecture

The proposed system has five major modules: system initialization, data generation and encryption, bilinear pairing-based aggregation, application provider, and access control. These five modules and the interactions among them are shown in Fig. 1. In this work, the medical data are generated with the help of mobile phones incorporating the required sensors. In the key generation phase, the public and private key pair is generated. The generated medical data is encrypted using the public key and a signature is attached to it. Since a brute-force attack might reveal the encrypted data, bilinear pairing-based homomorphic encryption is applied to prevent this, and to enhance security further a digital signature is also added.

4 Methodology and Analysis

4.1 Medical Data and Secret Identity Generation

Medical data such as ECG, heartbeat, blood pH, and temperature are generated in real time with a healthcare application running on a mobile phone. These medical data are easily prone to modification, data breaches, eavesdropping, and other security attacks, so all sensitive medical data relating to the patient's health must be shielded. Once the medical data are generated, a secret identity is anonymously generated and stored in the network manager, as depicted in Fig. 2.


Fig. 1 Security and privacy using anonymous authentication Scheme in WBAN system

Fig. 2 Medical data generated with a healthcare application on a mobile device

4.2 Key Generation

Using the key generation function, 256-bit public and private key pairs are generated with the elliptic curve algorithm at the network manager in order to achieve better security. ECC is a popular standard for public key cryptography; a properly generated 256-bit key is strong enough to resist attacks, and processing is fast because the key length is small compared with other public key schemes.


Fig. 3 Generation of public and private keys of 256 bits

Encryption of the data and verification of the signature are done using the public key, while decryption and signature generation are done using the private key. Figure 3 depicts the generation of the public and private keys using the elliptic curve algorithm as described in Algorithm 1; the output of Algorithm 1 is given in Fig. 3.

Algorithm 1 Key Generation
Input: the private keys a and b (random numbers) and a generator point B on the elliptic curve
Output: shared secret key
1: User A's private key = a
2: User A's public key PA = a·B
3: User B's private key = b
4: User B's public key PB = b·B
5: The two users send each other their public keys.
6: Each takes the product of their own private key and the other user's public key: KAB = a(bB) = b(aB) = abB, the shared secret key.
7: Encrypt the data using the shared key.
8: Attach a signature to the cipher text.
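For illustration, the key agreement of Algorithm 1 can be reproduced with the Python cryptography package as follows. This is a minimal sketch rather than the paper's implementation (the paper does not name its software stack); the choice of the SECP256R1 curve and the use of HKDF to derive a symmetric key from the shared secret are assumptions.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a 256-bit private/public key pair (steps 1-4).
priv_a = ec.generate_private_key(ec.SECP256R1())
priv_b = ec.generate_private_key(ec.SECP256R1())

# After exchanging public keys, both sides compute the same shared secret abB (steps 5-6).
shared_a = priv_a.exchange(ec.ECDH(), priv_b.public_key())
shared_b = priv_b.exchange(ec.ECDH(), priv_a.public_key())
assert shared_a == shared_b

# A symmetric key for encrypting the medical data can then be derived from the secret (step 7).
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"wban-demo").derive(shared_a)
```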

4.3 Encryption and Signature Generation

Algorithm 2 depicts the encryption and signature generation procedure, in which the encrypted text is computed using an instance of the Elliptic Curve Diffie–Hellman (ECDH) algorithm. The input parameters for encryption are the plain text and the public key; these are initialised in encrypt mode and the result is encoded with a Base64 encoder.


The output is then signed using the Elliptic Curve Digital Signature Algorithm (SHA1 with ECDSA), which takes the private key and the plain text as input. It also generates an attribute along with the initialisation of the signature, and these are reported to the LPU (local processing unit) in the aggregation phase. The SHA1-with-ECDSA signature follows an efficient asymmetric approach: the algorithm first calculates a unique hash of the input data, and the hash is then encrypted with the private key using the elliptic curve algorithm.

Algorithm 2 Encryption and signature generation
Input: Medical data (plain text)
Output: Signed encrypted data
1: Get the public key (n, e)
2: Convert the plain text to positive integers m, 1 < m < n
3: Compute the cipher text c = m^e mod n
4: Create a message digest of the information to be sent
5: Represent this digest as an integer m between 1 and n − 1
6: Generate a signature using the private key (n, d): s = m^d mod n
7: Send the cipher text c and the signature s to the data aggregation phase
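For illustration, the hash-then-sign step described above might look as follows with the Python cryptography package. This is a sketch only: SHA-256 is substituted for the SHA1 digest named in the text, and the sample record and key are invented for the example.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

signing_key = ec.generate_private_key(ec.SECP256R1())   # sender's private key
plain_text = b"ecg=72bpm;temp=36.9C"                    # illustrative medical record

# Hash-then-sign: the library hashes the message and signs the digest with the
# private key, mirroring the SHA1-with-ECDSA step described above.
signature = signing_key.sign(plain_text, ec.ECDSA(hashes.SHA256()))
```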

4.4 Bilinear Pairing-Based Aggregation at LPU

In order to verify the encrypted data and the digital signature, public key verification has to take place. If the verification succeeds, the data moves to the bilinear pairing phase, where security is strengthened. The steps involved in signature verification and data aggregation are explained in Algorithm 3.

Algorithm 3 Signature verification and data aggregation
Input: Signed cipher text
Output: Aggregated signed cipher text


1: Receive a signed cipher text from the encryption and signature generation phase
2: Use the sender's public key (n, e) to compute the integer v = s^e mod n
3: Extract the message digest from this integer
4: Independently compute the message digest of the information that has been signed
5: if both message digests are identical then
6:     the signature is valid
7: else
8:     the signature is invalid
9: end if
10: Aggregate the cipher text
11: Aggregate the plain text
12: Send the aggregated cipher text and signature to the data store phase

For the bilinear pairing, an efficient algorithm called the Weil pairing is used: it divides the total number of bytes into batch 1 and batch 2, as described in Fig. 4, and compares the resulting ratio with the Weil pairing ratio. If the ratio is higher, the data can be transmitted as it is; otherwise homomorphic encryption is applied. Finally, cipher text and signature aggregation take place in order to avoid redundancy and communication overhead.
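The signature check performed at the LPU (steps 2–9 of Algorithm 3) can be illustrated as follows. The sketch assumes the Python cryptography package and an ECDSA signature as in the earlier examples; it is not the system's actual verification code, and the sample message is invented.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

signer = ec.generate_private_key(ec.SECP256R1())
message = b"aggregated cipher text"          # stands in for a received batch
signature = signer.sign(message, ec.ECDSA(hashes.SHA256()))

# The LPU recomputes the digest of the received data and compares it with the
# digest recovered from the signature; verify() raises if they differ (steps 2-9).
try:
    signer.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
    valid = True       # aggregate the cipher text and forward it (steps 10-12)
except InvalidSignature:
    valid = False      # reject the batch
```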

4.5 Decryption and Access Control

In the decryption phase, the generated signature is verified using the public key and the attribute generated during signature initialisation. When the validity check succeeds, the total number of bytes is verified in order to detect and identify attacks: if the number of bytes generated initially equals the number of bytes received, the encrypted data are loaded into the MySQL database; otherwise a missing-data alert is raised and the data is dropped. The decryption of the medical data is explained in Algorithm 4.


Algorithm 4 Decryption and access control
Input: Signed aggregated cipher text
Output: Medical data (plain text)
1: Receive the aggregated signed encrypted data from the data aggregation phase
2: Verify the signature using the public key (n, e)
3: if the signature is valid then
4:     Decrypt the aggregated cipher text
5:     using the private key (n, d): m = c^d mod n
6:     Extract the plain text from the message representation m
7:     Store the plain text in the database
8: else
9:     Raise a data alert and drop the data
10: end if

Figure 5 shows the validation status on the basis of which the encrypted data are loaded into the database.

Fig. 4 Bilinear pairing-based secure data aggregation


Fig. 5 Weil pairing—signature verification status

5 Conclusion

In this paper, the efficiency of the proposed framework is improved through the use of the highly flexible and secure elliptic curve cryptographic encryption algorithm together with a digital signature approach. The transferred data is authenticated with a suitable access control technique and validated against attacks to show that the proposed scheme can thwart most of them. The scheme also meets seven security requirements, including authentication, anonymity, attack resistance, and non-traceability, making it more secure for practical WBAN-based applications, and the proposed framework ensures that the storage requirement is optimised. Its efficiency can be further improved with the help of highly flexible and secure cryptographic encryption algorithms combined with authentication schemes.

References 1. Garcia-Morchon, O., Falck, T., Heer, T., & Wehrle, K. (2009). Security for pervasive medical sensor networks. Mobile and Ubiquitous Systems, Networking Services, 09, 1–10. 2. Sun, J., Zhu, X., & Fang, Y. (2010). Preserving privacy in emergency response based on wireless body sensor networks. IEEE Globe communications, 12(1), 1–6. 3. Drira, W., Renault, E., & Zeghlache, D. (2012). A hybrid authentication and key establishment scheme for WBAN. In Proceedings IEEE 11th International Conference Trust, Security Privacy Computing Communication (TrustCom), pp. 78–83. 4. Ren, J., Guowei, W., & Yao, L. (2013). A sensitive data aggregation scheme for body sensor networks based on data hiding. Journal of Communications and Networks, 17(7), 1317–1329. 5. Tiwari, B., & Kumar, A. (2013). Physiological value based privacy preservation of patients data using elliptic curve cryptography. Health Informatics an International Journal, 2(1), 1–14. 6. Ren, J., Wu, G., & Yao, L. (2013). A sensitive data aggregation scheme for body sensor networks based on data hiding. Personal and Ubiquitous Computing, 17(7), 1317–1329.


7. He, D., Chan, S., & Tang, S. (2014). A novel and lightweight system to secure wireless medical sensor networks. IEEE Journal of Biomedical and Health Informatics, 18(1), 316–326. 8. Zhou, J., Cao, Z., Dong, X., Xiong, N., & Vasilakos, A. V. (2015). 4S: A secure and privacy-preserving key management scheme for cloud-assisted wireless body area network in m-healthcare social networks. Information Sciences, 314, 255–276.

A Study on Big Data Analytics and Its Challenges and Tool K. Kalaiselvi

1 Introduction

In today's digital world, data is generated from many sources such as social media, photographic devices, cameras, and smart phones, and these sources produce a huge amount of data every day. The main aim of this paper is to study and analyse the importance of big data, particularly in healthcare, in order to construct an effective literature review. This collection of data, stored as it is retrieved, benefits many other research areas, but it requires more than the traditional database management tools and data manipulation applications. The data is generally in structured, semi-structured, or unstructured form and of various sizes, and because it is stored as soon as it is received it can also contain repeated entries [1]. When organising such data it is categorised according to volume, velocity, and variety: volume refers to the tremendous quantity of data produced day by day, velocity refers to the rapid growth of the data from the day it is collected, and variety describes the types of data, whether by size, format, or degree of structure (structured, semi-structured, or unstructured). The various methods for extracting such data have been studied by Gandomi and Haider [2]. Figure 1 summarises the main characteristics of big data; understanding them helps in obtaining data and taking better decisions, and it also makes the data extraction process cost efficient and innovative. Data is stored in warehouses and later manipulated according to various needs, but the existing data mining methods are not capable of handling these huge datasets.

K. Kalaiselvi (B) VELS Institute of science, Technology and Advanced studies, Chennai, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_26


Fig. 1 Various characteristics of big data. The three types of v, namely volume, velocity, and variety

The major issue is the coordination between the datasets and the analysis tools, and these issues only appear during practical application. Research on big data helps to understand its important uses and the patterns by which the data can be categorised and organised; not all of the available data can be processed or used for further development. The main aim of this work is therefore to understand the challenges in big data and the techniques that are available: it focuses on the challenges encountered in understanding the concept of big data, the various processes involved, the extraction of useful knowledge from it, and the associated tools and techniques.

2 Challenges in Big Data

In the past few years the amount of data collected from domains such as hospitality, government administration, and many other sectors has increased tremendously. Web-based applications, including social media, Internet text, and documents, produce massive amounts of data every day. This can be considered an advantage of big data, since it gives researchers a better understanding and can be used according to their various needs, and many organisations use these data for their day-to-day business activities. However, every advantage also brings negative impacts and challenges. Handling them requires an understanding of various computational techniques, of methods to keep information and related data secure, and of methods to make sense of big data; for example, methods or applications that help organise small datasets cannot be used to analyse and manipulate big data. The healthcare sector faces the largest number of challenges, as can be seen from the number of researchers working in this field [2]. Figure 2 shows the stages through which data passes once it is stored on the server: first it is checked for information sensitivity, then user authentication verifies whether the particular user may access the database and make changes, in the third stage the data is protected from unauthorised access, and in the final stage it is allocated and moved to the warehouse. The challenges faced in the healthcare sector when dealing with big data analytics can be broadly classified into four sections.


Fig. 2 Workflow of Big data, representing the various stages through which data are processed after it is stored in the server

They are database and manipulation techniques; collection of data and computational complexity; scalability and understanding of the data; and methods to secure the data.

2.1 Database and Manipulation Techniques

Over the past few years the size of data has grown tremendously, and the main sources are devices such as mobile phones, sensor technologies, and radio frequency readers. More cost is spent on storing these data than on organising them, and unwanted data is often simply deleted later. The main challenge in organising and categorising big data is therefore storing it in a way that can be accessed easily and accurately; the next aims are accessibility and representation. Solid state drives (SSDs) were introduced to overcome storage issues, but such media alone are not enough for processing big data.

3 Open Research Issues in Big Data Analytics

Big data analytics is becoming one of the most interesting topics for researchers. It mainly concerns the study of big data and the techniques for extracting information from it. Big data and data science have many applications, chiefly the study of patterns, the processing of signals, and the problems of storing data; used appropriately, these data can help create a better future. The Internet provides globally interconnected communication and is often called the heart of business because of its characteristics and the advantages it offers for improving a business. The number of Internet-connected devices keeps increasing, and all of these devices generate data.


More than the user, the device itself is becoming the user; in short, these devices behave much like humans, and the Internet of Things is gaining popularity because of the growing number of opportunities and challenges it brings. Cloud computing provides a very good way of storing big data: it reduces both the time consumed and the cost incurred, and together these developments will take big data and cloud computing to a higher level. Bio-inspired computing is a method developed from nature's way of solving the complex difficulties it faces; such systems organise themselves automatically, have no central control unit, and are built on biological factors such as DNA. These computations include the storing, retrieving, and processing of data [3], and one of their main features is that they integrate biologically derived factors to perform computational functions, which makes them mainly suitable for big data applications. Quantum computing involves a quantum computer with a memory far larger than that of other systems, able to manipulate very large sets of inputs simultaneously [4]; such systems can solve problems that are very difficult on current computers. The main challenge lies in actually building a quantum computer, given its specifications and the time required to develop such a system. Many of the challenges faced could be solved faster by large-scale quantum computers than by classical ones, so building a system that lets quantum computing address big data problems is a challenge for the coming generation.

4 Tools for Big Data Processing

A variety of tools are available for processing big data; among the most widely used are MapReduce, Apache Spark, and Storm. These tools focus on bulk processing and on analysing the links and correlations between data items. Tools are available for batch processing on the Apache Hadoop infrastructure [5], while data produced as correlated streams is used for real-time analytics. MapReduce is used for processing large datasets and is based on a divide-and-conquer method that executes in two phases, a Map step and a Reduce step. The model, as implemented in frameworks such as Hadoop, uses two kinds of nodes, a master node and worker nodes: the master node splits the input into sub-problems and distributes them to the worker nodes, and in the Reduce step it combines the outputs of all the sub-problems. It is mainly chosen because it supports fault-tolerant, high-throughput data processing, as sketched below.
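The Map and Reduce steps can be illustrated with a small, single-process word-count example. The sketch below only simulates the divide-and-conquer idea in plain Python; it is not Hadoop or MapReduce framework code, and the sample input splits are invented.

```python
from collections import defaultdict
from itertools import chain

def map_step(document):
    """Map: emit (key, 1) pairs for every word in one split of the input."""
    return [(word, 1) for word in document.split()]

def reduce_step(pairs):
    """Reduce: combine the values emitted for each key into one total."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

# The "master node" splits the input among workers, each worker runs map_step,
# and the reduce step merges the partial outputs into a single result.
splits = ["big data needs big tools", "map reduce splits big problems"]
mapped = chain.from_iterable(map_step(s) for s in splits)
print(reduce_step(mapped))   # e.g. {'big': 3, 'data': 1, ...}
```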


Apache Spark is an open-source data processing tool that supports much faster processing and more complicated analysis techniques.

5 Application of Big Data

Big data is now widely used, above all in the healthcare industry. One application is medical image processing: medical imaging is important for assessing organ function, for detecting abnormalities caused by disease, and for identifying tumours in various parts of the body. As the size of the data grows, machines become better able to understand the dependencies between data items [6], and the ever-increasing amount of data has also helped create very accurate technologies that improve treatment. Big data can also be used by drug manufacturers to understand which drugs are being prescribed by doctors, and by insurance providers to analyse datasets for various healthcare benefits. Governments likewise use big data analytics to improve their internal efficiency and to research and understand cyber security and the threats the nation is facing, drawing data from sources such as satellites and social media and classifying it according to need. Big data is also used in crime prediction and prevention, helping to understand people's behaviour in real time and to identify threats at a particular location. In the manufacturing industry it helps to track product quality and defects, understand supply needs, forecast consumer demand, and improve quality while simultaneously reducing cost.

6 Conclusion

In the past few years the quantity of available data has grown very rapidly. Arranging these data and analysing them according to the needs of the user is challenging, since it requires automation alongside human assistance, and it can be concluded that further development is needed in the tools used to analyse big data. This survey shows that most of the widely used big data platforms have their own advantages and disadvantages that need to be addressed. Some of these are design issues, in which data must be analysed in bulk before processing; such issues can be eased by applying MapReduce techniques, because dividing problems into sections makes it easier to identify solutions and saves time. The tools used for big data analytics still have limited functionality; the methods in use include statistical analysis, machine-based learning, methods for extracting data, and techniques for improving search efficiency.


In the coming years, development needs to focus mainly on enhancing these techniques so that big data problems can be solved effectively and efficiently. Since healthcare is one of the most important applications of big data, the existing methods must be able to analyse such data in a clinical setting, addressing concerns, opportunities, and challenges such as medical image enhancement, registration, and segmentation in order to deliver better recommendations at the clinical level.

7 Future Work

Huge amounts of data are continuously collected from sources such as digital devices, the Internet, and applications all over the world, regardless of the field in which they are produced, and this volume is expected to keep increasing day by day. Such data is useless if it is not organised according to the needs of the user and analysed to extract meaningful information [7]; data that is merely collected and kept for a long period eventually becomes useless and is deleted. Developing techniques and tools that can analyse big data has therefore become one of the main challenges. Powerful computing systems are required for this task, and developing them demands a great deal of time and effort. Transforming data into meaningful information through analysis is not easy, because the data may arrive in different formats or models. Algorithm implementation and optimisation are also important and need further development to support big data analytics, and the existing tools must be extended so that they can handle unclear and missing values.

References 1. Kakhani, M. K., & Biradar, S. R. (2015). Research issues in Big Data analytics 2(8), 228–232. 2. Gandomi, A., Haider, M. (1982). International Journal of Computer Information Science, 341–356. 3. Das, T. K., & Kumar, P. M. (2013). Big data analytics: A framework for unstructured data analysis. International Journal of Engineering and Technology, 5(1), 153–156. 4. Characteristics of Big Data. http://www.edureka.com/blog/big-data-ap. Last accessed August 26, 2019. 5. Panagiota, G., Korina, K., & Sameer, K. (2019). Big data analytics in health sector: Theoretical framework, techniques and prospects. International Journal of Information Management, 50, 206–216. 6. Oussous, A., Benjelloun, F. Z., & Lahcen, A. A. (2018). Big data technologies: A survey. Journal of King Saud University—Computer and Information Sciences, 30, 431–448. 7. Huang, T., Lan, L., Fang, X., An, P., Min, J., & Wang, F. (2015). Promises and challenges of big data computing in health sciences, 2(1), 2–11. 8. Das, T. K., Acharjya, D. P., & Patra, M. R. (2014) Opinion mining about a product by analyzing public tweets in twitter. In International Conference on Computer Communication and Informatics.


9. Chen, X. Y., & Jin, Z. G. (2012). Research on key technology and applications for internet of things. Physics Procedia, 561–566. 10. Herland, M., Khoshgoftaar, T. M., & Wald, R. (2014). A review of data mining using big data in health informatics. Journal of Big Data, 1(2), 1–35.

“Real-Time Monitoring with Data Acquisition of Energy Meter Using G3-PLC Technology” Deepak Sharma and Megha Sharma

1 Introduction

In the existing billing process, a person visits the home, notes down the meter reading, and passes the information to the utility so that the bill can be generated. This leaves room for theft or data loss, the data is collected only at discrete intervals, and the whole process takes considerable time and cost, so it is not reliable. An automatic system is therefore needed to read the data via smart meters using technologies such as RF communication, Xbee, GPRS, and PLC, so that manual cost and time can be reduced. The smart meter is one solution for transferring data from the consumer to the utility. The main challenge of the existing work is its limitations for long-distance communication when transferring data from the consumer to the utility server. The domestic and export meter markets serve a very large number of users, and the data of all those meters should be delivered to the utility through a secure mechanism; existing work is not reliable for transferring the data of such large numbers of users (meters) in a secure and cost-effective way. In the present scenario, information technology is in a golden era that focuses on both the creation and the dissemination of information. The advantage of using electric power lines as the data transmission medium is that they are available everywhere, in homes, companies, buildings, and farmers' fields, and are directly connected to the power grid. Power line carrier (PLC) communication systems use the existing AC electrical wiring as the network medium, which makes it possible to provide high-speed network access points wherever an AC outlet is available.

D. Sharma (B) Department of Electronics & Communication Engineering, Poornima College of Engineering, Jaipur, India e-mail: [email protected] M. Sharma Department of Computer Science Engineering, MNIT, Jaipur, India © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_27


Fig. 1 Basic principle of PLC communication [1]

The proposed work focuses on the use of G3-PLC to exchange information between the head-end system (HES) and the electrical meter.

1.1 PLC Technology

The basic principle of PLC communication is to modulate data onto the AC signal at the transmitting end and to demodulate the signal at the receiving end to recover the actual data. A modulated high-frequency signal is injected into the power grid and spreads through the network to the other participants; it is superimposed on the power voltage, and the receiving device separates out the signals in the communication band and demodulates them, which reproduces the original data [1] (Fig. 1). A simplified illustration of this superposition is sketched below.
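The fragment below is only a numerical illustration of injecting a modulated carrier onto the mains waveform, not a model of any particular PLC chipset: the sample rate, the 75 kHz carrier, and the simple BPSK bit mapping are assumptions, and real G3-PLC uses OFDM as discussed in the next subsection.

```python
import numpy as np

fs = 1_000_000                       # sample rate in Hz, assumed for illustration
t = np.arange(0, 0.02, 1 / fs)       # one 50 Hz mains cycle
mains = 230 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)

bits = np.array([1, 0, 1, 1, 0])     # data to transmit
carrier_f = 75_000                   # carrier inside the 10-200 kHz low-frequency band (assumed)
samples_per_bit = len(t) // len(bits)
symbols = np.repeat(bits * 2 - 1, samples_per_bit)        # simple BPSK mapping
carrier = np.sin(2 * np.pi * carrier_f * t[: len(symbols)])
injected = symbols * carrier                               # modulated high-frequency signal

line_voltage = mains.copy()
line_voltage[: len(injected)] += injected                  # superimposed on the power voltage

# A receiver would band-pass filter around carrier_f and demodulate to recover the bits.
```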

1.2 G3-PLC

Many communication technologies (GSM/GPRS, radio frequency, power line communication) have been developed for transferring data from the meter to the head-end system (HES) and vice versa. G3-PLC is one of the most important PLC protocols used for the smart grid: it supports controlling and managing meter data at the utility server as well as monitoring electricity consumption at the consumer end. Automation systems for smart grid applications, such as energy management for power supply control and electric vehicle charging, also use G3-PLC technology.


To make the power grid smart, two-way communication must be implemented over it; with one-way electricity transfer only, the grid works merely as a power distribution system. In line with industrial requirements, G3-PLC is implemented for the smart grid so that data can be carried at high speed over long distances, reliably, on the existing power line channel, which reduces overall maintenance and infrastructure cost. AES-128 encryption at the MAC level provides more secure data during transmission, and IPv6 is used in place of IPv4 for more efficient routing and packet processing, so that information flows quickly to the destination address [2].

G3-PLC Module Technique for Data Exchange

The power line communication channel provides a robust connection. The channel parameters depend on the available frequency, the time, the place, and the equipment connected to the PLC network. Interference is possible in the low-frequency band of 10–200 kHz, and the different types of noise can increase the delay during data exchange by up to a hundred microseconds; delays can be caused by narrowband interference, background noise, and impulsive noise. Orthogonal frequency division multiplexing (OFDM) is used as the modulation technique to utilise the assigned bandwidth efficiently together with channel coding. It provides robust communication in the presence of impulsive noise and narrowband interference, and other impairments such as frequency-selective attenuation are also accounted for.

1.3 Router

The LoWPAN bootstrapping agent (LBA), referred to here as the router, is the Webdyn Ethernet/PLC-G3 bridge. In the smart grid, the Webdyn router transfers packets from Ethernet to the G3-PLC network and vice versa. It is mainly used for power line carrier networks.

2 Literature Survey

Pinomaa et al. [3] describe the smart grid functionality required for low-voltage direct current (LVDC) distribution, where monitoring and control data are transmitted over a power line communication system. The work analyses the channel theoretically and verifies the analysis with measurements taken over a power line connection between two modems. It shows that as the AXMK cable length increases, the channel capacity and the number of usable subcarriers decrease [3]. Patil et al. [4] explain a PLC system for automatic meter reading. The proposed system is divided into three parts.


Load Management System: When the consumer's consumption exceeds a predefined overload limit, an alarm/warning is shown on the LCD. The consumer is then required to reduce the load; otherwise, the microcontroller cuts off the consumer's supply by switching off the relay.

Energy Meter: The billing calculation is done at the server side, and after computation the billing information is sent to the meter by the utility server via PLC communication. This billing information is displayed on the meter LCD for the consumer.

PLC Modem: All meter and server communication is done through the PLC modem. At the transmitter end, the information signal is modulated onto a carrier signal and transmitted; at the receiver end, the information signal is recovered from the carrier by frequency-shift keying (FSK) demodulation [4].

Nguyen et al. [5] explore DC-DC converters that exchange the necessary information over power line communication on a DC bus using the Modbus protocol. The requirements are low cost, long-distance communication, avoidance of propagation clashes, and operation at a low-frequency carrier. The system includes a renewable energy generator and works on a low-frequency carrier using PLC technology [5]. Baskaran and Tuithung [6] focus on downloading information from the smart meter and segregating it to incorporate customer preferences: monitoring and controlling the meter information and converting it into a user-friendly format on the meter LCD display. At the customer side, if a fault condition occurs in the meter, the meter relay can be disconnected. The work uses three smart meters connected via Xbee communication to a gateway of a neighborhood area network (NAN); the NAN (data concentrator) is connected to the power operator over a wired connection [6]. Diao et al. [7] monitor the operating status of loudspeakers through PLC communication. After analysing different communication technologies, they conclude that power line communication technology (G3-PLC) is most suitable for monitoring and controlling the loudspeakers, and their results demonstrate the reliability of the system [7]. Chauvenet et al. [8] focus on a practical and standard approach for using G3-PLC technology in the field of the Internet of Things (IoT). The constrained application protocol (CoAP) is used for smart grid communication to control and monitor data over the G3-PLC network. The system is actively used and provides authentic information on a regular basis [8]. These technologies have their benefits and limitations: some can transmit data quickly using wireless technology but are very costly; others use the power line cable for PLC communication but cover only a small area with a limited number of consumers; and some suffer from data loss during transmission. Technologies that use radio frequency to transmit the data also raise concerns for human health.


In the future, RF pollution may become a major issue due to large-scale use in mobile, smart grid, TV, and GSM/GPRS equipment. To overcome these issues, power line communication technology has become a main focus of researchers. To improve PLC communication, G3-PLC has been developed with OFDM modulation, the IPv6 protocol, AES-128 encryption, 6LoWPAN, and other features.

3 Proposed Work

PLC is one of the best communication technologies for transmitting information over the pre-existing power supply lines at minimum cost. In this proposed approach, a cost-effective method is used to implement the G3-PLC protocol in the microcontroller unit for sending information to the utility server over the power line. A router receives the data from the power line and transfers it to the utility server through Ethernet. AES-128 encryption at the MAC layer provides protection against theft and protects grid assets, and IPv6 is used for effective routing, packet processing, and direct data flow.

3.1 Working Process of G3-PLC

Step 1: Initialization at the Physical Layer. In the initialization phase, the baud rate (115,200, 57,600, or 38,400) is selected according to the application. The proposed work needs only a modest data rate, so a baud rate of 38,400 is selected for transmitting the data; no errors occur at this rate and it keeps the power dissipated by the system low. A 5 s delay is provided so that the system becomes stable. The mode is then checked through a get-mode command to determine whether the device is in ADP mode; if it is not, it is in boot mode, from which ADP mode can be set. Next, the role is configured, i.e., whether the meter module acts as a device or as the coordinator. Since a Webdyn router is used as coordinator here, only the device role is configured on the meter module. The FCC frequency band (10–490 kHz) is used.

Step 2: Set Key for Connection. The ADP pre-shared key (PSK) of the device is set for establishing the connection; the device PSK should match the key stored in the router for the device to be discovered over the network. After setting the PSK, a waiting delay is applied to make the system stable (Fig. 2).
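A rough sketch of Steps 1 and 2 is shown below, assuming a host talking to the G3-PLC modem over a serial port with pyserial. The command strings (GET_MODE, SET_PSK, and so on), the port name, and the key are placeholders, not the actual vendor command set.

```python
import time
import serial  # pyserial

# Step 1: initialize the physical layer at 38,400 baud (chosen for reliability
# and low power dissipation, as described above).
port = serial.Serial("/dev/ttyUSB0", baudrate=38400, timeout=2)  # hypothetical port
time.sleep(5)  # 5 s delay for the system to stabilize

def send_cmd(cmd: str) -> str:
    """Send one placeholder command line and return the modem's reply."""
    port.write((cmd + "\r\n").encode())
    return port.readline().decode(errors="ignore").strip()

# Check whether the module is already in ADP mode; if not, it is in boot mode
# and must be switched (command names are illustrative placeholders).
if send_cmd("GET_MODE") != "ADP":
    send_cmd("SET_MODE ADP")

# Configure the module as a device (the Webdyn router acts as coordinator)
# and select the FCC band (10-490 kHz).
send_cmd("SET_ROLE DEVICE")
send_cmd("SET_BAND FCC")

# Step 2: set the pre-shared key (PSK); it must match the key stored in the
# router for the device to be discovered on the network.
send_cmd("SET_PSK 0123456789ABCDEF0123456789ABCDEF")  # hypothetical key
time.sleep(2)  # waiting delay so the link settles before network discovery
```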


Fig. 2 Flow diagram of proposed work

Step 3: Network Discovery. After the key is set, the device starts the network discovery process for sending data from the meter to the HES through the router. The router sends a beacon response to the network discovery, confirming that the device is able to join the network once the network lines are free. The network discovery process provides the personal area network count (PAN count), the PAN ID, the LBA address, and the link quality indicator (LQI) ID. The PAN count is the number of devices connected in the network, and the PAN ID is the unique identity of each device through which the device connection is established. The MAC address is decided by the combination of PAN count, PAN ID, LBA address, and LQI ID. Each device has a unique MAC address of six two-digit


hexadecimal numbers. The MAC addresses of all devices are stored in the coordinator information table of the router.

Step 4: Join Network. At the time of joining the network, the device sends the required PAN ID and LBA address to the network, and the router then assigns an address to the device; this assigned address is its IPv6 address.

Step 5: Send Data Using IPv6 and UDP. The IPv6 protocol is used for sending data from device to device, and the packet is prepared with the DLMS protocol.

Step 6: Send Data to HES. Finally, the device sends data to the head-end system (HES). The link can be checked by pinging from the HES to the meter and from the meter to the HES. If an error occurs during data transmission, router discovery is retried twice; if the router is still not discovered, a power reset is given to the module and the process returns to initialization. If the data transfer is successful, a proper response is received and the link remains stable. The system then waits for the scheduled time for transferring data and checks the link every hour; if the link breaks, router discovery starts again.
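Step 5 can be pictured with the small Python sketch below, which sends a payload over UDP to an IPv6 address. The address, port, and payload bytes are placeholders; in the real system the payload would be a DLMS-encoded frame and the address would come from the join procedure.

```python
import socket

# Hypothetical IPv6 address of the HES-side endpoint and an illustrative UDP port.
HES_ADDR = ("2001:db8::10", 61616)

# Placeholder payload; in the actual system this would be a DLMS frame.
payload = bytes.fromhex("0001000100100014")

with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(payload, HES_ADDR)
    try:
        reply, peer = sock.recvfrom(2048)   # wait for the HES acknowledgement
        print("HES replied:", reply.hex())
    except socket.timeout:
        # Mirrors the retry/reset logic of Step 6: on failure, rediscover the
        # router (retry twice) and, if that also fails, power-reset the module.
        print("no response, trigger router discovery / power reset")
```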

4 Result and Analysis

Fourteen meters were connected on the same power line with one LBA (router), and the router was connected to the HES through Ethernet. The test was performed for two weeks. The meters have been running for the past 8 months and their data is monitored on a regular basis, as shown in Fig. 3. The current week's results are shown in this paper: alarm information, energy consumption on the bill date, load profile information, event log data, TOU data, power utility data, instantaneous parameters, and instantaneous relay

Fig. 3 Analysis view of the installed 14 meters' day-wise communication chart on the HES


Fig. 4 Analysis view of meter instant data

Fig. 5 Analysis view of meter billing data

connection and disconnection of the meter supply, performed on a regular or on-demand basis (Figs. 4 and 5).
• Hourly data transmission from the meter to the HES through the router is more than 90% successful under all conditions.
• Daily downloading has increased and is more than 95% successful.
• Instant data can be read in less than 30 s.
• Relay connect and disconnect operations can be performed within 30 s.
• Other data such as the daily load survey, load profile, tampers, and events can be successfully read on demand or on schedule.

5 Conclusions and Future Work

In the proposed work, we designed, programmed, and tested an existing smart meter with a new G3-PLC module to deliver its data to the HES through the router. Embedded C and assembly language were used for coding. With G3-PLC technology, we are able to retrieve 90% of the data on an hourly basis and 95% of the data on a daily basis from the smart


meter to the HES, and instant data reads and relay operations can be done within 30 s. For future use of this technology, the addition of a greater number of meters should be considered and the capacity of the routers should be increased to make the network more reliable, so that the success rate of information exchange improves and the technology becomes more dependable in the field.

References 1. Renesas Electronics, PLC basic principle, https://www.renesas.com/us/en/about/edge-mag azine/solution/27-smart-meter.html. Last accessed 2018/09/06. 2. G3-PLC overview, http://www.g3-plc.com/what-is-g3-plc/g3-plc-overview/. Last accessed 2019. 3. Pinomaa, A., Ahola, J., & Kosonen, A. (2011). Power-line communication-based network architecture for LVDC distribution system. In 2011 IEEE International Symposium on Power Line Communication and Its Applications (pp. 358–363). 978-1-4244-7750-0/11. 4. Patil, S. S., Mithari, V., Mane, P., & Patil, A. (2017). Automatic meter reading using PLC. International Research Journal of Engineering and Technology (IRJET), 4(10), 1062–1065. 5. Nguyen, T. V., Petit, P., Aillerie, M., Charles, J. P., & Le, Q. T. (2015). Power line communication system for grid distributed renewable energy. Journal of Fundamentals of Renewable Energy and Applications, 5(3). ISSN: 2090-4541©JFRA. 6. Baskaran, S., & Tuithung, T. (2017). Remote monitoring and control of smart distribution grid using Xbee communication. In Proceeding of 2018 IEEE International Conference on Current Trends toward Converging Technologies, Coimbatore, India. 978-1-5386-3702-9/18/. 7. Diao, B., Chen, G., & He, F. (2018). Loudspeaker operation status monitoring system based on power line communication technology. International Journal of Image, Graphics and Signal Processing, 54–62. 8. Chauvenet, C., Etheve, G., Sedjai, M., & Sharma, M. (2017). G3-PLC based IoT sensor network for Smart Grid. IEEE ISPLC 2017 1570327888.

A Profound Analysis of Parallel Processing Algorithms for Big Image Applications K. Vigneshwari and K. Kalaiselvi

1 Introduction

Due to the frequent use of images in web pages, social sites, product listings on shopping Web sites, and so on, there is a rapidly growing need to process large image files in big data analysis. Such pictures are used in various categories of applications such as content-based image retrieval (CBIR), image annotation and labeling, and image content identification. To cope with the scale of these enormous image collections, the computational complexity of the image processing model is very high; it is therefore essential to use distributed environments and accelerators to process these huge pictures. From [1], distributed systems have four advantages compared to a single remote system: (i) data sharing, which permits multiple users to access a common database; (ii) machine sharing, which permits multiple users to share their devices; (iii) communication, which allows machines to communicate with each other more easily; and (iv) flexibility, which enables a distributed computer to spread the workload over the connected machines effectively. Better flexibility, reliability, and efficiency are found in distributed environments compared with a single machine [2]. For effective data processing in distributed systems, a number of existing environments such as MapReduce (MR) [3], Spark [4], Storm [5], and Hadoop [6] are used. These models are open source and are appropriate for diverse areas. MR [3] is effective for processing and creating large data sets.

K. Vigneshwari (B) · K. Kalaiselvi, Department of Computer Science, VELS Institute of Science, Technology and Advanced Studies, Chennai, India. e-mail: [email protected]; K. Kalaiselvi e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_28


The user-defined Map function produces intermediate key-value pairs for every input pair, and a Reduce function combines all intermediate values associated with the same intermediate key. As illustrated in [3], this model can express several real-world tasks, and it is used in parallel and distributed environments to make effective use of the resources of a large distributed system. Spark [4] is a cluster computing framework designed for fast, in-memory data processing and is used where highly reliable, low-latency operation is vital. Storm continuously processes unbounded streams of information for real-time processing; whereas Hadoop is designed for batch processing, Storm and Spark are meant for real-time analytics [5, 6]. In this review work, a distributed image processing system developed on Hadoop [5], derived from MR [3], is studied. In comparison with traditional image processing methods, parallel processing can unquestionably achieve substantial improvements, although achieving this with the resources of a single machine remains challenging. In recent years, researchers have proposed image processing algorithms that exploit the high efficiency of parallel processing, applying parallel methods to image classification and feature extraction. When algorithms run in parallel on multiple nodes, their time efficiency improves further. Several existing frameworks for image processing algorithms have achieved improved performance through parallelism; these parallel frameworks, intended for processing large images, have further advanced image processing applications.
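The following self-contained Python sketch, an assumption-level illustration rather than code from any of the surveyed systems, mimics the MR model on a single machine: a user-defined map function emits key-value pairs, the pairs are grouped by key, and a reduce function folds all values that share the same intermediate key.

```python
from collections import defaultdict
from typing import Iterable, Tuple

# Map: emit one (key, value) pair per image tag found in a record.
def map_fn(record: str) -> Iterable[Tuple[str, int]]:
    for tag in record.split():
        yield tag, 1

# Reduce: combine every intermediate value that shares the same key.
def reduce_fn(key: str, values: Iterable[int]) -> Tuple[str, int]:
    return key, sum(values)

def run_mapreduce(records):
    groups = defaultdict(list)          # shuffle phase: group by intermediate key
    for record in records:
        for key, value in map_fn(record):
            groups[key].append(value)
    return [reduce_fn(k, vs) for k, vs in groups.items()]

# Toy input: each record lists the tags attached to one image.
records = ["cat outdoor", "cat indoor", "dog outdoor outdoor"]
print(run_mapreduce(records))
# e.g. [('cat', 2), ('outdoor', 3), ('indoor', 1), ('dog', 1)]
```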

2 Related Work

Big data has recently become an essential area focused on huge-scale computational problems. This section reviews earlier parallel processing methods for general applications, medical applications, and image processing applications.

2.1 Review of Parallel Processing Methods—Many Applications

Pavlo et al. [7] performed a comparison showing that Hadoop is 2.50 times slower than a comparable database management system (DBMS) except in the case of data loading. A benchmark consisting of a group of tasks was run on an open-source MR implementation and on two parallel DBMSs. The performance examination of these DBMSs showed


exceptionally better results for heavier data loads and improved execution time compared with the existing MR system. Anderson and Tucek [8] criticized the present Hadoop system: although it is scalable, it delivers low efficiency per node, with processing rates below 5 MB/s in recent studies. They argue that data-intensive scalable computing (DISC) systems have not solved the efficiency issue because they focus on scalability without taking efficiency into consideration. Li et al. [9] proposed an improved MR framework that uses hash techniques to permit fast in-memory processing. Testing this framework on real-world workloads showed that it significantly speeds up Map tasks and allows results to be returned incrementally throughout the task. Jiang et al. [10] conducted an in-depth study of MR on a 100-node Amazon Elastic Compute Cloud (EC2) cluster at different degrees of parallelism. Five design factors were identified, and the results demonstrate that by carefully fine-tuning these factors, the performance of Hadoop can be increased by a factor of 2.5–3.5 while remaining both elastically scalable and efficient. The MR model is intended for parallel processing and addresses huge data sets across a wide selection of real-world tasks [3]. Mohammed et al. [11] studied the MR programming framework applied to clinical big data samples; the use of a Hadoop-based MR structure opens new prospects in the expanding field of big data analytics. Wang et al. [12] reported a method that automatically decides whether a particular adverse event (AE) is caused by a specific drug based on the content of PubMed citations. An adverse drug events (ADEs) taxonomy was developed to recognize neutropenia for a preselected set of drugs and was tested on a diverse set of 76 drugs related to neutropenia; the results reported an AUROC of 0.93 and an accuracy of 0.86. Nguyen et al. [13] introduced a method for processing and storing clinical signal data using the Apache HBase system and the MR programming model with an integrated web-based data visualization layer. This solution removes the need to move data into and out of the storage system while also easily parallelizing the computation; the approach was evaluated on upward of 50 TB of clinical and inpatient medical data sets. Sweeney et al. [14] developed an open-source Hadoop image processing interface (HIPI) for computer vision tasks exploiting MR. HIPI abstracts the highly technical details of Hadoop's system and is flexible enough to implement several methods from the current computer vision literature; it has been applied to large-scale image processing and visualization projects. A high-performance computing (HPC) platform was developed for the rapid examination of tissue microarray (TMA) virtual slides [15]. The automated system efficiently handled 230 patients' data within about a minute, and with real-time investigation of over 90 TMAs, multiplex biomarker experiments were speeded up substantially.


2.2 Review of Parallel Processing Methods—Image Processing Applications

Work on Hadoop MR, MR, and Spark processing platforms continues, addressing the problem of building systems for computationally intensive data processing and distributed storage. Some of these works are described as follows. Kohlwey et al. [16] developed a method for general search of cloud-scale biometric images and present experimental details of human iris matching within this framework, together with a discussion of opportunities for future study. Vemula and Crick [17] proposed a Hadoop-based library to support large-scale image processing by introducing the Hadoop image processing framework; it aims to let image processing applications use the Hadoop MR framework without the master nodes introducing an added source of complexity and error into their MR jobs. A Hadoop distributed file system (HDFS) framework has also been introduced to manage and process big remote sensing applications [18]; the methods were tested on remote sensing images and can easily be applied in a distributed environment, demonstrating that these frameworks can efficiently manage and process big remote sensing data. Almeer [19] uses a Hadoop platform for large-scale remote sensing image applications, applied directly to image formats such as Tag Image File Format (TIFF), Joint Photographic Experts Group (JPEG), Bitmap (BMP), and Graphics Interchange Format (GIF). Kune et al. [20] recognized the needs of overlapped data organization and introduced a two-phase extension to the Hadoop distributed file system (HDFS) and MR framework, known as XHAMI, to handle them; experiments in image processing domains demonstrate that XHAMI needs less storage space and increases system efficiency, particularly for overlapped data. Bajcsy et al. [21] continued the benchmarking and consistency testing that has remained a motivation for Hadoop cluster-based big image processing; microscopic image sets were processed on the National Institute of Standards and Technology (NIST) Raritan cluster using Java Remote Method Invocation (RMI) under a variety of configurations. Sozykin and Epanchintsev [22] developed the MapReduce image processing framework (MIPr), based on the distributed paradigm; MIPr includes a high-level image processing API for development and achieves substantially improved image processing in the Hadoop distributed system. Yamamoto and Kaneko [23] discussed how to build a parallel distributed environment for a video database using the computational resources of a cloud computing environment; the results for video processing applications remain an open challenge, which can be managed by developing parallel implementations with MR on the Hadoop platform.


3 Inferences from the Review

Because of the huge collection sizes and the high computational cost of current image processing methods, today's image collections cannot be processed efficiently on a single computer, so image processing applications require a distributed computing environment. On the other hand, distributed computing is a difficult task that demands deep technical knowledge and is often not used by researchers who introduce image processing algorithms. The MR and Hadoop frameworks are therefore needed so that researchers can focus on image processing methods while the difficulties of distributed computing are hidden from them.

4 Solutions

Using the computational resources of a cloud computing environment to process image databases in parallel is an emerging research topic. The open-source Apache Hadoop distribution offers a common MR-based framework for processing images in parallel, making it possible to gain time efficiency without sacrificing performance. Dong et al. [24] proposed an image cloud processing (ICP) system to address this issue. The typical ICP framework includes two image processing mechanisms, static image cloud processing (SICP) and dynamic image cloud processing (DICP). SICP is designed for processing significant image data that has already been stored in the distributed structure, while DICP handles dynamic requests from clients and must be capable of returning results instantaneously. Results measured on the ImageNet data set confirm the capability of the ICP structure over conventional methods in terms of time complexity and performance.

5 Conclusion

In recent years, the Hadoop and MR frameworks have become a common foundation for big data in industry. Various frameworks have been introduced to improve results and address Hadoop-related issues. This review also shows how those frameworks can be used to construct a concise and capable solution for big data image processing applications. Although the majority of previous research focuses on optimizing image processing algorithms to improve their effectiveness, much of it has not been evaluated in a parallel environment,


so a scalability challenge remains. The successes and limits identified by recent work allow future work to focus on developing an effective parallel processing framework for massive image data using cloud computing capacity. Future work will introduce an effective processing and construction approach, the preferred image cloud processing (ICP), to successfully handle the data explosion in image processing applications.

References 1. Tanenbaum, A. S., & Van Steen, M. (2007). Distributed Systems: Principles and paradigms (pp. 7–8). Upper Saddle River, NJ: Prentice Hall. 2. Fleischmann, A. (2012). Distributed systems: Software design and implementation (pp. 4–5). Berlin: Springer. 3. Dean, J., & Ghemawat, S. (2008). MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1), 107–113. 4. Zaharia, M., Chowdhury, M., Franklin, M. J., Shenker, S., & Stoica, I. (2010). Spark: Cluster computing with working sets. HotCloud, 10(10–10), 1–7. 5. Hadoop, W. T. (2009). The definitive guide, 1st ed. O’Reilly Media. 6. Shvachko, K., Kuang, H., Radia, S., & Chansler, R. (2010). The hadoop distributed file system. In IEEE 26th Symposium on Mass Storage Systems and Technologies (MSST) (pp. 1–10). 7. Pavlo, A., Paulson, E., Rasin, A., Abadi, D. J., DeWitt, D. J., Madden, S., & Stonebraker, M. (2009). A comparison of approaches to large-scale data analysis. In Proceedings of the 2009 ACM SIGMOD International Conference on Management of Data (pp. 165–178). 8. Anderson, E., & Tucek, J. (2010). Efficiency matters! ACM SIGOPS Operating Systems Review, 44(1), 40–45. 9. Li, B., Mazur, E., Diao, Y., McGregor, A., & Shenoy, P. (2011). A platform for scalable onepass analytics using mapreduce. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data (pp. 985–996). 10. Jiang, D., Ooi, B. C., Shi, L., & Wu, S. (2010). The performance of mapreduce: An in-depth study. Proceedings of the VLDB Endowment, 3(1–2), 472–483. 11. Mohammed, E. A., Far, B. H., & Naugler, C. (2014). Applications of the MapReduce programming framework to clinical big data analysis: Current landscape and future trends. BioData mining, 7(1), 1–23. 12. Wang, W., Haerian, K., Salmasian, H., Harpaz, R., Chase, H., Friedman, C. (2011). A drugadverse event extraction algorithm to support pharmacovigilance knowledge mining from PubMed citations. In AMIA Annual Symposium Proceedings (pp. 1464–1471). Bethesda, Maryland: American Medical Informatics Association. 13. Nguyen, A. V., Wynden, R., Sun, Y. (2011). HBase, MapReduce, and integrated data visualization for processing clinical signal data. In AAAI Spring Symposium: Computational Physiology 2011. 14. Sweeney, C., Liu, L., Arietta, S., & Lawrence, J. (2011). HIPI: A Hadoop image processing interface for image-based mapreduce tasks. Chris: University of Virginia. 15. Wang, Y., McCleary, D., Wang, C.-W., Kelly, P., James, J., Fennell, D., et al. (2011). Ultra-fast processing of gigapixel tissue microarray images using high performance computing. Cellular Oncology, 34(5), 495–507. 16. Kohlwey, E., Sussman, A., Trost, J., Maurer, A. (2011). Leveraging the cloud for big data biometrics: Meeting the performance requirements of the next generation biometric systems. IEEE World Congress on Services (SERVICES) (pp. 597–601). 17. Vemula, S., & Crick, C. (2015). Hadoop image processing framework. IEEE International Congress on Big Data (BigData Congress) (pp. 506–513).


18. Wang, C., Hu, F., Hu, X., Zhao, S., Wen, W., & Yang, C. (2015). A Hadoop-Based distributed framework for efficient managing and processing big remote sensing images. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2(4), 63–67. 19. Almeer, M. H. (2012). Cloud hadoop map reduce for remote sensing image analysis. Journal of Emerging Trends in Computing and Information Sciences, 3(4), 637–644. 20. Kune, R., Konugurthi, P. K., Agarwal, A., Chillarige, R. R., & Buyya, R. (2017). XHAMI– extended HDFS and MapReduce interface for Big Data image processing applications in cloud computing environments. Software: Practice and Experience, 47(3), 455–472. 21. Bajcsy, P., Vandecreme, A., Amelot, J., Nguyen, P., Chalfoun, J., Brady, M. (2013). Terabytesized image computations on hadoop cluster platforms. In IEEE International Conference on Big Data (pp. 729–737). 22. Sozykin, A., & Epanchintsev, T. (2015). MIPr-a framework for distributed image processing using Hadoop. In 9th International Conference on Application of Information and Communication Technologies (AICT) (pp. 35–39). 23. Yamamoto, M., & Kaneko, K. (2012). Parallel image database processing with MapReduce and performance evaluation in pseudo distributed mode. International Journal of Electronic Commerce Studies, 3(2), 211–228. 24. Dong, L., Lin, Z., Liang, Y., He, L., Zhang, N., Chen, Q., Cao, X. & Izquierdo, E. (2016). A hierarchical distributed processing framework for big image data. IEEE Transactions on Big Data, 2(4), 297–309

Breast Cancer Detection Using Supervised Machine Learning: A Comparative Analysis Akansha Kamboj, Prashmit Tanay, Akash Sinha, and Prabhat Kumar

1 Introduction

Today, one of the most pressing issues in our society is the large number of people suffering from one disease or another. Although there has been great advancement in the field of medical science, people often lack knowledge and are careless about their health issues. Delay in the diagnosis of diseases can result in severe health conditions and may also result in the loss of life. One such disease is cancer, which is caused by abnormal cell growth: the cells spread uncontrollably in the body, forming a lump or mass, also called a tumour. Lack of proper treatment and delay in diagnosis are the causes of the high death rates among cancer patients. There are various types of cancer, and breast cancer is one of them. It is the second leading cause of death among women today. According to a survey in the USA, 1 out of 8 women is diagnosed with breast cancer in her lifetime.1 In Asia, it is the most common type of cancer found in women and often proves fatal. Hence, early detection of breast cancer is both necessary and important. Although the symptoms at its early stage

1 National Breast Cancer Foundation, Inc. https://www.nationalbreastcancer.org/breast-cancerfacts. Accessed on 30/08/2019.

A. Kamboj · P. Tanay · A. Sinha (B) · P. Kumar Computer Science and Engineering Department, National Institute of Technology Patna, Patna, India e-mail: [email protected] A. Kamboj e-mail: [email protected] P. Tanay e-mail: [email protected] P. Kumar e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_29


are poor, the chances of survival increase if it is detected. Owing to a lack of proper staff and limited breast cancer specialists, the survival chances are very low in India. Various methods are available today for breast cancer detection, namely X-rays, ultrasound, and magnetic resonance imaging (MRI); the most common among them is detection through mammography images, which are X-ray images of the breast. Detection through mammography is both innocuous and safe, but multiple challenges exist in deducing breast cancer from mammography images, which result in diagnosis delay and high death rates. The chance of curing breast cancer at its early stage is 90%, more than for any other type of cancer. Since the chances of survival at the initial stage are high, there is a great need to develop an automated system that can detect cancer at the earliest. Using machine learning to interpret mammography images can overcome the difficulties faced, thereby allowing radiologists to arrive at a conclusion easily. This paper provides a comparative study of different supervised machine learning models for the prediction of breast cancer using features extracted from digitized mammography images. Classification of tumours into benign and malignant is done on the basis of the Wisconsin breast cancer dataset.2 This dataset consists of around 699 feature vectors obtained from digitized mammography images; each feature vector consists of nine real-valued features computed from the cell nuclei present in the tumour. The classification models used are logistic regression, Naïve Bayes classifier, gradient boosting classifier, random forest, decision tree, and SVM classifier. The classifiers are trained using the Wisconsin breast cancer dataset, and their performance is compared using different metrics. The work presented in this paper can be used to identify the best classifier for breast cancer detection, which can either be further modified to improve its accuracy or be combined with other techniques for better performance. The rest of the paper is organized as follows: Sect. 2 summarizes the existing related work in the concerned area; Sect. 3 provides a detailed description of the work carried out in this research; experimental results are analyzed in Sect. 4; and finally, Sect. 5 presents the concluding remarks and directions for future work.

2 Related Work

Breast cancer, being the leading cancer type in women worldwide, has led to a plethora of research and publications in the past couple of decades. This section highlights research works that use well-known classification techniques to determine whether a tumour is benign or malignant.

2 UCI Machine Learning Repository. https://archive.ics.uci.edu/ml/datasets/breast+cancer+wisconsin+(original). Accessed on 30/08/2019.


Since SVMs are greatly affected by the choice of the kernel function [1], Hussain et al. demonstrate the efficiency of tumour classification based on the choice of different SVM kernels [2]. A similar technique of using SVM for tumour classification has been used by the authors in [3–5]. They illustrate the use of two distinct kernels for the same task and present their individual performances using various evaluation measures such as the confusion matrix, sensitivity, and specificity. The use of convolutional neural networks for breast cancer detection has been proposed by Cireşan et al. [6], who use max-pooling convolutional neural networks for the detection of mitosis in breast histology images; their approach classifies each pixel and then post-processes the network output. In recent work, the authors of [7] present a computer-aided diagnosis (CAD) scheme using deep belief neural networks for breast cancer classification. The model uses back-propagation neural networks with the Levenberg-Marquardt learning function and reports an accuracy of 99.68%. The authors in [8, 9] use a weighted Naïve Bayesian approach for breast cancer detection, with five-fold cross-validation tests used to validate the efficiency of the proposed model. Karabatak and Ince [10] published an approach involving expert systems for the detection of breast cancer, making use of association rules as well as neural networks to create an automatic diagnostic system. The crux of their work lies in the use of association rules, which, according to the authors, can reduce the dimensionality of the breast cancer database. This also reduces the number of features pushed into the neural network, hence reducing the overall complexity of the neural network computation. The authors used the Wisconsin breast cancer dataset for model construction and validation and applied a three-fold cross-validation scheme to judge the performance of the model; they argue that this combination of association rules and neural networks can also be used to construct automatic diagnostic systems for other diseases.

3 Methodology

In this paper, various models have been compared for classifying breast tumours as benign or malignant. The Wisconsin breast cancer dataset is used for this purpose. This dataset includes nine features for every feature vector, already scaled in the range of 1–10: 'Clump thickness', the thickness of the tumour present in the breast; 'Uniformity of cell size', the consistency in the size of the tumour cells; 'Uniformity of cell shape', the measure of uniformity in cell shapes, highlighting marginal variances; 'Marginal adhesion', the quantity of cells outside of the epithelium that tend to bind together; 'Single epithelial cell size', which reflects the uniformity of cells and indicates whether


the epithelial cells are sufficiently large or not; 'Bare nuclei', the ratio of the number of cells not surrounded by cytoplasm to those that are; 'Bland chromatin', a classification of cells from fine to coarse; 'Normal nucleoli', a classification of the visibility of nucleoli; and 'Mitosis', which measures the rate of cell division. Preprocessing of the dataset is done to remove any ambiguity present and to eliminate null values. The dataset is divided into training and testing sets in the ratio 75:25; only the training set is passed to each model, and prediction results are analyzed on the testing set. In order to obtain the best prediction model, the training set is passed to different supervised learning models: logistic regression, which is used to classify dichotomous dependent variables; the Naïve Bayes classifier, which is based on Bayes' theorem; the gradient boosting classifier, which uses an ensemble of weak models for prediction; random forest, which is built on top of decision trees; the decision tree, which uses a tree-like structure to make decisions on the given dataset; and the support vector machine classifier, a discriminative classifier that uses hyperplanes to arrive at a particular outcome. After training all the above models on the given dataset, their performance is compared on the basis of the F1 score and the confusion matrix obtained from each model. Figure 1 outlines the different stages of the proposed methodology. The pseudocode describing the steps of the procedure is given below; an illustrative implementation sketch follows the list:
1. Extract the dataset from an Excel file
2. Pre-process the obtained dataset
3. Divide the dataset into training and testing sets in the ratio of 3:1

Fig. 1 Flowchart of the methodology


4. Pass the training dataset into each model
5. Pass the testing dataset to obtain the accuracy
6. Obtain the confusion matrix.
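As a hedged illustration of the above pipeline, a scikit-learn version could look like the sketch below. The exact preprocessing, file name, column layout, and hyper-parameters used by the authors are not stated in the paper, so the defaults here are assumptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC

# 1-2. Extract and pre-process: drop rows with missing values ('?') in the
#      Wisconsin dataset; file name and column name are assumptions.
df = pd.read_excel("breast_cancer_wisconsin.xlsx").replace("?", pd.NA).dropna()
X = df.drop(columns=["class"]).astype(float)
y = df["class"]

# 3. 75:25 train/test split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

models = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Random forest": RandomForestClassifier(),
    "Naive Bayes": GaussianNB(),
    "Gradient boosting": GradientBoostingClassifier(),
    "Decision tree": DecisionTreeClassifier(),
    "Linear SVC": LinearSVC(max_iter=5000),
}

# 4-6. Train each model, then report accuracy, weighted F1 and the confusion matrix.
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(name,
          "acc=%.4f" % accuracy_score(y_test, pred),
          "f1=%.4f" % f1_score(y_test, pred, average="weighted"))
    print(confusion_matrix(y_test, pred))
```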

4 Results

To illustrate the comparison of the efficiency of the classification models, this paper presents the comparative results shown in Table 1. The table lists each classifier, the accuracy it produced on the data, its confusion matrix, and its precision, recall, and F1 score. The accuracy indicates the percentage of the test dataset that was predicted correctly by the model. The accuracy of half of the classifiers, namely logistic regression, random forest, and Naïve Bayes, is approximately 96%; linear SVC and gradient boosting perform best with an accuracy of about 97%, while the decision tree has the worst performance. The confusion matrix visualizes the prediction capability of the learning models and is drawn over two classes, benign and malignant: the rows of the confusion matrix indicate the predicted class, while the columns indicate the actual class.

Table 1 Accuracy and confusion matrix comparison (confusion matrix rows: predicted class; columns: actual [benign, malignant])

Classifier | Accuracy (%) | Confusion matrix | Precision (%) | Recall (%) | F1 score (%)
Logistic regression | 96.49 | Benign: [113, 3]; Malignant: [3, 52] | 97.41 | 97.41 | 96.49
Random forest | 95.90 | Benign: [113, 3]; Malignant: [4, 51] | 96.58 | 97.41 | 95.89
Naïve Bayes | 96.50 | Benign: [112, 4]; Malignant: [1, 54] | 99.12 | 96.54 | 97.09
Gradient boosting | 97.07 | Benign: [113, 3]; Malignant: [2, 53] | 98.26 | 97.42 | 96.66
Decision tree | 95.90 | Benign: [111, 5]; Malignant: [2, 53] | 98.23 | 95.68 | 95.93
Linear SVC | 97.66 | Benign: [113, 3]; Malignant: [1, 54] | 99.12 | 97.41 | 97.67


The first cell in the first row contains the True Positive predictions for benign classifications, i.e., correct predictions. The second cell in the first row contains the False Positives, meaning the model predicted the tumour to be benign but the tumour turned out to be cancerous. In the same manner, the first cell of the second row contains the False Negatives, meaning the tumour was incorrectly classified as malignant by the model. The last cell contains the True Negatives, where the model predicted a malignant tumour and the tumour was in fact malignant. The table also presents precision, which is the fraction of relevant instances among the retrieved instances, and recall, which is the fraction of the total relevant instances that are actually retrieved.
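The metrics in Table 1 can be reproduced from a confusion matrix alone. The short sketch below takes the Linear SVC matrix from Table 1 as input and computes per-class precision and recall and a support-weighted F1 score; the paper does not state exactly how its reported precision and recall are weighted, so this is only one plausible reading.

```python
# Confusion matrix for Linear SVC from Table 1 (rows = predicted, columns = actual):
#                 actual benign   actual malignant
# pred benign          113               3
# pred malignant         1              54
cm = [[113, 3],
      [1, 54]]

def per_class_metrics(cm, cls):
    tp = cm[cls][cls]
    fp = sum(cm[cls]) - tp                      # predicted as cls but actually other
    fn = sum(row[cls] for row in cm) - tp       # actually cls but predicted as other
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

supports = [sum(row[c] for row in cm) for c in range(2)]   # actual count per class
weighted_f1 = sum(per_class_metrics(cm, c)[2] * supports[c]
                  for c in range(2)) / sum(supports)

for c, label in enumerate(["benign", "malignant"]):
    p, r, f = per_class_metrics(cm, c)
    print(f"{label}: precision={p:.4f} recall={r:.4f} f1={f:.4f}")
print(f"weighted F1 = {weighted_f1:.4f}")   # about 0.977, close to the 97.67% in Table 1
```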

5 Conclusion and Future Work

This paper provides a comparative analysis of different supervised machine learning models for predicting breast cancer. Accuracy and the confusion matrix are calculated for every model and compared to obtain the best possible machine learning model for breast cancer detection. The results show that every classifier performed appreciably well on the given dataset; however, linear SVC and gradient boosting performed exceptionally well, with accuracies of 97.66% and 97.07%, respectively. This comparative analysis can serve as an initial step in obtaining the best classifier for breast cancer detection, which may be further modified or combined with other techniques to improve the accuracy. The work can be further extended by using mammography images directly for breast cancer detection.

References 1. Mani, S., Kumari, S., Jain, A., & Kumar, P. (2018). Spam review detection using ensemble machine learning. In P. Perner (Ed.), Machine learning and data mining in pattern recognition. MLDM 2018. Lecture Notes in Computer Science (Vol. 10935). Cham: Springer. 2. Hussain, M., Wajid, S. K., Elzaart, A., & Berbar, M. (2011, August). A comparison of SVM kernel functions for breast cancer detection. In 2011 Eighth International Conference Computer Graphics, Imaging and Visualization (pp. 145–150). IEEE. 3. Krishnan, M. R., Banerjee, S., Chakraborty, C., Chakraborty, C., & Ray, A. K. (2010). Statistical analysis of mammographic features and its classification using support vector machine. Expert Systems with Applications, 37, 470–478. 4. Huang, M. W., Chen, C. W., Lin, W. C., Ke, S. W., & Tsai, C. F. (2017). SVM and SVM ensembles in breast cancer prediction. PLoS ONE, 12(1), e0161501. 5. Wang, H., Zheng, B., Yoon, S. W., & Ko, H. S. (2018). A support vector machine-based ensemble algorithm for breast cancer diagnosis. European Journal of Operational Research, 267(2), 687–699. 6. Cire¸san, D. C., Giusti, A., Gambardella, L. M., & Schmidhuber, J. (2013, September). Mitosis detection in breast cancer histology images with deep neural networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 411–418). Berlin: Springer.


7. Abdel-Zaher, A. M., & Eldeib, A. M. (2016). Breast cancer classification using deep belief networks. Expert Systems with Applications, 46, 139–144. 8. Karabatak, M. (2015). A new classifier for breast cancer detection based on Naïve Bayesian. Measurement, 72, 32–36. 9. Kim, W., Kim, K. S., & Park, R. W. (2016). Nomogram of naive Bayesian model for recurrence prediction of breast cancer. Healthcare Informatics Research, 22(2), 89–94. 10. Karabatak, M., & Ince, M. C. (2009). An expert system for detection of breast cancer based on association rules and neural network. Expert Systems with Applications, 36(2), 3465–3469.

An Analytical Study on Importance of SLA for VM Migration Algorithm and Start-Ups in Cloud T. Lavanya Suja and B. Booba

1 Introduction

Cloud computing is a model for enabling ubiquitous, convenient, and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction [1]. The omnipresence of the cloud and the ease of scaling a business up and down have invited many to start businesses using the cloud; among them, start-ups form a sizeable group. As the demands of cloud consumers grow, the cloud provider has to be ingenious. A service level agreement (SLA) is a contract signed between the cloud consumer and the cloud provider listing the Quality of Services (QoS) offered. To be a prolific provider in the cloud business, the cloud provider should be able to meet the contemporary needs of the cloud consumer. The SLA is one of the limiting factors that drive the cloud business, and so, in this paper, a study of the feasibility of SLA compliance, especially for cloud start-ups, is carried out. It starts from the work in [2], where the authors propose a decentralized virtual machine (VM) migration algorithm for data centers in the cloud. Several alternatives exist in the market, and numerous updates are made time and again. VM migration algorithms concentrate on various factors such as migration time, makespan, energy used, and server uptime and downtime, which are the QoS metrics listed in SLAs.

T. Lavanya Suja (B) Research Scholar, Department of Computer Science, VISTAS, Chennai, India e-mail: [email protected] B. Booba Professor, Department of Computer Science, VISTAS, Chennai, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_30


In the previous work [3], an elaborate survey of VM migration algorithms was done. After that, an analysis of the monetary benefits and losses for the cloud consumer and the cloud provider was carried out, and a rough estimate of the average penalty amount for major cloud providers was calculated to show the importance of SLAs and of abiding by them. As the next step, a detailed feasibility study is done on whether SLAs are followed 100% by the cloud provider and how much importance is given to the SLA. Further, this study reveals how important the SLA is in the cloud, especially for cloud start-ups. The rest of the paper is organized as follows: Sect. 2 gives a literature review of some prominent VM migration algorithms and their contribution to SLA compliance; Sect. 3 gives the feasibility study drawn from the wide literature review; the findings are then put forth in Sect. 4 in tabular form for better understanding; and finally, Sect. 5 gives the conclusion and future work of this study.

2 Literature Review

In our previous work [3], we mentioned the need for a look-ahead factor in VM migration algorithms, and this is reiterated in [4], where a linear regression technique is used to predict overloaded and underloaded hosts. After calculating the predicted value, the migration decision is taken; as a result, the approach uses less energy and, most importantly, has minimal SLA violations. The authors of [5] have done a detailed study of cloud computing algorithms and given a meaningful taxonomy; their classification of algorithms as genetic and nature-inspired is valuable, and they give a clear picture of evolutionary computing approaches. Adaptive SLA matching is the approach used in [6] to map public SLAs with private SLAs; the authors assert that such a framework benefits cloud users in terms of cost, adds better SLAs to the cloud database, and therefore raises the bar in cloud technology. A detailed survey of downtime statistics [7] shows that downtime at major cloud service providers such as Amazon and Google Cloud in turn affects their cloud customers such as GitHub, Coursera, HipChat, etc. This clearly brings out the importance of the service availability factor in SLAs. In [8], the artificial bee colony (ABC) algorithm is compared against differential evolution (DE), genetic algorithm (GA), and particle swarm optimization (PSO) algorithms on various benchmark functions such as Sphere, Griewank, Rastrigin, and Rosenbrock. The results show that the ABC algorithm is a better candidate for multi-modal engineering problems with high dimensionality. Multi-modal problems are a special kind of problem in which several global or local solutions exist [9]. This fact gives an important ingredient for developing a VM migration algorithm in cloud computing that includes compliance with SLA factors.
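The look-ahead idea in [4] can be sketched as follows: fit a linear regression to the recent CPU-utilization history of a host and flag it as overloaded (or underloaded) if the predicted next utilization crosses a threshold. The window length and thresholds below are illustrative assumptions, not the values used in [4].

```python
import numpy as np

OVERLOAD, UNDERLOAD = 0.80, 0.20   # utilization thresholds (assumed values)

def predict_next_utilization(history):
    """Fit u(t) = a*t + b to the recent utilization samples and extrapolate one step."""
    t = np.arange(len(history))
    a, b = np.polyfit(t, history, deg=1)
    return a * len(history) + b

def classify_host(history):
    u_next = predict_next_utilization(history)
    if u_next > OVERLOAD:
        return "overloaded"      # candidate source for VM migration
    if u_next < UNDERLOAD:
        return "underloaded"     # candidate for consolidation and switch-off
    return "normal"

# Toy utilization traces (fraction of CPU) for three hosts over the last 6 intervals.
hosts = {
    "host-1": [0.55, 0.60, 0.66, 0.70, 0.74, 0.79],
    "host-2": [0.30, 0.28, 0.25, 0.22, 0.20, 0.17],
    "host-3": [0.45, 0.47, 0.44, 0.46, 0.45, 0.46],
}
for name, hist in hosts.items():
    print(name, classify_host(hist))
```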


Since the SLA factors are quite a few in number and all must be achieved at all times irrespective of the situations prevalent in the cloud, there could be more than one optimal solution. The nature-inspired artificial bee colony algorithm with minimum migration time (ABC-MMT) [10] is compared with other algorithms such as local regression-minimum migration time, dynamic voltage and frequency scaling, interquartile range-minimum migration time, and median absolute deviation-minimum migration time. The simulation results show that ABC-MMT consumes the least energy of all of them while achieving the minimum number of VM migrations. As a negative impact, there are more SLA violations, which reveals a trade-off in this algorithm between energy consumption and SLA compliance. This proves that there is a dire need for new algorithms that are SLA compliant and energy efficient at the same time. A genetic algorithm (GA)-based soft computing technique is proposed in [11], where CloudAnalyst is used to compare the response time of the GA against algorithms such as first come first serve, stochastic hill climbing, and round robin. The simulation results are not remarkable for a test with one data center, but as the number of data centers increases to 4, 5, and 6, the response time of the GA is lower by 15–20 ms. Importantly, this GA includes the delay cost L while calculating the processing unit vector; L is the penalty cost incurred by the cloud provider if it cannot meet the SLA. This is one of the few GA approaches that take the penalty cost of SLA violation into account, which again shows that an SLA-aware VM migration algorithm is a better choice for cloud infrastructure and services. While migrating one or more VMs from a virtual cluster, load balancing is the main factor to be considered. The load can be computation load, network load, memory load, or delay load, and based on these factors a virtual cluster is said to be over-utilized or under-utilized. In order to balance the load of VMs on a virtual cluster, several approaches have been proposed, and one such approach is stochastic hill climbing (SHC) [12]. It belongs to the incomplete-methods category of optimization and gives a locally optimal solution; this optimum yields a lower response time than the FCFS and RR algorithms. On the contrary, the centralized node responsible for load balancing is crucial: when it becomes unavailable or crashes, the whole algorithm stops working. This is a serious issue, so an improvement should be added to this approach; although this soft computing approach gives optimal solutions, it is not fully reliable all the time because of its centralized operation, and a better algorithm should adopt decentralized behavior. A recent study on VM consolidation [13] includes resource selection and resource matching with a novel approach called the Pareto optimal match-swap algorithm, which reduces energy consumption by turning off the maximum number of VMs on a physical machine (PM) and is SLA-aware, so it has fewer SLA violations. This is a good sign, and the Pareto principle can be included in new VM migration algorithms to be developed in the future.
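A simple way to picture how an SLA penalty term (like the delay cost L in [11]) can be folded into a migration or placement decision is the toy cost function below; the energy model, penalty rate, and candidate placements are all hypothetical.

```python
from typing import Dict, List

ENERGY_PRICE = 0.10     # cost per kWh (assumed)
PENALTY_RATE = 5.0      # cost per expected SLA violation, the "L" term (assumed)

def placement_cost(energy_kwh: float, expected_sla_violations: int) -> float:
    """Total cost = energy cost + penalty for every expected SLA violation."""
    return ENERGY_PRICE * energy_kwh + PENALTY_RATE * expected_sla_violations

def pick_best(candidates: List[Dict]) -> Dict:
    """Choose the candidate VM placement with the lowest combined cost."""
    return min(candidates, key=lambda c: placement_cost(c["energy_kwh"],
                                                        c["sla_violations"]))

# Three hypothetical placement plans produced by some migration heuristic.
candidates = [
    {"name": "pack tightly", "energy_kwh": 40.0, "sla_violations": 6},
    {"name": "spread out",   "energy_kwh": 55.0, "sla_violations": 0},
    {"name": "balanced",     "energy_kwh": 46.0, "sla_violations": 1},
]
best = pick_best(candidates)
print(best["name"], "->", placement_cost(best["energy_kwh"], best["sla_violations"]))
# "spread out" wins here: its higher energy bill is cheaper than any SLA penalty.
```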


3 Feasibility Study

The study of a few important algorithms proves that low SLA violation is one of the key requirements for VM migration algorithms, and SLA compliance and its inclusion are very much needed for cloud start-ups. A feasibility study measures the implementation details of an existing system; from this study, we learn the strengths and weaknesses of the VM migration algorithms and thereby understand the importance of SLA compliance. SLAs should include the business-level objectives of the cloud consumer. This fact is very important because the factors defined in SLAs are of no use unless it is known how they will be useful for the business [14]. As the infrastructure maintenance and support are done by the cloud provider, the cloud consumer can and should concentrate on a well-laid-out, not overloaded SLA. Start-ups in recent years have leveraged cloud infrastructure and have increased their average net worth to $750 million [15]. SLA monitoring has gained importance because violations cost both the cloud provider and the cloud consumer a great deal. As cloud providers place the responsibility of reporting an SLA violation within 30 days of the next payment cycle on the consumer, it becomes mandatory for the cloud consumer to be vigilant about this. Accuracy and fast detection are the main characteristics of SLA monitoring. Apart from this, the inclusion of SLA factors plays a major role; in [16], the author emphasizes the number of times the SLA factors are breached.

4 Findings

Based on the facts analyzed, Table 1 classifies the VM migration algorithms studied, with a focus on start-ups in cloud computing.

Table 1 Feasibility study of SLA compliance in VM migration algorithms

Name of the algorithm | Category | Strengths | Weakness
Adaptive SLA matching | Mathematical | Includes all SLAs | Takes extra time
Artificial bee colony | Nature inspired | More than one optimal solution | Complex design
ABC-MMT | Nature inspired | Minimal migration time | Maximum SLA violations
GA-load balancing | Genetic | Penalty for SLA violation | Increased time complexity
Stochastic hill climbing | Incomplete methods (optimal solution) | Faster response time | Control node's dependency
POM-swap | Mathematical | Minimum SLA violations | Complex design


5 Conclusion and Future Work
Cloud start-ups have been increasing year by year in the cloud business [17]. Companies that were once cloud start-ups have become private corporations within a few years, Uber being one example. This was possible because of the cloud and its services [18, 19]. To make a business flourish in the cloud, proper time and importance should be given to framing the SLA between the cloud provider and the cloud consumer. Hence, in this paper, we have taken five prominent VM migration algorithms and carried out a feasibility study with SLA compliance as the focal point. It is found that SLA compliance is not 100%, which causes a loss of profit for the cloud provider. As a result, it is mandatory to bring in innovative techniques to devise an algorithm with the best SLA compliance, especially for start-ups, as they hold a large share of the cloud business and attract other consumers to embrace the cloud.

References 1. https://www.nist.gov/news-events/news/2011/10/final-version-nist-cloud-computing-defini tion-published. 2. Wang, X., Liu, X., Fan, L., & Jia, X. (2013). A decentralized virtual machine migration approach of data centers for cloud computing. Mathematical Problems in Engineering, 2013. 3. Lavanya Suja, T., & Booba, B. (2019). A study on virtual machine migration algorithms in cloud computing. International Journal of Emerging Technologies and Innovative Research (JETIR), 6(3), 337–340. 4. Farahnakian, F., Liljeberg, P., & Plosila, J. (2013, September). LiRCUP: Linear regressionbased CPU usage prediction algorithm for live migration of virtual machines in data centers. In 2013 39th Euromicro Conference on Software Engineering and Advanced Applications (pp. 357–364). IEEE. 5. Zhan, Z. H., Liu, X. F., Gong, Y. J., Zhang, J., Chung, H. S. H., & Li, Y. (2015). Cloud computing resource scheduling and a survey of its evolutionary approaches. ACM Computing Surveys (CSUR), 47(4), 63. 6. Maurer, M., Emeakaroha, V. C., Brandic, I., & Altmann, J. (2012). Cost–benefit analysis of an SLA mapping approach for defining standardized Cloud computing goods. Future Generation Computer Systems, 28(1), 39–47. 7. Gagnaire, M., Diaz, F., Coti, C., Cerin, C., Shiozaki, K., Xu, Y. … Leclerc, P. (2012). Downtime statistics of current cloud solutions. International Working Group on Cloud Computing Resiliency, Tech. Rep. 8. Karaboga, D., & Basturk, B. (2012). On the performance of artificial bee colony algorithm. Swarm and Evolutionary Computation, 2(5), 39–52. 9. https://en.wikipedia.org/wiki/Evolutionary_multimodal_optimization. Date of access: 23/08/2019. 10. Ghafari, S. M., Fazeli, M., Patooghy, A., & Rikhtechi, L. (2013, August). Bee-MMT: A load balancing method for power consumption management in cloud computing. In 2013 Sixth International Conference on Contemporary Computing (IC3) (pp. 76–80). IEEE. 11. Dasgupta, K., Mandal, B., Dutta, P., Mandal, J. K., & Dam, S. (2013). A genetic algorithm (ga) based load balancing strategy for cloud computing. Procedia Technology, 10, 340–347. 12. Mondal, B., Dasgupta, K., & Dutta, P. (2012). Load balancing in cloud computing using stochastic hill climbing—A soft computing approach. Procedia Technology, 4(2012), 783–789. 13. Li, W., Wang, Y., Wang, Y., Xia, Y., Luo, X., & Wu, Q. (2017). An energy-aware and under-SLAconstraints VM consolidation strategy based on the optimal matching method. International Journal of Web Services Research (IJWSR), 14(4), 75–89.


14. http://www.cloud-council.org/Cloud_Computing_Use_Cases_Whitepaper-4_0.pdf. Last accessed: 24/10/2019. 15. https://www.sifytechnologies.com/blog/how-the-start-up-industry-can-benefit-from-cloudcomputing. Last accessed: 17/08/2019. 16. Emeakaroha, V. C., Ferreto, T. C., Netto, M. A., Brandic, I., & De Rose, C. A. (2012, July). Casvid: Application level monitoring for SLA violation detection in clouds. In 2012 IEEE 36th Annual Computer Software and Applications Conference (pp. 499–508). IEEE. 17. https://www.datamation.com/cloud-computing/slideshows/top-10-cloud-computing-startups. html. Last accessed 17/08/2019. 18. https://www.americanexpress.com/en-us/business/trends-and-insights/articles/6-ways-cloudcomputing-helps-businesses-save-time-and-money. Last accessed 17/08/2019. 19. https://cloudharmony.com/status. Last accessed: 28/07/2019.

In-Database Analysis of Road Safety and Prediction of Accident Severity Sejal Chandra, Parmeet Kaur, Himanshi Sharma, Vaishnavi Varshney, and Medhavani Sharma

1 Introduction
Road accidents are one of the leading global safety issues due to the high number of resulting injuries and fatalities. On-road safety is being recognized globally as a key area of concern for governments and citizens alike. In India, several measures have been undertaken to ensure safety on roads, such as the Motor Vehicles (Amendment) Bill, the imposition of strict traffic regulations, and raising awareness among commuters [1]. In another significant step, India became a signatory to the Brasilia declaration, thereby committing support to reducing accidents on roads [2]. A number of reasons are associated with road accidents. Though a few of these are specific to each country, most of them generally relate to a rapid increase in traffic due to increased urbanization, the condition of roads, weather conditions, etc. Road accidents are one of the major causes of death and injuries in developing countries such as India, especially among people less than 50 years of age. Around 5 lakh accidents were reported in India in the year 2016, which claimed the lives of approximately 1.5 lakh persons and caused injuries to many more. If the severity of accidents is expressed as the number of deaths per 100 accidents, it was found to be 31.4 in India for the year 2016 [1, 3]. It has been observed that countries with low-income population groups and fewer vehicles encounter a high majority of road accidents in the world.

S. Chandra · P. Kaur (B) · H. Sharma · V. Varshney · M. Sharma Department of Computer Science and Information Technology, Jaypee Institute of Information Technology, Noida, India e-mail: [email protected] S. Chandra e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_31


This can be attributed to the absence of laws related to road safety or their implementation, poor road infrastructure, and inadequate medical facilities during emergency situations. In comparison, in developed countries such as the UK, road accidents due to weather conditions have been one of the traffic safety challenges [4]. In recent years, big data technologies have been ably put to use to solve a number of problems facing society at large. This work presents an in-database analysis of road safety and prediction of accident severity using data related to road accidents and vehicle and road conditions. The first phase of the work employs a NoSQL database to identify the relationship between road accidents and the factors affecting their number. Subsequently, a predictive model is built for accident severity prediction based on the major influential factors identified during the first phase. Data for the study has been taken from the Web site Kaggle [5], which offers a public data platform for data scientists. The data originally comes from the Open Data Web site of the UK government (https://data.gov.uk/), where it was published by the Department of Transport. This information includes geographical locations, vehicle types, weather conditions, number of casualties, etc. This detailed information can be used for interesting and useful research and analysis. The remaining paper is structured as follows: Sect. 2 presents the related work done in this field. Section 3 discusses the dataset, the proposed methodology, and the results of its implementation. We conclude the paper in Sect. 4.

2 Related Work
A number of studies have been conducted for identifying the predominant factors causing road accidents. The work in [6] studies a vehicle's role in road accidents since it could affect the severity of the accident. Machine learning algorithms such as decision tree, Naïve Bayes, and ensembling techniques have been used in the work. The study in [7] identifies high-density accident hotspots and, in turn, creates a clustering technique which determines the reasons that most likely exist in specific clusters. Various authors have discussed algorithms to estimate road accident severity, and a comparative analysis has been done to identify the best-fit method with the highest accuracy. The work in [8] has made use of a multilayer perceptron model in neural networks for determining the severity of road accidents. In the paper [9], a gender-based approach has been followed to classify accident severity. It has been observed that after the mid-1980s, the number of female drivers involved in severe motor accidents increased considerably; however, the study of various factors showed that a high number of male drivers are involved in alcohol-related crashes [15]. A study focusing on accidents separately in rural and urban areas is presented in [10]. The condition of vehicles and their impact on accidents is explored in [11]. Studies specific to countries are presented in [12, 13]. Recently, a study carried out on data from China has observed that driver fatigue and over-speeding result in severe


accidents. The present work aims to identify the significant parameters that cause the majority of accidents and build a predictive model to determine accident severity.

3 Proposed Methodology
The foremost step in the proposed methodology was to load the dataset into the database, i.e., Cassandra [16]. The attributes of the dataset are depicted in Table 1. Subsequently, Cassandra query language (CQL) was used to execute aggregation-based queries on the data. These queries were designed to find the influence of various attributes such as accident severity, road surface conditions, speed limit, weather conditions, and junction details on the number and severity of accidents. A sample of the designed CQL queries is illustrated in Fig. 1 and the results are presented.
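As a rough illustration of how such aggregation-based CQL queries can be issued from Python, the sketch below uses the DataStax cassandra-driver; the contact point, keyspace, table, and column names are assumptions made for illustration and are not the authors' actual schema.

```python
# Minimal sketch: run one aggregation-style CQL query from Python.
# Keyspace/table/column names are illustrative, not the authors' schema.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])            # contact point of a local Cassandra node
session = cluster.connect("roadsafety")     # hypothetical keyspace holding the dataset

# Count accidents recorded at mini-roundabouts in a given year;
# ALLOW FILTERING is needed unless these columns are part of the primary key.
query = ("SELECT COUNT(*) FROM accidents "
         "WHERE junction_detail = %s AND year = %s ALLOW FILTERING")
row = session.execute(query, ("Mini-roundabout", 2005)).one()
print("Accidents at mini-roundabouts in 2005:", row.count)

cluster.shutdown()
```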

3.1 Data Analysis and Visualization
The dataset was queried to find the number of accidents as per the following parameters:
1. Road junctions: The first query was designed to check the variation in the number of accidents from 2005 to 2010, taking road junctions as a factor. As observed from Fig. 2a, though accidents happened at crossroad junctions, T or staggered junctions, and mini-roundabouts, the number of accidents reported at the T junction is the highest. Thus, the design of road junctions plays a significant role in accidents.

Table 1 Dataset attributes
Junction_detail, Light_conditions, Road_surface_conditions, Speed_limit, Weather_conditions, Year, Accident_severity, Urban_or_rural_area, Time, Number_of_casualties, Age_of_driver, Road_type

Fig. 1 Sample CQL queries
SELECT COUNT(*) from cloudfile where Junction_Detail='Mini-roundabout' AND Year=2005;
SELECT COUNT(*) from cloudfile where Road_Surface_Conditions='Snow' AND Year=2010;
SELECT COUNT(*) from cloudfile where Speed_limit=

Algorithm 1 (fragment)
30: TweetSent = 'positive'
31: else
32: TweetSent = 'negative'
33: EndIf
34: return TweetSent

Algorithm 2
1: Input: TweetContent (Tc)
2: Output: CleanTweet (Tcl)
3: Procedure preProcess(Tc)
4: TweetCon1 = removeStopWords(Tc)
5: TweetCon2 = removeHyperlinks(TweetCon1)
6: Tcl[] = ConvertCamelCase(TweetCon2)
7: return Tcl
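A minimal Python sketch of this pre-processing step is given below; it assumes NLTK's English stopword list, and the regular expressions and helper names merely mirror the pseudocode rather than the authors' implementation.

```python
# Minimal sketch of the tweet pre-processing in Algorithm 2.
# Assumes nltk's English stopword list is available (nltk.download('stopwords')).
import re
from nltk.corpus import stopwords

STOP_WORDS = set(stopwords.words("english"))

def remove_stop_words(text):
    return " ".join(w for w in text.split() if w.lower() not in STOP_WORDS)

def remove_hyperlinks(text):
    return re.sub(r"https?://\S+", "", text)

def convert_camel_case(text):
    # e.g. "GoodMorning" -> "Good Morning"
    return re.sub(r"(?<=[a-z])(?=[A-Z])", " ", text)

def pre_process(tweet):
    cleaned = remove_stop_words(tweet)
    cleaned = remove_hyperlinks(cleaned)
    return convert_camel_case(cleaned).split()   # token list, as Tcl[] suggests

print(pre_process("Feeling great today http://t.co/xyz GoodVibes"))
```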

Table 2 Test results
S. No. | Input | Result
1 | I'm on cloud nine today. | Happy
2 | Today's weather. | Angry

4 Experimental Environment and Results
We use the Python language, and we run all our experiments on the Windows operating system. We collected 10,000 tweets consisting of emojis. We compute the sentiment score of all the tweets using the proposed SentEmoji technique. The proposed approach was found to be effective and gives an accuracy of 87% (Table 2).
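A toy sketch of the scoring idea is shown below; the emoji lexicon and weights are illustrative placeholders (not the paper's lexicon), and the sign-based labelling simply follows the positive/negative branch visible in the surviving fragment of Algorithm 1.

```python
# Toy sketch of emoji-based scoring: sum illustrative emoji weights found in a
# tweet and label it positive when the total is non-negative, negative otherwise.
EMOJI_SCORES = {
    "\U0001F602": +1.0,   # face with tears of joy
    "\U0001F60D": +1.0,   # smiling face with heart-eyes
    "\u2764\uFE0F": +0.8, # red heart
    "\U0001F62D": -0.8,   # loudly crying face
    "\U0001F620": -1.0,   # angry face
    "\U0001F621": -1.0,   # pouting face
}

def tweet_sentiment(tweet):
    score = sum(w for emoji, w in EMOJI_SCORES.items() if emoji in tweet)
    return "positive" if score >= 0 else "negative"

print(tweet_sentiment("Stuck in traffic again \U0001F621"))   # negative
print(tweet_sentiment("I'm on cloud nine today \U0001F602"))  # positive
```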

5 Conclusion and Future Work
On social networking platforms, lots of emojis are used to express views in the form of visual sentiments. We have used the six most tweeted emojis in our work for sentiment analysis. Sentiment analysis of related sentiments has also been performed; for example, "I just purchased an iPad" is good for Apple, but not for Samsung. Compound sentiment contains positive and negative sentiments within the same sentence, for example, "I like Modi, but do not like BJP." Conditional sentiment includes actions that may happen in the future; for example, the customer is not upset now but says he will be upset if the company does not call him back. Ambiguous negative words have to be properly understood and tagged accordingly; for example, "That movie was so sick" is actually a positive statement. Sarcasm is one of the most difficult sentiments to interpret properly. These sentiments have been tackled with the use of emojis. In future, more emoticons can be added to the given list, and a larger number of tweets can be included to get a better idea of the accuracy of our proposed algorithms. The proposed work can also be extended to improve the accuracy of sentiment classification by incorporating more emoticons.


Protection of Six-Phase Transmission Line Using Bior-6.8 Wavelet Transform Gaurav Kapoor

1 Introduction
The faults on an SPTL have to be detected quickly so as to repair the faulted phase, reestablish the electricity supply, and reduce the outage time as soon as feasible. In the latest years, lots of studies have been dedicated to the problem of fault recognition, categorization, and position assessment in SPTLs. A brief literature review of various recently reported methods is introduced henceforth. Recently, a standalone wavelet transform has been used for the detection as well as the classification of triple line to ground faults which occur at unlike positions on a twelve-phase transmission line [1]. The research work addressed by the researchers in [2] exemplifies that WT and ANN have been used for the protection of the SPTL. In the study reported in [3], WT is employed as a fault detector for the protection of the SPTL. Authors in [4] employed a composite of MNN and WT, and addressed the issue of recognizing the faulted zone and evaluating the location of faults in the SPTL. In the research reported in [5–7], the authors employed MM and WT for the protection of the SPTL. WHT, WT, and HHT have been used in [8–10], respectively, for fault recognition in different configurations of TLs. In this work, the Bior-6.8 wavelet transform (BWT) is utilized, and it is executed for the detection of faults in the SPTL. To the best of the author's knowledge, such work has not been reported so far. The results exemplify that the BWT competently detects the faults, and the performance of the BWT is not sensitive to the variation in fault factors. This article is structured as follows: Sect. 2 reports the specifications of the SPTL. Section 3 describes the flow diagram for the BWT. Section 4 presents the results of the investigations carried out in this work. Section 5 concludes the paper.
G. Kapoor (B) Department of Electrical Engineering, Modi Institute of Technology, Kota, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_34


Fig. 1 Graphic of SPTL: 138 kV sources at both ends, with the relay and Load-1 at Bus-1, Load-2 at Bus-2, and a fault on the six-phase transmission line between the buses

2 The Specifications of SPTL
Figure 1 depicts the graphic of the SPTL. The SPTL has a rating of 138 kV, 60 Hz. The SPTL has a total length of 68 km and is divided into two sections of 34 km each. The current measurement blocks are connected only at bus-1 for measuring the currents of the SPTL. The simulation model of the 138 kV SPTL is designed using MATLAB.

3 BWT Based Protection Process
Figure 2 depicts the flow diagram of the BWT. The steps are shown below:
Step 1: The six-phase currents are recorded through the current measurement blocks connected at bus-1.

Fig. 2 Flow diagram for BWT: capture the six-phase currents, analyze them with the BWT, retrieve features in terms of BWT coefficients, and compare the phases; if |BWT coefficient| of a phase exceeds that of the healthy phases, the fault is detected and the faulty phase recognized simultaneously, otherwise no fault is declared

Fig. 3 Six-phase currents and voltages for no-fault

Step 2: BWT is utilized to calculate the Bior-6.8 coefficients of the six-phase currents.
Step 3: A phase is declared the faulty phase if, under a fault situation, its BWT coefficient has a larger amplitude than that of the healthy phases.
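A minimal Python sketch of these steps is given below using PyWavelets, in which "bior6.8" names the biorthogonal 6.8 wavelet; the synthetic currents, the single-level decomposition, and the margin used to separate faulted from healthy phases are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of Steps 2-3: single-level DWT of each phase current with the
# bior6.8 wavelet, then flag phases whose detail coefficients dominate.
# Synthetic signals and the margin factor are illustrative only.
import numpy as np
import pywt

fs, f0 = 3840, 60                       # assumed sampling and system frequency
t = np.arange(0, 0.1, 1 / fs)
phases = {p: 100 * np.sin(2 * np.pi * f0 * t + k * np.pi / 3)
          for k, p in enumerate("ABCDEF")}
phases["C"][t > 0.05] *= 8              # crude stand-in for a C-phase fault

def max_bwt_coefficient(signal):
    _, detail = pywt.dwt(signal, "bior6.8")   # single-level decomposition
    return np.max(np.abs(detail))

coeffs = {p: max_bwt_coefficient(x) for p, x in phases.items()}
healthy_level = np.median(list(coeffs.values()))
faulted = [p for p, c in coeffs.items() if c > 3 * healthy_level]  # margin of 3 is arbitrary
print("BWT coefficients:", {p: round(c, 2) for p, c in coeffs.items()})
print("Faulted phase(s):", faulted or "none")
```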

4 Performance Assessment
The performance of the BWT has been tested for no-fault, converting faults, near-in relay faults, far-end relay faults, cross-country faults, and inter-circuit faults. The results are investigated in separate subsections.

4.1 The Efficacy of BWT for No-Fault
Figure 3 shows the current and voltage waveforms for no-fault. Figure 4 exemplifies the BWT coefficients of the currents for no-fault. Table 1 reports the results for no-fault.

4.2 The Efficacy of BWT for Converting Faults
The BWT has been investigated for converting faults. Figure 5 exemplifies the six-phase currents of the SPTL when initially the CG fault at 34 km at 0.07 s is converted into the ABCG fault at 0.17 s among RF = 2.25 Ω and RG = 1.25 Ω. Figure 6 exemplifies the BWT coefficients of the six-phase current when the CG


Fig. 4 BWT coefficients of six-phase currents for no-fault

Table 1 Results of BWT for no-fault (BWT coefficients)
Phase-A | Phase-B | Phase-C | Phase-D | Phase-E | Phase-F
68.1739 | 80.7299 | 60.9458 | 69.7792 | 75.3017 | 80.5093

Fig. 5 Current waveform when CG fault is converted into ABCG fault

fault is converted into the ABCG fault. The fault factors preferred for all the fault cases are: FL = 34 km, RF = 2.25 Ω, and RG = 1.25 Ω. Table 2 reports the results for different converting faults. It is viewed that the BWT is insensitive to the converting faults.

Fig. 6 BWT coefficients when CG fault is converted into ABCG fault

4.3 The Efficacy of BWT for Near-in Relay Faults
The efficiency of the BWT is tested for the near-in relay faults on the SPTL. Figure 7 depicts the ABCDFG near-in relay fault current at 5 km at 0.0525 s among RF = 3.15 Ω and RG = 2.75 Ω. Figure 8 shows the BWT coefficients for the ABCDFG fault. The fault factors for all the fault cases are: T = 0.0525 s, RF = 3.15 Ω, and RG = 2.75 Ω. Table 3 details the results of the BWT for different near-in relay faults. It is confirmed from Table 3 that the BWT has the ability to detect the near-in relay faults precisely.

4.4 The Efficacy of BWT for Far-End Relay Faults
The BWT has been explored for different far-end relay faults. Figure 9 illustrates the BCDEFG far-end relay fault at 63 km at 0.0635 s among RF = 1.55 Ω and RG = 2.35 Ω. Figure 10 exemplifies the BWT coefficients for the BCDEFG far-end relay fault. The fault factors chosen for all the fault cases are: T = 0.0635 s, RF = 1.55 Ω, and RG = 2.35 Ω. Table 4 reports the results for far-end relay faults. It is inspected that the effectiveness of the BWT remains impassive for the far-end relay faults.

4.5 The Efficacy of BWT for Cross-Country Faults
The BWT has been checked for different cross-country faults. Figure 11 exemplifies the ABCG fault simulated at 45 km and the DEG fault simulated at 23 km at 0.0825 s

Table 2 Results of BWT for converting faults: BWT coefficients of phases A–F for the five converting-fault cases CG (0.07 s) converted to ABCG (0.17 s), DEG (0.15 s) to DEFG (0.25 s), DEFG (0.05 s) to ABG (0.15 s), ABCG (0.1 s) to DG (0.25 s), and ABG (0.06 s) to ABCG (0.16 s)

Fig. 7 ABCDFG near-in fault at 5 km at 0.0525 s among RF = 3.15 Ω and RG = 2.75 Ω

Fig. 8 BWT coefficients for ABCDFG near-in fault at 5 km at 0.0525 s

Table 3 Results of BWT for near-in relay faults (BWT coefficients)
Fault type | Phase-A | Phase-B | Phase-C | Phase-D | Phase-E | Phase-F
ABCDFG (5 km) | 891.8842 | 1.0625 × 10³ | 927.3037 | 859.2393 | 46.1408 | 872.6147
ADEFG (6 km) | 698.0516 | 198.8459 | 199.5269 | 1.1002 × 10³ | 999.0319 | 983.6861
ABCDG (7 km) | 916.7233 | 1.2260 × 10³ | 1.1728 × 10³ | 839.4728 | 193.0700 | 191.8798
DEFG (8 km) | 16.0221 | 39.9194 | 23.7915 | 1.2483 × 10³ | 952.9179 | 1.2088 × 10³
ABG (9 km) | 1.2983 × 10³ | 1.0675 × 10³ | 42.0850 | 21.7222 | 24.9438 | 26.1326

Fig. 9 BCDEFG far-end fault at 63 km at 0.0635 s among RF = 1.55 Ω and RG = 2.35 Ω

Fig. 10 BWT coefficients for BCDEFG far-end fault at 63 km at 0.0635 s

Table 4 Results of BWT for far-end relay faults (BWT coefficients)
Fault type | Phase-A | Phase-B | Phase-C | Phase-D | Phase-E | Phase-F
BCDEFG (63 km) | 102.5780 | 3.5600 × 10³ | 3.3889 × 10³ | 2.3638 × 10³ | 3.4125 × 10³ | 3.3351 × 10³
ABCDEG (64 km) | 2.8019 × 10³ | 4.5912 × 10³ | 3.5878 × 10³ | 2.8539 × 10³ | 4.0944 × 10³ | 164.2652
ABCG (65 km) | 2.5793 × 10³ | 3.0995 × 10³ | 2.2840 × 10³ | 55.7570 | 60.9497 | 74.9246
BCDFG (66 km) | 231.3704 | 4.1794 × 10³ | 1.8869 × 10³ | 980.0432 | 166.4762 | 2.0898 × 10³
DEFG (67 km) | 42.5489 | 63.7337 | 94.7558 | 2.3107 × 10³ | 2.6048 × 10³ | 3.7600 × 10³

Fig. 11 ABCG fault at 45 km and DEG fault at 23 km among RF = 2.15 Ω and RG = 1.15 Ω

Fig. 12 BWT coefficients for ABCG fault at 45 km and DEG fault at 23 km at 0.0825 s

among RF = 2.15 Ω and RG = 1.15 Ω. Figure 12 depicts the BWT coefficients for the ABCG and DEG faults. It is explored from Table 5 that the BWT performs efficiently for the detection of cross-country faults on the SPTL.

4.6 The Efficacy of BWT for Inter-circuit Faults
The BWT is investigated for different inter-circuit faults. Figure 13 depicts the ABCG and DEFG fault at 38 km at 0.0674 s among RF = 1.75 Ω and RG = 1.15 Ω. Figure 14 exemplifies the BWT coefficients for the ABCG and DEFG inter-circuit fault. The fault factors are set as: T = 0.0674 s, FL = 38 km, RF = 1.75 Ω, and RG = 1.15 Ω. Table 6

Table 5 Results of BWT for cross-country faults: BWT coefficients of phases A–F for the five cross-country fault pairs ABCG (45 km) with DEG (23 km), ABG (25 km) with DFG (43 km), CG (33 km) with EFG (35 km), ACG (46 km) with DEG (22 km), and AG (36 km) with EG (32 km)

Fig. 13 ABCG and DEFG fault at 38 km at 0.0674 s among RF = 1.75 Ω and RG = 1.15 Ω

Fig. 14 BWT coefficients for ABCG fault and DEFG fault at 38 km at 0.0674 s

tabularizes the results for different inter-circuit faults. It is inspected from Table 6 that the BWT effectively detects all types of inter-circuit faults.

5 Conclusion
The Bior-6.8 wavelet transform (BWT) seems to be very efficient under the varied fault categories for the SPTL. The BWT coefficients of the six-phase fault currents are assessed. The fault factors of the SPTL are varied. It is discovered that the deviation in fault factors does not influence the fidelity of the BWT. The outcomes substantiate that the BWT has the competence to protect the SPTL against different fault categories.

Table 6 Results of BWT for inter-circuit faults: BWT coefficients of phases A–F for the five inter-circuit fault combinations ABCG with DEFG, ACG with DEFG, ABG with DG, CG with DEFG, and AG with EFG

References 1. Kapoor, G. (2018). Wavelet transform based detection and classification of multi-location three phase to ground faults in twelve phase transmission line. Majlesi Journal of Mechatronic Systems, 7(4), 47–60. 2. Koley, E., Verma, K., & Ghosh, S. (2015). An improved fault detection, classification and location scheme based on wavelet transform and artificial neural network for six-phase transmission line using single end data only. Springer Plus, 4(1), 1–22. 3. Kapoor, G. (2018). Six-phase transmission line boundary protection using wavelet transform. In Proceedings of the 8th IEEE India International Conference on Power Electronics (IICPE), Jaipur, India. 4. Koley, E., Verma, K., & Ghosh, S. (2017). A modular neuro-wavelet based non-unit protection scheme for zone identification and fault location in six-phase transmission line. Neural Computer & Applications, 28(6), 1369–1385. 5. Sharma, K., Ali, S., & Kapoor, G. (2017). Six-phase transmission line boundary fault detection using mathematical morphology. International Journal of Engineering Research and Technology, 6(12), 150–154. 6. Kapoor, G. (2018). Fault detection of phase to phase fault in series capacitor compensated sixphase transmission line using wavelet transform. Jordan Journal of Electrical Engineering, 4(3), 151–164. 7. Kapoor, G. (2018). Six-phase transmission line boundary protection using mathematical morphology. In Proceedings of the IEEE International Conference on Computing, Power and Communication Technologies (GUCON) (pp. 857–861), Greater Noida, India. 8. Sharma, P., & Kapoor, G. (2018). Fault detection on series capacitor compensated transmission line using Walsh hadamard transform. In Proceedings of IEEE International Conference on Computing, Power and Communication Technologies (GUCON) (pp. 763–768), Greater Noida, India. 9. Gautam, N., Ali, S., & Kapoor, G. (2018). Detection of fault in series capacitor compensated double circuit transmission line using wavelet transform. In Proceedings on IEEE International Conference on Computing, Power and Communication Technologies (GUCON) (pp. 769–773). IEEE, Greater Noida, India. 10. Sharma, N., Ali, S., & Kapoor, G. (2018). Fault detection in wind farm integrated series capacitor compensated transmission line using Hilbert Huang transform. In Proceedings on IEEE International Conference on Computing, Power and Communication Technologies (GUCON) (pp. 774–778). IEEE, Greater Noida, India.

Protection of Nine-Phase Transmission Line Using Demeyer Wavelet Transform Gaurav Kapoor

1 Introduction
The dependence of people on electrical power has increased considerably in the modern age. The power transfer capability of the currently operating power transmission systems ought to be augmented in order to support the significant increase in the need for electrical energy. The NPTLs have been suggested as an imminent alternative to the prevalent configuration of the electrical power transmission system, with the potential for transferring a large amount of electrical energy. The probability of fault occurrence on the NPTL is higher when compared with a DCTL. Thus, accurate detection of the faults in the NPTL turns out to be very decisive for mitigating the loss and providing rapid repairs. Several reported research works have addressed the issue related to fault detection in TLs. Some important research attempts are presented concisely here in this section. Recently, WT has been used for the detection as well as the classification of triple line to ground faults which occur at different locations on a TPTL [1]. Reference [2] exemplifies that a decision tree (DT) has been used for the protection of the SPTL. In [3–12], WT is employed for TL protection against different faults. HHT is adopted in [13–16] for TL protection. MM has been used in [17–19]. WHT is used in [20] for SCTL protection. In this work, the demeyer wavelet transform (DMWT) is used for NPTL protection. No such work has been reported yet to the best of the knowledge of the author. The results exemplify that the DMWT detects the faults well.

G. Kapoor (B) Department of Electrical Engineering, Modi Institute of Technology, Kota, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_35


Fig. 1 Graphic of NPTL: 765 kV sources at both ends, with the DMWT relay and Load-1 at Bus-1, Load-2 at Bus-2, and a fault on the nine-phase transmission line between the buses

This article is structured as: The specifications of the NPTL are reported in Sect. 2. Section 3 presents the flowchart of DMWT. Section 4 is dedicated to the results of DMWT. Section 5 completes the article.

2 The Specifications of NPTL
The system of a 765 kV NPTL is designed using MATLAB. Figure 1 exemplifies the illustration of the NPTL. The NPTL has a rating of 765 kV, 50 Hz and has a total length of 200 km. The NPTL is divided into two zones of length 100 km each. The relay and transducers are connected at bus-1 for the relaying of the NPTL.

3 DMWT-Based Protection Technique
Figure 2 shows the process of the DMWT with the following steps:
Step 1: Nine-phase currents are recorded through CTs installed at bus-1.
Step 2: DMWT is employed to calculate the DMWT coefficients of the phase currents.
Step 3: A phase is confirmed as the faulty phase if, under a fault situation, its DMWT coefficient has a larger amplitude than that of the healthy phases.
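The discrete Meyer wavelet is available in PyWavelets under the name "dmey", so a compact, hedged sketch of the per-phase comparison in these steps could look as follows; the synthetic nine-phase currents and the margin of 3 are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: single-level 'dmey' (discrete Meyer) DWT of each of the nine
# phase currents, keeping the largest |detail coefficient| per phase and flagging
# phases that stand out from the healthy level (margin of 3 is arbitrary).
import numpy as np
import pywt

def dmwt_faulted_phases(currents):
    peaks = {p: np.max(np.abs(pywt.dwt(i, "dmey")[1])) for p, i in currents.items()}
    healthy = np.median(list(peaks.values()))
    return [p for p, c in peaks.items() if c > 3 * healthy]

t = np.linspace(0, 0.1, 2000)
currents = {p: np.sin(2 * np.pi * 50 * t + k * 2 * np.pi / 9)
            for k, p in enumerate("ABCDEFGHI")}
currents["D"][t > 0.05] *= 10            # crude stand-in for a D-phase fault
print(dmwt_faulted_phases(currents))      # expected: ['D']
```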

4 Performance Estimation
The DMWT has been tested for deviation in fault resistance (RF), fault switching time (FST), near-in relay faults, and far-end relay faults. The results are shown below.

Fig. 2 Flowchart of DMWT: record the nine-phase current signals, process the signals using the DMWT, calculate the DMWT coefficients, and compare the phases; if |DMWT coefficient| of a phase exceeds that of the healthy phases, the fault is detected and the faulty phase recognized simultaneously, otherwise no fault is declared

4.1 Response of DMWT for Healthy Condition
Figure 3 shows the nine-phase current and voltage waveforms for no-fault. Figure 4 exemplifies the DMWT coefficients of the nine-phase currents for no-fault. Table 1 reports the results of DMWT for no-fault.


Fig. 3 Nine-phase current and voltage waveforms for no-fault


Fig. 4 DMWT coefficients of nine-phase currents for no-fault

Table 1 Response of DMWT for no-fault (DMWT coefficients)
Phase:            A | B | C | D | E | F | G | H | I
DMWT coefficient: 3.2714 × 10³ | 3.1205 × 10³ | 2.4987 × 10³ | 3.1959 × 10³ | 2.4831 × 10³ | 3.0259 × 10³ | 3.2847 × 10³ | 2.3478 × 10³ | 3.0541 × 10³

4.2 Response of DMWT for Different FST
The DMWT is investigated for variation in fault switching time. Figure 5 depicts the ABCDEFGHI-g fault at 100 km at 0.05 s among RF = 0.5 Ω and RG = 1.25 Ω. The fault factors for all the fault cases are set as: T = 0.05 s, FL = 100 km, RF = 0.5 Ω, and RG = 1.25 Ω. Tables 2, 3, 4, 5, and 6 tabularize the results for variation in fault switching time. It is inspected from Tables 2, 3, 4, 5, and 6 that the variation in the fault switching time does not affect the working of the DMWT.

4.3 Response of DMWT for Various Near-in Relay Faults
The efficiency of the DMWT is tested for various near-in relay faults on the NPTL. Figure 6 depicts the ABCGHI-g near-in relay fault current at 5 km at 0.0525 s among RF = 1.15 Ω and RG = 2.15 Ω. The fault factors for all the fault cases are: T = 0.0525 s, RF = 1.15 Ω, and RG = 2.15 Ω. Tables 7, 8, 9, 10, and 11 detail

Fig. 5 Nine-phase current for ABCDEFGHI-g fault at 100 km at 0.05 s among RF = 0.5 Ω and RG = 1.25 Ω

Fig. 5 Nine-phase current for ABCDEFGHI-g fault at 100 km at 0.05 s among RF = 0.5  and RG = 1.25  Table 2 Response of DMWT for ABCDEFGHI-g fault at 100 km at 0.05 s among RF = 0.5  and RG = 1.25  Fault—ABCDEFGHI-g (FST = 0.05 s) Phase

DMWT coefficient

A

9.6529 ×

104

B

9.5832 ×

104

C

9.4992 × 104

Phase

DMWT coefficient

Phase

DMWT coefficient

D

9.9595 ×

104

G

9.7435 × 104

E

9.7343 ×

104

H

9.8087 × 104

F

9.7991 × 104

I

9.8233 × 104

Table 3 Response of DMWT for ABCGHI-g fault at 100 km at 0.1 s among RF = 0.5  and RG = 1.25  Fault—ABCGHI-g (FST = 0.1 s) Phase

DMWT coefficient

A

6.6600 ×

104

B

5.7361 ×

104

C

8.6408 × 104

Phase

DMWT coefficient

Phase

DMWT coefficient

D

3.3471 ×

103

G

5.8329 × 104

E

2.8761 ×

103

H

8.4387 × 104

F

3.0165 × 103

I

6.2698 × 104

Table 4 Response of DMWT for DEF-g fault at 100 km at 0.2 s among RF = 0.5  and RG = 1.25  Fault—DEF-g (FST = 0.2 s) Phase

DMWT coefficient

A

3.2714 ×

103

B

2.6743 ×

103

C

2.4771 × 103

Phase

DMWT coefficient

Phase

DMWT coefficient

D

5.5435 ×

104

G

2.8317 × 103

E

7.1554 ×

104

H

2.3478 × 103

F

6.6910 × 104

I

3.0543 × 103

320

G. Kapoor

Table 5 Response of DMWT for ABGH-g fault at 100 km at 0.06 s among RF = 0.5  and RG = 1.25  Fault—ABGH-g (FST = 0.06 s) Phase

DMWT coefficient

Phase

DMWT coefficient

Phase

DMWT coefficient

A

6.5637 ×

104

D

3.2176 ×

103

G

6.0887 × 104

B

4.1158 × 104

E

3.4241 × 103

H

7.8109 × 104

C

3.7059 ×

F

3.0212 ×

I

3.0522 × 103

103

103

Table 6 Response of DMWT for ABDEF-g fault at 100 km at 0.0725 s among RF = 0.5  and RG = 1.25  Fault—ABDEF-g (FST = 0.0725 s) Phase

DMWT coefficient

Phase

DMWT coefficient

Phase

DMWT coefficient

A

6.2259 ×

104

D

6.6824 ×

104

G

3.2010 × 103

B

5.7019 × 104

E

7.4206 × 104

H

2.6859 × 103

C

2.9180 ×

F

7.6889 ×

I

3.0526 × 103

103

104

Fig. 6 Nine-phase current for ABCGHI-g near-in relay fault at 5 km at 0.0525 s among RF = 1.15 Ω and RG = 2.15 Ω

DMWT coefficient

Phase

DMWT coefficient

Phase

DMWT coefficient

A

2.3228 × 103

D

379.2883

G

6.7632 × 103

B

8.0218 ×

103

E

281.4909

H

5.0779 × 103

C

4.8818 ×

103

F

233.8436

I

2.9468 × 103

Protection of Nine-Phase Transmission Line Using Demeyer …

321

Table 8 Response of DMWT for DEFGHI-g fault at 6 km Fault—DEFGHI-g (6 km) Phase A B C

DMWT coefficient 374.7763 377.8481 220.1265

Phase

DMWT coefficient

Phase

DMWT coefficient

D

5.2858 ×

103

G

6.6807 × 103

E

7.3825 ×

103

H

6.6118 × 103

F

4.6599 ×

103

I

3.6314 × 103

Table 9 Response of DMWT for GHI-g fault at 7 km Fault—GHI-g (7 km) Phase

DMWT coefficient

Phase

DMWT coefficient

Phase

DMWT coefficient

A

426.6485

D

461.7225

G

8.4824 × 103

B

412.7111

E

273.5811

H

6.9302 × 103

C

239.0223

F

311.4014

I

4.4313 × 103

Table 10 Response of DMWT for ABC-g fault at 8 km Fault—ABC-g (8 km) Phase

DMWT coefficient

Phase

DMWT coefficient

Phase

DMWT coefficient

A

4.7572 ×

103

D

427.7031

G

345.4945

B

1.0123 × 103

E

181.3406

H

144.8453

C

8.4927 × 103

F

263.2486

I

369.5477

Table 11 Response of DMWT for DEF-g fault at 9 km Fault—DEF-g (9 km) Phase

DMWT coefficient

Phase

DMWT coefficient

Phase

DMWT coefficient

103

A

421.0565

D

7.3966 ×

G

454.4494

B

458.4998

E

1.1272 × 104

H

113.3568

F

9.0005 ×

I

397.3337

C

247.6223

103

the results of the DMWT for five different near-in relay faults. It is confirmed from Tables 7, 8, 9, 10, and 11 that the DMWT has the ability to detect the near-in relay faults precisely.

4.4 Response of DMWT for Various Far-End Relay Faults
The DMWT has been explored for different far-end relay faults (from 195 km to 199 km). Figure 7 illustrates the ABCDEFG-g far-end relay fault at 195 km at

Fig. 7 Nine-phase current for ABCDEFG-g fault at 195 km at 0.0725 s among RF = 2.35 Ω and RG = 1.35 Ω

0.0725 s among RF = 2.35 Ω and RG = 1.35 Ω. The fault factors chosen for all the fault cases are: T = 0.0725 s, RF = 2.35 Ω, and RG = 1.35 Ω. Tables 12, 13, 14, 15, and 16 report the results for various far-end relay faults. It is inspected from

DMWT coefficient

A

6.5342 ×

104

B

6.2020 ×

104

C

4.8286 × 104

Phase

DMWT coefficient

Phase

DMWT coefficient

D

6.6334 ×

104

G

3.1576 × 104

E

6.9457 ×

104

H

1.5821 × 103

F

5.1493 × 104

I

4.5125 × 103

Table 13 Response of DMWT for ABCDE-g fault at 196 km Fault—ABCDE-g (196 km) Phase

DMWT coefficient

Phase

DMWT coefficient

Phase

DMWT coefficient

A

5.9693 × 104

D

4.7158 × 104

G

2.7718 × 103

B

5.8402 × 104

E

5.3524 × 104

H

2.0598 × 103

C

7.1943 ×

F

2.4106 ×

I

2.5788 × 103

104

103

Table 14 Response of DMWT for FGHI-g fault at 197 km Fault—FGHI-g (197 km) Phase

DMWT coefficient

A

3.0705 ×

103

B

3.6173 ×

103

C

2.2589 × 103

Phase

DMWT coefficient

Phase

DMWT coefficient

D

2.4893 ×

103

G

4.1258 × 104

E

6.3860 ×

103

H

9.0637 × 104

F

4.0716 × 104

I

6.0105 × 104

Protection of Nine-Phase Transmission Line Using Demeyer …

323

Table 15 Response of DMWT for DEF-g fault at 198 km Fault—DEF-g (198 km) Phase

DMWT coefficient

A

3.2225 ×

103

B

3.2168 ×

103

C

2.2314 ×

103

Phase

DMWT coefficient

Phase

DMWT coefficient

D

5.9155 ×

104

G

2.8019 × 103

E

7.9623 ×

104

H

2.3627 × 103

F

6.5399 ×

104

I

2.9988 × 103

Table 16 Response of DMWT for ABC-g fault at 199 km Fault—ABC-g (199 km) Phase

DMWT coefficient

Phase

DMWT coefficient

Phase

DMWT coefficient

A

4.7168 ×

104

D

2.8172 ×

103

G

2.5835 × 103

B

5.8588 × 104

E

2.0855 × 103

H

1.7271 × 103

C

7.0822 ×

F

1.9917 ×

I

2.9099 × 103

104

103

Tables 12, 13, 14, 15, and 16 that the effectiveness of DMWT remains impassive for different far-end relay faults.

4.5 Response of DMWT for Different RF
The DMWT is investigated for variation in fault resistances. Figure 8 depicts the ABDEF-g fault at 100 km at 0.1 s among RF = 5 Ω and RG = 0.001 Ω. The fault factors for all the fault cases are set as: T = 0.1 s, FL = 100 km, and RG = 0.001 Ω. Tables 17, 18, 19, 20, and 21 tabularize the results for variation in fault resistances. It

3

Current (A)

2 1 0 -1 -2 -3

0

1000

2000

3000

4000

5000

Samples

Fig. 8 Nine-phase current for ABDEF-g fault at 0.1 s at 100 km among RF = 5 

6000

324

G. Kapoor

Table 17 Response of DMWT for ABDEF-g fault among RF = 5 Ω
Phase:            A | B | C | D | E | F | G | H | I
DMWT coefficient: 4.7554 × 10⁴ | 5.4148 × 10⁴ | 2.9081 × 10³ | 5.6296 × 10⁴ | 7.8868 × 10⁴ | 7.1557 × 10⁴ | 3.0555 × 10³ | 2.3936 × 10³ | 3.0643 × 10³

Table 18 Response of DMWT for GHI-g fault among RF = 30 Ω
Phase:            A | B | C | D | E | F | G | H | I
DMWT coefficient: 3.2714 × 10³ | 2.9509 × 10³ | 2.4353 × 10³ | 3.3065 × 10³ | 2.2317 × 10³ | 3.0248 × 10³ | 5.6121 × 10⁴ | 4.8313 × 10⁴ | 2.9160 × 10⁴

Table 19 Response of DMWT for ABCDEF-g fault among RF = 70 Ω
Phase:            A | B | C | D | E | F | G | H | I
DMWT coefficient: 2.4545 × 10⁴ | 4.0284 × 10⁴ | 2.5046 × 10⁴ | 3.0758 × 10⁴ | 3.8890 × 10⁴ | 2.4407 × 10⁴ | 3.0847 × 10³ | 2.3936 × 10³ | 3.0931 × 10³

Table 20 Response of DMWT for DEGHI-g fault among RF = 100 Ω
Phase:            A | B | C | D | E | F | G | H | I
DMWT coefficient: 2.9597 × 10³ | 3.2367 × 10³ | 2.3080 × 10³ | 2.2701 × 10⁴ | 2.8150 × 10⁴ | 2.9495 × 10³ | 2.8204 × 10⁴ | 2.4949 × 10⁴ | 2.0485 × 10⁴

Table 21 Response of DMWT for DEF-g fault among RF = 150 Ω
Phase:            A | B | C | D | E | F | G | H | I
DMWT coefficient: 2.9596 × 10³ | 3.1724 × 10³ | 2.4978 × 10³ | 1.8552 × 10⁴ | 1.8981 × 10⁴ | 1.5812 × 10⁴ | 3.3644 × 10³ | 2.5615 × 10³ | 2.8049 × 10³

is inspected from Tables 17, 18, 19, 20, and 21 that variation in the fault resistances does not affect the working of the DMWT.

5 Conclusion
A DMWT-based fault detection method has been presented in this work for the protection of the NPTL. The DMWT is applied to decompose the fault currents and evaluate the DMWT coefficients. The fault factors of the NPTL are varied. According to the performance appraisal, it is revealed that the DMWT detects the faults efficiently.

References 1. Kapoor, G. (2018). Wavelet transform based detection and classification of multi-location three phase to ground faults in twelve phase transmission line. Majlesi Journal of Mechatronic Systems, 7(4), 47–60. 2. Shukla, S. K., Koley, E., Ghosh, S., & Mohanta, D. K. (2019). Enhancing the reliability of sixphase transmission line protection using power quality informatics with real-time validation. International Transactions of Electrical Energy Systems, 1–21. 3. Kapoor, G. (2020). Fifteen phase transmission line protection using daubechies-4 wavelet transform. International Journal of Engineering, Science and Technology, 12(1), 1–14. 4. Kapoor, G. (2018). Six-phase transmission line boundary protection using wavelet transform. In Proceedings of the 8th IEEE India International Conference on Power Electronics (IICPE), Jaipur, India. 5. Kapoor, G. (2018). Fault detection of phase to phase fault in series capacitor compensated sixphase transmission line using wavelet transform. Jordan Journal of Electrical Engineering, 4(3), 151–164. 6. Gautam, N., Ali, S., & Kapoor, G. (2018). Detection of fault in series capacitor compensated double circuit transmission line using wavelet transform. In Proceedings on IEEE International Conference on Computing, Power and Communication Technologies (GUCON) (pp. 769–773). IEEE, Greater Noida, India. 7. Kapoor, G. (2019). A protection technique for series capacitor compensated 400 kV double circuit transmission line based on wavelet transform including inter-circuit and cross-country faults. International Journal of Engineering, Science and Technology, 11(2), 1–20. 8. Kapoor, G. (2018). Wavelet transform based fault detector for protection of series capacitor compensated three phase transmission line. International Journal of Engineering, Science and Technology (Africa), 10(4), 29–49. 9. Kapoor, G. (2018). Detection of phase to phase faults and identification of faulty phases in series capacitor compensated six phase transmission line using the norm of wavelet transform. i-manager’s Journal of Digital Signal Processing, 6(1), 10–20. 10. Sharma, D., Kapoor, G., & Agarwal, S. (2019). Protection of double-circuit transmission line integrated with a wind farm using Daubechies-5 wavelet transform. JEA Journal of Electrical Engineering, 3(1), 1–7. 11. Gautam, N., Kapoor, G., & Ali, S. (2018). Wavelet transform based technique for fault detection and classification in a 400 kV double circuit transmission line. Asian Journal of Electrical Sciences, 7(2), 77–83.


12. Kapoor, G. (2018). A discrete wavelet transform approach to fault location on a 138 kV two terminal transmission line using current signals of both ends. ICTACT Journal of Microelectronics, 4(3), 625–629. 13. Kapoor, G. (2019). Detection and classification of four phase to ground faults in a 138 kV six phase transmission line using Hilbert Huang transform. International Journal of Engineering, Science and Technology, 11(4), 10–22. 14. Kapoor, G. (2019). Detection and classification of three phase to ground faults in a 138 kV sixphase transmission line using Hilbert-Huang transform. JEA Journal of Electrical Engineering, 3(1), 11–21. 15. Sharma, N., Ali, S., & Kapoor, G. (2018). Fault detection in wind farm integrated series capacitor compensated transmission line using Hilbert Huang transform. In Proceedings on IEEE International Conference on Computing, Power and Communication Technologies (GUCON), pp. 774–778. IEEE, Greater Noida, India. 16. Kapoor, G. (2019). Detection and classification of single line to ground boundary faults in a 138 kV six phase transmission line using Hilbert Huang transform. i-manager’s Journal on Electrical Engineering, 12(3), 28–41. 17. Sharma, K., Ali, S., & Kapoor, G. (2017). Six-phase transmission line boundary fault detection using mathematical morphology. International Journal of Engineering Research and Technology, 6(12), 150–154. 18. Kapoor, G. (2018). Six-phase transmission line boundary protection using mathematical morphology. In Proceedings of the IEEE International Conference on Computing, Power and Communication Technologies (GUCON) (pp. 857–861), Greater Noida, India. 19. Kapoor, G. (2018). Mathematical morphology based fault detector for protection of double circuit transmission line. ICTACT Journal of Microelectronics, 4(2), 589–600. 20. Sharma, P., & Kapoor, G. (2018). Fault detection on series capacitor compensated transmission line using Walsh hadamard transform. In Proceedings of IEEE International Conference on Computing, Power and Communication Technologies (GUCON) (pp. 763–768), Greater Noida, India.

A Comparative Analysis of Benign and Malicious HTTPs Traffic Abhay Pratap Singh and Mahendra Singh

1 Introduction
In the digital era, malware has become a serious problem for the cybersecurity research community. The basic functionality of malware is to damage, disrupt, or acquire illegal access to a computer system, or to exfiltrate valuable information from a network. Advanced malware relies on communication networks to receive commands, coordinate distributed denial-of-service (DDoS) attacks, deliver information to the attackers, and infect new objects [1]. The HTTPs protocol provides Web applications with safe and secure HTTP communication over SSL or TLS, which makes it a tough task for a third party to collect information about a user's Web activity using packet sniffers or man-in-the-middle (MitM) attacks [2]. HTTPs provides three important services, i.e., integrity, confidentiality, and authentication. With the help of these services, it creates a secure tunnel between client and server. The fundamental goal of the HTTPs protocol is to keep users' Web browsing activities away from eavesdroppers and to encrypt the content between client and server so that an adversary cannot inspect the payload. From the security perspective, HTTPs makes security monitoring techniques incapable of recognizing the Web traffic and of detecting anomalies or malicious actions that can remain unseen in encrypted connections [3]. From the standpoint of the attacker, encryption is a significant tool that a normal user uses for good reasons and a threat actor uses for bad purposes. In recent years, cyberattackers have begun to adapt their actions to incorporate encryption. As per the 2017 Trustwave Global Security report, 36% of the malware detected used encryption technology [4]. The growing popularity of encrypted network traffic is a double-edged sword: on one
A. P. Singh (B) · M. Singh Department of Computer Science, Gurukula Kangri Vishwavidyalaya, Haridwar, India e-mail: [email protected] M. Singh e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_36


hand, it offers secure data transmission and shields against eavesdropping; on the other hand, it obscures the legitimate monitoring of network traffic, along with traffic classification [3]. In this paper, we consider HTTPs (HTTP over SSL/TLS), the most widely recognized encrypted network traffic protocol. We first captured network traffic (PCAPs) using the packet capturing tool Wireshark [5]. Some malicious PCAPs were downloaded from Packet Total [6]. We performed passive monitoring to inspect SSL/TLS metadata like unencrypted header information, cipher suite, version number, etc. The rest of the paper is prepared as follows: in Sect. 2, we discuss the history of SSL/TLS. Section 3 explores the SSL/TLS handshake process via Wireshark. In Sect. 4, we define the main information related to SSL/TLS metadata. Lastly, in Sect. 5, the conclusion and future work will be presented.

2 Background
The HTTPs makes use of SSL/TLS to encrypt the communication between client and server. The first public announcement of SSL was actually SSL 2.0, released in 1995. It was rapidly updated in 1996 and replaced by SSL 3.0, because the protocol suffered from weaknesses like missing handshake information and the usage of weak, partly vulnerable algorithms. SSL 3.0 [7] addressed many vulnerabilities in the prior revisions by introducing a variety of new features and mechanisms. It delivered more security enhancements compared to SSL 2.0, like more cipher suites containing novel cryptographic parameters, support for key agreement during the handshake phase, different keys for different cryptographic primitives, and support for certificate chains instead of only a single certificate. In 1999, TLS 1.0 was released as a successor to SSL 3.0. TLS 1.0 was based on SSL 3.0 and is defined in RFC 2246 [8]. Although the differences from SSL 3.0 were not big, the name change was part of the standardization process by the Internet Engineering Task Force (IETF). The next version, TLS 1.1 (RFC 4346 [9]), was not released until April 2006 and contained essentially only security fixes; a noteworthy change in the protocol, though, was the incorporation of TLS extensions, which had been released a few years earlier in June 2003. TLS 1.2 (RFC 5246 [10]) was released in August 2008. It included support for authenticated encryption and for the most part removed all hard-coded security primitives from the specification, making the protocol completely flexible. Most recently, TLS 1.3 (RFC 8446 [11]) was released in August 2018. TLS 1.3 has introduced more changes than any previous version of the protocol. It provides features like a shorter handshake, authenticated encryption with associated data (AEAD) ciphers, and revised version negotiation, which further expand the security and robustness of the protocol.

2 Background The HTTPs makes use of SSL/TLS to encrypt the communication between client and server. The first public announcement of SSL was actually SSL 2.0, released in 1995. It was rapidly updated in 1996 and changed via SSL 3.0, just because of the protocol suffered weaknesses like missing handshake information, usage of a weak, and partly vulnerable algorithm. SSL 3.0 [7] addressed much vulnerability in the prior revisions by familiarizing a variety of new features and mechanisms. It delivers more security enhancement matched to SSL 2.0 like more cipher suites containing novel cryptographic parameters, support for key agreement during the handshake phase, different keys for different cryptographic primitives, and support for certificate chains instead of only a single certificate. In 1999, TLS 1.0 was released as a successor to SSL 3.0. TLS 1.0 was depended on SSL 3.0 and it is well-defined in RFC 2246 [8]. Despite the differences from SSL 3.0 were not big, the name change was part of the standardization process by the Internet Engineering Task Force (IETF). The next version TLS1.1 RFC 4346 [9] was not relieved until April 2006 and enclosed essentially only security fixes. Though a noteworthy change in the protocol was the combination of TLS extensions, which were released a few years ago in June 2003, TLS 1.2 RFC 5246 [10] was discharged in August 2008. It included support for authentications encipher and for the most part removed all hard-coded security natives from the specification, building the protocol completely flexible. Recently TLS 1.3 RFC 8446 [11] acquits in August 2018. TLS 1.3 has additionally announced a bigger number of developments than any previous variant of the protocol. It provides more features like shorter handshake and authenticated encipher with associated data (AEAD) cipher; version negotiation removed these features further to expand the security and robustness of the protocol.


3 Exploring the SSL/TLS Handshake Using Wireshark

The following figures illustrate the handshake between a Web client and a Web server, captured using Wireshark, one of the best-known network monitoring and analysis tools (Fig. 1).

Fig. 1 Initial handshake process

3.1 Client Hello

The client first opens a TCP connection and proposes the security parameters that will be associated with the session. In the SSL/TLS handshake, the client hello is the first message and is sent by the client to initiate a session with the server. The parameters offered by the client are shown in Fig. 2.

Fig. 2 Client hello


3.2 Server Hello

Once the client has finished sending its offered parameters to the Web server, the server replies with a server hello message, which contains the information shown in Fig. 3.

Fig. 3 Server Hello

3.3 Server Certificate

The Web server sends a list of X.509 certificates to the client for authentication; the server certificate carries its public key. Authentication succeeds either because a certificate authority (CA) has digitally signed the server's certificate and that CA is trusted by the operating system and browser, which ship with a list of recognized certificate authorities (the root CAs), or because the user has manually imported a certificate that they trust. Figure 4 displays the parameters of the certificates.

Fig. 4 Server certificate


3.4 Server Key Exchange and Hello Done

The server sends its certificate along with the server key exchange and server hello done messages. The cipher picked by the server in this example session uses the Diffie–Hellman (DH) key agreement technique, so the server key exchange message contains the numeric parameters that will be used for generating the session keys in the DH algorithm [12] (Fig. 5).

Fig. 5 Server key exchange and hello done

3.5 Client Reply to Server

The client responds to the server with three things: a client key exchange, a change cipher spec (CCS), and a finished message. The client key exchange carries a random value produced by the client and encrypted with the Web server's public key; together with the client and server random numbers it is used to create the master key. If the Web server can decrypt this message with its private key and derive the same master key locally, the client can be confident that the server has authenticated itself. The change cipher spec informs the Web server that all future messages will be encrypted with the algorithm and keys just negotiated, and the finished message indicates that the SSL/TLS exchange is complete from the client side (Fig. 6).

Fig. 6 Client reply to server


3.6 Server Reply to Client

This is the last step of the handshake, where the server sends its own change cipher spec followed by an encrypted hash of the handshake messages so that the client can also confirm the authenticity of the process. If the Web client and Web server both verify this encrypted handshake message successfully, the connection stays open and data is carried in encrypted application data records. If a problem is detected on either side during verification, the session is terminated and an alert record indicating the problem is sent [13], as shown in Fig. 7.

Fig. 7 Encrypted alert

3.7 Application Data Flow

After the entire SSL/TLS handshake has completed successfully and all the cryptographic parameters have been verified between client and server, the application data flow is encrypted, as shown in Fig. 8 (Table 1).

Fig. 8 Application data flow

4 Discussion and Analysis

The SSL/TLS handshake process was explored using Wireshark for both the self-generated benign traffic and the collected malicious PCAPs. This makes it possible to extract unencrypted features that can be used for monitoring purposes; an illustrative extraction sketch is given below. We studied the SSL/TLS metadata of both malicious and benign traffic and found some interesting facts related to four features that can be used to differentiate between malicious and normal traffic. These features are summarized hereunder.
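To make the feature extraction step concrete, the following is a minimal, non-authoritative sketch (not the authors' code) of how such unencrypted handshake fields could be pulled from a PCAP in Python with the pyshark wrapper around tshark. The field names used here are the legacy ssl.* names; depending on the Wireshark version they may instead be exposed under tls.*.

```python
# Sketch: passively extract SSL/TLS handshake metadata from a PCAP with pyshark.
import pyshark

def extract_tls_metadata(pcap_path):
    records = []
    # keep only packets that contain an SSL/TLS handshake message
    cap = pyshark.FileCapture(pcap_path, display_filter='ssl.handshake')
    for pkt in cap:
        for layer in pkt.get_multiple_layers('ssl'):
            records.append({
                'src': pkt.ip.src if hasattr(pkt, 'ip') else None,
                'dst': pkt.ip.dst if hasattr(pkt, 'ip') else None,
                'handshake_type': getattr(layer, 'handshake_type', None),
                'version': getattr(layer, 'handshake_version', None),
                'cipher_suite': getattr(layer, 'handshake_ciphersuite', None),
                'sni': getattr(layer, 'handshake_extensions_server_name', None),
            })
    cap.close()
    return records
```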


Table 1 Summary of display filters in Wireshark [14]

ssl.handshake.type == 1          Client hello
ssl.handshake.type == 2          Server hello
ssl.handshake.type == 11         Certificate, server key exchange, server hello done
ssl.handshake.type == 16         Client key exchange, change cipher spec, encrypted handshake message
ssl.record.content_type == 23    Application data flow
ssl.record.content_type == 21    Encrypted alert
ssl.record.content_type == 22    Only handshake packets

4.1 Version Number

The version number indicates which version of the SSL/TLS protocol is in use. Currently most clients support TLS 1.2, which is considered secure; even though TLS 1.3 has been released, TLS 1.2 remains the most widely supported version because of compatibility issues. If a client offers TLS 1.0 or a lower version, it may accept insecure ciphers and is easier for malware to abuse: malware prefers weak cryptographic parameters so that malicious content can be injected into the communication between client and server. TLS 1.0 has already been found vulnerable to attacks such as BEAST and POODLE.

4.2 Strong and Weak Cipher Suites

A strong cipher suite is one that is not easily breakable and provides secure data transmission between client and server; its symmetric key length is generally at least 128 bits and it avoids deprecated algorithms. Suites that rely on short keys or on broken primitives such as RC4 or 3DES are considered weak, and malware generally prefers weak and old cipher suites. Inspecting the negotiated cipher suite is therefore a good way to detect malicious connections, although malware sometimes also uses strong cipher suites for communication [15]. Lists of strong and weak cipher suites are given below, followed by a small screening sketch.

Strong ciphers (secure cipher suites)
TLS_ECDHE_ECDSA_with_AES_128_GCM_SHA256
TLS_ECDHE_RSA_with_AES_128_GCM_SHA256
TLS_RSA_with_AES_128_GCM_SHA256
TLS_RSA_with_AES_256_GCM_SHA384
TLS_RSA_with_AES_128_CBC_SHA256
TLS_RSA_with_AES_256_CBC_SHA256


Weak ciphers (insecure cipher suites)
TLS_RSA_with_AES_128_CBC_SHA
TLS_RSA_with_AES_256_CBC_SHA
TLS_RSA_with_3DES_EDE_CBC_SHA
TLS_RSA_with_RC4_128_SHA
TLS_RSA_with_RC4_128_MD5
TLS_ECDH_Anon_with_AES_256_CBC_SHA
TLS_ECDH_Anon_with_AES_128_CBC_SHA
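As a simple illustration of how the lists above could be used in monitoring, the hedged Python sketch below flags a negotiated suite found on the weak list; the suite spellings mirror the paper's list, normalized to the usual upper-case IANA-style form.

```python
# Sketch: flag connections that negotiated a cipher suite from the weak list.
WEAK_CIPHER_SUITES = {
    'TLS_RSA_WITH_AES_128_CBC_SHA',
    'TLS_RSA_WITH_AES_256_CBC_SHA',
    'TLS_RSA_WITH_3DES_EDE_CBC_SHA',
    'TLS_RSA_WITH_RC4_128_SHA',
    'TLS_RSA_WITH_RC4_128_MD5',
    'TLS_ECDH_ANON_WITH_AES_256_CBC_SHA',
    'TLS_ECDH_ANON_WITH_AES_128_CBC_SHA',
}

def is_suspicious_cipher(negotiated_suite: str) -> bool:
    """Return True if the negotiated suite is on the weak/insecure list."""
    return negotiated_suite.upper() in WEAK_CIPHER_SUITES

print(is_suspicious_cipher('TLS_RSA_with_RC4_128_SHA'))                # True
print(is_suspicious_cipher('TLS_ECDHE_RSA_with_AES_128_GCM_SHA256'))   # False
```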

4.3 SNI

SNI stands for server name indication, an extension that permits a server to host several SSL/TLS certificates for different Web sites under one IP address. The SNI is an explicit string value taken from the SSL/TLS client hello message and is a convenient way to find out which service an HTTPs connection is addressing. Bortolameotti et al. [16] used the SNI extension together with SSL certificate information to identify connections toward malicious Web sites. Monitoring based on the SNI extension relies on the server-name value carried in the client hello; this value gives the DNS name of the HTTPs Web site being accessed. It is a good way to trace which type of service is retrieved, and the server-name value can also be matched against a blacklist or whitelist to enforce HTTPs filtering [17], as sketched below.
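The following is a minimal sketch of that SNI screening idea; the domain names in the blacklist and whitelist are placeholders, not real indicators.

```python
# Sketch: screen the SNI (server_name) value from a client hello against
# simple blacklist/whitelist sets.
BLACKLIST = {'bad-malware-c2.example', 'phishing-site.example'}   # placeholder names
WHITELIST = {'intranet.example'}                                  # placeholder names

def classify_sni(server_name: str) -> str:
    name = server_name.lower().rstrip('.')
    # build the name itself plus every parent domain, e.g. a.b.example -> {a.b.example, b.example, example}
    parts = name.split('.')
    candidates = {'.'.join(parts[i:]) for i in range(len(parts))}
    if candidates & BLACKLIST:
        return 'block'
    if candidates & WHITELIST:
        return 'allow'
    return 'inspect'

print(classify_sni('update.bad-malware-c2.example'))   # 'block'
```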

4.4 Digital Certificate

A digital certificate is a way to establish the authenticity of the communicating parties; it helps determine whether you are talking to a legitimate server or a malicious one. A certificate is valid only if it has been digitally signed by a certificate authority and has not expired. Browsers and operating systems ship with the root certificates already installed. When a client visits a Web site, the server presents its certificate; if it chains to a trusted root certificate, a green padlock is shown, otherwise a security warning is displayed (Fig. 9). If users ignore such certificate warnings, the client becomes susceptible to SSL/TLS interception attacks, and researchers have shown that many users do overlook the SSL/TLS certificate warnings presented by the browser [18].


Fig. 9 Certificate warning

There is one more kind of certificate that attackers commonly use: the self-signed certificate, which is signed with the server's own private key rather than by a trusted third party. Attackers typically use self-signed and freely generated certificates because they are quick and inexpensive to create [19]. We also extracted SHA1 fingerprints of the certificates from the PCAPs and tested them against SSLBL,1 finding that a few certificates were malicious; a fingerprint-lookup sketch is given below.
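The lookup can be sketched as follows; the code assumes a DER-encoded certificate extracted from the capture and a locally downloaded SSLBL CSV file whose exact column layout is an assumption made here for illustration.

```python
# Sketch: compute a certificate's SHA-1 fingerprint and look it up in a local
# copy of the SSL Blacklist (SSLBL) CSV.
import csv
import hashlib

def sha1_fingerprint(der_bytes: bytes) -> str:
    return hashlib.sha1(der_bytes).hexdigest()

def load_sslbl(csv_path: str) -> set:
    listed = set()
    with open(csv_path, newline='') as f:
        for row in csv.reader(f):
            if row and not row[0].startswith('#'):
                # assumed layout: the SHA-1 fingerprint sits in the second column
                listed.add((row[1] if len(row) > 1 else row[0]).lower())
    return listed

def is_blacklisted(der_bytes: bytes, listed: set) -> bool:
    return sha1_fingerprint(der_bytes) in listed
```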

5 Conclusion

A feature analysis of the SSL/TLS handshake process was performed for both benign and malicious HTTPs traffic. The SSL/TLS metadata was explored in detail with the Wireshark network analysis tool, and the various features of the client hello, server hello, server certificate and server key exchange were elaborated. The comparative study shows that benign and malicious traffic can be distinguished on the basis of version number, cipher suite, SNI and digital certificate. In future work, we propose to extract statistical features from the captured packets in order to classify network traffic effectively.

References

1. Shimoni, A., & Barhom, S. (2014). Malicious traffic detection using traffic fingerprint. https://github.com/arnons1/trafficfingerprint.
2. McCarthy, C., & Zincir-Heywood, A. N. (2011). An investigation on identifying SSL traffic. In Proceedings of 4th IEEE Symposium on Computational Intelligence for Security and Defense Applications (CISDA) (pp. 1–8).

1 https://sslbl.abuse.ch/.


3. Husák, M., Cermák, M., Jirsík, T., & Celeda, P. (2016). HTTPS traffic analysis and client identification using passive SSL/TLS fingerprinting. EURASIP Journal on Information Security, 1–14.
4. Antonakos, J., Davidi, A., & De La Fuente, C. (2017). 2017 Trustwave global security report. Tech. Rep.
5. Wireshark homepage, https://www.wireshark.org/. Last accessed 2019/07/10.
6. Becker, J., & Free, A. Online PCAP analysis engine, https://packettotal.com/.
7. Freier, A., Karlton, P., & Kocher, P. (2011). The secure sockets layer (SSL) protocol version 3.0.
8. Dierks, T., & Allen, C. (1999). RFC 2246 The TLS protocol version 1.0.
9. Rescorla, E., & Modadugu, N. (2006). RFC 4346 Datagram transport layer security.
10. Rescorla, E. (2008). RFC 5246 The transport layer security (TLS) protocol version 1.2.
11. Rescorla, E., Tshofenig, H., & Modadugu, N. (2019). RFC 8446 The datagram transport layer security (DTLS) protocol version 1.3 draft-IETF-TLS-DTLS 13–31.
12. Rescorla, E. (2001). SSL and TLS. Reading, MA: Addison-Wesley.
13. Vandeven, S. (2013). SSL/TLS: What's under the hood. SANS Institute InfoSec Reading Room, 13.
14. Wireshark filter for SSL traffic. https://davidwzhang.com/2018/03/16/wireshark-filter-for-ssl-traffic/.
15. Nguyen, N. H. (2019). SSL/TLS interception challenge from the shadow to the light. SANS Institute.
16. Bortolameotti, R., Peter, A., Everts, M. H., & Bolzoni, D. (2015). Indicators of malicious SSL connections. In International Conference on Network and System Security (pp. 162–175). Cham: Springer.
17. Shbair, W. M., Cholez, T., François, J., & Chrisment, I. (2016). Improving SNI-based HTTPS security monitoring. In International Conference on Distributed Computing Systems Workshops (ICDCSW) (pp. 72–77). IEEE.
18. Sunshine, J., Egelman, S., Almuhimedi, H., Atri, N., & Cranor, L. F. (2009). Crying wolf: An empirical study of SSL warning effectiveness. In Proceedings of the USENIX Security Symposium (pp. 399–416).
19. Torroledo, L., Camacho, L. D., & Bahnsen, A. C. (2018). Hunting malicious TLS certificates with deep neural networks. In Proceedings of the 11th ACM Workshop on Artificial Intelligence and Security (pp. 64–73).

Comparative Study of the Seasonal Variation of SO2 Gas in Polluted Air by Using IOT with the Help of Air Sensor Vandana Saxena, Anand Prakash Singh, and Kaushal

1 Introduction

Sulphur dioxide is produced as a by-product by many industries, especially petrochemical plants; coal and petroleum contain sulphur compounds, and when these fuels are combusted they generate SO2 [2]. When this gas is released into the air in the presence of a catalyst, SO2 forms H2SO4, causing acid rain, which adversely affects plantations, aquatic life and wildlife. Sulphur dioxide represents only a minimal part of automotive emissions; however, this pollutant may have a synergistic effect with other pollutants. Sulphur dioxide is highly soluble in the aqueous surfaces of the respiratory tract: it is absorbed in the nose and the upper airways, where it exerts an irritant effect, and only a small fraction reaches the lungs [3]. Diesel is the primary fuel widely used for commercial purposes, being the cheapest one, and the quality of the fuel supplied is poor compared with world standards. Table 1 shows the sulphur content of the various types of fuel used in India.


Table 1 Sulphur content in the fuels supplied in India

Fuel          Sulphur content (%)
Kerosene      0.25
Fuel oil      3.5–3.6
Gasoline      0.26
Diesel—HSD    1.2
Diesel—LDO    1.9

2 Materials and Methods

2.1 Instrument

HPLC: Shimadzu make, model LC 10D; Column: C18 (250 × 4 mm), E. Merck make; Detector: UV/Visible (SPD-10); Mobile phase: acetonitrile and water (HPLC grade) in the ratio 70:30; Flow rate: 1.75 ml/min.

2.2 Preparation of Reagents

(a) Absorbing solution for SO2 (0.1 M sodium tetrachloromercurate): 27.2 g of mercuric chloride was mixed with 11.7 g of sodium chloride in 1000 ml of distilled water.
(b) Formaldehyde solution (0.2%).
(c) p-Rosaniline hydrochloride dye (0.04%): 0.10 g of p-rosaniline hydrochloride was dissolved in 50 ml of distilled water. The solution was kept in the dark and filtered after 48 h, then made up to 50 ml; it remains stable for 3 months when stored in the dark under refrigeration. For use, 10 ml of the solution was mixed with 3 ml of HCl, kept in the dark for 5 min, and then made up to 50 ml.
(d) Sodium thiosulphate solution (0.01 M): 25.0 g of sodium thiosulphate was dissolved in 1000 ml of distilled water, giving a stock solution of approximately 0.1 N; 100 ml of this stock was diluted to 1000 ml with distilled water.
(e) Potassium dichromate solution: 4.904 g of potassium dichromate was dissolved in 1000 ml of distilled water.
(f) Iodine solution (0.1 N): 25 g of potassium iodide was dissolved in distilled water, 12.7 g of iodine was added, and the solution was diluted to 1 l.
(g) Iodine solution (0.01 N): obtained by diluting the 0.1 N iodine solution ten times and standardizing it against the 0.01 N sodium thiosulphate solution.
(h) Standard sodium metabisulphite solution: 441.3 mg of sodium metabisulphite (assay 95%) was dissolved in 1000 ml of distilled water; each ml of this solution contained


approximately 0.40 mg of SO2. The stock solution of sodium metabisulphite was standardized against 0.01 N iodine and its normality adjusted to 0.0123 N; the solution contained 150 µl of SO2 per ml (at 25 °C and 760 mm Hg).

3 Procedure

3.1 Standard Curve

2 ml of the standard stock sodium metabisulphite solution was diluted to 100 ml with distilled water; this working solution contained 3.0 µl SO2/ml. The standard curve was prepared using concentrations in the range 0.5–3.0 µl SO2/ml, and the standards were processed following the procedure described below for unknown samples.

3.2 Sample Collection

Air was drawn through 10 ml of the absorbing solution (0.1 M sodium tetrachloromercurate) at a flow rate of 0.5 l/min for 8 h, and the sample was then carried to the laboratory for analysis.

3.3 Analysis

1.0 ml of formaldehyde solution was added to 10 ml of the sample, followed by 1.0 ml of p-rosaniline, and mixed well. After 20 min the absorbance was recorded at 560 nm on a spectrophotometer.

Calculation. Volume of air at 25 °C and 760 mm Hg (STP):

Vs = V × (Pm/760) × (298.2/(t + 273.2))

where
V = volume of air in litres during the sampling period
Pm = atmospheric pressure during the sampling period (mm Hg)
t = temperature recorded during the sampling period (°C)
Vs = volume of air in litres at STP (25 °C and 760 mm Hg)

V = F×T


where
F = average flow rate of gas
T = sampling time

F = (F1 + F2)/2

where
F1 = initial flow rate of gas in l/min
F2 = final flow rate of gas in l/min

SO2 (ppm by volume) = (µl of SO2)/Vs

SO2 (µg/m3) = ppm by volume × (64 × 10^6/24470)
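A worked example of this calculation, under the formulas as reconstructed above and with purely hypothetical sampling values, is given below in Python.

```python
# Worked example of the SO2 calculation; the flow rates, pressure, temperature
# and µl reading are illustrative placeholders, not measured data.
F1, F2 = 0.5, 0.5           # initial/final flow rate, l/min
T_min = 8 * 60              # sampling time, minutes (8 h)
Pm = 750.0                  # atmospheric pressure during sampling, mm Hg
t = 30.0                    # temperature during sampling, °C
so2_ul = 4.5                # µl of SO2 read from the standard curve (hypothetical)

F = (F1 + F2) / 2
V = F * T_min                                     # litres of air sampled
Vs = V * (Pm / 760.0) * (298.2 / (t + 273.2))     # volume corrected to 25 °C, 760 mm Hg
ppm_v = so2_ul / Vs                               # ppm by volume (µl per litre)
so2_ug_m3 = ppm_v * (64e6 / 24470)                # µg/m3 (64 g/mol, 24.47 l molar volume)
print(round(Vs, 1), round(so2_ug_m3, 1))
```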

4 Results

The concentration of sulphur dioxide in the environment depends mainly on the quality of the fuel used and the condition of the engines. The concentration of sulphur dioxide in the environment of Faizabad was observed at different stations throughout the year, and the seasonal variation observed in SO2 is described below. The monthly mean and average values of SO2 are within the limits prescribed by the CPCB for ambient air quality [4] (Fig. 1).

4.1 Seasonal Variation of Sulphur Dioxide (SO2)

4.1.1 Summer

The maximum concentration of sulphur dioxide (SO2) was recorded in the month of May and the lowest in the month of March (Table 2). The average concentrations for the summer season (µg/m3) are shown in Fig. 2.

Fig. 1 Annual average concentration of sulphur dioxide (µg/m3) in the ambient air of Faizabad city, against the NAAQ 24-h standard of 50 µg/m3: Naka 27.88, Chauk 33.70, Kotwali 35.32, Roadways 41.40, Ayodhya 45.34

Table 2 Sulphur dioxide level (µg/m3) in ambient air of Faizabad city during summer

Month            Naka          Chauk         Kotwali       Roadways      Ayodhya
March            32.31, 30.13  40.87, 42.22  49.23, 45.98  52.57, 50.18  61.23, 57.21
April            29.00, 25.22  38.19, 30.10  43.09, 35.17  50.00, 47.90  53.44, 51.49
May              24.55, 23.88  35.67, 32.66  32.66, 30.69  44.63, 38.72  47.95, 40.72
June             25.53, 21.76  29.76, 25.55  28.00, 30.75  38.11, 30.16  35.13, 31.62
Minimum          21.26         25.55         28.00         30.10         31.62
Maximum          32.31         40.82         49.23         52.57         61.23
Average (N = 8)  26.55         34.25         26.94         44.02         47.35

N = number of samples

4.1.2 Monsoon

The heavy rains observed during this season resulted in a decrease in the SO2 levels, which were: Naka 22.63, Chauk 27.91, Kotwali 29.24, Roadways 33.77 and Ayodhya 35.61 (Fig. 3 and Table 3).

4.1.3 Winter

The average concentrations (µg/m3) for the winter season are shown in Fig. 4 and Table 4.

Fig. 2 Average concentration of sulphur dioxide (µg/m3) in the ambient air of Faizabad city during the summer season, against the NAAQ 24-h standard: Naka 26.55, Chauk 34.25, Kotwali 26.94, Roadways 44.02, Ayodhya 47.35

Fig. 3 Average concentration (24 h) of sulphur dioxide (µg/m3) in ambient air during the monsoon season, against the NAAQ 24-h standard: Naka 22.63, Chauk 27.91, Kotwali 29.24, Roadways 33.77, Ayodhya 35.61

Table 3 Sulphur dioxide level (µg/m3) in ambient air of Faizabad city during monsoon

Month            Naka          Chauk         Kotwali       Roadways      Ayodhya
July             15.00, 20.30  25.76, 21.00  27.40, 18.60  29.39, 27.00  30.75, 28.35
August           18.30, 21.15  24.00, 25.10  25.00, 25.75  30.90, 32.80  33.25, 35.09
September        22.87, 26.77  28.36, 30.50  30.09, 34.00  34.09, 37.00  34.36, 38.33
October          27.30, 29.32  32.27, 36.32  35.75, 37.39  39.00, 40.00  40.90, 43.88
Minimum          15.00         21.00         18.60         29.39         28.35
Maximum          29.32         36.32         37.39         40.00         43.88
Average (N = 8)  22.63         27.91         29.24         33.77         35.61

N = number of samples

Fig. 4 Average concentration (24 h) of SO2 (µg/m3) in the ambient air during the winter season, against the NAAQ 24-h standard: Naka 34.13, Chauk 38.90, Kotwali 39.76, Roadways 46.43, Ayodhya 52.98

Table 4 Sulphur dioxide level (µg/m3) in ambient air of Faizabad city during winter

Month            Naka          Chauk         Kotwali       Roadways      Ayodhya
November         25.11, 20.18  33.76, 36.76  34.31, 33.63  40.15, 38.64  50.33, 53.73
December         27.30, 30.75  37.07, 39.90  30.15, 37.53  46.11, 50.18  49.11, 45.63
January          28.21, 31.32  38.62, 46.25  40.32, 45.00  45.16, 48.33  52.16, 54.28
February         35.73, 37.22  40.72, 42.37  47.50, 50.13  50.17, 52.76  60.53, 58.41
Maximum          37.22         42.37         50.13         52.76         60.53
Average (N = 8)  34.13         38.90         39.76         46.43         52.98

N = number of samples

5 IOT on Project

In this project we use a sensor that detects the quality of the air: it measures the amount of gases present in the air (O2, Ar, N2, CO2, etc.), detects the pollutants, and the data are then used to work towards improving the air quality in that particular area. An air pollution analyser inside an official monitoring station uses a well-defined, standardized and selective measuring principle; such analysers are type-approved and tested for interferences and under varying conditions, the environment in official monitoring stations is controlled, the instruments are regularly checked, and the measurements are subject to rigorous quality control and calibration procedures. Low-cost sensors, in contrast, can be sensitive to weather conditions (wind speed, temperature, humidity) or can have difficulties distinguishing pollutants, so sensor measurements should be carefully evaluated and validated [5]. The spatial representativeness of measured pollutant concentrations depends on the pollutant, the source and the surroundings.


Even if a measurement is carried out correctly, it may be representative only of a very small area. The signal from a sensor depends not only on the air pollutant of interest but also on a combination of several effects, such as other interfering compounds, temperature, humidity, pressure and signal drift (instability of the signal). At high concentrations the signal from the air pollutant can be strong, but at ambient levels it is weak in comparison with the interfering effects. The quality of sensor results therefore depends on the technology and its implementation (application, site, conditions, set-up), and reproducing sensor responses at different measurement sites, or using sensors portably, is difficult. Because of the influence of meteorological parameters on the sensor signal, simple correction and/or calibration is not always possible. Nevertheless, in certain well-defined situations, the measurement uncertainty of these devices may approach the level of 'official' measurement methods [6]. A simple sketch of the alerting loop used in this project is given below.
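The sketch below illustrates the threshold-and-alert idea only; the sensor-reading function, e-mail addresses and SMTP relay are placeholders, since the actual sensor driver and network set-up are specific to the deployment.

```python
# Minimal sketch of a Raspberry Pi-style monitoring loop: read an SO2 value,
# compare it with a configured limit and send an e-mail alert if exceeded.
import smtplib
import time
from email.message import EmailMessage

SO2_LIMIT_UG_M3 = 50.0   # illustrative 24-h limit

def read_so2_sensor() -> float:
    """Placeholder for the actual ADC/sensor driver call on the Raspberry Pi."""
    raise NotImplementedError

def send_alert(value: float) -> None:
    msg = EmailMessage()
    msg['Subject'] = f'SO2 alert: {value:.1f} ug/m3'
    msg['From'] = '[email protected]'          # placeholder address
    msg['To'] = '[email protected]'    # placeholder address
    msg.set_content('Measured SO2 concentration exceeded the configured limit.')
    with smtplib.SMTP('localhost') as server:   # assumed local mail relay
        server.send_message(msg)

def monitor(poll_seconds: int = 60) -> None:
    while True:
        value = read_so2_sensor()
        if value > SO2_LIMIT_UG_M3:
            send_alert(value)
        time.sleep(poll_seconds)
```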

6 Conclusion

Road traffic is considered one of the major sources of SO2 in the ambient air. In the present study, the current levels of SO2 were below the recommended CPCB limits at all locations of the city in all seasons [7]. The trend indicates that the concentrations were higher at Kotwali and Roadways, moderate at Naka and Chauk, and minimum in the Ayodhya area. During the study it was found that three- and four-wheelers are the chief source of sulphur dioxide, even though the consumption of diesel, the main source of SO2, is lower than that of petrol in the city. Load shedding and frequent breakdowns in the electric power supply also contribute SO2 to the environment by increasing the use of fuel in electric generators, and as demand grows the energy consumption per capita will rise further. Consequently, an increasing SO2 load, and its ultimate impact on the community living in this vicinity, cannot be ruled out; the frequency of cough in populations exposed to automobile exhaust has been correlated with the annual average concentration of SO2. With the help of the air sensor we can measure the air quality of different areas and thereby know the amount of pollutant particles present. The sensor detects the air quality and shows it on a monitor; the monitor is connected to the device of the pollution officer of that area, and an alert is sent by e-mail. All of this is implemented with the help of a Raspberry Pi connected to the network [8].


References

1. Ligocki, M. P., & Pankow, J. F. (1989). Measurements of the gas particle distributions of atmospheric organic compounds. Environment Science Technology, 23, 75–83.
2. Bufalini, T. J. (1971). Oxidation of SO2 in polluted atmosphere: overview. Environment Science Technology, 5, 685–703.
3. Calabrese, E. J., et al. (1981). A review of human health effects associated with exposure to diesel fuel exhaust. Environment International, 5, 373–477.
4. Baumbach, G., et al. (1995). The pollution in a large tropical city with a high traffic density: Results of measurements in Lagos. Science of the Total Environment, 169, 25–31.
5. Alary, R., Donati, J., & Vielard, H. (1994). Atmospheric pollution from road traffic in Paris: relation with road traffic and meteorological conditions. Pollution Atmosphérique, 36(141), 55–56.
6. Further technical information about the performance of sensors, http://db-airmontech.jrc.ec.europa.eu.aspx (2003).
7. Ravichandran, C., Chandrasekaran, G. E., Anuradha, R., & Radhika, T. (1996). Ambient air quality at selected sites in Tiruchirapalli city. International Journal of Environment and Pollution, 16(10).
8. Greenberg, A., Davack, F., Harkov, et al. Polycyclic hydrocarbons in New Jersey (USA): A comparison of winter and summer concentrations over a two-year period. Atmospheric Environment, 19, 1325–1240.

Fake News Detection: Tools, Techniques, and Methodologies Deependra Bhushan, Chetan Agrawal, and Himanshu Yadav

1 Introduction

The twenty-first century is an age of revolutionary advancement in every sector. Mammoth growth in the information sector and the exponential proliferation of internetwork architectures have made Facebook, WhatsApp and Twitter the biggest players of the social media sector. The fusion of multimedia sites, fast internet technology and 24×7 active social networking websites has become a vital, but also potentially fatal, mixture for human life. Each day thousands of pictures are posted, hundreds of hours of video are uploaded and millions of tweets and re-tweets are made across many social media sites. Such posts are rich in content, but they are also filled with an enormous amount of false and misleading information that is intentionally fabricated and modified to fool people and divert their opinion [1].

1.1 Some Popular Events That Led to the Sudden Eruption of Fake News in Recent Years

It is true that fake news existed in the past, but it was not as prolific and impactful as it is today [2]. It is affecting the political situation, economic conditions and social lives of many people around the globe in the following ways.

• Research conducted in the USA and other European countries on the change in how people consume news reveals that a large portion of the


USA population (around 67%) is now totally dependent on social media to get any kind of news, and they even believe that all the news they consume is genuine and true in all respects, which is even more alarming.
• In many developing countries such as India, the mainstream media are nowadays also involved in spreading fake content and news based on baseless claims.
• Calamities in the recent past, such as the Kerala floods and the Nepal earthquakes, also witnessed a lot of chaos caused purely by fake Facebook posts and thousands of misleading tweets.
• Because of rumours and fake news, law and order agencies frequently fail to curb the outrage events that fake news generates.
• The historic 2016 United States presidential election witnessed many fake tweets and posts produced in a sophisticated, pre-planned manner, which initiated an opinion-spamming culture in the online world [2, 3].
• The famous "Brexit" referendum, too, became a victim of fake news triggered by different websites that changed the opinion and mindset of people.
• Bunches of fake posts generated by sophisticated social bots and virtual accounts are also responsible for fake feedback reviews that have manipulated many customers across e-commerce sites [4].
• With the sudden growth in such incidents, fake news became a buzzword used widely all over the world, which resulted in "fake news" being announced as word of the year by Collins Dictionary in 2017, while "post-truth" was chosen as word of the year 2016 by the Oxford Dictionary [5].

1.2 The Current Need for the Detection of Fake News

Fake news is generally propagated simply to mislead the people who consume it, and often it is detected neither by ordinary people nor by the sophisticated media. Fake and deceptive news should be given more importance in the current situation for the following reasons.

• Print and electronic media, the so-called 'fourth pillar of a healthy democracy', have changed drastically, and the sudden eruption of huge amounts of fake content has reduced the credibility of news in recent years.
• The complex and diverse nature of online social media makes it a big, hectic and cumbersome task to cross-check each and every trending article or news item running on different social networking sites.
• In the recent past, fake news has not only modified the result of an election but has also altered the choices and mood of many people, to a great extent, across many big countries.
• Many aspects of fake news, such as who generally creates it, how and why it is created, how it propagates through a network so fast, how things go viral, and how it can be detected at an early stage, are still major matters of research.


1.3 General Overview of the Article

Having covered the preliminary knowledge about fake news and its consequences on our daily life, we have enough background to see why research on the detection of fake news is urgently needed. In Sect. 2 we present the basic concepts and prerequisites related to fake news, which are the foundation for the entire discussion. In Sect. 3 our main concern is a discussion of some of the well-known datasets proposed over time by researchers for detecting fake news patterns in online media. In Sect. 4 we take a look at NLP and machine learning, the two main tools and methodologies of prime importance in fake news detection. Section 5 is dedicated to the outcomes of some of the notable research carried out by various authors, fake news activists and researchers in recent years. In Sect. 6 we conclude the article, pointing out some future aspects of fake content detection along with the associated challenges.

2 Concept of Fake News

2.1 Fake News in Terms of Different Perspectives [6]

Although fake news has existed for ages, an exact definition of the term that satisfies all aspects is still lacking, and many authors have defined fake news from different points of view. Some authors treat it as misinformation in the news, while others study it through the correlation of fake content with humorous, satirical and hoax behaviour [7]. There is a very narrow margin between misinformation and disinformation [8]: news is considered misinformation when it carries wrong or misleading information that has not been verified, whereas disinformation is wrong information spread with known falsehood, i.e., the intentional and continued delivery of wrong news (Table 1).

Misinformation (false news)

Disinformation (fake news)

Wrong information, Misleading data, information about anything, Not verified data

Wrong news (Intentionally delivered or published) to fool people

350

D. Bhushan et al.

proper knowledge and prior investigation of the subject also should not immediately refer as fake news [9].

3 Overview of Some Famous Datasets for the Fake News Detection This section explores some of the important details about the most popular dataset created by distinguished scientists and researchers. Here datasets generally refer to the repository of thousands of tweets, millions of post of fake content. • LIAR: LIAR dataset [10] is without any doubt a basic dataset that set an initial platform for detection of deceptive content Here, the data inputs are stored by PolitiFact (a famous fact-checking website). It consists of almost 12,800 short statements which are further divided into six categories as followings: completely false (pants-fire), false, barely true, half true, mostly true and true. • Fact-Checking-LTCSS: It is somewhat similar to LIAR dataset but having fivepoint scale developed by Andreas Vlachos et al. [11]. • BUZZFEED NEWS: This dataset [12] has the collection of a sample of news published on Facebook in the 2016 US presidential election. After proper analysis of post, the statements are overall labeled in mostly true, a mixture of true and false, and mostly false like categories. • CREDBANK: It is also one of the famous crowd-sourced knowledge-based dataset developed by Mitra and Gilbert [13] which has nearly 60 million tweets where overall tweets are related to 1000 news events. It has a lot of volume from the Twitter database. • FAKENEWSNET: This dataset [14] has much emphasis given on the dynamic context and social behavior of fake news, in order to overcome the demerits of earlier proposed dataset. This actually collects multi-dimension information. • Fake News Challenge (FNC)-1: It is a newly created dataset with AI features that work on a stance detection mechanism. The class labels for each post can be labeled as four types as following: Agree, Disagree, Discuss, and Unrelated. • Higgs Twitter Dataset [15]: As the name suggests this dataset was actually created after the chaos, curiosity, and excitement created worldwide, because of the large hydrone collider experiments held at CERN in Geneva in 2012. This dataset was built after monitoring the spreading processes on Twitter before, during, and after the announcement of the discovery of a new particle called god particle. • Wild Web Tampered Image Dataset [16]: In the real world chance of becoming viral of any content increases if the content is pictures or images. Pictures are highly vulnerable to being modified. This was actually created so show tampered web images propagating across many online social networking sites. This dataset contains 80 cases of forgeries consisting of total 13,577 images. • Columbia Uncompressed Image Splicing Detection Evaluation Dataset [17]: It is one of the simple datasets for image fact-checking as compared to previously

Fake News Detection: Tools, Techniques, and Methodologies

351

discussed here. In this 183 authentic images taken using one camera with EXIF information, and 180 spliced images created from the authentic images without post-processing. Here size of uncompressed images ranges from 757 × 568 to 1152 × 768 pixels in TIFF and BMP format.

4 NLP and Machine Learning Models, Their Relevance in Fake News Analysis To process the language, finding out its hidden feature and designing computing device that can recognize natural language deals with natural language process phenomenon (NLP) and deep learning, machine learning, artificial neural network, and all such sophisticated algorithms are essential and efficient techniques to implement those detection techniques of fake news.

4.1 Definition of NLP and Goal NLP is nothing but, a wide research wing of computer science which primarily deals with the automatic manipulation and detection of natural language that we human speak and tries to build a computer model that can understand human recognizable natural languages with the help of computing software [18]. Goals of NLP: Natural language processing has two goals: the science goal and the engineering goal. In science goal, the aim is to understand how language is produced and understood by an intelligent entity and engineering goals motivate to build such models that analyze and generate language that can reduce the machining gap [18].

4.2 Tasks of NLP During Text Analysis Processing Natural language processing not only meant to design and develop machine-friendly language analyzer but it also performs many other essential tasks, which can be used in many subsequent processes of fake news analysis (Table 2).

4.3 Popular Steps of Preprocessing of a Given Text [20, 21] • Stop Word Removal: Articles prepositions, pronouns, and conjunctions are called as stop words and such words are removed.

352

D. Bhushan et al.

Table 2 Various task of NLP [19] Some other task of NLP related to language processing Information retrieval

Finds documents based on various keywords

Information extraction

Identify and extract the data from the given source

Language generation

Based on the given description about text generating the proper language to identify a pattern

Text clustering

Grouping of similar types of word text in one corpus

Text classification

Assigning various property-based classifications to text. Ex. Putting spam emails into spam folder, etc.

Machine translation

Translate any language text into another language

Grammar checkers

Checking the grammar of any language

• Lower or upper case conversion: It is also a big part of preprocessing task and as per our need, we generally transform all letters to lowercase or sometimes in uppercase as the situation demands. • Tokenization: Most important task of preprocessing is related to converting or breaking the given sentences into tiny (smaller) pieces or so, called (small chunk) of the word. These small pieces of letters are called tokens. • Stemming: Related to converting the tokenized words into their original form, and it lessens the number of words or class-types by not counting similar words. • Lemmatization: Similar to stemming process, but more sensible than stemming, it removes some inflectional forms of sentences. • Part of speech tagging (POS): it is one of the important components of the preprocessing which is used to put the label of various parts of speech tags to each word of sentences.

4.4 Machine Learning Overview for Research Work in Detection of Fake News Although, NLP has the prime role of preprocessing and feature extraction of various text it is seen that many times this alone not enough. There is a need for some mechanism that can explain the pattern and extract the features more deeply and predict the result in a comprehensive manner. Machine learning along with distinguished models and classifiers mechanism can detect fake news and produce the optimum result.

Fake News Detection: Tools, Techniques, and Methodologies

353

4.5 Some Examples of Famous Machine Learning Models • Linear Regression: It is simplest machine learning algorithm used for establishing and thus predicting the linear relationship between the target value and one or more predictors. • K-means model: It widely used unsupervised machine learning algorithms for classifying given numbers of datasets into a certain number of clusters by finding numbers of groups in the data. • K-NN model: extension to the k-means, initially it stores all the available cases then after applying nearest neighbor algorithm this model classifies new cases to further analysis on the majority vote count of its k-neighbors. • Logistic Regression: This classification algorithm produces and predicts any discrete values like Binary (0/1), the Boolean value, i.e. true or false, of a given set of some independent variable(s). • Decision Tree: It is a very popular supervised machine learning algorithm that is useful for classification as well as the prediction model of machine learning. Its working process includes decision analysis. Decision-based tree has internal nodes referring test on an attribute whereas branch represents the outcome of the test. • Random Forest: It is a collection of many decision trees called “Forest. It is used to predict the class prediction. Here, the class having most numbers of counts becomes model’s prediction. • Support Vectors machine: SVM is a type of classifier model of machine learning model in which we generally divide the various class labels in discriminative classifier by a separating hyper-plane in so that all class label uniformly placed each side of the hyper-plane without any ambiguity. • Naive Bayesian model: The Naïve-Bayes another famous algorithm based on the probabilistic approach to classifying the class label by calculating probability based on Bayes theorem. This is easy to build and particularly useful for very large data sets. For simplicity reason, Naïve-Bayes is known for being the best among many algorithms.

5 Outcomes of Related Studies in Fake News Detection • Based on various models firstly, Rubin et al. [7] in his work, classified the three types of deception news as hoax propaganda and satire. • Shu et al. [22] discussed two main types of features observed namely linguisticbased and visual-based features [22]. In this paper [23] various psycholinguistic features that play a big role in fake news detection which carries sentiments and filled emotions in the form of linguistic knowledge are analyzed. • Rubin et al. [24] showed that RST-VSM method can be an effective way of complementing the existing lexical and semantic analyses tool in the process of fake content detection.

354

D. Bhushan et al.

• Chen et al. [25] showed the inefficient behavior for classification observed by n-grams and part of speech (POS) tag methodology. So they advised deep Syntax analysis along with Probabilistic Context-Free Grammars (PCFG) methodology. • Shlok Gilda [26] performed well in analyzing some key feature extraction of text and demonstrated that when text corpus is collected as ‘bi-grams’ then TF-IDF implementation upon it produces much effective model for detecting fake news • Horne et al. [27] also demonstrated that we can detect fake news on the basis of stop words, nouns used in the text put the extracted features into three categories that are complexity and readability feature, psychology feature and stylistic features. • Mathieu Cliché [28] on his paper broadly discussed sarcasm like the sentiment of the data flooded on Twitter and analyzed the sentiments of tweets by using ngrams feature of tweets. Moving forward, LIAR [10] is that dataset that attracted a lot of attention from different researchers for the study of fake news detection. • Wang et al. [10] used SVM model and later based on Kim’s CNN (convolution neural networks model), thereafter the performance of SVM, LR, Bi-LSTM are compared. • Karimi [29] worked on LIAR with an approach using an advanced model called Multi-source Multi-class Fake news detection framework called in short (MMFD), which includes three folded mechanism. • Kai Shu et al. [22] also one of the pioneers in the field of fake news detection mechanisms who demonstrated the data mining perspective and its close impact on fake news detection by putting deep observation on content-based and socialcontext-based analysis for deceptive news. • Hannah Rashkin et al. [30] also one of the researchers who used extensive analysis of various linguistic features on texts extracted from a different source of datasets and declared LSTM produced better results compared to other features. • Coming back to machine learning model, Bajaj [31] applied many algorithms and found that RNN architecture aided with GRUs did work very well and it lagged behind the other features like LSTM cells in that task.

6 Future Trends and Associated Challenges We have seen famous and remarkable research in fake news detection area based on different methodologies those are highly oriented by NLP and machine learning. Distinguished mechanisms of theories and brainstorming analysis showed that how can we mitigate the effect of dynamic fake news proliferation. Apart from the above discussions, there are still some challenges that exist in front of us that need further research. • We observed that almost the majority of the previous trends of research, related to the fake news detection belonged to the feature-oriented characteristics of fake news.

Fake News Detection: Tools, Techniques, and Methodologies

355

• In recent years with the advancement in machine learning models, the detection methodology is now shifted towards model-oriented techniques and data-oriented. • Coming to the application-oriented research of fake news, currently, it is in the initial phase and it keeps evolving day by day. This area needs more research and analysis. Since social media is full of dynamic activities. • It is that area of fake news detection which has very great potential in the fake news field because it all depends on one’s agenda, one’s mindset. How does someone want to spread fake content in which context? so analyzing such events in a better way and predict the nature of news that could be viral in the near future is big areas of research.

6.1 Occurrence of Distinguished Challenges While Analysis of Fake News • We have seen that BuzzFeedNews, LIAR datasets have only or two features. FakeNewsNet although is a well-structured dataset that has much large dimension of news types. But, we need more this kind of datasets. Their [22] analysis showed that we need many multidimensional dataset models for proper and diverse research in this area that will yield a better result. • Rumors, Social bots, and Sybil accounts on social media [32] in current times created a lot of sensation among many social media sites in the past 3 years. Not only this, but many times these incidents also used to block the propagation chain of true information because of its complex social network structure aided with cascading effect. • Without any doubt, the nature of social media is highly dynamic and in the past 5 years, the pattern and types of fake news also changed a lot. So, to predict how it will in the next 5 years and how to detect and mitigate them at an early stage is a big challenge in front of researchers.

References 1. https://en.wikipedia.org/wiki/Fake_news/. 2. Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. 3. Bovet, A., & Makse, H. A. (2019). Influence of fake news in Twitter during the 2016 US presidential election. Nature Communications. https://doi.org/10.1038/s41467-018-07761-2. 4. Shao, C., Ciampaglia, G. L., Varol, O., Flammini, A., & Menczer, F. (2017). The spread of fake news by social bots. arXiv preprint arXiv:1707.07592. 5. https://www.newsweek.com/fake-news-word-year-collins-dictionary-699740. 6. Zhou, X., & Zafarani, R. (2018, December). Fake news: A survey of research, detection methods, and opportunities. ACM Computing Surveys, 1, 40p. 7. Rubin, V., Chen, Y., & Conroy, N. (2015). Deception detection for news: Three types of fakes news.

356

D. Bhushan et al.

8. Kucharski, A. (2016). Post-truth: Study epidemiology of fake news. Nature, 540, 7634, 525. 9. Parikh, S. B., & Atrey, P. K. (2018). Media-rich fake news detection: A survey. In 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR) (pp. 436–441). 10. Wang, W. Y. (2017). Liar, liar pants on fire: A new benchmark dataset for fake news detection. ArXiv preprint arXiv:1705.00648. 11. Vlachos, A., & Riedel, S. (2014). Fact checking: Task definition and dataset construction, pp. 18–22. https://doi.org/10.3115/v1/w14-2508. 12. Silverman, C., Strapagiel, L., Shaban, H., Hall, E., & Singer-Vine, J. (2016). Hyperpartisan Facebook pages are publishing false and misleading information at an alarming rate. Buzzfeed News, 20. 13. Mitra, T., & Gilbert, E. (2015). Credbank: A large-scale social media corpus with associated credibility annotations. In Ninth International AAAI Conference on Web and Social Media. 14. Shu, K., Mahudeswaran, D., Wang, S., Lee, D., & Liu, H. (2018). Fakenewsnet: A data repository with news content, social context and dynamic information for studying fake news on social media. ArXiv preprint arXiv:1809.01286. 15. De Domenico, M., Lima, A., Mougel, P., & Musolesi, M. (2013). The anatomy of a scientific rumor. Scientific Reports, 3, 2980. 16. Zampoglou, M., Papadopoulos, S., & Kompatsiaris, Y. (2015). Detecting image splicing in the wild (web). In 2015 IEEE International Conference Multimedia & Expo Workshops (ICMEW). 17. Hsu, Y.-F., & Chang, S.-F. (2006). Detecting image splicing using geometry invariants and camera characteristics consistency. In 2006 IEEE International Conference on Multimedia and Expo (pp. 549–552). IEEE. 18. NPTEL. Natural language processing By Prof. Pushpak Bhattacharyya. CSE IIT BOMBAY. 19. Applied Natural Language Processing By Prof. Ramaseshan R Chennai Mathematical Institute. 20. Jurafsky, D., & Martin, J. H. (2009). Speech and language processing: An introduction to natural language processing, speech recognition, and computational linguistics (2nd edn.). Prentice-Hall. 21. Manning, C. D., & Schütze, H. (1999). Foundations of statistical natural language processing. MIT Press. 22. Shu, K., Sliva, A., Wang, S., Tang, J., & Liu, H. (2017). Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter, 19,(1), 22–36. 23. Mihalcea, R., & Strapparava, C. (2009). The lie detector: Explorations in the automatic recognition of deceptive language. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers (pp. 309–312). Association for Computational Linguistics. 24. Rubin, V. L., Conroy, N. J., & Chen. Y. (2015). Towards news verification: Deception detection methods for news discourse. In Proceedings of the Hawaii International Conference on System Sciences (HICSS48) Symposium on Rapid Screening Technologies, Deception Detection and Credibility Assessment Symposium (pp. 5–8). 25. Conroy, N. J., Rubin, V. L., & Chen, Y. (2015). Automatic deception detection: Methods for finding fake news. Proceedings of the Association for Information Science and Technology, 52(1), 1–4. 26. Gilda, S. (2017). Evaluating machine learning algorithms for fake news detection. In 2017 IEEE 15th Student Conference on Research and Development (SCOReD) (pp 110–115). IEEE. 27. Horne, B. D., & Adali, S. (2017). This just in: fake news packs a lot in title, uses simpler, repetitive content in text body, more similar to satire than real news. In The 2nd International Workshop on News and Public Opinion at ICWSM. 28. 
Mathieu Cliche. The sarcasm detector (2014). 29. Karimi, H., Roy, P., Saba-Sadiya, S., & Tang, J. ( 2018). Multi-source multi-class fake news detection. In Proceedings of the 27th International Conference on Computational Linguistics (pp. 1546–1557). 30. Rashkin, H., Choi, E., Jang, J. Y., Volkova, S., & Choi, Y. (2017). Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 2931–2937).


31. Bajaj, S. (2017) The pope has a new baby! fake news detection using deep learning. 32. Zubiaga, A., Aker, A., Bontcheva, K., Liakata, M., & Procter, R. (2017). Detection and resolution of rumours in social media: A survey. arXiv preprint arXiv:1704.00656.

Synthesis and Analysis of Optimal Order Butterworth Filter for Denoising ECG Signal on FPGA Seema Nayak and Amrita Rai

1 Introduction

Field Programmable Gate Arrays (FPGAs) are becoming increasingly popular for rapid prototyping of designs with the aid of software synthesis and simulation, and an FPGA is a good platform for testing, evaluating and implementing signal processing algorithms [1]. Digital filter implementation on FPGA offers superior performance by reducing the complexity of the filter structure and the hardware requirements while enhancing speed. The conventional approach is based on general-purpose multipliers; however, the performance of multipliers implemented on an FPGA architecture does not allow high-performance digital filters to be constructed. The parallel nature of the FPGA provides an improvement in speed, lower resource usage and low power consumption [2], and FPGA-based designs allow more flexibility, cost reduction and shorter development time [3]. Studies show that the implementation and synthesis of IIR filters on FPGA offer high throughput, effective hardware utilization and a high rate of precise calculation [4–6]. FIR filters have higher computational complexity than IIR filters, which increases the hardware implementation cost. The cost of different IIR filter structures, in terms of multipliers and adders, has been studied by Yadav and Mehra [7], whose FPGA implementation resulted in high throughput. As per the literature review, not much research work is available on the implementation of digital filters on FPGA using Verilog HDL with optimal order selection. In the present paper, an optimal order IIR digital filter (Butterworth) is selected for denoising the ECG signal


in MATLAB initially, before implementation on the FPGA. Implementing a hardware design in FPGAs is a formidable task and there is more than one way to implement the DSP design for digital filters; the biggest challenge in implementing digital filters in hardware is to achieve high speed at minimum hardware cost. Hence, a careful choice of implementation method and tools based on the design specification is imperative and can save a lot of time and work. The implementation of a digital filter on FPGA illustrates that the approach is flexible and provides performance comparable or superior to the traditional approach [8]. Several methods are available for the generation of HDL code: HDL code generation from MATLAB, and HDL code generation from Simulink/Xilinx System Generator. Gaikwad [9] used the generated-HDL command for the implementation of an inverse sinc filter on FPGA, and Kumar and Meduri [10] generated HDL code with a distributed arithmetic architecture for a high-order matched filter using the same technique. The Modelsim 6.4a simulator is used for the simulation of the generated test benches of the optimal order filter; studies [3, 11, 12] have reported simulation of FPGA-based designs using Modelsim.
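Although the paper performs this design step in MATLAB, a rough, non-authoritative sketch of an equivalent software prototype is shown below in Python/SciPy: a 5th-order low-pass Butterworth filter applied to a synthetic noisy ECG-like signal, reporting the MSE and SNR figures of merit mentioned above. The sampling rate and cut-off frequency are illustrative assumptions, not the paper's values.

```python
# Sketch: design and apply a 5th-order Butterworth low-pass filter for ECG denoising.
import numpy as np
from scipy import signal

fs = 360.0        # assumed ECG sampling rate, Hz
cutoff = 40.0     # assumed low-pass cut-off, Hz
order = 5         # optimal order reported in the paper

b, a = signal.butter(order, cutoff / (fs / 2), btype='low')   # IIR coefficients

t = np.arange(0, 2.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 1.2 * t)                  # crude ECG-like waveform
noisy = clean + 0.3 * np.random.randn(t.size)        # additive noise
denoised = signal.filtfilt(b, a, noisy)              # zero-phase filtering

mse = np.mean((denoised - clean) ** 2)
snr_db = 10 * np.log10(np.sum(clean ** 2) / np.sum((denoised - clean) ** 2))
print(f"MSE = {mse:.4f}, SNR = {snr_db:.1f} dB")
```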

2 Methodology

The following section discusses the methodology for implementing the optimal order IIR digital filter for denoising the ECG signal on the FPGA platform. The work flow of the methodology is shown below in Figs. 1 and 2.

3 Results and Discussions

To select the optimal order Butterworth filter for denoising the ECG signal, the filter is first designed in MATLAB and its optimal order is selected based on the Signal to Noise Ratio (SNR) and Mean Square Error (MSE). The hardware complexity of the optimal order Butterworth filter structure is checked in terms of multipliers, adders and delays in MATLAB. A summary report on the complexity of the structure is presented in Table 1. Snapshots of the generated synthesis reports, the simulation results in Modelsim 6.4a of the generated test benches, and the estimation of power consumption by XPE 14.1 of the designed


Fig. 1 Work flow graph of the methodology for the synthesis of digital filters on FPGA: Start → Creation and selection of optimal order digital filters for denoising ECG signal → Conversion of the optimal order digital filter MATLAB code into Verilog HDL using the HDL Coder command-line interface → Synthesis of the generated Verilog code on the target platform, Xilinx Spartan-6 FPGA (XC6SLX75T), using Xilinx ISE 13.1 → Analysis of device utilization, macro statistics (complexity of the filter structure) and timing summary → Generation of the RTL schematic of the optimal order digital filter showing primary inputs, outputs and internal structure → Simulation of the generated test benches of the optimal order digital filter on Modelsim 6.4a → Estimation of power consumption of the optimal order digital filter design using Xilinx Power Estimator 14.1 → Summarization of dedicated hardware resources, timing and power → Study of the optimal order digital filter structure complexity in MATLAB and on the FPGA platform → End


Fig. 2 Work flow graph of the design steps for synthesis and simulation of digital filters: Start → Set the design properties of the project in the Design Window of Xilinx ISE → Add the source files (Verilog and test bench) of the optimal order digital filters to be synthesized and simulated → Set the process properties options under the XST tool → Run the XST synthesis tool → View the synthesized text report → Capture and analyse the synthesized report in terms of resource utilization and timing summary → Study the complexity of the optimal order filter structure in terms of macro statistics → View the RTL schematic of the synthesized digital filter → Simulate the test bench of the optimal order digital filter using the Modelsim 6.4a simulator → End


Table 1 Summary of IIR filter structure information in MATLAB

SN | Type of filter | Order | Multipliers | Adders | States | Multiplications per input sample | Additions per input sample
1 | Butterworth | 5 | 10 | 10 | 5 | 10 | 10

filter on FPGA by the traditional approach are discussed below. Synthesis reports and simulation results of the traditional approach to designing the Butterworth digital filter are presented: the resource utilization summary (number of slice registers, slice LUTs, fully used LUT-FF pairs, bonded IOBs and BUFG/BUFGCTRLs), the macro statistics summary (adders/subtractors, adder tree and registers), the final register and timing summary (minimum period, maximum frequency, setup time, hold time), the RTL schematic of the digital filter, the RTL diagram of the internal structure [13], the simulation waveform and the power estimation report [14] of the optimal order Butterworth filter. Further, the hardware resources used on the FPGA, the complexity of the filter structure, the speed and the total estimated power of the traditionally designed digital filter are tabulated.

3.1 Butterworth Filter

The optimal order of the Butterworth filter for denoising the ECG signal, selected on the basis of SNR (Signal to Noise Ratio) and MSE (Mean Square Error), is 5 [15]. Synthesis reports of the Butterworth filter after synthesis on FPGA are given in Figs. 3, 4, 5, 6, 7 and 8. The test bench simulation result is given in Fig. 9, and the power estimator report in Fig. 10. After synthesis and simulation, all the reports are tabulated in Table 2. The comparison of basic elements for the Butterworth filter between the FPGA implementation and MATLAB is given in Table 3. It is observed from Table 3 that the number of multipliers has been reduced, with an increase in adders/subtractors and registers.
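The order-selection step described above can be illustrated with a short, self-contained script. This is not the authors' MATLAB code; it is a minimal Python sketch using SciPy, with a synthetic noisy signal standing in for the ECG record and an assumed 0.5-40 Hz passband and 360 Hz sampling rate (illustrative values only). It sweeps candidate Butterworth orders and reports the SNR and MSE of the denoised output, the criterion by which order 5 is chosen in the paper.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 360.0                                        # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t) + 0.25 * np.sin(2 * np.pi * 8.0 * t)   # stand-in for a clean ECG
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)     # additive noise

def snr_db(reference, estimate):
    """SNR of the denoised estimate with respect to the clean reference, in dB."""
    noise = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

for order in range(2, 9):                         # sweep candidate filter orders
    b, a = butter(order, [0.5, 40.0], btype="bandpass", fs=fs)
    denoised = filtfilt(b, a, noisy)              # zero-phase band-pass filtering
    mse = np.mean((clean - denoised) ** 2)
    print(f"order={order}  SNR={snr_db(clean, denoised):6.2f} dB  MSE={mse:.5f}")

For the ECG records used in the paper, the order giving the best SNR/MSE trade-off is reported as 5.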

4 Conclusion and Future Scope

Apart from the implementation of FPGA-based designs, there are several concerns related to area, speed and power. For the realization of higher order filters, speed, area and power are affected by the complex computations, which increase as the order of the filter increases. The structural complexity of various optimal order IIR digital filters is realized in MATLAB using basic elements such as multipliers, adders and delays. Multipliers usually carry the highest computational cost in an implementation, and it is therefore desirable to reduce the number of multipliers in


Fig. 3 Device utilization summary for Butterworth filter

Fig. 4 Summary of macro statistics for Butterworth filter


Fig. 5 Register report for Butterworth filter

Fig. 6 Timing summary for Butterworth filter

Fig. 7 RTL schematic for Butterworth Filter


Fig. 8 RTL internal structure for Butterworth Filter

Fig. 9 Modelsim simulation result for Butterworth Filter

different systems. Delays can be implemented by providing a storage register for each unit delay. After synthesis of the above filter on the Spartan-6 FPGA, the dedicated hardware resource utilizations [16] are summarized in Table 2. Table 3 shows the reduced multiplier count. Since multiplication is an operation that requires a large chip area and more power consumption due to repeated addition, a reduction in the number of multipliers helps. Moreover, the Spartan-6 FPGA architecture has 132 DSP48A1 slices, each of which supports many functions, such as an 18 × 18 bit multiplier [17], MACs, pre-adders/subtractors (User Guide UG389 (v1.2) 2014) [18], wide-bus multiplexers and magnitude comparators/wide counters; this inherent property reduces the number of multipliers and adders utilized, suggesting a small chip area and low power consumption. The


Fig. 10 Power estimation for Butterworth filter

Table 2 Summary of resources for Butterworth filter

SN | Parameters | FPGA implementation
1 | No. of slice registers | 109 out of 93,296 (0%)
2 | No. of slice LUTs | 254 out of 46,648 (0%)
3 | No. of fully used LUT-FF pairs | 14 out of 349 (4%)
4 | No. of bonded IOBs | 35 out of 348 (10%)
5 | No. of BUFG/BUFGCTRLs | 1 out of 16 (6%)
6 | No. of DSP48A1s | 11 out of 132 (8%)
7 | Minimum period | 60.454 ns
8 | Maximum frequency | 16.541 MHz
9 | Minimum input arrival time before clock (Ts) | 3.552 ns
10 | Maximum output required time after clock (TH) | 3.701 ns
11 | Power | 113 mW
12 | Macro statistics | MACs: 5, Multipliers: 3, Adders/subtractors: 14, Registers: 112

Table 3 Summary of resources for Butterworth filter

SN | Resources | Butterworth
1 | Multipliers | Reduced by 7
2 | Adders/subtractors | Increased by 4
3 | Registers | Increased by 109

inbuilt basic structure of the MAC unit, with pipeline registers between the multipliers and the accumulator, also increases the throughput, as reported by a study [8]. To make the FPGA an ideal fit and a viable alternative in various market applications, the final product can further be made attractive by optimising the hardware components so that they can be accommodated in a small chip area with low power consumption.

References 1. Thakur, R., & Khare, K. (2013). High speed FPGA Implementation of FIR Filter For DSP Application. International Journal of Modeling and Optimization, 3(1), 92–94 2. Kolawole, E. S., Ali, W. H., Cofie, P., Fuller, J., Tolliver, C., & Obiomon, P. (2015). Design and Implementation of low-pass, high- pass and band- pass finite impulse response (FIR) filters using FPGA. Circuit and System, 6, 30–48. 3. Narsale, R. M., Gawali, D., & Kulkarni, A. (2014). FPGA based design and implementation of low power FIR filter for ECG signal processing. International Journal of Science, Engineering and Technology Research, 3(6), 1673–1678. 4. Bokde, P. R., & Choudhari, N. K. (2015). Implementation of digital filter on FPGA For ECG signal processing. International Journal of Emerging Technology and Innovative Engineering, 1(2), 175–181. 5. Dixit, H. V., & Gupta, V. (2012). IIR filters using system generator For FPGA implementation. International Journal of Engineering Research and Applications, 2(5), 303–306. 6. Kansal, M., Saini, H. S., & Arora, D. (2011). Designing and FPGA implementation of IIR filter used for detecting clinical information from ECG. International Journal of Engineering and Advanced Technology, 1(1), 67–72. 7. Yadav, S. K., & Mehra, R. (2014). Analysis of FPGA based recursive filter using optimization techniques for high throughput. International Journal of Engineering and Advanced Technology, 3(4), 341–343. ISSN: 2249-8958. 8. Chou, C. J., Mohanakrishnan, S., & Joseph, B. (1993). FPGA implementation of digital filters. In Evans, Proceeding ICSPAT “93. 9. Gaikwad, P. K. (2013). FPGA based hardware level analysis of inverse sinc filters. International Journal of Computer Science and Mobile Applications, 1(3), 35–39. 10. Kumar, P. R., & Meduri, M. (2013). The implementation of high order matched fir filter with distributed arithmetic. International Journal of VLSI and Embedded Systems, 4, 673–676. Article 12196. 11. Vijaya, V., Baradwaj, V., & Guggilla, J. (2012). Low power FPGA implementation of realtime QRS detection algorithm. International Journal of Science, Engineering and Technology research, 1(5), 140–144. 12. Ravikumar, M. (2012). Electrocardiogram signal processing on FPGA for emerging healthcare applications. International Journal of Electronics Signals and Systems, 1(3), 91–96. 13. Xilinx Synthesis and Simulation Design Guide UG626 (v 14.4) December 18, 2012. 14. Xilinx XPower Estimator User GuideUG440 (v13.4) January 18, 2012.


15. Bhogeshwar, S. S., Soni, M. K., & Bansal, D. (2016). Study of structural complexity of optimal order digital filters for de-noising ECG Signal. International Journal of Biomedical Engineering and Technology, INDERSCIENCE, in press. 16. Xilinx Spartan-6 FPGA Data Sheet DS162 (v3.1.1) January 30, 2015. 17. Xilinx Spartan-6 Family Overview, DS160 (v2.0) October 25, 2011. 18. Xilinx Spartan-6 FPGA DSP48A1 Slice User Guide UG389 (v1.2) May 29, 2014.

Prognosis Model of Hepatitis B Reactivation Using Decision Tree Syed Atef, Vishal Anand, Shruthi Venkatesh, Tejaswini Katey, and Kusuma Mohanchandra

1 Introduction

Reactivation of hepatitis B refers to the sudden increase of hepatitis B virus (HBV) replication in a patient with inactive or resolved hepatitis B. The reactivation may happen spontaneously, but is more typically triggered by immunosuppressive treatment of malignancy, autoimmune disease, or organ transplantation. The reactivation can be transient and clinically silent, yet it often causes a flare of disease that can be severe enough to result in acute hepatic failure. Most instances of reactivation resolve spontaneously; however, if immune suppression is continued, chronic hepatitis becomes re-established, which can lead to progressive liver damage and cirrhosis. The classification performance differs for different feature subsets. In the paper presented by Wu et al. [1], the preliminary features show an overall classification accuracy, sensitivity, and specificity of 72.22%, 76.06%, and 57.89%, respectively. The K-Fold Cross Validation method was used to select the training and testing data.

S. Atef · V. Anand · S. Venkatesh · T. Katey · K. Mohanchandra (B), Department of Information Science and Engineering, Dayananda Sagar Academy of Technology and Management, Bangalore, India, e-mail: [email protected]; S. Atef e-mail: [email protected]; V. Anand e-mail: [email protected]; S. Venkatesh e-mail: [email protected]; T. Katey e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021, D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_40


In this paper, we present our improved research on hepatitis B reactivation and on determining whether a patient will have hepatitis B reactivated. The risk factors are the key attributes affecting the reactivation, of which the HBV DNA level is one of the most important. This research aims at improving the accuracy of detecting hepatitis B reactivation in a patient.

2 Existing Methodology

Yao et al. [2] propose to establish the significant feature subsets of hepatitis B virus (HBV) reactivation and to create classification prognosis models of HBV reactivation for primary liver carcinoma (PLC) patients. A Genetic Algorithm (GA) was proposed to select the vital feature subsets of HBV reactivation from the preliminary features of primary liver carcinoma. Bayes and SVM classifiers were used to form the classification prognosis models of HBV reactivation, and the classification accuracies using the significant features and the preliminary features were compared. The Bayes classifier showed the highest classification accuracy of 82.07%. Shuai et al. [3] adopted logistic regression analysis to select the ideal feature subset to build a predictive model of HBV reactivation in PLC patients after precise RT. They report that TNM, HBV DNA level and the outer margin of RT were risk factors for HBV reactivation with P < 0.05. The experimental results show that the classification accuracy increased by 4.45%. The study presented by Guan-peng et al. [4] shows that, for HBV reactivation, the outside margin of RT, the TNM tumor stage and the HBV DNA level are the risk factors with P < 0.05. The logistic regression analysis reduced the dimension and improved the classification accuracy, and the classification prognosis models with BP and RBF neural networks gave good performance in the classification of HBV reactivation. Wu et al. [5] proposed a genetic algorithm (GA) to extract the key feature subsets of HBV reactivation, and the performance of Bayes and support vector machine (SVM) classifiers was compared. The experimental results show that GA improves the classification performance of HBV reactivation, and the SVM classifier showed better performance than the Bayes classifier.

3 Implementation To improve the accuracy and the precision of obtaining the reactivation result, we have implemented the machine learning algorithm called Decision Tree.


3.1 Data The dataset has been taken from the website called the NCBI (National Center for Biotechnology information). The dataset used contains 46 attributes of 133 records; on these records, we considered only numeric attributes for the prediction process.

3.2 Data Preprocessing During data preprocessing raw data acquired is converted into a form that fits machine learning. Organized and processed data allows us to get more accurate results from an applied machine learning model. Data preprocessing includes data visualization and data cleaning. Data cleaning includes discarding the null values.
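As a minimal sketch of this preprocessing step (the file name and the label column name are placeholders, not the actual layout of the NCBI-derived dataset), the cleaning described above could be done with pandas:

import pandas as pd

# Hypothetical file name and label column; the paper's dataset layout is not published here.
raw = pd.read_csv("hbv_reactivation.csv")

# Keep only numeric attributes, as done in the paper, and discard records with null values.
numeric = raw.select_dtypes(include="number").dropna()

X = numeric.drop(columns=["reactivation"])   # feature columns
y = numeric["reactivation"]                  # assumed 0/1 reactivation label
print(X.shape, y.value_counts())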

3.3 Feature Selection

The features are selected at random using the K-Fold Cross Validation method. This method uses repeated train/test splits, changing the train and test data on every execution, which eventually gives the best result.

3.4 System Architecture Figure 1 shows the architecture of the proposed system. The user gives a separate dataset for training and also for testing. The dataset is preprocessed and later used to train the model. The test dataset is used to predict the performance of the trained model.

3.5 Implementation The model is developed using the decision tree algorithm. Decision tree is a supervised machine learning algorithm typically used in classification problems. In the proposed work, we split the dataset into two homogeneous sets based on vital differentiator in input variables. Decision tree classifier takes two arrays as input: an array M, with the size [m_samples, m_features] having the training samples, and an array N with integer values of size [m_samples], having the class labels of the training samples.
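A minimal sketch of this training setup is shown below. scikit-learn is assumed (the paper does not name its implementation library), synthetic stand-in data replaces the real records, and the choice of five folds is an assumption since the paper does not state the value of K.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Stand-in data of roughly the paper's size (133 records); in practice M and N would be
# the numeric feature array [m_samples, m_features] and the class-label array [m_samples].
M, N = make_classification(n_samples=133, n_features=20, weights=[0.8, 0.2], random_state=0)

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(M):
    clf = DecisionTreeClassifier(random_state=0)
    clf.fit(M[train_idx], N[train_idx])                 # fit on the training fold
    scores.append(accuracy_score(N[test_idx], clf.predict(M[test_idx])))

print("mean cross-validated accuracy:", np.mean(scores))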


Fig. 1 System architecture

Table 1 Experimental results of our proposed work

Algorithm | Accuracy
Decision tree | 94.00

4 Experimental Results and Evaluation

In the present work, prediction of hepatitis B reactivation is done using the decision tree algorithm. The result obtained shows that our proposed model achieves higher accuracy. Table 1 shows the algorithm used and the accuracy obtained, which is also shown in Fig. 2. Table 1 represents the performance of the proposed algorithm over the considered dataset. Error evaluations are carried out for MAE, MSE, RMSE, and R-Squared.
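The listed error measures can be computed directly from the test-fold predictions. A small helper is sketched below, with scikit-learn assumed and dummy values standing in for the real predictions:

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def error_report(y_true, y_pred):
    """MAE, MSE, RMSE and R-squared for a set of test-fold predictions."""
    mse = mean_squared_error(y_true, y_pred)
    return {
        "MAE": mean_absolute_error(y_true, y_pred),
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),
        "R2": r2_score(y_true, y_pred),
    }

# Dummy predictions for illustration; real values come from the trained decision tree.
print(error_report(np.array([0, 1, 1, 0, 1]), np.array([0, 1, 0, 0, 1])))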

5 Conclusion

In this work, we have performed a train and test split of the data. The dataset obtained had null values and also a few attributes with alphabetic values that were removed when the data was preprocessed; cleaning the data and converting the raw data into machine-learning-ready data was done. The decision tree algorithm gave us promising results, better than those reported before. The experimental results show that the best average accuracy obtained on the dataset is 94%. The decision tree algorithm uses supervised data. The prognosis


Fig. 2 Results evaluation of proposed system

model of hepatitis B reactivation has shown a significant improvement in accuracy. This implementation also shows that doctors can now confidently talk to patients about the reactivation and warn them; better curing techniques and methods can be employed to treat hepatitis B, and medication can be obtained for the reactivation. This research can be carried forward by further improving the accuracy of detecting hepatitis B virus reactivation in a person. In the future, the use of an optimized machine learning solution is suggested to obtain better results. Another future improvement is to obtain results for recognizing the genetic biomarker in patients having hepatitis B.

References 1. Wu, G. P., Wang, S., Huang, W., Liu, T. H., Yin, Y., & Liu, Y. H. (2016). Classification prognosis model of hepatitis B virus reactivation after radiotherapy in patients with primary liver carcinoma based on BP neural network. Intelligent Computer and Applications, 6, 43–47. 2. Yao, H., Gong, J. L., Li, L., & Wang, Y. (2014). Risk factors of hepatitis b virus reactivation induced by precise radiotherapy in patients with hepatic carcinoma. The Practical Journal of Cancer, 29, 675–677.


3. Shuai, W., Guan-peng, W., Wei, H., Tong-hai, L., Yong, Y., & Yi-hui, L. (2016). The predictive model of hepatitis B virus reactivation induced by precise radiotherapy in primary liver cancer. Journal of Electrical and Electronic Engineering, 4(2), 31–34. 4. Guan-peng, W., Shuai, W., Wei, H., Tong-hai, L., Yong, Y., & Yi-hui, L. (2016). Application of BP and RBF Neural network in classification prognosis of hepatitis B virus reactivation. Journal of Electrical and Electronic Engineering, 4(2), 35–39. 5. Wu, G., Liu, Y., Wang, S., Huang, W., Liu, T., & Yin, Y. (2016, August) The classification prognosis models of hepatitis b virus reactivation based on Bayes and support vector machine after feature extraction of genetic algorithm. In 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD) (pp. 572–577). IEEE.

Novel Approach for Gridding of Microarray Images D. P. Prakyath, S. A. Karthik, S. Prashanth, A. H. Vamshi Krishna, and Veluguri Siddhartha

1 Introduction

In the last decade, the two prime aspects of DNA microarrays have been the processing of the microarray image and the analysis of the processed microarrays. The prime objective of the entire image analysis is to achieve significant biological results, which relies on the accuracy of the different stages, mostly those at the initial stages of the analysis. A microarray is an ordered arrangement of DNA sequences plotted on a substrate. Scanning the substrate at the required resolution gives an image composed of spots arranged in sub-grids. In some conditions, depending on the arrangement of the plotting pins, the sub-grids are exposed to improper alignment, changes in shape and size, and the unexpected addition of unwanted particles. Because of these factors, the locations may not be precise or obtainable [1]. Identifying the exact location of a spot or sub-grid in a microarray image is a premier step in the entire analysis task, as an error in this task is propagated to subsequent tasks of the analysis and may reduce the truthfulness and exactness of the study dramatically. In principle, the images obtained from substrate preparation are highly structured, as the spots are arranged in a regular grid-like pattern. The spots are expected to be roughly circular, although in practice different shapes are possible. The earliest job in the analysis is gridding [2-6]; if it is done accurately, it significantly improves the efficiency of future tasks such as spot separation and quantification [7]. A typical image has numerous sub-grids, and every sub-grid has many spots arranged row-wise as well as column-wise. The aim of the gridding step [8, 9] is to achieve a two-step process: (i) the sub-grid locations are determined in the initial stage, and (ii) the spot positions inside every sub-grid are found in the succeeding step.

D. P. Prakyath · S. A. Karthik (B) · S. Prashanth · A. H. Vamshi Krishna · V. Siddhartha, Department of ISE, DSATM, Bangalore, India, e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021, D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_41


While preparing DNA microarrays, many parameters must be considered: the total number of spots, the dimensions of the spots, the number of sub-grids, and their accurate positions. Nevertheless, issues such as noise, improper alignment and changes of shape in the sub-grid pattern make it exceptionally hard to locate the correct positions of the spots. Thus, to make the gridding procedure more robust, the researcher needs to deal with the following issues [10-13]:

1. In the biological experiment, the exact locations of the sub-grids in each image can differ from substrate to substrate.
2. The locations of the sub-grids and the distances between them can be different.
3. The sub-grids can be of different sizes.
4. There may be some noise present in the image after scanning, or the background intensity may be too high.
5. The signal intensities may not be strong enough or uniform across the same image.
6. The size and shape of each spot may not be the same for all spots (e.g., not perfectly circular) (Fig. 1).

The organization of the rest of the paper [14] is as follows: Sect. 2 discusses the proposed approach; Sect. 3 draws attention to the outcomes of testing carried out on a number of standard images; lastly, the conclusion is presented.

Fig. 1 Typical Microarray Image


2 Proposed Approach Automatic gridding is performed in two steps (i) Identification of grid line position and (ii) refinement of grid lines.

2.1 Identification of Grid Line Position

For every connected component [15] in the clean image, row_min, row_max, col_min and col_max are calculated as shown in Fig. 2. Subsequently, the row_min values are sorted, and the array of consecutive differences of row_min, called diff_row_min, is found; the same is done for row_max, col_min and col_max (diff_row_max, diff_col_min, diff_col_max). The row_min and row_max values obtained for image ID 62919, which is used for all of the computations below, are listed here.

row_min: 60 315 383 211 213 43 93 112 9 110 129 399 110 128 146 163 297 9 313 351 77 25 76 24 246 212 212 177 76 333 145 76 315 366 128 399 9 109 213 297 246 60 25 92

row_max: 72 327 394 224 226 55 105 124 25 121 140 410 125 142 156 172 310 20 324 360 89 38 89 38 259 224 224 191 90 341 158 87 327 377 140 410 21 122 222 386 258 71 37 101

Fig. 2 Computation of row_min, row_max, col_min & col_max of a spot

The step-by-step procedure for the identification of horizontal grid lines is given below.

1. Determination of the sorted values of row_min:
sorted_row_min: 9 9 9 24 25 25 43 60 60 76 76 76 77 92 93 109 110 110 112 128 128 129 145 146 163 177 211 212 212 213 213 246 246 297 297 313 315 315 333 351 366 383 399 399

2. Computation of the successive differences between the values of sorted_row_min:
diff_row_min: 0 0 15 1 0 18 17 0 16 0 0 1 15 1 16 1 0 2 16 0 1 16 1 17 14 34 1 0 1 0 33 0 51 0 16 2 0 18 18 15 17 16 0

3. From the diff_row_min array one can judge that an abrupt change in the row_min values indicates the end of the preceding row of spots and the start of the next row of spots.

4. Observing the abrupt change from 0 to 15 at location 3 of diff_row_min, the 3rd element of the sorted row_min arrangement, which is 9, implies that a grid line lies at row 9. In the same way, the succeeding values of grid_row_min are obtained implicitly. Similarly, grid_row_max is determined.

grid_row_min: 9 25 43 60 77 93 112 129 146 163 177 194 213 230 246 263 280 297 315 333 351 366 383 399

A few typical values of sorted_row_max, diff_row_max and grid_row_max are depicted below.

sorted_row_max: 20 21 25 37 38 38 55 71 72 87 89 89 90 104 105 121 122 123 124 140 140 142 156 158 172 191 222 224 224 224 226 258 259 306 310 324 327 327 341 360 372 394 410 410

diff_row_max: 1 1 2 4 4 14 14 14 12 1 19 3 1 0 17 16 1 15 16 1 1 1 16 0 31 2 0 2 32 2 0 14 19 17 17 17 2 0 2 14 1 47 16 0

grid_row_max: 25 38 55 72 90 105 124 142 158 172 191 208 226 243 259 276 293 310 327 341 360 377 394

At last, the locations of the horizontal gridlines are identified by computing the mean of the row values recommended by the grid_row_min and grid_row_max contents. Thus, horizontal gridlines are drawn at rows 9, 25 [(25 + 25)/2], 41 [(38 + 43)/2], 57 [(55 + 60)/2], etc. The entire procedure is repeated to identify the vertical gridlines using sorted_col_min, diff_col_min, grid_col_min, sorted_col_max, diff_col_max and grid_col_max.
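The identification step above can be condensed into a short sketch. The code below is illustrative Python/NumPy, not the authors' implementation: the jump threshold used to separate consecutive spot rows is an assumed value, and only a handful of the component bounds listed above are used, so the printed output is only indicative.

import numpy as np

def row_starts(values, jump=5):
    """Sort the per-component bounds; each jump larger than `jump` marks a new row of spots."""
    v = np.sort(np.asarray(values))
    keep = np.concatenate(([True], np.diff(v) > jump))
    return v[keep]

# A few of the per-component bounds of image 62919 listed above.
row_min = [60, 315, 383, 211, 213, 43, 93, 112, 9, 110, 129, 399]
row_max = [72, 327, 394, 224, 226, 55, 105, 124, 25, 121, 140, 410]

grid_row_min = row_starts(row_min)          # first image row of each spot row
grid_row_max = row_starts(row_max)          # last image row of each spot row

# First grid line at the first spot row; each later line midway between the end of one
# spot row and the start of the next, e.g. 25 = (25 + 25) / 2 and 41 = (38 + 43) / 2.
lines = [int(grid_row_min[0])]
for prev_max, next_min in zip(grid_row_max[:-1], grid_row_min[1:]):
    lines.append(int((prev_max + next_min) // 2))
print(lines)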

2.2 Refinement of Gridlines

The procedure for gridline refinement [16, 17] is described in this section. Before refinement, a network of lines can be drawn only where a spot exists on each line and in every grid segment; however, there may be images where no spots are present in a few grid segments, and in these images there will be irregular spacing between the gridlines. Figure 3 shows such sparse gridding in the horizontal direction. In these situations, the refinement algorithm is recommended and is used to insert the additional/missing grid lines. To make sure that every gridline is drawn, the refinement procedure is applied as follows: if the spacing between successive grid rows (i, i + 1) is larger than the mean spacing of the preceding rows (avg_rows_pace), then the refinement procedure plots horizontal lines at each consecutive avg_rows_pace, starting from the previously plotted horizontal line, until row i + 1 or the end of the rows is reached. The entire procedure is repeated to draw the vertical grid lines. Figure 4 shows the gridding of an image before and after the refinement process; observe that there are more, uniformly spaced grid lines after refinement. Figure 5 shows the gridding done by the projection profile and standard deviation methods, which have fewer and non-uniform grid lines.
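A sketch of the refinement rule is given below, again in illustrative Python. The factor of 1.5 used to decide that a gap is oversized is an assumption introduced here, not a value taken from the paper; the paper only requires the gap to exceed the average spacing of the preceding rows.

import numpy as np

def refine(lines):
    """Insert grid lines wherever the gap between successive lines is noticeably larger
    than the average spacing seen so far (i.e. a grid segment contained no spot)."""
    refined = [lines[0]]
    for nxt in lines[1:]:
        pace = np.mean(np.diff(refined)) if len(refined) > 1 else nxt - refined[-1]
        while nxt - refined[-1] > 1.5 * pace:        # oversized gap: fill it in
            refined.append(refined[-1] + pace)       # one average pace at a time
        refined.append(nxt)
    return np.rint(refined).astype(int)

print(refine([9, 25, 41, 57, 121, 137]))             # the 57 -> 121 gap gets filled in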


Fig. 3 Sparse horizontal grid lines in a noise-free image

Fig. 4 Gridding of image a before and b after refinement process

3 Results and Discussion

The outcome of the proposed approach [14, 18, 19] is summarized in the tables below, which compare the proposed technique with other techniques, such as the projection profile and standard deviation methods, for carrying out gridding (Fig. 5). Table 1 lists how many rows and columns are expected to be present in each image before the gridding methods are applied, whereas Table 2 shows the rows and columns actually obtained in the gridded image for the different sets of images. Finally, the error percentage is calculated to assess the efficiency of the various methods; the error percentage is reduced in the proposed approach.


Fig. 5 Gridding of images by a projection profile and b standard deviation method

Table 1 Expected rows and columns of existing approaches and the proposed approach

Method | Image ID | Expected number of rows | Expected number of columns
Gridding using standard deviation | 62919 | 29 | 30
Gridding using standard deviation | 22593 | 17 | 15
Gridding using standard deviation | 37993 | 29 | 29
Gridding using standard deviation | 34212 | 20 | 21
Gridding using standard deviation | 34217 | 18 | 23
Gridding using standard deviation | 34143 | 22 | 23
Gridding using projection profile | 62919 | 29 | 30
Gridding using projection profile | 22593 | 17 | 15
Gridding using projection profile | 37993 | 29 | 29
Gridding using projection profile | 34212 | 20 | 21
Gridding using projection profile | 34217 | 18 | 23
Gridding using projection profile | 34143 | 22 | 23
Gridding using proposed method | 62919 | 29 | 30
Gridding using proposed method | 22593 | 17 | 15
Gridding using proposed method | 37993 | 29 | 29
Gridding using proposed method | 34212 | 20 | 21
Gridding using proposed method | 34217 | 18 | 23
Gridding using proposed method | 34143 | 22 | 23

4 Conclusion

In this work, a novel technique for the gridding of microarray images is presented. Initially, grid points are identified using the spatial arrangement of pixel values. Next, grid lines


Table 2 Obtained rows and columns of existing approaches and the proposed approach

Method | Number of rows obtained | Number of columns obtained | Total error (%)
Gridding using standard deviation | 27 | 27 | 8.474576
Gridding using standard deviation | 21 | 15 | 12.5
Gridding using standard deviation | 27 | 29 | 3.448276
Gridding using standard deviation | 21 | 21 | 2.439024
Gridding using standard deviation | 18 | 23 | 0
Gridding using standard deviation | 22 | 21 | 4.444444
Gridding using projection profile | 27 | 29 | 5.084746
Gridding using projection profile | 20 | 15 | 9.375
Gridding using projection profile | 26 | 26 | 10.34483
Gridding using projection profile | 20 | 21 | 0
Gridding using projection profile | 21 | 24 | 9.756098
Gridding using projection profile | 24 | 23 | 4.444444
Gridding using proposed method | 29 | 30 | 0
Gridding using proposed method | 17 | 15 | 0
Gridding using proposed method | 29 | 29 | 0
Gridding using proposed method | 20 | 21 | 0
Gridding using proposed method | 18 | 23 | 0
Gridding using proposed method | 23 | 23 | 2.222222

are drawn based on points obtained by spatial method. Finally, grid lines are obtained and refined by refinement procedure. The experimental outcome shows the efficiency of the proposed approach over existing approaches.

References 1. Angulo, J., & Serra, J. (2003). Automatic analysis of DNA microarray images using mathematical morphology. Journal Bioinformatics, 19, 553–562. 2. Ceccarelli, M., & Antoniol, G. (2006). A deformable grid-matching approach for microarray images. IEEE Transactions on Image Processing, 15, 153178–153188. 3. Rueda, L., & Vidyadharan, V. (2006). A hill-climbing approach for automatic gridding of cDNA microarray images. IEEE Transactions on Computational Biology and Bioinformatics, 3, 72–83. 4. Antoniol, G., & Ceccarelli, M. (2004). A Markov random field approach to microarray image gridding. In Proceedings of the 17th International Conference on Pattern Recognition, (pp. 550– 553). 5. Brandle, N., & Bischof, H. (2003). Robust DNA microarray image analysis. Machine Vision and Applications 15, 11–28. 6. Qi, F., Luo, Y., & Hu, D. (2006). Recognition of perspectively distorted planar grids. Pattern Recognition Letters, 27, 1725–1731.


7. Qin, L., & Rueda, L. (2005). Spot detection and image segmentation in DNA microarray data. Applied Bioinformatics, 4. 8. Zacharia, E., & Maroulis, D. (2008). Microarray image gridding via an evolutionary algorithm. In Proceedings of the IEEE International Conference on Image Processing (pp. 1444–1447). 9. Rueda, L., & Rezaeian, I. (2011). A fully automatic gridding method for cDNA microarray images. BMC Bioinformatics, 12, 113. 10. Katzer, M., & Kummert, F. (2003). Methods for automatic microarray image segmentation. IEEE Transactions on NanoBioscience, 2, 202–214. 11. Wang, X. H, & Istepanian, R. S. H. (2003). Application of wavelet modulus maxima in microarray spots recognition. IEEE Transactions on NanoBioscience, 2, 190–192. 12. Yang, Y., & Buckley, M. (2002). Comparison of methods for image analysis on cDNA microarray data. Journal of Computational and Graphical Statistics(11), 108–136, (2002). 13. Kuklin, S., Shams, S., & Shah, S. (2000). Automation in microarray image analysis with AutoGene. Journal of the Association for Laboratory Automation, 5. 14. Karthik, S. A., & Manjunath, S. S. (2018). An automated and efficient approach for spot identification of microarray images using X-covariance. In: Proceedings of International Conference on Cognition and Recognition, Lecture Notes in Networks and Systems 14. (pp. 275–282). Springer Nature Singapore Pte Ltd. 15. Rueda, L. (2007). Sub-grid detection in DNA microarray images. In Proceedings of the IEEE Pacific-RIM Symposium on Image and Video Technology (pp. 248–259). 16. Shao, G.-F., Yang, F., & Zhang, Q. (2013). Using the maximum between-class variance for automatic gridding of cDNA microarray images. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 10, 181–192. 17. Wang, Y., Shih, F., & Ma, M. (2005). Precise gridding of microarray images by detecting and correcting rotations in subarrays. In Proceedings of the 8th Joint Conference on Information Sciences (pp. 1195–1198). 18. Ceccarelli, M., & Petrosino, A. (2001). The orientation matching approach to circular object detection. In Proceedings of the International Conference on Image Processing (Vol. 3, pp. 712– 715). IEEE. 19. Duda, R. O., & Hart, P. E. (1972). Use of the hough transformation to detect lines and curves in pictures. Communications of the ACM, 15(1), 11–15. 20. Karthik, S. A., & Manjunath, S. S. (2019). Automatic gridding of noisy microarray images based on coefficient of variation. Informatics in Medicine Unlocked, 17, 100264. 21. Karthik, S. A., & Manjunath, S. S. (2020). Microarray spot partitioning by autonoumsly organising maps thorugh contour model. International Journal of Electrical and Computer Engineering (IJECE), 10(1), 746. 22. Karthik, S. A., & Manjunath, S. S. (2019). A Review on Gridding Techniques of Microarray Images, 1st International Conference on Advanced Technologies in Intelligent Control, Environment, Computing, and Communication Engineering. IEEE. 23. Karthik, S. A., & Manjunath, S. S. (2018). An Enhanced Approach for Spot Segmentation of Microarray Images. Procedia Computer Science 132, 226–235.

Neighbours on Line (NoL): An Approach to Balance Skewed Datasets Shivani Tyagi, Sangeeta Mittal, and Niyati Aggrawal

1 Introduction

The data imbalance problem occurs when one class has significantly more or fewer instances in the sample distribution relative to the other class(es) [1]. Real-world datasets in many domains, like medical diagnosis, intrusion detection, fraud transactions and bioinformatics, are highly imbalanced [2]. For instance, in a medical disease diagnosis dataset there are normally only a few positive examples out of thousands of negative ones [3]. Another real-world example is credit card transaction datasets, with a typical ratio of 10,000:1 or more of legitimate transactions to fraudulent ones [4]. Typically, the minority class is the class of interest, and thus more important to model; moreover, failing to identify a fraudulent application is more expensive for the company than wrongly suspecting a credible one [5]. In the case of an imbalanced dataset with 99% of instances in one class, even a null classifier, which always predicts class = 0, would obtain over 99% accuracy. Thus, to improve the performance of the classifier, the data must be balanced to a sufficient extent. The methods to handle imbalanced classification work in two ways [6]. The first is making algorithm-level changes in the learning process; this involves modifying the loss function to reflect the asymmetric cost of misclassifying the minority class. In the other type of approach, the dataset is resampled for data balancing: the methods of undersampling and oversampling are used to modify an imbalanced dataset into a balanced distribution by altering the size of the original dataset.

S. Tyagi · S. Mittal · N. Aggrawal (B), Department of Computer Science and Engineering, Jaypee Institute of Information Technology Noida, Noida, U.P., India, e-mail: [email protected]; S. Tyagi e-mail: [email protected]; S. Mittal e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021, D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_42


In this paper, we work out a novel undersampling technique that reduces the majority class data instances without decreasing the classification accuracy on new instances. The technique has been applied to a well-known imbalanced dataset and compared with another popular undersampling method; the obtained results establish the merits of the proposed method.

2 Related Work

Undersampling-based data balancing techniques work by reducing the majority class samples until their number becomes equal to that of the minority class. Most of the undersampling approaches available in the literature are distance-based, and all of them require a k Nearest Neighbour (kNN) based distance for the estimation of similar data points [7]. CNN (Condensed Nearest Neighbour) identifies three different categories of data instances, namely outliers, prototype points and absorbed points [8]; this approach has been used for comparison with the undersampling technique proposed in this work. Another approach involves removing TLs (Tomek Links) [9]. A pair of instances is said to be a Tomek link if the two are the only nearest neighbours of each other but belong to different classes. These points are boundary cases or noise that need to be removed, as there is a great possibility of them being misclassified; only the majority class instances of the Tomek links are removed. OSS (One-Sided Selection) [10] uses the Tomek Links undersampling approach followed by the Condensed Nearest Neighbour method. ENN (Edited Nearest Neighbour) [11] is an extension of the OSS method that considers the three nearest neighbours of each instance of the frequently occurring class and removes those instances whose class differs from at least two of their three nearest neighbours. NCL (Neighbourhood Cleaning Rule) [12] is a repeated ENN, as it also finds the three nearest neighbours of each instance in the training set and removes the majority class ones until it cannot remove any more. A reduced dataset has been used to prepare a better discriminating feature space using dissimilarity metrics in [13]. Some techniques intervene in the classifier learning process to better train the model for imbalanced datasets; the authors in [14] design such interventions in cluster formation for the kNN method.

3 Proposed Data Balancing by Undersampling

The method proposed in this work has been named Neighbours-on-Line (NoL). The detailed algorithm is given in the listing (Algorithm NoL) below. The algorithm takes as input only the majority class samples out of the whole dataset, together with the number of nearest neighbours to be considered. In lines 1-6, the algorithm finds the k nearest neighbours of a randomly chosen majority class sample. As described for the reduce() function in lines 9-11, the attribute-wise difference with one of these k nearest neighbours is computed and used for further processing.
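The full pseudocode listing follows below; its tail is cut off in this reproduction, so the sketch here should be read with care. It only illustrates the neighbour-finding and attribute-wise difference steps that are actually described (scikit-learn's NearestNeighbors is assumed for the k-NN search), and the final removal test marked in the comments is an assumption introduced for illustration, not the authors' exact criterion.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def nol_step(majority, k=5, threshold=0.5, rng=np.random.default_rng(0)):
    """One NoL-style reduction step over the majority-class samples only."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(majority)   # +1: each point is its own neighbour
    _, nnarray = nn.kneighbors(majority)
    i = rng.integers(len(majority))                           # a randomly chosen majority sample
    neighbour = majority[rng.choice(nnarray[i][1:])]          # one of its k nearest neighbours
    diff = neighbour - majority[i]                            # attribute-wise difference
    # Assumed removal test (the original condition is truncated in the source): drop majority
    # samples lying within threshold*|diff| of sample i on every attribute.
    close = np.all(np.abs(majority - majority[i]) <= threshold * np.abs(diff), axis=1)
    close[i] = False                                          # keep the reference sample itself
    return majority[~close]

majority = np.random.default_rng(1).normal(size=(200, 4))     # placeholder majority-class data
print(nol_step(majority).shape)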


Algorithm NoL(T, k)
Input: Number of majority class samples T; number of nearest neighbours k
Output: Reduced majority class samples
1. numattrs = number of attributes
2. majority[][]: array of actual majority class samples
3. i = 0
4. set mark(majority[][]) = 0
5. while (!mark(majority[][]))
6.   compute the k nearest neighbours for majority(i) and save the indices in nnarray[]
7.   reduce(i, nnarray)
8. endwhile

reduce(i, nnarray)
9. Choose a random number between 1 and k, call it nn. // This step chooses one of the k nearest neighbours of i.
10. for attr ← 1 to numattrs
11.   diff = majority[nnarray[nn]][attr] - majority[i][attr]
12. threshold = random number between 0 and 1, or explicitly chosen by the user
13. for every sample j in majority[][]
14.   if (abs(majority[j][attr] - majority[i][attr])

Gaussian SVM > Quadratic discriminant > Logistic Regression > Linear discriminant. Random subsets of misclassified images were analysed for all these models. In general, the mix contained all types of 'bad' images in about equal proportion. Thus, we can be assured that the classifier did not fixate on any particular measurement. However, there are two observations that need to be stated. The most pertinent observation was the consistent asymmetry between the types of misclassification errors: the bad images were classified as good more often than vice versa. The chance exception was the case of the quadratic discriminant for M = 200, wherein all the erroneous classifications were for 'good' images, which is an outstanding aberration. Another odd consistency was regarding the characters in the misclassified images. Despite all being equal in frequency, the character 'W' happened to be in most of them, regardless of the label. Other frequent characters were 'N', 'M' and 'X'. It can be observed that these characters have the most prominent diagonal segments. But that


is merely stating correlations without establishing causality. A concrete reason for neither of these oddities could be established. I heartedly welcome any analysis the reader would like to share which could shed more light on these observations.

6 Future Work As and when such a system is designed, using it in detecting print errors would be the least of its use. While advances in artificial intelligence has increased the intelligence quotient of our computers, incrementing emotional intelligence is still a humongous and very coveted task. Although we now have a plethora of AI architectures, they all have limited scope of implementation where user/data privacy is a concern. If we can establish that human emotions can be inferred from digital portraits with a remarkable accuracy via such limited and incoherent measurement that the original image can never be reconstructed to reveal the user identity, it would be landmark in the field of emotion recognition. Having said that, the face database would be much less cooperative than the one used here as faces are much more diverse and have relatively small features that help in accurate detection. This idea can be extended to any field where data privacy is as important as accurate inference, if not more. Many crucial analysis was left out in this work for the sake of accentuating the proof of concept. The SPC modelled Poisson noise in its measurements and pursued the reconstruction accordingly. Such an addition might affect accuracy or privacy either favourably or otherwise. Deep learning empowered reconstruction algorithms like Reconnet [26] Stacked Denoising Auto-encoders (SDAs) [27] have lately shown remarkable efficacy in reconstructing images from very few compressed samples (as low as 1%). Testing the data privacy against such reconstruction algorithms should be an imperative course in the line of its advancement. On the other hand, deep neural networks are known to be much ‘deeper’ in understanding the classifier of many datasets which are otherwise difficult to extricate. Such advancements should invigorate both objectives of this paper. There are many successful variants of compressive sensing which make use of the structure of data to provide even better reconstructions. Among the most pertaining ones would be block sparsity-based reconstruction [28]. Since our data is clearly block sparse in spatial as well as transformed domain, such an algorithm is likely to deliver better results than the basic basis pursuit used here.

7 Conclusion

Our mantra in this paper has been to 'maximise inference while minimising acquisition'. We took the print error detection problem in reprography, where the sensitivity of the print data forbids digitisation, to demonstrate that, using compressive classification, we can achieve remarkable accuracy in feature detection while


ensuring that the data is practically unaccessed. Even at compression ratio as low as 0.3 and 0.6%, the results were very satisfactory. We tested multiple models to achieve the desired classification rate. Kernel SVMs (Cubic and Gaussian specifically) turned out to be most efficient in general. It was observed that even simple linear discriminants provided astounding accuracy when the compression ratio was relatively higher, but not high enough to provide meaningful reconstruction.

References 1. Candes, E. J., Romberg, J., & Tao, T. (2006). Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2), 489–509. 2. Donoho, D. L. (2006). Compressed sensing. IEEE Transactions on Information Theory, 52(4), 1289–1306. 3. Candes, E. J., & Wakin, M. B. (2008). An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2), 21–30. 4. Vaidyanathan, P. P. (2001). Generalizations of the sampling theorem: Seven decades after nyquist. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 48(9), 1094–1109. 5. Van Den Berg, E., & Friedlander, M. P. (2008). Probing the pareto frontier for basis pursuit solutions. SIAM Journal on Scientific Computing, 31(2), 890–912. 6. Rebollo-Neira, L., & Lowe, D. (2002). Optimized orthogonal matching pursuit approach. IEEE Signal Processing Letters, 9(4), 137–140. 7. Tropp, J. A., & Gilbert, A. C. (2007). Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on information theory, 53(12), 4655–4666. 8. Donoho, D. L., Drori, I., Tsaig, Y., & Starck, J. L. (2006). Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit. Department of Statistics: Stanford University. 9. Needell, D., & Tropp, J. A. (2009). Cosamp: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26(3), 301–321. 10. Duarte, M. F., Davenport, M. A., Takhar, D., Laska, J. N., Sun, T., Kelly, K. F., et al. (2008). Single-pixel imaging via compressive sampling. IEEE Signal Processing Magazine, 25(2), 83–91. 11. Davenport, M. A., Duarte, M., Wakin, M. B., Laska, J. N., Takhar, D., Kelly, & K., Baraniuk, R. (2007, February). The smashed filter for compressive classification and target recognition art. no. 64980h. In Proceedings of SPIE 6498. 12. Bianchi, T., Bioglio, V., & Magli, E. (2016). Analysis of one-time random projections for privacy preserving compressed sensing. IEEE Transactions on Information Forensics and Security, 11(2), 313–327. 13. Braun, H., Turaga, P., & Spanias, A. (2014, May). Direct tracking from compressive imagers: A proof of concept. In: 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 8139–8142). 14. Balthasar, M. R., Leigsnering, M., & Zoubir, A. M. (2012, August). Compressive classification for through-the-wall radar imaging. In 2012 Proceedings of the 20th European Signal Processing Conference (EUSIPCO) (pp. 2288–2292). 15. Amaravati, A., Xu, S., Romberg, J., & Raychowdhury, A. (2018). A 130 nm 165 nj/frame compressed-domain smashed-filter-based mixed-signal classifier for “in-sensor” analytics in smart cameras. IEEE Transactions on Circuits and Systems II: Express Briefs, 65(3), 296–300. 16. Rachlin, Y., & Baron, D. (2008, September). The secrecy of compressed sensing measurements. In 2008 46th Annual Allerton Conference on Communication, Control, and Computing (pp. 813–817).


17. Orsdemir, A., Altun, H. O., Sharma, G., & Bocko, M. F. (2008, November). On the security and robustness of encryption via compressed sensing. In: MILCOM 2008—2008 IEEE Military Communications Conference (pp. 1–7). 18. Cambareri, V., Haboba, J., Pareschi, F., Rovatti, H. R., Setti, G., & Wong, K. (2013, May). A two-class information concealing system based on compressed sensing. In: 2013 IEEE International Symposium on Circuits and Systems (ISCAS2013) (pp. 1356–1359). 19. Cambareri, V., Mangia, M., Pareschi, F., Rovatti, R., & Setti, G. (2015). Low-complexity multiclass encryption by compressed sensing. IEEE Transactions on Signal Processing, 63(9), 2183–2195. 20. Cambareri, V., Mangia, M., Pareschi, F., Rovatti, R., & Setti, G. (2015). On known-plaintext attacks to a compressed sensing-based encryption: A quantitative analysis. IEEE Transactions on Information Forensics and Security, 10(10), 2182–2195. 21. Zhou, S., Lafferty, J., & Wasserman, L. (2009). Compressed and privacy-sensitive sparse regression. IEEE Transactions on Information Theory, 55(2), 846–866. 22. Duncan, G. T., Pearson, R. W., et al. (1991). Enhancing access to microdata while protecting confidentiality: Prospects for the future. Statistical Science, 6(3), 219–232. 23. Daubechies, I. (1992). Ten lectures on wavelets. Vol. 61. Siam. 24. Grant, M., & Boyd, S. (2014, March). CVX: Matlab software for disciplined convex programming, version 2.1. http://cvxr.com/cvx. 25. Grant, M., & Boyd, S. (2008). Graph implementations for nonsmooth convex programs. In V. Blondel, S. Boyd, H. Kimura (Eds.), Recent Advances in Learning and Control, (pp. 95–110). Lecture Notes in Control and Information Sciences, Springer-Verlag Limited. http://stanford. edu/~boyd/graph_dcp.html. 26. Kulkarni, K., Lohit, S., Turaga, P., Kerviche, R., & Ashok, A. (2016) Reconnet: Non-iterative reconstruction of images from compressively sensed measurements. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 449–458). 27. Mousavi, A., Patel, A. B., & Baraniuk, R. G. (2015). A deep learning approach to structured signal recovery. In 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton) (pp. 1336–1343). IEEE. 28. Eldar, Y. C., & Bolcskei, H.: Block-sparsity: Coherence and efficient recovery. In 2009 IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 2885–2888). IEEE.

Data Management Techniques in Hadoop Framework for Handling Small Files: A Survey Vijay Shankar Sharma and N. C. Barwar

1 Introduction

Big Data is a wide term that covers all sorts of huge data, i.e. organized, unorganized and semi-organized. In the present era, with the increasing popularity of the internet and social media sites, a large volume of data is produced every day. In the year 2012, 2.73 Lac Exabytes of digital data were stored across the globe. This explosion of data is increasing day by day, and it has been estimated by IDC (International Data Corporation) that the volume of digital data will reach up to 35 Lac Exabytes by 2020. Due to this changing scenario of digital data use, traditional techniques for storing, processing and managing huge data are not sufficient, which creates a great demand for distributed computing frameworks that can handle massive data sets efficiently. The Apache foundation provides a powerful distributed computing framework, Hadoop; this framework is based on the Map Reduce parallel programming model and can easily handle massive data processing in a distributed environment. When dealing with millions of small files, a lot of issues arise: more space is required for the Name Node in RAM, network traffic increases, resulting in more time to store the data, Map Reduce takes longer to process requests, and so on. Therefore, efficient data management techniques are required that can deal with the small file problem of Hadoop. A number of solutions have been proposed for the small file problem of Hadoop, i.e. Hadoop Archives (HAR), Sequence File, Combine File Input Format, etc. These proposed solutions are discussed in detail in Section 3 of the paper; however, these solutions do not utilize the performance of HDFS efficiently and are not able to provide optimal performance for storing, managing and processing small-file applications.

V. S. Sharma (B) · N. C. Barwar, Department of Computer Science Engineering, MBM Engineering College, Jodhpur, India, e-mail: [email protected]; N. C. Barwar e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021, D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_48


In this paper, a survey has been conducted of the various available solutions to the small file problem that have been proposed by several researchers, and a detailed comparative study is presented for clear understanding. Finally, in the conclusion, a number of important characteristics of a unique and optimal data management technique for the solution of the small files problem are discussed. The rest of the paper is organized as follows. Section 2 explains the small file problem in detail. Section 3 presents the existing approaches for managing small files in Hadoop, with their merits and demerits. Section 4 briefs the various approaches for the solution of small file storage in HDFS. Section 5 presents a tabular summary of the various data management techniques proposed by several researchers. Conclusions and future work are drawn in Sect. 6.

2 Small File Problem in HDFS

The small file problem can easily be understood from the fact that native Hadoop is designed especially for the reliable storage, efficient processing and management of very huge files. There is a server in Hadoop, called the "Name Node", that is responsible for managing all the files in HDFS; for this, the Name Node keeps the metadata of each and every file stored in HDFS in its main memory. In this situation, when dealing with millions of small files, the performance of HDFS degrades severely, because these small files impose an excess burden on the Name Node in terms of handling blocks that are much smaller than the configured block size, and millions of small files require more memory to manage and process. In addition, HDFS does not provide any prefetching technique to improve the I/O performance of the files and does not consider correlations among files to improve access efficiency.
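The Name Node memory pressure can be put into rough numbers. A commonly cited approximation (not a figure from this paper) is that each file, directory and block object occupies on the order of 150 bytes of Name Node heap, so a file that fits in a single block costs roughly two such objects:

BYTES_PER_OBJECT = 150            # commonly cited approximation of Name Node heap per object

def namenode_heap_gb(num_files, objects_per_file=2):
    """Rough Name Node heap needed when every small file occupies a single block."""
    return num_files * objects_per_file * BYTES_PER_OBJECT / 1024 ** 3

for n in (1_000_000, 10_000_000, 100_000_000):
    print(f"{n:>11,} small files -> ~{namenode_heap_gb(n):.1f} GB of Name Node heap")

By contrast, the same volume of data stored as a few large files costs the Name Node almost nothing, which is exactly the imbalance that the techniques surveyed below try to remove.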

3 Existing Approaches in Hadoop for Managing Small Files

3.1 HAR

A HAR [1] archive can be generated by the Hadoop archive command; the command executes a Map Reduce task by which small files are archived into HAR files. This method can reduce the memory consumption of the Name Node. But when HDFS reads a HAR file, the system needs to read two index files as well as the data file, so the read efficiency of small files is very low. The small files are packed into large files and Hadoop archives are created; this will reduce the map operations and


lessen the storage overhead on the Name Node, which results in increased Hadoop performance. The "hadoop archive" command is used to create the HAR file:

hadoop archive -archiveName <name of archive> -p <parent path> <source directories> <destination>

Example: hadoop archive -archiveName example.har -p /hduser/hadoop directory1 directory2 /hduser/vijay

3.2 Sequence File Sequence File [2] is a binary file format provided by the Hadoop API, in which multiple small files can be merged into large files by Sequence File. Its data structure is composed of a series of binary key/value components. For this technique, in order to merge small files into one Sequence File, Sequence File stores the file name into the key and the file contents into the value. Wherein, different compression strategies can be used to compress file block in small files. But when reading the massive small files, HDFS can only retrieve the file content sequentially and the reading efficiency is still low.

3.3 Combine File Input Format

Combine File Input Format [3] is a new Input Format in which a directory including a number of small files is used as one input, instead of using a single file as input. In order to improve the execution speed of the Map Reduce task, Combine File Input Format merges multiple files into a single split, so that each mapper task can handle the data of multiple files; in addition, the storage location of the data is taken into account. The Input Format generates Input Splits, and for each Input Split the Map Reduce processing framework spawns one map task. The modification to the Input Format class is achieved by combining multiple files in a single split; hence each map task receives more input to process. When mappers receive more input, the time required to execute millions of small files is reduced significantly. In HDFS there is generally a provision for more than one mapper and a single reducer; to achieve parallelism, Combine File Input Format allows multiple reducers. The Name Node is responsible for combining the small files into a single Input Split; this Input Split is given to the map task, which generates intermediate results. These intermediate results are then available as input for the multiple reducers, which provide sorted, merged output used for further processing. In this way, Combine File Input Format reduces the processing time and achieves a more than average performance improvement.


3.4 Map File

A Map file [4] is a sorted Sequence File made of two files, a data file and an index file. Data records can easily be searched because the keys used to access the data file are managed by the index file. The data file holds the key–value pairs as records, sorted by key. When access to a Map File is required, only a lookup of the key is needed; there is no need to traverse the whole file. However, there is no provision for a flexible Application Programming Interface (API), because the append method is supported only for keys that preserve the ordering: keys must always be appended in sorted form.
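The sorted-index behaviour can be illustrated with a small sketch; it is not the Hadoop MapFile API, just a Python stand-in showing the binary-search lookup and the sorted-append restriction mentioned above.

```python
import bisect


class SortedIndex:
    """Sketch of the MapFile idea: a sorted key list plus offsets into a data file."""

    def __init__(self):
        self.keys, self.offsets = [], []

    def append(self, key, offset):
        # Mirrors the MapFile restriction: keys may only be appended in sorted order.
        if self.keys and key <= self.keys[-1]:
            raise ValueError("MapFile-style index only accepts keys in sorted order")
        self.keys.append(key)
        self.offsets.append(offset)

    def lookup(self, key):
        # Binary search over the sorted keys: O(log n) instead of scanning the data file.
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            return self.offsets[i]
        return None


idx = SortedIndex()
for k, off in [("a.txt", 0), ("b.txt", 512), ("c.txt", 2048)]:
    idx.append(k, off)
print(idx.lookup("b.txt"))  # -> 512
```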

4 Approaches for Solution of Small File Storage in HDFS

There are three main approaches to solve the small file problem; several researchers have worked on these approaches and proposed various efficient data management techniques for the storage of small files in the Hadoop framework.

• File Merging Technique: in the merging operation, related small files are merged into a single large file, and this operation is done at the HDFS client. Merging reduces the load on the Name Node, as the Name Node only keeps metadata of the merged files; the original small files are not tracked individually, so the number of files managed by the Name Node is reduced.
• Use of Metadata: metadata of the small files is used to store the mapping details between each small file and its merged file. This metadata can be used efficiently to improve the access performance and the memory utilization of the Name Node (a minimal sketch of this mapping, combined with caching, follows this list).
• Prefetching and Caching of Files: the important concern when dealing with small files is access efficiency, which can be greatly improved if prefetching and caching techniques are employed. Using prefetching and caching, disk I/O can be minimized and response time can be reduced by exploiting access locality.
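The sketch below illustrates the second and third approaches together, under assumed file names, offsets and a tiny cache capacity (none of which come from a cited system): a per-file metadata record (merged file, offset, length) plus a small LRU cache standing in for prefetching/caching of hot files.

```python
from collections import OrderedDict

# Hypothetical metadata kept per small file after merging: which merged file
# holds it and at which offset/length it can be read back.
metadata = {
    "log_0001.txt": {"merged_file": "merged_block_0", "offset": 0,    "length": 4096},
    "log_0002.txt": {"merged_file": "merged_block_0", "offset": 4096, "length": 1024},
}


class PrefetchCache:
    """Tiny LRU cache standing in for prefetching/caching of frequently read files."""

    def __init__(self, capacity=2):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, name, loader):
        if name in self.items:
            self.items.move_to_end(name)       # cache hit: refresh recency
            return self.items[name]
        value = loader(name)                   # cache miss: read via the metadata map
        self.items[name] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)     # evict the least recently used entry
        return value


def load_from_merged(name):
    info = metadata[name]
    # A real system would seek to info["offset"] inside info["merged_file"] on HDFS.
    return f"<{info['length']} bytes from {info['merged_file']}@{info['offset']}>"


cache = PrefetchCache()
print(cache.get("log_0001.txt", load_from_merged))
```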

5 Analysis of Various Data Management Techniques for Small File Storage in HDFS

Based on the approaches explained in the previous section, several researchers have proposed various efficient data management techniques for small files. Table 1 summarizes the work done in the field of small file management. Each research paper in this table is studied in a manner that provides a quick and deeper insight, explaining which data management technique is used, what has been enhanced and which parameters are evaluated. Table 2 presents a comparative study and analysis of the main data management techniques for small file management.

Table 1 Summary of the solutions to the small file problem in HDFS
(Columns: S. No. | Title of paper | Data management technique used | What is enhanced | Parameters for evaluation)

1. Bo et al. [5] — Technique: a two-level prefetching mechanism comprising a local index file; the local index file records the offset and length of each file, according to which the original files are merged. Enhanced: to reduce the size of metadata on the Name Node, all related small files of a PPT courseware are merged into a bigger file; the two-level prefetching mechanism is introduced to improve the efficiency of accessing small files. Parameters: file number per KB of memory (memory usage); MB of accessed files per millisecond (access efficiency); milliseconds per accessed file.

2. Tchaye-Kondi et al. [6] — Technique: index-based archive file (HPF); extensible hash function to distribute metadata; centralized cache management. Enhanced: reduces the metadata load in Name Node memory along with fast I/O operations; improves seek and append operations. Parameters: access performance without caching; access performance with caching; Name Node memory usage.

3. Tao et al. [7] — Technique: merging algorithm based on linear hashing with an extendable index; index file splitting technique. Enhanced: the LHF technique is compared with Map File, HDFS and HAR and found better in terms of access time and memory usage at the Name Node. Parameters: access time; throughput of access time; memory consumption at the Name Node.

4. Jing et al. [8] — Technique: DQFS, the use of a dynamic queue according to file size; establishment of a secondary index for merged files; prefetching for better access. Enhanced: an analytical hierarchical process identifies the dynamic queue size giving the best system performance and evaluates the performance of small files of different size ranges across four indexes. Parameters: access time; memory usage; upload time; combined efficiency.

5. Peng et al. [9] — Technique: merging technique based on file distribution and file correlation (MBDC). Enhanced: import efficiency of small files; storage efficiency of small files; reading efficiency of related files. Parameters: time cost of importing the merged files into HDFS; memory consumption of the Name Node; time consumption of file access.

6. Cai et al. [10] — Technique: optimized merge strategy; prefetching and caching. Enhanced: improves the reading speed; Name Node memory usage. Parameters: access time for picture and text files; memory consumption at the Name Node.

7. Kim and Yeom [11] — Technique: direct data referencing using pointers; prefetching of data blocks. Enhanced: reduces storage I/O operations; reduces application-level latency; improves file operation throughput. Parameters: normalized execution time; access time; throughput (transactions/s).

8. Lyu et al. [12] — Technique: merging algorithm for metadata and blocks of metadata; special memory subsystem (caching) using a log-linear model. Enhanced: reduces memory usage; reduces access time. Parameters: memory consumption in the Name Node; time consumption of reading files.

9. Fu et al. [13] — Technique: file access in the cloud system is optimized using the block replica placement technique; storage and access frequencies of small files are optimized using an In-Node model; comparison with HAR, D2I and Sequence File. Enhanced: reduces the reading and writing time of small files; improves data query efficiency and access latency; reduces the number of files, so the size of file metadata can be reduced. Parameters: average disk utilization of Data Nodes for different numbers of files; memory consumption of Data Nodes for different numbers of files; file reading time in random/sequential access; file writing time in random/sequential access.

10. Mu et al. [14] — Technique: improving the storage architecture by inserting a processing layer that judges the size of files transferred from the client. Enhanced: improves the block storage architecture and establishes a mechanism of secondary indexes; saves Name Node memory space and improves storage access efficiency. Parameters: number of files versus upload time; number of files versus download time.

11. Wang et al. [15] — Technique: small file merging and prefetching technique using probabilistic latent semantic analysis (PLSA). Enhanced: reduces request–response delay; improves the prefetching hit ratio; reduces memory consumption of the MDS (MDS workload); comparison of HDFS, HAR, the scheme of Dong [16] and the proposed PLSA model. Parameters: average request–response delay; MDS workload; hit ratio; MDS memory consumed.

12. He et al. [17] — Technique: file merging based on the balance of data blocks using a file merging queue and a tolerance queue. Enhanced: reduces the memory usage at the major nodes of the collection; improves the overall efficiency of the collection. Parameters: time to import files into HDFS; memory usage at the Name Node; data processing speed.

13. Fu et al. [18] — Technique: iFlatLFS, a hybrid storage system integrating different storage systems; flat storage architecture and a simple metadata technique; metadata consistency policy. Enhanced: optimizes the storage and access of millions of small files for web-based applications; iFlatLFS uses a flat storage architecture and metadata technique to manage millions of small files with direct access to the raw disks. Parameters: random access performance; size of metadata; critical point of file size; ratio of write requests to total requests; comparative performance analysis of Randomio, iFlatLFS, Ext4, XFS and ReiserFS.

14. Mao et al. [19] — Technique: file correlations are considered when merging files; a prefetching and caching strategy is used while accessing files. Enhanced: minimizes the memory utilization of the Name Node; enhances the access performance for a large number of small files; reduces the seek time and the delay in reading files. Parameters: memory usage of the Name Node and Data Nodes; file reading/writing efficiency; concurrent reading efficiency.

15. Guru Prasad et al. [20] — Technique: merging algorithm based on an array of abstract paths; efficient indexing mechanism. Enhanced: reduces Name Node memory usage and therefore improves the overall processing of small files; reduces the time to move local files into HDFS. Parameters: memory usage of the Name Node for storing metadata; time taken by the MapReduce phase to process files; time taken to move files from the local file system into HDFS.

16. Dong et al. [16] — Technique: classification of files (structurally and logically related files); local and global index file strategy. Enhanced: improves storage efficiency by a factor of 9 (FMP-SSF) and 2 (FGP-ISF); reduces the per-file metadata server interaction. Parameters: file number per KB; MSPF (ms).

Table 2 Comparative analysis of solutions to the small file problem in Hadoop [6]
(Columns: Feature | Type | Master memory usage | Support append | Modify HDFS | Use extra system | HDFS pre-upload required | Creation overhead | Reading efficiency (complexity))

HDFS — Type: based on DFS; Master memory usage: Very High; Support append: Yes; Modify HDFS: —; Use extra system: —; Pre-upload required: Yes; Creation overhead: V. High; Reading efficiency: High.
BlueSky [5] — Type: based on Archive & Index; Master memory usage: Low; Support append: Yes; Modify HDFS: No; Use extra system: No; Pre-upload required: No; Creation overhead: High; Reading efficiency: High.
Hadoop Archive [1] — Type: based on Archive & Index; Master memory usage: Low; Support append: No; Modify HDFS: No; Use extra system: No; Pre-upload required: Yes; Creation overhead: V. High; Reading efficiency: Low.
SequenceFile [2] — Type: based on Archive; Master memory usage: Very Low; Support append: Yes; Modify HDFS: No; Use extra system: No; Pre-upload required: No; Creation overhead: Low; Reading efficiency: Low (O(n)).
MapFile [4] — Type: based on Archive & Index; Master memory usage: Very Low; Support append: Special keys; Modify HDFS: No; Use extra system: No; Pre-upload required: No; Creation overhead: Moderate; Reading efficiency: High (O(log n)).
Hadoop Perfect File [6] — Type: based on Archive & Index; Master memory usage: Low; Support append: Yes; Modify HDFS: No; Use extra system: No; Pre-upload required: No; Creation overhead: Moderate; Reading efficiency: Very High (O(1)).
Linear Hashing Function [7] — Type: based on Archive & Index; Master memory usage: Low; Support append: Yes; Modify HDFS: No; Use extra system: No; Pre-upload required: No; Creation overhead: Moderate; Reading efficiency: High.
DQSF [8], He et al. [17], Cai et al. [10], Bok et al. [21] — Type: based on Archive & Index; Master memory usage: Low; Support append: Yes; Modify HDFS: No; Use extra system: No; Pre-upload required: No; Creation overhead: High; Reading efficiency: High.
SHDFS [9] — Type: based on Archive & Index; Master memory usage: Low; Support append: Yes; Modify HDFS: Yes; Use extra system: Yes; Pre-upload required: No; Creation overhead: High; Reading efficiency: High.
OMSS [22], TLB-MapFile [23] — Type: based on MapFile; Master memory usage: Very Low; Support append: For special keys; Modify HDFS: No; Use extra system: No; Pre-upload required: No; Creation overhead: Moderate; Reading efficiency: High.
Zheng et al. [24] — Type: based on Archive & HBase; Master memory usage: Low; Support append: Yes; Modify HDFS: Yes; Use extra system: Yes; Pre-upload required: No; Creation overhead: High; Reading efficiency: High.

6 Conclusion and Future Work

In this paper, a quick review has been conducted of some important research papers (Table 1). The main finding of the survey is that most authors propose efficient data management techniques for small files using the concepts of file merging strategies, hash functions, file correlation, prefetching, caching of frequently accessed files, secondary indexes for merged files, block replica placement algorithms, metadata management, local and global index file strategies, and direct data referencing using pointers. The performance of the data management techniques proposed by the researchers in the survey is evaluated using the following common parameters: Name Node memory usage, access time and access efficiency for the small files, storage efficiency, size of metadata, request–response delay, time taken to move files from the local file system to HDFS, prefetching hit ratio, execution time, etc. The main concern of the authors is to improve the access efficiency of small files and to lower the Name Node memory usage, and they have achieved this to an extent. Still, there are some issues that can be addressed in future research work; a few of them are as follows:

• Study of the time consumption associated with variable file sizes when calculating the correlation of files.
• The proposed techniques perform poorly for random access.
• The performance of the proposed data management techniques varies for different data sets.
• Identification of large and small files and setup of cut-off points to classify the files by their size.
• Identification and formulation of new relationships among the volume of a merged file and the storage and access efficiencies.
• Implementation of hybrid storage architectures.

Finally, the conclusion of this paper is that a more effective data management technique is required, one that can provide an all-in-one solution for all types of small files while keeping access efficiency at the highest level and Name Node memory usage at the lowest level.

References 1. HAR Online]. Available: https://hadoop.apache.org/docs/r1.2.1/hadoop_archives.html. 2. SequenceFile. [Online]. Available: https://examples.javacodegeeks.com/enterprise-java/apa che-hadoop/hadoop-sequence-file-example/. 3. CombineFileInputFormat. [Online]. Available: https://hadoop.apache.org/docs/r2.4.1/api/org/ apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.html. 4. MapFile. [Online]. Available: https://hadoop.apache.org/docs/r2.7.0/api/org/apache/hadoop/ io/MapFile.html.


5. A novel approach to improving the efficiency of storing and accessing small files on Hadoop: A case study by PowerPoint file—IEEE Conference Publication. https://ieeexplore.ieee.org/ xpls/abs_all.jsp?-arnumber=5557216. 6. Tchaye-Kondi, et al. (2019, April 26). Hadoop perfect file: A fast access container for small files with direct in disc metadata access. ArXiv.org, arxiv.org/abs/1903.05838. 7. Tao, W., et al. (2019). LHF: A new archive based approach to Accelerate massive small files access performance in HDFS. EasyChair Preprints. https://doi.org/10.29007/rft1. 8. Jing, W., et al. (2018). An optimized method of HDFS for massive small files storage. Computer Science and Information Systems, 15(3), 533–548. https://doi.org/10.2298/csis171015021j. 9. Peng, J., Wei, W., Zhao, H., Dai, Q., Xie, G., Cai, J., & He, K. (2018). Hadoop massive small file merging technology based on visiting hot-Spot and associated file optimization. In: 9th International Conference, BICS 2018, Xi’an, China, July 7–8, 2018, Proceedings. https://doi. org/10.1007/978-3-030-00563-4_50. 10. Cai, X., et al. (2018). An optimization strategy of massive small files storage based on HDFS. In: Proceedings of the 2018 Joint International Advanced Engineering and Technology Research Conference (JIAET 2018). https://doi.org/10.2991/jiaet-18.2018.40. 11. Kim, H., & Yeom, H. (2017). Improving small file I/O performance for massive digital archives. In: 2017 IEEE 13th International Conference on e-Science (e-Science). https://doi.org/10.1109/ escience.2017.39. 12. Lyu, Y., Fan, X., & Liu, K. (2017). An optimized strategy for small files Storing and accessing in HDFS. In: 2017 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC). https://doi.org/10.1109/cse-euc.2017.112. 13. Fu, X., Liu, W., Cang, Y., Gong, X., & Deng, S. (2016). Optimized data replication for small files in cloud storage systems. Mathematical Problems in Engineering, 2016, 1–8. https://doi. org/10.1155/2016/4837894. 14. Mu, Q., Jia, Y., & Luo, B. (2015). The optimization scheme research of small files storage based on HDFS. In: 2015 8th International Symposium on Computational Intelligence and Design (ISCID). https://doi.org/10.1109/iscid.2015.285. 15. Wang, T., Yao, S., Xu, Z., Xiong, L., Gu, X., & Yang, X. (2015). An effective strategy for improving small file problem in distributed file system. In: 2015 2nd International Conference on Information Science and Control Engineering. https://doi.org/10.1109/icisce.2015.35. 16. Dong, B., Zheng, Q., Tian, F., Chao, K.-M., Ma, R., & Anane, R. (2012). An optimized approach for storing and accessing small files on cloud storage. Journal of Network and Computer Applications, 35(6), 1847–1862. https://doi.org/10.1016/j.jnca.2012.07.009. 17. He, H., Du, Z., Zhang, W., & Chen, A. (2015). Optimization strategy of Hadoop small file storage for big data in healthcare. The Journal of Supercomputing, 72(10), 3696–3707. https:// doi.org/10.1007/s11227-015-1462-4. 18. Fu, S., He, L., Huang, C., Liao, X., & Li, K. (2015). Performance optimization for managing massive numbers of small files in distributed file systems. IEEE Transactions on Parallel and Distributed Systems, 26(12), 3433–3448. https://doi.org/10.1109/tpds.2014.2377720. 19. Mao, Y., et al. (2015). Optimization scheme for small files storage based on Hadoop distributed file system. International Journal of Database Theory and Application, 8(5), 241–254. 
https:// doi.org/10.14257/ijdta.2015.8.5.21. 20. Improving the performance of processing for small files in Hadoop: A case study of weather data analytics. CiteSeerX, https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.659.7461. 21. Bok, K., et al. (2017). An efficient distributed caching for accessing small files in HDFS. Cluster Computing, 20(4), 3579–3592. https://doi.org/10.1007/s10586-017-1147-2. 22. Sheoran, S., et al. (2017). Optimized MapFile based storage of small files in Hadoop. In: 2017 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). https://doi.org/10.1109/ccgrid.2017.83.


23. Meng, B., et al. (2016). A novel approach for efficient accessing of small files in HDFS: TLB-MapFile. In 2016 17th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD).https://doi.org/ 10.1109/snpd.2016.7515978. 24. Zheng, & Guo. (1970, January). A method to improve the performance for Storing Massive small files in Hadoop. SAO/NASA ADS: ADS Home Page, https://ui.adsabs.harvard.edu/abs/ 2017ccen.confE..22Z.

Maintaining Accuracy and Efficiency in Electronic Health Records Using Deep Learning A. Suresh and R. Udendhran

1 Introduction Deep learning, the new buzzword has changed the technique for computational data processing in data analytics. In the era of traditional statistical approaches, manual work was required to process the data and design meaningful features, but in deep learning, machines can learning the meaningful features straight from the data without any explicit programming. Healthcare organizations employ huge electronic health records (EHR). EHR enabled researchers to employ traditional statistical approaches, for instance, logistic regression or random forests for computational healthcare [1]. However, electronic health data records are increasing which needs effective computing resources and the only solution is deploying deep learning based applications in healthcare for managing electronic health records [2]. However, in healthcare, interpretation and accuracy of deep learning is important as shown in Fig. 1. All important electronic healthcare records, business modules and organization’s applications process confidential information and these systems may be connected to third parties [3]. If any suspicious behaviour and events arise, it must be assessed and perform risk assessment for each system so that we can create audit log for reviewing as well as monitoring [4]. In order to create an effective audit log to successfully address threats, one must incorporate user IDs, details about log on and log off with its key events, terminal identity, access to systems, files as well A. Suresh (B) Department of Computer Science and Engineering, Nehru Institute of Engineering and Technology, T.M.Palayam, Coimbatore, India e-mail: [email protected] R. Udendhran Department of Computer Science and Engineering, Bharathidasan University, Tiruchirappalli, India © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_49


Fig. 1 a Prediction performance of EHR. b Change in the perplexity of response

as networks and its applications, system configurations and its system utilities. The threats related events should be maintained such as activated alarms and activation of protection systems including intrusion detection systems which checks the audit data for any intrusion with the previous intrusion attacks. In the aspect of security of audits of electronic health information, healthcare companies’ process security audits employing audit logs that records key activities for threads of access with modifications as well as transactions. These audit logs are mainly employed for finding unauthorized access to information, accountability of employees, monitoring for inappropriate accesses, recording disclosure of PHI, maintaining or updating patient information and checking compliance with accreditation as well as regulatory rules. Applications which employ audit logs and trail for security as follows: • Applications that support management activities, logistics, administration as well as providing goods and services related to the electronic health record. Certain examples are Electronic health record (EHR), Picture archiving and communication systems (PACS) and finally, electronic patient record (EPR), • Application regarding healthcare knowledge delivery with medical education, research and clinical handling [5]. • Security- and confidentiality-related application that deals with the operation of IT technology without any data leakage. For instance, the web-based healthcare platform used by the Microsoft employs audit logs for sharing the information among them [7]. On the other hand, Oracle had offered an integrated healthcare solution suite with audit logs and trials for the better management of performances for better practices [8] deep learning employs high-capacity neural network which is trained by huge training samples [9]. The major advancements in deep learning were due to the development of computational resources, for instance, graphics processing unit (GPU), cloud storage for handling large volumes of labelled datasets.


2 Proposed Technique

Based on the mathematical representation of the EHR provided in Eq. (1), the proposed approach follows a top-down manner. A multi-layer perceptron (MLP) is employed to produce the representation [10]:

\mathrm{TF}(x, a) = \begin{cases} 0 & \text{if } \mathrm{freq}(x, a) = 0 \\ 1 + \log\bigl(1 + \log(\mathrm{freq}(x, a))\bigr) & \text{otherwise} \end{cases}    (1)

The TF values are therefore assigned according to the occurrences in the EHR: TF(x, a) is 0 when the document does not contain the desired word and is at least 1 when the word is present. The representation of the patient's visit is learned and the MLP is trained as follows: a visit represents a state in a continuous process based on the patient's clinical experience, so given the visit representation we can predict based on what has happened in the past. A softmax classifier predicts the medical codes of the visits within a context window. The cosine function is employed to find the similarity among different electronic records, as given in Eq. (2):

\cos(d_i, d_j) = \frac{d_i \cdot d_j}{\lVert d_i \rVert \, \lVert d_j \rVert}    (2)

By employing Skip-gram, co-occurrence information is used so that codes occurring in the same visit can be represented and predicted from one another. Every coordinate of the information is then embedded into a lower-dimensional non-negative space, which can be easily interpreted; for this purpose, non-negative matrix factorization (NMF) is employed, and by training with ReLU activations the interpretation of each coordinate of the m-dimensional code embedding space can be determined [11].
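A small sketch of how Eqs. (1) and (2) are applied is given below. The code-frequency vectors are hypothetical; the snippet only illustrates computing TF weights for visit vectors over a shared code vocabulary and then their cosine similarity.

```python
import math


def tf_weight(freq):
    """Sublinear TF weight of Eq. (1): 0 for absent codes, 1 + log(1 + log(freq)) otherwise."""
    return 0.0 if freq == 0 else 1.0 + math.log(1.0 + math.log(freq))


def cosine(d_i, d_j):
    """Cosine similarity of Eq. (2) between two equal-length record vectors."""
    dot = sum(a * b for a, b in zip(d_i, d_j))
    norm_i = math.sqrt(sum(a * a for a in d_i))
    norm_j = math.sqrt(sum(b * b for b in d_j))
    return dot / (norm_i * norm_j)


# Hypothetical code frequencies for two patient visits over the same 4-code vocabulary.
visit_a = [tf_weight(f) for f in [3, 0, 1, 5]]
visit_b = [tf_weight(f) for f in [2, 1, 0, 4]]
print(round(cosine(visit_a, visit_b), 3))
```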

3 Experimentation

The data set was extracted from Children's Healthcare of Atlanta (CHOA). The data set consists of patient visits, each of which possesses medical codes, for instance diagnosis, medication and procedure codes. The diagnosis codes follow ICD-9, the medication codes follow the National Drug Codes (NDC), and the procedure codes are based on Category I of the Current Procedural Terminology (CPT), as shown in Fig. 2. For learning the code and visit representations, Adadelta was employed and the optimization is run for a fixed number of epochs. As presented in Table 1, the proposed technique achieves a better accuracy of more than 90% when compared with K-nearest neighbour.


Fig. 2 Description of data set

Table 1 Evaluation results

Classifier | Sensitivity (%) | Specificity (%) | Accuracy (%)
Proposed   | 86.10           | 85.30           | 98.250
KNN        | 67.870          | 82.850          | 84.620
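The reported sensitivity, specificity and accuracy follow directly from confusion-matrix counts. The sketch below shows the computation with hypothetical counts (not the CHOA data).

```python
def confusion_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity and accuracy as percentages from confusion-matrix counts."""
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    accuracy = 100.0 * (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy


# Hypothetical counts, only to show how the three reported percentages are derived.
print(confusion_metrics(tp=172, tn=810, fp=40, fn=28))
```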

4 Conclusion

Electronic health data records are increasing, which requires effective computing resources, and the practical solution is deploying deep learning based applications in healthcare for managing electronic health records. In this research work, an effective deep learning method is applied to process longitudinal electronic health record data, which enhances the accuracy; it was evaluated and compared with KNN.

References 1. Choi, E., Bahadori, M. T., Song, L., Stewart, W. F., & Sun, J. (2017). Gram: Graph-based attention model for healthcare representation learning. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM. 2. Miotto, R., Li, L., Kidd, B. A., & Dudley, J. (2016). Deep patient: An unsupervised representation to predict the future of patients from the electronic health records. Scientific Reports. 3. Suresh,H., Hunt, N., Johnson, A., Celi, L. A., Szolovits, P., & Ghassemi, M. (2017). Clinical intervention prediction and understanding using deep networks. In MLHC. 4. Nguyen, P., Tran, T., & Venkatesh, S. (2018). Resset: A recurrent model for sequence of sets with applications to electronic medical records. ArXiv:1802.00948. 5. Kim, J.-H., On, K.-W., Lim, W., Kim, J., Ha, J.-W., & Zhang, B.-T. (2016). Hadamard product for low-rank bilinear pooling. In ICLR. 6. Suo, Q., Ma, F., Canino, G., Gao, J., Zhang, A., Veltri, P., & Gnasso, A. (2017). A multi-task framework for monitoring health conditions via attention-based recurrent neural networks. In AMIA. 7. Pham, T., Tran, T., Phung, D., & Venkatesh, S. (2017). Predicting healthcare trajectories from medical records: A deep learning approach. Journal of Biomedical Informatics. 8. Udendhran, R. (2017). A hybrid approach to enhance data security in Cloud Storage. In ICC ‘17 Proceedings of the Second International Conference on Internet of things and Cloud


Computing, Cambridge University, United Kingdom, March 22–23. ISBN: 978-1-4503-47747. https://doi.org/10.1145/3018896.3025138. 9. Suresh, A., Udendhran, R., Balamurgan, M., et al. (2019). A novel internet of things framework integrated with real time monitoring for intelligent healthcare environment. Journal of Medical System, 43, 165. https://doi.org/10.1007/s10916-019-1302-9. 10. Suresh, A., Udendhran, R., & Balamurgan, M. (2019). Hybridized neural network and decision tree based classifier for prognostic decision making in breast cancers. Journal of Soft Computing. https://doi.org/10.1007/s00500-019-04066-4. 11. Sundararajan, K., & Woodard, D. L. (2018). Deep learning for biometrics: A survey. ACM Computing Surveys (CSUR), 51, 3.

An Extensive Study on the Optimization Techniques and Its Impact on Non-linear Quadruple Tank Process T. J. Harini Akshaya, V. Suresh, and M. Carmel Sobia

1 Introduction Liquid level systems had been widely used in many processing industries such as the nuclear power plants, pharmaceuticals, chemical processing industries, water purification process, and many more. Usually, sensors play a role in providing necessary information regarding the valves, pipes, and electric motors that leads to effective control by automating them. Modern control issues are caused by several control factors due to their high nonlinearity. One of the extremely high un-predictable processes with the maximum input and output connection with the non-linear property comes with the Quadruple Tank System (QTP). They have more than one control loop that interacts with each other in a way of providing their own information with the output and also promotes the other process outputs. In order to achieve the specific reference level, the lower tank liquid should necessarily be controlled and managed. Research had been carried out in the Multi Inputs Multi Output systems (MIMO) by considering the nonlinearity, large time delay, and constrained variables, thereby maintaining the control strategies with the liquid levels. Optimization techniques had been carried out in this system and this paper deals with those optimizing techniques involved along with the controllers for maintaining the control of fluid levels.

T. J. Harini Akshaya (B) · V. Suresh · M. Carmel Sobia Department of EIE, National Engineering College, Kovilpatti, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_50


2 Modeling of QTS

The framework is composed of two input sources; the voltage applied to each pump controls the flow towards the output tanks. The QTS is represented in Fig. 1. The mathematical model of the process is given by the mass balance equation together with Bernoulli's law, i.e. the rate of accumulation equals the inflow rate minus the outflow rate:

A_i \frac{dh_i}{dt} = q_{\text{in},i} - q_{\text{out},i}    (1)

Fig. 1 Schematic diagram of a quadruple tank system

Equation (1) denotes the general equation of a tank. With this equation, the nonlinear equations of the quadruple tank process are obtained as follows [1]:

\frac{dh_1}{dt} = -\frac{a_1\sqrt{2gh_1}}{A_1} + \frac{a_3\sqrt{2gh_3}}{A_1} + \frac{\gamma_1 k_1 v_1}{A_1}    (2)

\frac{dh_2}{dt} = -\frac{a_2\sqrt{2gh_2}}{A_2} + \frac{a_4\sqrt{2gh_4}}{A_2} + \frac{\gamma_2 k_2 v_2}{A_2}    (3)

\frac{dh_3}{dt} = -\frac{a_3\sqrt{2gh_3}}{A_3} + \frac{(1-\gamma_2) k_2 v_2}{A_3}    (4)

\frac{dh_4}{dt} = -\frac{a_4\sqrt{2gh_4}}{A_4} + \frac{(1-\gamma_1) k_1 v_1}{A_4}    (5)

where
  h_i — level of water in each tank (i = 1, 2, 3, 4)
  k_1, k_2 — pump constants
  γ_1, γ_2 — valve constants
  a_i — cross-sectional area of the outlet pipes
  A_i — cross-sectional area of each tank
  g — acceleration due to gravity.

The state-space form of the nonlinear model is obtained by stacking Eqs. (2)–(5):

\frac{d\mathbf{h}}{dt} =
\begin{bmatrix}
 -\frac{a_1\sqrt{2gh_1}}{A_1} + \frac{a_3\sqrt{2gh_3}}{A_1} \\
 -\frac{a_2\sqrt{2gh_2}}{A_2} + \frac{a_4\sqrt{2gh_4}}{A_2} \\
 -\frac{a_3\sqrt{2gh_3}}{A_3} \\
 -\frac{a_4\sqrt{2gh_4}}{A_4}
\end{bmatrix}
+
\begin{bmatrix}
 \frac{\gamma_1 k_1}{A_1} & 0 \\
 0 & \frac{\gamma_2 k_2}{A_2} \\
 0 & \frac{(1-\gamma_2) k_2}{A_3} \\
 \frac{(1-\gamma_1) k_1}{A_4} & 0
\end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \end{bmatrix}    (6)

Considering x to be the state vector obtained from (h_i − h_{i0}) and u the input vector obtained from (v_i − v_{i0}), the linearized state-space equation of the QTP system is given by

\frac{dx}{dt} =
\begin{bmatrix}
 -\frac{1}{T_1} & 0 & \frac{A_3}{A_1 T_3} & 0 \\
 0 & -\frac{1}{T_2} & 0 & \frac{A_4}{A_2 T_4} \\
 0 & 0 & -\frac{1}{T_3} & 0 \\
 0 & 0 & 0 & -\frac{1}{T_4}
\end{bmatrix} x
+
\begin{bmatrix}
 \frac{\gamma_1 k_1}{A_1} & 0 \\
 0 & \frac{\gamma_2 k_2}{A_2} \\
 0 & \frac{(1-\gamma_2) k_2}{A_3} \\
 \frac{(1-\gamma_1) k_1}{A_4} & 0
\end{bmatrix} u    (7)

The output equation is

y = \begin{bmatrix} k_c & 0 & 0 & 0 \\ 0 & k_c & 0 & 0 \end{bmatrix} x    (8)

where the time constants are determined as

T_i = \frac{A_i}{a_i}\sqrt{\frac{2h_i}{g}}, \quad i = 1, 2, 3, 4
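A minimal simulation sketch of the nonlinear model in Eqs. (2)–(5) is given below, using forward-Euler integration. The tank areas, outlet areas, pump gains, valve constants and initial levels are placeholder values, not parameters taken from this paper.

```python
import math

# Placeholder parameters (not from the paper): areas in cm^2, pump gains in
# cm^3/(V s), valve splits gamma1/gamma2 dimensionless, g in cm/s^2.
A = [28.0, 32.0, 28.0, 32.0]
a = [0.071, 0.057, 0.071, 0.057]
k1, k2 = 3.33, 3.35
g1, g2 = 0.70, 0.60
g = 981.0


def qtp_rhs(h, v1, v2):
    """Right-hand side of Eqs. (2)-(5) for the tank levels h = [h1, h2, h3, h4]."""
    q = [a[i] * math.sqrt(2.0 * g * max(h[i], 0.0)) for i in range(4)]  # outflows
    return [
        (-q[0] + q[2] + g1 * k1 * v1) / A[0],
        (-q[1] + q[3] + g2 * k2 * v2) / A[1],
        (-q[2] + (1.0 - g2) * k2 * v2) / A[2],
        (-q[3] + (1.0 - g1) * k1 * v1) / A[3],
    ]


def simulate(h0, v1, v2, dt=0.1, steps=600):
    """Forward-Euler integration; returns the tank levels after steps*dt seconds."""
    h = list(h0)
    for _ in range(steps):
        dh = qtp_rhs(h, v1, v2)
        h = [max(h[i] + dt * dh[i], 0.0) for i in range(4)]
    return h


print([round(x, 2) for x in simulate([12.4, 12.7, 1.8, 1.4], v1=3.0, v2=3.0)])
```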

The system is open-loop stable with two multivariable zeros. The parameters γ1, γ2 govern the nature of the zeros:

• The system has right half-plane transmission zeros (RHPZ) if 0 ≤ γ1 + γ2 < 1.
• The system has left half-plane transmission zeros (LHPZ) if 1 < γ1 + γ2 ≤ 2.

These cases correspond to the non-minimum-phase and minimum-phase configurations, respectively. In the non-minimum-phase case, the flow entering the lower tanks is less than that entering the upper tanks, and vice versa for the minimum-phase case.

3 Advancement in Optimization Techniques for Non-linear Process A Robust decentralized PID controller design for the QTP non-linear system had been discussed in [2]. In this paper, the minimum and the non-minimum phase had been studied. For the static output feedback controller Linear Matrix Inequality (LMI) based design provides a good result in the minimum phase configuration and the inverse dynamics approach is preferred by the non-minimum case and a large integration constant is required by the state-space design in this case, thereby providing an oscillatory response. To control the level of the lower tank in QTP, a decentralized neuro-fuzzy controller had been presented in [3]. This controller was designed based on the adaptive neuro-fuzzy interface system (ANFIS). The first controller utilized in predicting the voltage and thereby controlling the level in tracking the referred one is done by neuro-fuzzy inverse non-linear model (NFIN) and the desired level is obtained by the obtained voltage level fed through the neuro-fuzzy forward non-linear model (NFFN). To control the system at any operating condition, neuro-fuzzy non-linear gain scheduling PI controller is used. The implementation had been carried out in the Matlab software platform showing the accurate tracking level is best done with the NFIN controller taking minimum computational time but the desired level with the minimum error of approximately 2 mm. A new constrained Particle Swarm Optimization (PSO) along with the TS fuzzy modeling approach had been carried out for QTP system in [4]. The predictive control is done by the PSO algorithm by lowering a constrained multivariable criterion (cost function) and the states are forecasted for the closed-loop stability is done by the TS fuzzy approach. Comparison is done among the developed Modern Predictive Scheme (MPC) scheme and also the Non-linear Generalized Predictive Control (NGPC) and it shows the proposed method is superior in handling the disturbances. In [5], the control of QTP is done by the Genetic Algorithm Optimization technique. The author had compared the result with PI controller tuning using Internal Mode Technique (PIIMC), Bacteria Foraging Optimization Algorithm (BFO.A), and PSO. According to


the result, PI-IMC shows a minimum improvement and that is increased with the other soft computing techniques and the dynamic performances such as the settling time and the rise time had also been improved. But the drawback is that GA requires 32.156 s computational time when compared to the other optimization techniques. The state estimation problem for neural networks with time-varying delays had been examined in [6]. Neural networks function takes place in the form of the original biological neurons, which impose the functionality of the nervous systems. Their models could be utilized in several practical systems. In this QTP system had been considered and stochastic sampling is used to determine the sampled-data and with the novel approach, the state estimator had been designed. The activation function bound is divided into two subintervals. Based on the extended Wirtinger inequality, a discontinuous Lyapunov function had been proposed that utilizes the saw-tooth structure of the sampling input delay. The LMIs solution could be determined with the gain obtained by the state estimator. The result shows that the continuous Lyapunov function is found to be more conservatism than the discontinuous Lyapunov function. An experimental study in the field of process control had been carried out in [7]. The study had been carried out by making the QTP act similarly to the single tank, cascaded tank, and the mixture of process and hybrid dynamics. This control structure is found to be highly flexible with the PC or with certain external devices fitted with the sensors. The connection to the LabView, SCADA, or the Matlab could be done by the OPC (Ole for Process Control). Moreover, the IoT environment could also be adapted that promotes the real-time monitoring of the process. Since IoT plays a major role in the medical field for intelligent health care monitoring [8] and also certain classification technique based on the neural network and decision tree and neural network had been adapted in healthcare environment [9], this could be more suitable in process industries. Several processes had been carried out with modern predictive control strategies. A comparative study had been carried out in [10] where the performance of the heat exchanger system with IMC, MPC, feed-forward, and IMC based PID control process had been carried out. The process had been carried out with different temperature values. First, IMC controller was used but it has large offset error and set point tracking that had been minimized with the IMC based PID controller. But, advanced control strategies had been evolved that could be used in these cases. With the nonlinearity controlling process, it is not easy to obtain an accurate model of a plant or the process and hence confiscating an output from the controller is a tedious task. ANFIS model along with the PID fuzzy controller had been presented in [11] for a non-linear liquid level system. With this model, shorter settling times with less than 10% overshoot were achieved. This paper also proves that the system could be effectively controlled by the fuzzy controller and also minimizing the computation time when compared to the traditional modeling techniques. The enormous data obtained could be stored in the cloud environment by concerning the safety of these industrialized data [12].


4 Conclusion

A study of the enhancements made to the non-linear quadruple tank process has been presented in this paper. In many industrial fields the non-linear process plays a major role, and hence it is necessary to make improvements that minimize the computational time and yield better results in terms of rise time, settling time, overshoot and error values. Conventional controllers can be combined with many optimization techniques to obtain better gain values, thereby enhancing the system. Evolutionary algorithms are also possible and could be adopted in the field of process industries.

References 1. Johansson, K. H. (2000). The quadruple-tank process: A Multivariable Laboratory process with an adjustable zero. IEEE Transactions on Control Systems Technology, 8(3). 2. Rosinova, D. (2008). Robust control of Quadruple-Tank process. ICIC Express Letters. 2(3). 3. Eltantawie, M. A. (2019). Decentralized neuro-fuzzy controllers of nonlinear quadruple tank system. SN Applied Sciences, 1, 39. https://doi.org/10.1007/s42452-018-0029-4. 4. Ali, T., Sakly, A., & M’Sahli, F. (2019). A new constrained PSO for fuzzy predictive control of quadruple-tank process. Measurement, 136, 93–104. 5. Al-awad, N. A. (2019). Optimal control of quadruple tank system using genetic algorithm. International Journal Of Computing and Digital Systems., 9(1), 51–59. 6. Lee, T. H., Park, J. H., Kwon, O. M., & Lee, S. M. (2013). Stochastic sampled-data control for state estimation of time-varying delayed neural networks. Neural Networks, 46, 99–108. 7. Johansson, K., Horch, A., Wijk, O., & Hansson, A. (1999). Teaching multivariable control using the quadruple-tank process. Proceedings of the IEEE Conference on Decision and Control, 1, 807–812. 8. Suresh, A., Udendhran, R., Balamurgan, M., et al. (2019). A novel internet of Things framework integrated with real Time Monitoring for Intelligent healthcare environment. Springer-Journal of Medical System, 43, 165. https://doi.org/10.1007/s10916-019-1302-9. 9. Suresh, A., Udendhran, R., & Balamurgan, M. (2019). Hybridized neural network and decision tree based classifier for prognostic decision making in breast cancers. Springer—Journal of Soft Computing. https://doi.org/10.1007/s00500-019-04066-4. 10. Alvarado, I., Limon, D., Garcia-Gabin, W., Alamo, T., & Camacho, E. F. (2016). An educational plant based on the quadruple-tank process. IFAC Proceedings, 39(6), 82–87. https://doi.org/ 10.3182/20060621-3-ES-2905.00016. 11. Mishra, R., Mishra, R., Patnaik, A., & Prakash, J. (2016). Comparison study of IMC and IMC based PID controller for heat exchanger system. Journal of Control System and Control Instrumentation, 2(1). 12. Udendhran, R. (2017). A hybrid approach to enhance data security in cloud storage. In ICC ‘17 Proceedings of the Second International Conference on Internet of things and Cloud Computing at Cambridge University, United Kingdom, March 22–23, 2017. ISBN: 978-1-4503-4774-7. https://doi.org/10.1145/3018896.3025138.

A Unique Approach of Optimization in the Genetic Algorithm Using Matlab T. D. Srividya and V. Arulmozhi

1 Introduction

The genetic algorithm is a stochastic global search method that mimics natural biological evolution [1]. Genetic algorithms operate on populations of possible solutions, applying the principle of survival of the fittest to yield progressively better approximations to a solution. In every generation, a new set of approximations is created by selecting individuals based on their fitness level in the problem domain and recombining and mutating them together. Each individual is assigned a fitness value during the reproduction phase. The genes are recombined in the recombination stage, after which the operation of mutation is carried out [2]. After recombination and mutation, the individual chromosomes are decoded, if necessary. The objective function is calculated, a fitness value is assigned to each individual, individuals are chosen for mating according to their fitness values, and this process is continued for several generations.

Matlab has various functions useful for genetic algorithms. It combines the concepts of numerical analysis, matrix computations, and graphics in a user-friendly way [3]. MATLAB functions are simple text files of interpreted instructions, so these functions can be transferred from one hardware platform to another without a recompilation step. It also encompasses advanced data analysis, visualization tools and domain toolboxes for special-purpose applications.

This paper is organized as follows: Sect. 2 describes the literature survey carried out for this work, Sect. 3 focuses on the existing methods, Sect. 4 puts forth the proposed methodology, Sect. 5 highlights the numerical illustration, and Sect. 6 concentrates on the experimental results, followed by the conclusion in Sect. 7.

T. D. Srividya (B) · V. Arulmozhi Department of Research—Ph.D. Computer Science, Tiruppur Kumaran College for Women, Post Box no. 18, S.R. Nagar, Mangalam Road, Tiruppur, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_51


2 Literature Survey Bhanu et al. [1] put forth the basic weaknesses in current computer vision systems and also the practical inability to adapt to the segmentation process as real-world changes occur in the image. All the problems are overcome by the genetic algorithm. The key elements of an effective image segmentation system are given in this work. He proposed that a highly effective search strategy must be employed for the selection of an optimization problem. The crossover and mutation are explained for the overview of the genetic algorithm. It briefs the issues meant for operating a genetic algorithm. Five segmentation quality measures are selected here for segmentation evaluation. In this paper, the genetic algorithm with two other variants of the genetic algorithm is compared. Finally, a comparison of segmentation and parallel testing experiments are presented. The conclusion can be extended to the scope of the current adaptive system. Alenazi [2] presents a detailed illustrative approach to Genetic algorithm. The concepts of selection, crossover, and mutation are given here. GA parameter tuning is discussed. He concluded that GA used with NN works well. Nisha [3] suggest Artificial Intelligence with Matlab by NN Toolbox which contains 40 different toolboxes. An overview of face detection is given. The Neuro-Fuzzy, computational techniques of Ant-Colony optimization are used. Saraswat and Sharma [4] discusses an evolutionary test which is interpreted as a problem of optimization and employs evolutionary computation to find test data with extreme execution times. GA with Matlab is used in this work. The things to consider for GA implementation in Matlab are highlighted in this work. He stated that if the diversity of the population is too high or too low the GA might not perform well. One factor which affects the diversity of the population, the fitness scaling is discussed. He concludes the power of GA in generating a fast and efficient solution in real-time. Jakbovic and Golub [5] employ a Self-Contained GA with steady-state selection. This variant of the genetic algorithm utilizes empirically based methods for calculating its control parameters. The strength of GA has two major goals. He analyzes two characteristics are optimizing multimodal functions. He concluded that the balance between these characters affect the way the genetic operators are performed. Cruz and Wishart [6] employs machine learning for statistical, probabilistic and optimization techniques. The summary of the advantages, assumptions and the limitation of different machine learning algorithms are given. Three case studies are conducted in this work. Balabantaray et al. [7] propose a machine vision system for robotic assembly system, uses Matlab 2010a Simulink. He proposes four algorithms for edge detection and concludes that canny gives a better result and minimum error for the intended task. The discontinuities are due to abrupt changes in pixel intensity. The comparison result of various algorithms is given in this work. Concluded the paper with three criteria’s for canny detection.


Bansal and Saini [8] develops image processing using Matlab with few transitional steps filters, masking through center of the image. Segmentation is carried out by Otsu, GVF, Color-based image segmentation with K-Means Clustering. The various image types take various probability risks. The images are extracted using Matlab. Preethi and Radha [9] extracts features by Matlab. Two conditions are to be met as accurateness of satisfying and efficacy in terms of recovery time. The fitness function is calculated for evaluating the efficiency of a chromosome. The crossover algorithm is highlighted in this work. He concludes by putting forth that GA is modeled with five strategies to carry out feature selection. Concludes the paper by suggesting GA with FSVM achieves better results. Fatima and Sameer [10] develop a methodology which repeatedly modifies a population of individual solutions. It selects individuals at random in the interval of 0 to 1. The method of stochastic uniform selection is employed. He concludes by applying probability function on parameters and creating fitness function. Li [11] introduces a novel method for feature selection which holds sufficient data for classification. The GA optimizes by removing irrelevant and find an optimal solution. Performance analyzed using two data sets. K-NN is used and the result indicates robustness. Shazia et al. [12] develop a thermography technique to identify skin lesion that is malignant with the help of temperature readings. He studied that the most common type of carcinoma is skin cancer. In the US alone the number of people diagnosed with skin cancer is more than 3 million. From other types of skin carcinoma, melanoma is most fatal. The materials and methods used are Image J 1.48 V. Banzi and Zhaojun [13] uses image processing techniques in recognizing the morphological nature of cancerous cells. He studied that the main features to get exact images are labeling the mask and percent of pixels. An attractive technique in Matlab of Multiphoton Laser Scanning Microscopy (MPLSM) is highlighted in this work. For this, a rotational Gaussian low pass filter of the standard deviation of 10 pixels with a resolution of 15 * 15 is used. The scaling factor was 0.9 and finally ended with a plot. Toledo et al. [14] put forth a methodology to solve an unconstrained optimization problem. The Hierarchically Structured GA (HSGA) is proposed in this work. Each crossover selects a cluster leader. The crossover rate has impacted over the number of fitness evaluations. He concludes that the individuals are systematized in overlying clusters. The two contradictory criteria for performance are analyzed as success rate and function evaluations. De Guia and Devaraj [15] proposes cancer classifier using gene expression data. For gene selection, the Recursive Feature Elimination (RFE) makes the elimination process to return the best classification power. McCall [16] introduces a demonstrative example of using GA for a medical optimal control problem. The genetic algorithm, when applied to a wide range of complex problems, is studied in this work. The structure of the genetic algorithm with the main components is given as chromosome encrypting, fitness function, selection, recombination and evolution scheme. He proves that GA uses fitness as a discriminator of quality of solutions. The Roulette wheel selection is used. The crossover


random interval is chosen to be between 0 and 1. The GA design works well for integer and floating-point values. The algorithm stops when the criteria is met. The schema theorem is given. The GA proves to be a robust and flexible approach Sadaaghi et al. [17] introduces an adaptive genetic algorithm in which Bayes classifier is used. Several feature selection techniques are employed. He concludes that SFFS is dominant in classification error. The GA approach is focused mainly in this work which includes three reasons for subset feature selection. Another coding method is employed in which integer values are used. The population diversity equation is given and the selection strategy is cross-generational. A simple multipoint crossover operator is applied. For enhancing the outcome of feature subdivision algorithm, Adaptive Genetic Algorithm is applied which results in the development of overall error rate. Samanta [18] proposes a method of optimization for using a genetic algorithm. All the previous techniques faced problems which are overcome by GA. The GA principles are highlighted. The two methods to use GA in Matlab are: calling the GA function and using the GA tool. The various functions are given in this work. He analyzed various techniques for unimodal and GA for multimodal. The speed is increased by introducing guided search techniques with GA by hybrid optimization. Elgothamy and Abdel-Aty-Zohdy [19] suggests an efficient genetic algorithm applied to a large optimization problem. Roulette selection and Daubechies wavelets used. Less fitness value is used to save computational cost. The future work extends to enhance the genetic algorithm for more optimization problems Roberts et al. [20] proposes GA toolbox for function optimization. He suggests that any optimization method to be efficient if it is balanced between exploration and exploitation. The four main modules of GA toolbox are explained in this work. The concept of multi-objective optimization is briefed in this work.

3 Existing System

In this section, a general overview of the programming languages used for implementing the genetic algorithm is given. Applying the genetic algorithm on high-performance computers is a difficult and time-consuming task, and the implementation language must be chosen carefully according to the mathematical formulation of the problem. The efficiency and execution speed of compiled low-level languages are handled easily in MATLAB, which is a high-level, fourth-generation scripting language. The capability of interactive execution is an advantage of Matlab over traditional languages [4]: it allows small parts of code to be tested interactively in no time, so the trade-off between execution time and development time is reduced [5]. MATLAB has therefore become the preferred option for implementing scientific, graphical and mathematical applications [6–8].


4 Proposed Methodology

The procedures of the genetic algorithm to be considered for implementing in Matlab are presented in this work [9]. The block diagram is depicted in Fig. 1. The standard method of the genetic algorithm is summarized as follows:

1. A framework of the algorithm.
2. Initial population.
3. Producing the subsequent generation.
4. Strategies of future peer generation.
5. Terminating criteria for the algorithm.

Description and working of each section of the GA:

Fig. 1 Block diagram of the system (stages: initialization, generate population, fitness evaluation, criteria met?, selection, crossover, mutation, best individual, end)

4.1 A Framework of the Algorithm

The foremost step in using a genetic algorithm is to agree on a way of representing candidate solutions to the problem [10]. In the detection of skin cancer, every cell that remains constant in terms of size and shape is potentially a solution, even though not an optimal one [11, 12]. The GA needs an initial population of P individuals. The second phase is to choose the gene representation, such as integer, double, binary or permutation; the binary and double representations are flexible and hence most commonly used. The next step is to select parents from the population P. After selecting parents, the genetic algorithm creates a new generation of individuals through crossover and mutation among the candidates of the current group. The current generation is replaced with the children of the next generation, and the children for the next generation are chosen based on their fitness values [13]. The algorithm ends when the stopping criteria are met.

4.2 Initial Population

After selecting the variables, the genetic algorithm begins by creating an initial population, as shown in Fig. 2. This is carried out by generating random numbers that distribute values consistently within the range. The algorithm initiates by generating a random initial population [14].

Fig. 2 a Distribution of chromosomes in a population, b fitness values of chromosomes

Chromosomes in the population (panel b):
Chromosome (1–10): A     B     C     D     E     F    G    H    I     J
Fitness value:     15.4  15.4  27.7  27.7  27.7  3.1  3.1  3.1  27.7  3.1


4.3 Producing the Subsequent Generation

In every stage, the GA uses the present population to create the children of the next generation. The algorithm selects a group of individuals as parents depending on their genes; based on the fitness values of the parents, the genes are selected for generating better children [15]. The normalized fitness values are calculated using the formula

Normalized_fitness = Fitness / Σ(Fitness)    (1)

Three types of children are created by the GA:

(a) Crossover: the method of generating fresh chromosomes from the prevailing chromosomes is crossover. The procedure, shown in Fig. 3, contains three parts. First, parents are selected from the population. Then the crossover point is selected randomly. As illustrated in Fig. 3, two children are produced by exchanging gene segments between the parents [16]. It is the crossover operator in the GA that propagates the strategic characteristics of individuals in the population [17]. These children are generated by combining the genes of parents selected on the basis of their fitness values.

(b) Mutation: the average fitness value of the whole generation increases with the number of generations, and the individuals in the population draw nearer to the global maximum point [0, 0] of the optimization [18]. Children are created by introducing a random change in the gene of a single parent [19].

(c) Elite: the individuals with the best fitness values in the present group automatically survive into the succeeding generation.

Parent 1: 1 1 0 1 0 0 0 0 1    Parent 2: 0 1 0 1 0 1 1 1 0
Child 1:  1 1 0 1 0 1 1 1 0    Child 2:  0 1 0 1 0 0 0 0 1

Fig. 3 Single point crossover
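The sketch below reproduces the single-point crossover of Fig. 3 in Python (the paper's own implementation is in MATLAB); the crossover point of 5 is inferred from the parent and child strings, and the bit-flip mutation rate is an illustrative assumption.

```python
import random

def single_point_crossover(p1, p2, point=None):
    # pick a random crossover point if none is given
    point = point if point is not None else random.randint(1, len(p1) - 1)
    child1 = p1[:point] + p2[point:]
    child2 = p2[:point] + p1[point:]
    return child1, child2

def mutate(chromosome, rate=0.1):
    # flip each gene with a small probability (the rate is assumed)
    return [1 - g if random.random() < rate else g for g in chromosome]

parent1 = [1, 1, 0, 1, 0, 0, 0, 0, 1]
parent2 = [0, 1, 0, 1, 0, 1, 1, 1, 0]
c1, c2 = single_point_crossover(parent1, parent2, point=5)
# c1 == [1, 1, 0, 1, 0, 1, 1, 1, 0] and c2 == [0, 1, 0, 1, 0, 0, 0, 0, 1], as in Fig. 3
```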


4.4 Strategies of Future Peer Generation As the algorithm progresses, the plot of every succeeding generation depicts the increasing closeness of the result to the global optimal point [20].

4.5 Terminating Criteria for the Algorithm Any one of the following conditions is considered as a stopping criterion for the GA:
i. Maximum number of generations.
ii. Maximum time limit.
iii. Maximum fitness limit.
iv. Stall generations.
v. Stall time limit.

Fig. 4 Creating a population of chromosomes



Fig. 5 Random plots for 3 iterations

5 Numerical Illustration A population of size 10 is created by selecting the genes and fitness values (Fig. 4). The best chromosomes are 5 and 7, with fitness values 15.4416 and 27.7416, respectively. The random values for 3 iterations are 0.7060, 0.2551 and 0.5308, as shown in Fig. 5.

6 Experimental Results For iteration 5 the parents are 5 & 6 and R = 0.0462. For iteration 6, parents are 5 & 6 and R = 0.8235 (Figs. 6, 7, 8, 9, 10, 11 and Table 1).

Fig. 6 Chart depicting cumsum value


Fig. 7 Chart for fitness value

Fig. 8 Random value in selection of parents

7 Conclusion The genetic algorithm searches a population of individuals in parallel. The direction of the search is influenced by the objective function and the corresponding fitness values. The main advantage of the GA is that it uses probabilistic transition rules and works on an encoding of the parameter set (except for real values). The future enhancement of this



Fig. 9 The normalized fitness values in 5th and 6th iteration

Fig. 10 Plots for normalized fitness in selection

work can be extended to an elitism method, since saving the best chromosomes for future generations is essential, and it can be further extended towards balancing exploration and exploitation.


Fig. 11 Parents and children after crossover

Table 1 Random values for 5 iterations

Iterations   | Parent 1 | Parent 2 | Random value
Iteration 1  | 1        | 5        | 0.1299
Iteration 2  | 1        | 1        | 0.4694
Iteration 3  | 7        | 2        | 0.3371
Iteration 4  | 1        | 6        | 0.0318
Iteration 5  | 3        | 6        | 0.0462

References 1. Bhanu, B., & Lee, S., Ming, J. (1995, December). Adaptive Image Segmentation using a Genetic Algorithm. IEEE Transactions on Systems, Man, And Cybernetics, 25(12). 2. Alenazi, M. (2015, November). Genetic algorithm by using MATLAB program. International Journal of Advanced Research in Computer and Communication Engineering, 4(11). ISSN (Online) 2278-1021. ISSN (Print) 2319 5940. 3. Nisha, S. D. (2015). Face detection and expression recognition using neural network approaches. Global Journal of Computer Science and Technology: F Graphics & Vision, 15(3), Version 1.0, Online ISSN: 0975-4172 & Print ISSN: 0975-4350. 4. Saraswat, M., & Sharma, A. K. (2013, March). Genetic algorithm for optimization using MATLAB. International Journal of Advanced Research in Computer Science, 4(3) (Special Issue). ISSN: 0976-5697. 5. Jakbovic, D., & Golub, M. (1993). Adaptive genetic algorithm. Journal of Computing and Information Technology CIT, 7(3), 229–235.


6. Cruz, J. A., & Wishart, D. S. (2006). Applications of machine learning in cancer prediction and prognosis. Cancer Informatics, 2, 59–78 7. Balabantaray, B. K., Jha, P., & Biwasl, B. B. (2013, November). Application of edge Detection algorithm for vision guided robotics assembly system. The International Society for Optical Engineering. https://doi.org/10.1117/12.2051303. 8. Bansal, R., & Saini, M. (2015, September). A method for automatic skin cancer detection. International Journal of Advanced Research in Computer Science and Software Engineering, 5(9). ISSN: 2277 128X 9. Meena Preethi, B., & Radha, P. (2017, May). Adaptive genetic algorithm based fuzzy support vector machine (Aga-Fsvm) Query mechanism for image mining. ARPN Journal Of Engineering And Applied Sciences, 12(9). ISSN 1819 6608. 10. Fatima, N., Sameer, F. (2018. August). Paper on genetic algorithm for detection of oral cancer. International Journal of Advanced Research in Computer and Communication Engineering 7(8). ISSN (Online) 2278-1021 ISSN (Print) 2319-5940. 11. Li, T-S. (2006). Feature selection For Classification by using A GA-based neural network approach. Journal of the Chinese Institute of Industrial Engineers, 23(1), 55–64. 12. Shazia, S., Akhter, N., Gaike, V., & Manza, R. R. (2016, February). Boundary detection of skin cancer lesions using image processing techniques. Journal of Medicinal Chemistry and Drug Discovery, 1(2), pp. 381–388. ISSN: 2347-9027 13. Banzi, J. F., Zhaojun, X. (2013, December). Detecting morphological nature of cancerous cell using image processing algorithms. International Journal of Scientific and Research Publications, 3(12). ISSN 2250-3153 14. Toledo, C. F. M., Oliveira, L., França, P. M. (2014). Global optimization using a genetic algorithm with hierarchically structured population. Journal of Computational and Applied Mathematics, 261, 341–351. 15. De Guia, J. M., Devaraj, M. (2018). Analysis of cancer classification of gene expression data: A scientometric review. International Journal of Pure and Applied Mathematics, 119(12), 12505–12513. ISSN: 1314-3395 (on-line version). 16. McCall, J. (2005). Genetic algorithms for modelling and optimization. Journal of Computational and Applied Mathematics, 184, 205–222. https://doi.org/10.1016/j.cam.2004.07.034. 17. Sadeghi, M. H., Kotropoulos, C., & Ververidis, D. Using adaptive genetic algorithms to improve speech emotion recognition. ISBN: 978-1-4244-1273-0, https://doi.org/10.1109/MMSP.2007. 4412916. 18. Samanta, S. (2014, January). Genetic algorithm: An approach for optimization (using MATLAB). International Journal of Latest Trends in Engineering and Technology (IJLTET), 3(3). ISSN: 2278-621X. 19. Elgothamy, H., Abdel-Aty-Zohdy, H. S. (2018, March). Application of Enhanced genetic algorithm. International Journal of Computer and Information Technology, 07(02). ISSN: 2279-0764. 20. Roberts, J. J., Cassula, A. M., Silveira, J. L., Prado, P. O., & Freire Junior, J. C. (2017, November). GAtoolbox: A Matlab-based genetic algorithm Toolbox for function optimization. In The 12th Latin-American Congress On Electricity Generation And Transmission— CLAGTEE, 2017


Mrs. T. D. Srividya completed her M.Phil. (Computer Science) from Mother Teresa Women's University in 2015. She is currently a Ph.D. student under the guidance of Dr. V. Arulmozhi. Her research is centered on skin cancer detection using a genetic algorithm approach.

Dr. V. Arulmozhi is Associate Professor and Research Head, Department of Research (Ph.D. Computer Science), Tiruppur Kumaran College for Women, Tiruppur. She has received best paper awards, has published many journal papers and conference proceedings, and guides M.Phil. and Ph.D. research scholars. Her specializations are chemoinformatics, machine learning techniques, soft computing, and evolutionary computational informatics.

Deep Learning Architectures, Methods, and Frameworks: A Review Anjali Bohra and Nemi Chand Barwar

1 Introduction Intelligence is the ability to use knowledge and skills efficiently. Making machines behave intelligently is artificial intelligence, a sub-area of computer science. One way of creating intelligent machines is machine learning, which uses learning algorithms to extract information from data; another is deep learning, which creates intelligent machines using a specific class of algorithms called neural networks [1]. The key difference between machine learning and deep learning is how the features are extracted from the input [2]. Machine learning first uses algorithms to extract the features from the given input and then applies learning, while deep learning automatically extracts the features and represents them hierarchically in multiple levels [2]. In today's scenario, problems which used to take a long time to process are now being solved in less time using deep learning concepts [3]. Deep learning is applied in many fields such as natural language processing, image processing, computer vision, sentiment analysis from text and videos, and object identification. Deep learning provides a hierarchical representation of data and classifies as well as predicts patterns through multiple layers of information-processing modules in hierarchical architectures [4].

A. Bohra (B) · N. C. Barwar Department of Computer Science & Engineering, MBM Engineering College, Jodhpur, Rajasthan, India e-mail: [email protected] N. C. Barwar e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_52


2 Machine Learning and Deep Learning Machine learning is the concept in which data is parsed using algorithms. The learning is performed during parsing and is then applied to make decisions, as the algorithm requires a description of how to make an accurate prediction. An example is Facebook's suggested list, which uses an algorithm that learns from the user's preferences and presents a suggested list of choices for the user to explore. These machine learning techniques have used shallow architectures for signal processing; such shallow-structured architectures contain at most one or two layers of nonlinear feature transformations. Examples of shallow architectures are Gaussian mixture models (GMMs), linear or nonlinear dynamical systems, conditional random fields (CRFs), maximum entropy (MaxEnt) models, support vector machines (SVMs), logistic regression, kernel regression, multilayer perceptrons (MLPs) and extreme learning machines (ELMs) with a single hidden layer. Resemblance to the human information-processing mechanism requires deep architectures for extracting information from rich sensory inputs [5]. In deep learning, the algorithm learns by itself through its own data processing. The state of the art in processing natural signals can be advanced by using efficient and effective deep learning algorithms [5]. Deep learning is a subfield of machine learning that uses algorithms with a structure and functioning similar to an ANN [3]. An ANN is a computational information-processing model based on the structure and functionality of a biological nervous system such as the brain. The purpose of a neural network is to learn to recognize patterns present in the data. It is an interconnection of three layers: the input, hidden and output layers, respectively. The input layer contains neurons that send information to the hidden layer, which sends data to the output layer. Deep learning methods employ neural network architectures for learning, and therefore deep learning models are often referred to as deep neural networks [6]. The number of hidden layers decides the depth of the neural network. Generally, 2–3 layers are used in traditional networks, which are extended up to 150 in deep networks. The efficiency of the algorithm improves with increasing size of the data, while in shallow learning it converges at a specific level [6]. A small illustrative network of this kind is sketched below.
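As a brief, hedged illustration of the input/hidden/output structure described above (not taken from the paper), a minimal Keras network might look as follows; the feature count and layer widths are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

ann = models.Sequential([
    layers.Input(shape=(10,)),             # input layer: 10 features (assumed)
    layers.Dense(32, activation='relu'),   # hidden layer 1
    layers.Dense(32, activation='relu'),   # hidden layer 2 (more hidden layers give a deeper network)
    layers.Dense(1, activation='sigmoid')  # output layer
])
ann.summary()
```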

3 Machine Learning Architecture A machine learning architecture is a structure in which a combination of components collectively performs the transformation of raw data into trained data sets using a specific algorithm [7]. Machine learning architectures are grouped according to the learning algorithm used for training, namely supervised learning, unsupervised learning and reinforcement learning. In supervised learning, the training data consist of input–output pairs; the system detects the relationship between each input–output pair, which is used in training to obtain the corresponding output. In unsupervised learning, the training data do not contain outputs. The trends and commonalities


Fig. 1 Architecture of machine learning systems [7]

are considered as standard outputs, and the output is determined by mapping the resemblance of the obtained data to these benchmarks. For reinforcement learning, the system uses algorithms to find the relevance of the context with reference to the present state. Figure 1 shows the basic architecture of machine learning systems. Data acquisition is the data pre-processing stage, which collects the data and prepares and segregates the obtained features. Data processing, the next stage, is involved in the normalization, transformation and encoding of data using specific learning algorithms. Data modeling selects the best algorithm from the set of libraries to make the system ready for execution. The execution stage is involved with experimentation, testing and tuning of the system. At the deployment stage, the output obtained is treated as a non-deterministic query which can be further used by a decision-making system.

4 Deep Learning Architectures Deep networks have a hierarchical architecture, i.e. multiple hidden layers between the input and output layers for performing computation on the given dataset. The various networks differ in the shape, size and interconnection of the hidden units within the hidden layers. Broadly, there are four types of deep learning architectures [8], namely: unsupervised pre-trained (trained before) networks (UPNs), convolution neural networks (CNNs), recurrent neural networks and recursive neural networks.

4.1 Unsupervised Pre-Trained (Trained Before) Networks The machines are trained before starting any particular task; the concept is also known as transfer learning, since once the model is trained for a particular task in one domain, it can be applied for obtaining the solution in another domain as well [9]. These architectures are used to capture observed data for pattern analysis or synthesis


Fig. 2 Architecture of autoencoders [11]

purposes when no information is available regarding the labels of the target class. The various types of unsupervised pre-trained network architectures are: autoencoders, deep belief networks (DBNs), and generative adversarial networks (GANs).

4.1.1 Autoencoders

Autoencoders are a basic machine learning model in which the output is the same as the input. An autoencoder is a data compression algorithm that is lossy and learned automatically from examples [10]. The architecture of autoencoders is as follows: the input is compressed into a latent space representation, which is then used to reconstruct the output. It has two parts, an encoder and a decoder. The information is encoded using a function h = f(x), where x is the input and h is the latent space representation of the input x. This latent information is used by the decoder to reconstruct the input, now represented as r = g(h), where g is the reconstruction function. The model is made to learn by putting constraints on the copying task. Dimensionality reduction for data visualization is an example application of autoencoders (Fig. 2).
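A minimal Keras sketch of this encoder/decoder structure (h = f(x), r = g(h)) is given below; the 784-dimensional input and 32-dimensional latent space are illustrative assumptions, not values from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

input_dim, latent_dim = 784, 32

x_in = layers.Input(shape=(input_dim,))
h = layers.Dense(latent_dim, activation='relu')(x_in)   # encoder: h = f(x)
r = layers.Dense(input_dim, activation='sigmoid')(h)    # decoder: r = g(h)

autoencoder = models.Model(x_in, r)
autoencoder.compile(optimizer='adam', loss='mse')        # constrained copying task
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)  # train to reproduce the input
```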

4.1.2 Deep Belief Networks

Deep belief networks are a class of deep neural networks composed of multiple layers of hidden variables. These hidden variables are connected with the neighbouring layers, but there is no connection between units within the same layer. The system learns to reconstruct the given input using probabilistic algorithms, and the layers extract features which are used in training to perform classification [12]. DBNs consist of layers of restricted Boltzmann machines (RBMs) for the pre-training phase of the algorithm and feed-forward networks for tuning the system [8]. These machines have neuron-like units which are connected systematically for stochastic decisions regarding switching on and off [13]. Figure 3 shows the architecture of deep belief networks. The RBMs are used to learn the higher-level features of the given dataset in an unsupervised training fashion; the higher layer of the RBM is provided with the learned features from the lower layer progressively to attain better results [8]. In the tuning phase, the system uses the backpropagation algorithm.


Fig. 3 Architecture of deep belief network [8]

Fig. 4 Architecture of generative adversary networks [14]

4.1.3 Generative Adversarial Networks

Generative adversarial networks use two networks trained with an unsupervised learning approach [8]. Its two deep networks are the generator and the discriminator, respectively [14]. The architecture of a GAN is shown in Fig. 4 [14]. The generator produces outputs from the given input, and the discriminator classifies those generated outputs. The generator network has a deconvolution layer to generate the output, which is fed to the discriminator, a standard convolution neural network [8].

4.2 Convolution Neural Network Convolution neural networks transform the input data by passing it through a series of connected layers to produce a specific class score as output [8]. As shown in Fig. 5, the convolution neural network has three parts, namely the input layer, the feature extraction layers and the classification layer. The input layer accepts three-dimensional data and uses the gradient descent method to train the parameters. The major components of the convolution layer are filters, activation maps, parameter sharing,


Fig. 5 Architecture of convolution neural network [8]

and layer-specific hyperparameters [8]. The feature extraction part is a series of convolution layers, each followed by a pooling layer. The convolution layer implements the rectified linear unit activation function, and its output is passed to the pooling layer. The pooling layer derives features from the given input by progressively constructing higher-order features, and pooling reduces the size of the given input to a specific number of parameters. Finally, the classification layer generates the class probabilities and specific scores. Some popular architectures of CNNs are: LeNet, AlexNet, ZFNet, GoogLeNet, VGGNet, ResNet, YOLO (a normal CNN in which convolution and max pooling layers are followed by two fully connected layers [15]), SqueezeNet (a pre-trained convolution neural network that has a specialized structure called the fire module [15]) and SegNet (a convolutional encoder-decoder architecture used for image segmentation [16]).
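A minimal Keras sketch of this three-stage layout (input, convolution plus pooling for feature extraction, and classification) is given below; the 32×32×3 input size and the layer widths are illustrative assumptions only.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

cnn = models.Sequential([
    layers.Input(shape=(32, 32, 3)),                # three-dimensional input
    layers.Conv2D(32, (3, 3), activation='relu'),   # convolution with ReLU activation
    layers.MaxPooling2D((2, 2)),                    # pooling reduces the spatial size
    layers.Conv2D(64, (3, 3), activation='relu'),   # higher-order feature extraction
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')          # classification layer: class probabilities
])
```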

4.3 Recurrent Neural Network The recurrent neural network is derived from the feed-forward neural network; as the name suggests, the two classes of networks share a similar general structure and differ only in the graph connections between the nodes. Figure 6 shows the architecture of a fully recurrent neural network [17]. One network forms a directed acyclic graph while the other forms a directed cyclic graph when connecting the nodes, which explains its temporal dynamic behaviour [18]. The network has feedback connections to itself, which allow it to learn sequences and maintain information [17]. It uses an internal state to process sequences of inputs and is therefore applicable to handwriting recognition, natural language processing and speech recognition tasks. These networks allow both parallel and sequential computation and are similar to the human brain, which is a vast feedback network of connected neurons; the neurons learn by themselves to translate an input stream into a sequence of useful outputs [8]. Recurrent neural networks are considered Turing complete and are also considered the standard for modeling the time dimension [8]. These models are better than Markov models, which were widely used for modeling sequences but become impractical for modeling long-range


Fig. 6 Architecture of Recurrent Neural Network [17]

temporal dependencies. The various types of recurrent neural networks are: fully recurrent neural networks, recursive neural networks, Hopfield networks, Elman networks and Jordan networks or simple recurrent networks (SRNs), echo state networks, the neural history compressor, long short-term memory (LSTM), gated recurrent units, continuous-time recurrent neural networks (CTRNNs), hierarchical recurrent neural networks, the recurrent multilayer perceptron model, neural Turing machines (NTMs), and neural network pushdown automata (NNPDA) [17].

4.4 Recursive Neural Network The recursive neural network uses a shared weight matrix and a binary tree structure, which help the network learn about varying sequences of words or parts of an image [8]. The system uses an algorithm called backpropagation through structure (BPTS), which employs the gradient descent technique for training. Figure 7 shows the hierarchical structure of a recursive neural network in which c1 and c2 are child nodes
Fig. 7 Architecture of recursive neural network [20]


connected to parent p. Both parent and child nodes are n-dimensional vectors which use the shared weight matrix W across the complete network. With simple variations in the architecture, remarkable results are obtained in parsing sentences of natural languages and various natural scenes. Recursive autoencoders and recursive neural tensor networks (RNTNs) are types of recursive neural networks. A recursive neural tensor network (RNTN) is a hierarchical structure having a neural network at each node. These architectures can be used for various natural language processing tasks such as boundary segmentation and identifying word groupings. Word vectors are used for sequential classification; these word vectors are grouped into sub-phrases, which are connected to form a meaningful sentence. The sentence can further be classified by sentiment or any other metric [19]. Word vectorization using the word2vec algorithm is an example used with recursive neural tensor networks; the algorithm converts a corpus of words into a vector space for classification. RNTNs use constituency parsing to organize sentences into a noun phrase (NP) and a verb phrase (VP). Many more linguistic observations can be marked for the words and phrases [19].

5 Deep Learning Methods Deep learning architectures (networks) can be trained using specific machine learning methods. Backpropagation is a learning algorithm that computes the gradient (partial derivatives) of a function through the chain rule [16]. It is a supervised machine learning algorithm which requires a known desired output for each input in order to calculate the derivatives of the loss function, either through analytical differentiation or by approximate differentiation using finite differences [21, 22]. Stochastic gradient descent is an iterative method that optimizes over randomly selected input samples when calculating the gradient to train the model. The learning rate parameter determines the effect of each updating step on the values of the weights and thus controls the response of the model to changes in the weights. Over the learning iterations, or epochs, the learning rate is scheduled through two parameters, decay and momentum, which control the oscillations. Dropout is a method for training neural networks that randomly drops units to prevent overfitting while effectively combining different neural network architectures [23]. Max pooling is a sample-based discretization process which down-samples the input representation and reduces its dimensionality. Batch normalization reduces the sensitivity of the neural network to the initial weights and accelerates learning [22]. Skip-gram is an unsupervised learning algorithm used to find the context of a given word [22]. Continuous bag of words takes contextual words as input to predict the exact word at the center of the context. Transfer learning is a type of learning in which a model trained for a task in one domain is reused for solving a related task in another domain.
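For illustration only (not from the paper), the short Keras sketch below shows several of these methods together: max pooling, batch normalization, dropout, and stochastic gradient descent with a learning rate and momentum; all layer sizes are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

net = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),          # max pooling: down-samples the representation
    layers.BatchNormalization(),          # batch normalization after the activation
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dropout(0.5),                  # dropout: randomly drop units to limit overfitting
    layers.Dense(10, activation='softmax')
])

sgd = optimizers.SGD(learning_rate=0.01, momentum=0.9)   # stochastic gradient descent
net.compile(optimizer=sgd, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
```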


Table 1 List of deep learning frameworks

Framework      | Developed by                | Language    | Description                                                        | Support models
TensorFlow     | Google Brain team           | C++, Python | Works efficiently for images and sequence-based data               | Pre-trained models, RNN and CNN
Keras          | Francois, a Google engineer | Python      | Good results for image classification or sequence models           | Pre-trained models, CNN and RNN
PyTorch        | Facebook AI research group  | Python, C   | Supports the matplotlib library to manipulate the graphs [24]      | Pre-trained models, RNN and CNN
Caffe          | Berkeley AI Research        | C++         | Used for developing deep learning models for mobile phones [24]    | Pre-trained models, not suited for RNN and language models
Deeplearning4j | Adam Gibson, Skymind        | C++, Java   | Processes large-scale data with fast speed [24]                    | Pre-trained models, RNN and CNN

6 Deep Learning Frameworks Neural networks are the best architectures for experimentation aimed at creating intelligence in machines equivalent to humans. An architecture is a general approach to solving a specific problem. A model is a mathematical representation of a concept (phenomenon) and of the relationships between real-world entities used to predict their behaviour for a given input; it is a specific instance of a given architecture that is trained on a given dataset to make predictions on new examples at runtime. A model is therefore the result of training an architecture to predict behaviour using a specific dataset and past observations. A framework is a layered structure and collection of libraries, with built-in or user-defined functions, compilers, toolsets and application programming interfaces (APIs); a deep learning framework is an interface, library or tool used for quickly developing deep learning models. A framework should automatically compute gradients, be easy to code and understand, and support parallel processing to reduce computation for optimized performance [24]. It is a systematic way of defining the learning models using a pre-defined collection of components. Table 1 shows a list of deep learning frameworks.

7 Conclusion Deep learning is one of the ways to create intelligence in machines. A variety of deep learning architectures are available, which differ in the shape and size of the hidden layers (units) considered for providing real-world solutions. The paper explained


the major types of deep learning architectures, mainly: unsupervised pre-trained (trained before) networks, convolution neural networks, recurrent neural networks and recursive neural networks. A model can be developed using any of the architectures and learning algorithms for the given dataset. A framework allows developers to build the models by directly using the built-in functions; TensorFlow and Keras are examples of deep learning frameworks. This is a relevant area of research, as it is showing remarkable performance in the fields of image processing, natural language processing and satellite launching, for example Chandrayaan-2. The field has a bright future because of the reduced human intervention and efficient computing capabilities.

References 1. Crawford, C. (2016, November). https://blog.algorithmia.com/introduction-to-deep-learning/. 2. Alom, Md. Z., et al. (2019). A state-of-the-art survey on deep learning theory and architectures. Electronics, 8, 292. https://doi.org/10.3390/electronics8030292. 3. Dixit, M., Tiwari, A., Pathak, H., Astya, R. (2018). An overview of deep learning architectures, libraries and its application areas. In International Conference on Advances in Computing, Communication Control and Networking. ISBN: 978-1-5386-4119-4/18/$31.00 ©2018 IEEE. 4. Zhao, R., Yan, R., et al. (2018). ‘Deep learning and its applications to machine health monitoring. Elsevier. https://doi.org/10.1016-0888-3270. 5. Deng, L., & Yu, D. (2013). Deep learning: Methods and applications. Foundations and Trends® in Signal Processing, 7(3–4), 197–387. https://doi.org/10.1561/2000000039. 6. What is deep learning? Mathworks documentation. https://in.mathworks.com/discovery/deeplearning.html/. 7. Introduction to machine learning https://www.educba.com/machine-learning-architecture/. Accessed on September 11, 2019. 8. Gibson. A., Patterson, J., Deep learning. https://www.oreilly.com/library/view/deep-learning/ 9781491924570/ch04.html. 9. https://www.quora.com/What-is-pretraining-in-deep-learning-how-does-it-work. Accessed on September 12, 2019. 10. Jayawardana, V., https://towardsdatascience.com/autoencoders-bits-and-bytes-of-deep-lea rning-eaba376f23ad. Accessed on September 12, 2019. 11. Hubens, N., https://towardsdatascience.com/deep-inside-autoencoders-7e41f319999f. Accessed on September 12, 2019. 12. https://en.wikipedia.org/wiki/Deep_belief_network. Accessed on September 12, 2019. 13. Hinton, G. (2014). Boltzmann machines’ encyclopedia of machine | learning and data mining. New York: Springer Science + Business Media. https://doi.org/10.1007/978-1-4899-7502-7_ 31-1. 14. Hui, J., https://medium.com/@jonathan_hui/gan-whats-generative-adversarial-networks-andits-application-f39ed278ef09. Accessed on September 12, 2019. 15. https://hackernoon.com/understanding-yolo-f5a74bbc7967. Accessed on September 12, 2019. 16. https://www.analyticsvidhya.com/blog/2017/08/10-advanced-deep-learning-architecturesdata-scientists/. Accessed on September 12, 2019. 17. Katte, T. (2018, March). Recurrent neural network and its various architecture types. International Journal of Research and Scientific Innovation, (IJRSI), V (III). ISSN 2321-2705. 18. https://en.wikipedia.org/wiki/Recurrent_neural_network. Accessed on September 12, 2019. 19. Nicholson, C., https://skymind.ai/wiki/recursive-neural-tensor-network. Accessed on September 12, 2019. 20. https://en.wikipedia.org/wiki/Recursive_neural_network. Accessed on September 12, 2019.


21. https://searchenterpriseai.techtarget.com/definition/backpropagation-algorithm. Accessed on September 12, 2019. 22. https://medium.com/cracking-the-data-science-interview/the-10-deep-learning-methods-aipractitioners-need-to-apply-885259f402c1. Accessed on September 12, 2019. 23. Shrivastav, N. (2014). Dropout: A simple way to prevent neural network from overfitting. Journal of Machine Learning Research, 15, 1929–1958. 24. https://www.analyticsvidhya.com/blog/2019/03/deep-learning-frameworks-comparison/. Accessed on September 13, 2019.

Protection of Wind Farm Integrated Double Circuit Transmission Line Using Symlet-2 Wavelet Transform Gaurav Kapoor

1 Introduction The faults on wind farm integrated double circuit transmission lines (WFIDCTLs) have to be identified quickly so as to repair the faulted phase, restore the electricity supply, and decrease the outage time as much as possible. In recent years, many studies have been dedicated to the problem of fault detection, classification and location estimation in DCTLs [1–7]. A brief literature review of various recently reported methods is introduced henceforth. In [1], the authors presented a fault recognition and location method for the protection of a three-terminal DCTL. In [2], the WT has been introduced for the protection of the DCTL. The HHT has been applied in [8] for the WFISCCTL. In [3] and [9], the authors employed the WT for the protection of an SCCDCTL and a TPTL, respectively. The WT and an MLP have been used in [4] for locating faults in DCTLs. In [5] and [6], MM and a DPA have been used, respectively. Faults are detected by the WT in an SPTL in [10] and [11], respectively. In this work, the Symlet-2 wavelet transform (SWT) is utilized and executed for the detection of faults in the WFIDCTL. Such work has not been reported so far, to the best of the author's knowledge. The results exemplify that the SWT competently detects every type of fault. This article is structured as follows: Sect. 2 reports the specifications of the WFIDCTL, Sect. 3 describes the flow chart for the SWT, Sect. 4 presents the outcomes of the investigations carried out in this work, and Sect. 5 concludes the article.

G. Kapoor (B) Department of Electrical Engineering, Modi Institute of Technology, Kota, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_53


2 The Specifications of WFIDCTL Figure 1 depicts the schematic of the WFIDCTL. The WFIDCTL has a rating of 400 kV, 50 Hz and a total length of 200 km, and it is divided into two zones of 100 km each. The current measurement blocks are connected at bus-1 for measuring the currents of both circuits of the WFIDCTL. The simulation model of the 400 kV WFIDCTL is designed using MATLAB. Figure 2 illustrates the currents of circuit-1 and circuit-2 for no-fault. The SWT coefficients of the circuit-1 and circuit-2 currents for no-fault are shown in Fig. 3. Table 1 reports the results of the SWT for no-fault.

(Fig. 1 labels: 400 kV source, Wind Farm-1, Wind Farm-2, Circuit-1 (200 km), Circuit-2 (200 km), Bus-1, Bus-2, SWT relay.)

Fig. 1 The schematic of WFIDCTL


Fig. 2 Circuit-1 and 2 currents for no-fault




Fig. 3 SWT coefficients of circuit-1 and 2 currents for no-fault

Table 1 Results of SWT for no-fault (SWT coefficients)

Phase-A1  | Phase-B1  | Phase-C1  | Phase-A2  | Phase-B2  | Phase-C2
242.6318  | 217.5305  | 349.8179  | 242.6318  | 217.5305  | 349.8179

3 Symlet-2 Wavelet Transform (SWT) Figure 4 illustrates the process for the SWT. The steps for the same are shown beneath. Step 1 Simulate the WFIDCTL for creating faults and produce the post-fault currents for both circuits. Step 2 Use SWT to examine the post-fault currents of both circuits for characteristics retrieval and determine the range of SWT coefficients. Step 3 The phase will be proclaimed as the faulted phase if its SWT coefficient has a larger amplitude as compared to the healthy phase, under fault situation.
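For illustration, the steps above can be sketched in Python with the PyWavelets package (the paper's own implementation is in MATLAB): 'sym2' selects the Symlet-2 wavelet, the level-5 detail coefficients correspond to the caD5/cbD5/ccD5 traces shown in the figures, and the healthy-phase threshold is an assumption introduced here.

```python
import numpy as np
import pywt

def swt_feature(current, wavelet='sym2', level=5):
    # multilevel decomposition of one phase current; coeffs = [cA5, cD5, cD4, ..., cD1]
    coeffs = pywt.wavedec(current, wavelet, level=level)
    detail_level5 = coeffs[1]                     # level-5 detail coefficients
    return np.max(np.abs(detail_level5))          # magnitude of the SWT coefficients

def detect_faulted_phases(phase_currents, healthy_threshold):
    # phase_currents: dict such as {'A1': samples, 'B1': samples, ..., 'C2': samples}
    features = {phase: swt_feature(i) for phase, i in phase_currents.items()}
    # a phase is declared faulted if its coefficient magnitude exceeds the healthy-phase level
    return [phase for phase, value in features.items() if value > healthy_threshold]
```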

4 Performance Evaluation The simulation studies have been carried out for near-in relay faults, far-end relay faults, converting faults, inter-circuit faults, and cross-country faults with the objective of verifying the feasibility of the SWT. The outcomes of the work are investigated in the separate subsections below.


Fig. 4 The schematic for SWT

4.1 The Efficacy of SWT for the Near-in Relay Faults The efficiency of the SWT is investigated for near-in relay faults on the WFIDCTL. Figure 5 exemplifies the circuit-1 and circuit-2 currents for an A1B1C1A2B2G near-in relay fault at 5 km at 0.05 s with RF = 1.5 Ω and RG = 2.5 Ω.


Fig. 5 Circuit-1 and 2 currents for A1B1C1A2B2G near-in relay fault at 5 km at 0.05 s




Fig. 6 SWT coefficients of circuit-1 and 2 currents for A1B1C1A2B2G near-in relay fault

Table 2 Results of SWT for different near-in relay faults (SWT coefficients)

Fault type          | Phase-A1     | Phase-B1     | Phase-C1     | Phase-A2     | Phase-B2     | Phase-C2
A1B1C1A2B2G (5 km)  | 1.9270 × 10³ | 1.1716 × 10³ | 3.5293 × 10³ | 2.9937 × 10³ | 967.3718     | 432.7254
A1B1C1G (6 km)      | 3.1444 × 10³ | 1.3794 × 10³ | 3.7235 × 10³ | 9.1308       | 13.8679      | 20.9425
A2B2C2G (7 km)      | 12.2576      | 36.6665      | 15.7245      | 3.3044 × 10³ | 1.8474 × 10³ | 9.6885 × 10³
A1C1B2C2G (8 km)    | 1.7720 × 10³ | 253.1755     | 4.0859 × 10³ | 237.1926     | 2.2574 × 10³ | 4.4246 × 10³
A1A2G (9 km)        | 2.9129 × 10³ | 88.3158      | 99.0478      | 2.9129 × 10³ | 88.3158      | 99.0478

Figure 6 illustrates the SWT coefficients of the circuit-1 and circuit-2 currents. The fault factors for the other fault cases are: T = 0.05 s, RF = 1.5 Ω and RG = 2.5 Ω. Table 2 details the results of the SWT for the five different near-in relay faults. It is confirmed from Table 2 that the SWT precisely detects the near-in relay faults.

4.2 The Efficacy of SWT for the Far-End Relay Faults The SWT has been explored for the far-end relay faults. Figure 7 shows the circuit-1 and circuit-2 currents for an A1B1C1G far-end relay fault at 195 km at 0.1 s with RF = 2.25 Ω and RG = 3.25 Ω. Figure 8 shows the SWT coefficients of the circuit-1 and circuit-2 currents. Table 3 reports the results for far-end relay faults.



Fig. 7 Circuit-1 and circuit-2 currents for A1B1C1G far-end relay fault at 195 km


Fig. 8 SWT coefficients of circuit-1 and 2 currents for A1B1C1G far-end relay fault at 195 km

It is inspected from Table 3 that the efficacy of the SWT remains unaltered for the far-end relay faults.

4.3 The Efficacy of SWT for the Converting Faults The SWT has been investigated for the converting faults. Figure 9 shows the circuit-1 and circuit-2 currents of the WFIDCTL when an initial A1G fault at 0.05 s is converted into an A1B1C1G fault at 100 km at 0.2 s with RF = 1.75 Ω and RG = 2.75 Ω. Figure 10 exemplifies the SWT coefficients of the circuit-1 and circuit-2 currents.


Table 3 Results of SWT for different far-end relay faults (SWT coefficients)

Fault type          | Phase-A1     | Phase-B1     | Phase-C1     | Phase-A2     | Phase-B2     | Phase-C2
A1B1C1G (195 km)    | 7.7157 × 10³ | 9.9851 × 10³ | 9.6802 × 10³ | 263.7880     | 247.0019     | 234.4003
A1B1A2B2G (196 km)  | 1.3899 × 10⁴ | 1.3059 × 10⁴ | 467.1882     | 1.3899 × 10⁴ | 1.3059 × 10⁴ | 467.1882
C1C2G (197 km)      | 783.7513     | 810.6755     | 9.0926 × 10³ | 783.7513     | 810.6755     | 9.0926 × 10³
A1A2B2G (198 km)    | 8.2579 × 10³ | 482.5384     | 514.6460     | 1.3192 × 10⁴ | 1.5552 × 10⁴ | 501.6608
A2B2C2G (199 km)    | 319.7346     | 275.6656     | 313.0196     | 8.0687 × 10³ | 1.2111 × 10⁴ | 9.3623 × 10³


Fig. 9 Currents when A1G fault is converted into A1B1C1G fault at 100 km


Fig. 10 SWT coefficients of DCTL currents when A1G fault is converted into A1B1C1G fault


Table 4 Results of SWT for different converting faults Fault

Converted fault

SWT coefficients Phase-A1 Phase-B1 Phase-C1 Phase-A2 Phase-B2 Phase-C2

A1G (0.05)

A1B1C1G 1.0361 × (0.2) 104

7.4399 × 1.0666 × 243.8407 103 104

A2G (0.075)

A2B2G (0.175)

404.8313

403.8857

B1C1G (0.06)

B1G (0.16)

1.5788 × 103

1.1410 × 6.1709 × 688.6479 104 103

A1G (0.07)

A2B2G (0.18)

2.7286 × 103

399.5339

429.7451

1.3193 × 104

1.2161 × 450.4424 104

701.2517

1.0600 × 668.4992 104

9.0260 × 103

7.2728 × 7.0097 × 103 103

A2B2C2G B1G (0.091) (0.1808)

432.6832

1.3252 × 104

283.6771

260.6055

1.2205 × 453.3672 104 645.6566

725.8445

The fault factors chosen are FL = 100 km, RF = 1.75 Ω and RG = 2.75 Ω. Table 4 reports the results for different converting faults.

4.4 The Efficacy of SWT for the Cross-Country Faults The SWT is tested for different cases of cross-country faults. Figure 11 depicts the currents of circuit-1 and circuit-2 when the WFIDCTL is simulated for a cross-country A2G fault at 50 km and a B2C2G fault at 150 km at 0.075 s with RF = 3.05 Ω and RG = 1.05 Ω. Figure 12 shows the SWT coefficients of the WFIDCTL currents. Table 5 presents the results for different cross-country faults.


Fig. 11 DCTL currents for cross-country fault A2G at 50 km and B2C2G at 150 km




Fig. 12 SWT coefficients of DCTL currents for A2G and B2C2G cross-country fault

Table 5 Results of SWT for different cross-country faults (SWT coefficients)

Fault-1          | Fault-2         | Phase-A1     | Phase-B1     | Phase-C1     | Phase-A2     | Phase-B2     | Phase-C2
A2G (50 km)      | B2C2G (150 km)  | 273.5590     | 269.2363     | 268.7989     | 8.7411 × 10³ | 4.0714 × 10³ | 4.1746 × 10³
A1B1G (110 km)   | C1G (90 km)     | 7.1535 × 10³ | 7.1888 × 10³ | 3.8619 × 10³ | 243.7163     | 248.6825     | 256.6200
B2C2G (70 km)    | A2G (130 km)    | 247.3766     | 306.5399     | 276.0775     | 3.2320 × 10³ | 1.1399 × 10⁴ | 1.0973 × 10⁴
C1G (120 km)     | B1G (80 km)     | 501.9318     | 7.0200 × 10³ | 9.2192 × 10³ | 482.6128     | 452.6856     | 447.9645
A2G (60 km)      | B2G (140 km)    | 437.7010     | 448.3879     | 463.4406     | 1.4657 × 10⁴ | 5.4796 × 10³ | 487.8947

4.5 The Efficacy of SWT for the Inter-circuit Faults The SWT is tested for different cases of inter-circuit faults. Figure 13 depicts the currents of circuit-1 and circuit-2 when the WFIDCTL is simulated for an inter-circuit A1B1G and B2C2G fault at 85 km at 0.04 s with RF = 2.85 Ω and RG = 1.45 Ω. Figure 14 depicts the SWT coefficients of the circuit-1 and circuit-2 currents. Table 6 presents the results for the inter-circuit faults. It is examined from Table 6 that the SWT performs well for the detection of inter-circuit faults.


Fig. 13 Circuit-1 and circuit-2 currents for inter-circuit fault A1B1G and B2C2G at 85 km


Fig. 14 SWT coefficients for inter-circuit fault A1B1G and B2C2G

5 Conclusion The Symlet-2 wavelet transform (SWT) proves to be very efficient under varied fault categories for the WFIDCTL. The SWT coefficients of the fault currents for both circuits are assessed. The fault factors of the WFIDCTL are varied, and it is discovered that the variation in fault factors does not influence the fidelity of the SWT. The outcomes substantiate that the SWT has the competence to protect the WFIDCTL against different fault categories.


Table 6 Results of SWT for different inter-circuit faults (SWT coefficients)

Fault-1   | Fault-2   | Phase-A1     | Phase-B1     | Phase-C1     | Phase-A2     | Phase-B2     | Phase-C2
A1B1G     | B2C2G     | 1.7603 × 10⁴ | 1.3445 × 10⁴ | 478.1081     | 472.0364     | 1.5203 × 10⁴ | 1.6073 × 10⁴
B1G       | A2B2C2G   | 713.5010     | 1.4915 × 10⁴ | 676.3567     | 2.6852 × 10⁴ | 2.3359 × 10⁴ | 2.7247 × 10⁴
C1G       | A2B2G     | 461.3480     | 489.2221     | 1.5148 × 10⁴ | 1.7200 × 10⁴ | 1.5662 × 10⁴ | 486.3861
A1B1C1G   | C2G       | 2.4512 × 10⁴ | 2.3716 × 10⁴ | 2.1239 × 10⁴ | 564.8310     | 627.3734     | 1.3271 × 10⁴
B1C1G     | B2G       | 400.2502     | 1.2047 × 10⁴ | 1.4051 × 10⁴ | 411.1600     | 7.8248 × 10³ | 385.6059

References 1. Gaur, V. K., & Bhalja, B. (2017). New fault detection and localisation technique for doublecircuit three-terminal transmission line. IET Generation, Transmission and Distribution, 12(8), 1687–1696. 2. Kapoor, G. (2019). A protection technique for series capacitor compensated 400 kV double circuit transmission line based on wavelet transform including inter-circuit and cross-country faults. International Journal of Engineering, Science and Technology, 11(2), 1–20. 3. Gautam, N., Ali, S., Kapoor, G. (2018). Detection of fault in series capacitor compensated double circuit transmission line using wavelet transform. In Proceedings of the IEEE International Conference on Computing, Power and Communication Technologies (GUCON) (pp. 769–773). Greater Noida, India: IEEE. 4. Hosseini, K., Tayyebi, A., Ahmadian, M. B. (2017). Double circuit Transmission line short Circuit fault location using wavelet transform and MLP. In Proceedings of the IEEE Iranian Conference on Electrical Engineering (pp. 1336–1342. Tehran, Iran: IEEE. 5. Kapoor, G. (2018). Mathematical morphology based fault detector for protection of double circuit transmission line. ICTACT Journal of Microelectronics, 4(2), 589–600. 6. Gawande, P. N., Dambhare, S. S. (2016). Enhancing Security of distance relays during Power swing Unblocking function for double circuit Transmission lines: A differential power approach. IET Transmission and Distribution, 1–6. 7. Kang, N., Chen, J., & Liao, Y. (2015). A fault-location algorithm for series-compensated Double-Circuit Transmission lines using the Distributed Parameter line model. IEEE Transactions on Power Delivery, 30(1), 360–367. 8. Sharma, N., Ali, S., & Kapoor, G. (2018). Fault detection in wind farm integrated series capacitor compensated transmission line using Hilbert Huang transform. In Proceedings of the IEEE International Conference on Computing, Power and Communication Technologies (GUCON) (pp. 774–778). Greater Noida, India: IEEE. 9. Kapoor, G. (2018). Wavelet transform based detection and classification of multi-location three phase to ground faults in twelve phase transmission line. Majlesi Journal of Mechatronic Systems, 7(4), 47–60. 10. Kapoor, G. (2018). Six-phase transmission line boundary protection using wavelet transform. In Proceedings of the 8th IEEE India International Conference on Power Electronics (IICPE). Jaipur, India: IEEE. 11. Kapoor, G. (2018). Fault detection of phase to phase fault in series capacitor compensated sixphase transmission line using wavelet transform. Jordan Journal of Electrical Engineering, 4(3), 151–164.

Predicting the Time Left to Earthquake Using Deep Learning Models Vasu Eranki, Vishal Chudasama, and Kishor Upla

1 Introduction Due to the destructive nature of earthquakes, predicting their occurrence is an important task for the earth science community. There are mainly three parameters used in an earthquake forecast: first, when it will occur; second, the magnitude of the earthquake; and third, where it will occur. In this manuscript, the focus is on the first of these, predicting the timing of an earthquake; to be more specific, the predictions concern the time left before the next earthquake. To predict this timing, an experiment has been carried out in the laboratory on rock in a double direct shear geometry subjected to bi-axial loading, as depicted in Fig. 1a. Two fault gouge layers are sheared simultaneously while subjected to a constant normal load and a prescribed shear stress. The acoustic data is recorded by a piezoceramic (PZT) sensor (as displayed in Fig. 1a). The laboratory faults fail in repetitive cycles of stick and slip that are meant to mimic the cycle of loading and failure on tectonic faults, as depicted in Fig. 1b. However, the timing of an earthquake is based on a measure of fault strength: when a laboratory earthquake occurs, this stress drops unambiguously. Many methods have been proposed in order to predict the earthquake [2–5]; however, all these methods use traditional machine learning algorithms. Recently, owing to the massive amount of available data and highly powerful graphical processing units (GPUs), deep learning (especially the deep neural network (DNN)) has obtained remarkable performance compared to traditional machine learning algorithms. Within DNNs, long short-term memory (LSTM), proposed by Hochreiter et al. [6], is used for prediction tasks; a simpler version of the LSTM called the gated recurrent unit (GRU) was later proposed by Cho et al. [7]. We perform the earthquake prediction over

V. Eranki · V. Chudasama · K. Upla (B) Sardar Vallabhbhai National Institute of Technology (SVNIT), Surat, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_54


Fig. 1 The laboratory experimental setup along with the results obtained from shear stress [1]

these models and found that the GRU is favoured over the LSTM, as the former is comparatively computationally cheaper and has fewer parameters to train. In this paper, we introduce DNN-based models built on different combinations of GRUs to predict the time to the next earthquake. The main contributions in this manuscript are as follows:
• We propose different deep neural network models based on the GRU which can predict the time to the next earthquake precisely.
• We compare the different combinations of DNN-based models and analyze them with different training strategies.

2 Related Work Many machine learning algorithms have been proposed in the literature to predict the earthquake. Rouet-Leduc et al. [2] proposed a model which can predict quasiperiodic laboratory earthquakes with continuous acoustic data by using a random forest method. These authors have further extended their work and proposed a new model in [3] that infer the fault zone frictional characteristics and predict the state of stress by using a gradient boosted tree approach. Recently, authors in [4] conducted one experiment upon slow earthquakes in the Cascadia subduction zone by employing a random forest method [8]. Hulbert et al. [5] conduct another experiment to study the similarity between slow and fast earthquakes using a gradient boosted tree algorithm [9]. Recently, deep learning (especially recurrent neural network (RNN)) has achieved a remarkable performance in the prediction tasks. The RNN model is further modified in terms of long short-term memory (LSTM) [6] as an improvement to the RNN architecture as it solved the problem of exploding/vanishing gradient and was able to remember long sequences which RNN cannot able to accomplish. Recently, Shi et al.


[10] propose a combination of convolution and LSTM-based end-to-end trainable models for the precipitation nowcasting problem and prove that their model outperforms the simple LSTM model. The LSTM model was further simplified in terms of the gated recurrent unit (GRU) [7], where a GRU cell is similar to an LSTM cell except that the GRU cell does not require an output gate. Ballas et al. [11] propose a model combining GRU and convolution layers for learning video representations. Similarly, Zhang et al. [12] introduce a combined convolution-GRU model to detect hate speech on Twitter. In this work, we use different deep learning models to predict the time left to an earthquake. To the best of our knowledge, this is the first work in which deep learning is used to predict the occurrence of an earthquake.

3 Methodology The network architecture of the proposed models is displayed in Fig. 2, in which five different architecture designs are shown. The first model utilizes five fully connected (dense) layers, while the second and third models consist of five LSTM and GRU layers, respectively, followed by one dense layer. The last two models are two variants with a combination of convolution and GRU layers: the first variant is based on two GRUs followed by one convolution layer, while the second variant is based on one convolution layer followed by two GRU layers. In both of these variants (see the fourth and fifth models in Fig. 2), the combination layers are followed by two dense layers in order to map the final output. The kernel size is set to 1 with a stride value of 1 in all 1-D convolution layers. In all these models, the network design of the LSTM and GRU is the same as described in [6, 7]. Initially, the acoustic data is fed as input into the neural network models. To increase the forecasting capability on the testing data, Gaussian noise of mean 0 and width 0.5 is added to each chunk of 150,000 data points, and the high-frequency noise present in the chunk is then removed using the wavelet transform with a fourth-order Daubechies wavelet [13]. Out of the 17 seismic cycles present in the training dataset, only the laboratory earthquakes with distributions similar to those present in the test set were considered; therefore, only the 1st–3rd, 5th, 10th–12th and 14th–15th seismic cycles are used. Once a chunk has been cleaned of high-frequency noise, seventeen acoustic features are derived from the acoustic data, which are then fed into the models. Each model takes these seventeen features as input and predicts the time left to the earthquake as output.
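A minimal tf.keras sketch of the GRU-Conv1D variant described above is given below (two GRU layers, a 1-D convolution with kernel size 1 and stride 1, and two dense layers mapping to the output). The layer widths and the arrangement of the seventeen features as a length-one sequence are assumptions; the exact feature-map counts appear only in Fig. 2.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_gru_conv1d(n_features=17):
    return models.Sequential([
        layers.Input(shape=(1, n_features)),    # 17 acoustic features per chunk
        layers.GRU(64, return_sequences=True),  # first GRU block
        layers.BatchNormalization(),
        layers.GRU(64, return_sequences=True),  # second GRU block
        layers.BatchNormalization(),
        layers.Conv1D(64, kernel_size=1, strides=1, activation='relu'),
        layers.BatchNormalization(),
        layers.Flatten(),
        layers.Dense(32, activation='relu'),
        layers.Dense(1),                        # predicted time to failure
    ])
```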


Fig. 2 The network architecture of the proposed models. Here, BN represents the batch normalization layer. The number in each layer block indicates the number of feature maps

4 Result Analysis The acoustic dataset used for the experiments is publicly available at Kaggle [1]. The training dataset contains 629,145,480 data points, and the testing dataset consists of 2,624 segments, where each segment has 150,000 data points; each data point represents one acoustic measurement. This acoustic dataset is pre-processed as discussed in Sect. 3, and the processed dataset is then fed into the proposed models. All the proposed models use a batch size of 64 and are trained using the L1 loss function. L2 regularization of 0.01 is applied to prevent the models from overfitting. The Adam optimizer [14] is used with the learning rate left at its default value of 0.001, and the total number of epochs for training is 1000. The mean absolute error (MAE) is used as the evaluation metric for validating the performance of the proposed models: the predicted outputs obtained from the proposed models are uploaded to the Kaggle website, which calculates the MAE value from the predicted and true values. The lower the MAE value, the more accurate the prediction.
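Continuing the sketch from Sect. 3, the stated training configuration would look roughly as follows; 'x_train' and 'y_train' stand in for the pre-processed feature matrix and the time-to-failure targets, and the 0.01 L2 penalty would be attached to the layers through their kernel_regularizer argument.

```python
import tensorflow as tf

model = build_gru_conv1d()                                 # model sketched in Sect. 3
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='mean_absolute_error',                  # L1 loss on time to failure
              metrics=['mae'])                             # evaluation metric: MAE
# model.fit(x_train, y_train, batch_size=64, epochs=1000)  # batch size and epochs as stated
```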


Table 1 The comparison of the different DNN-based models in terms of their MAE value

Models       | Number of parameters | MAE value
Fully Dense  | 43,637               | 3.43682
LSTM         | 1,313,549            | 3.52085
GRU          | 1,751,397            | 3.39659
LSTM-Conv1D  | 1,359,617            | 3.39670
GRU-Conv1D   | 1,050,497            | 2.64390
Conv1D-LSTM  | 1,500,545            | 3.39670
Conv1D-GRU   | 1,154,689            | 3.39662

All the models are trained on a system with specifications of Nvidia-GeForce 1070 8 GB GPU, octa-core CPU with 32 GB RAM. Our implementation is based on Keras with Tensorflow as a backend [15]. Table 1 shows the comparison of the different DNN-based models in terms of their obtained MAE values. The corresponding number of training parameters are also mentioned in Table 1. In order to observe the effect of the combination of convolution and LSTM layers, we also train two additional models called LSTMConv1D and Conv1D-LSTM models. In both of these models, we replace the GRU unit (as depicted in the last two models in Fig. 2) with the LSTM model. The number of the feature maps and other hyper-parameters is kept the same as GRU-Conv1D and Conv1D-GRU models. From Table 1, one can observe that the fully dense model has fewer trainable parameters but the MAE value is higher than that of other models. Here, the GRU-Conv1D model obtains best MAE value with less number of training parameters than other models (except fully dense model). One can also found from Table 1 that the GRU model has better MAE value with less number of training parameters than LSTM model. From Table 1, one can notice that the GRU combination model outperforms LSTM combination models in terms of MAE measure with less number of training parameters. Hence, in Table 2, we compare only two GRU combination variants on three weight initialization strategies named Glorot uniform [17], He uniform [18] and He normal [18]. One can notice from Table 2 that the Glorot uniform weight initialization performs better in the case of GRU-Conv1D model, while for Conv1D-GRU model, all three weight initialization strategies obtain similar performance. This proves that Glorot weight initialization is the best weight initialization strategy in GRU-Conv1D Table 2 The comaprison of combination of GRU and convolution layer based models for different initialization strategies and without BN layer Models

Table 2 The comparison of the GRU and convolution layer combination models (MAE) for different weight initialization strategies and without the BN layer

Models | Glorot Uniform [17] | He Uniform [18] | He Normal [18] | Without BN layer [16]
GRU-Conv1D | 2.64390 | 3.39679 | 3.82095 | 3.39673
Conv1D-GRU | 3.39662 | 3.39673 | 3.39660 | 2.64607


To study the effect of the BN layer, we also train both models (i.e., GRU-Conv1D and Conv1D-GRU) without the BN layer, using Glorot weight initialization. One can observe that removing the BN layer degrades the prediction performance of the GRU-Conv1D model, while the Conv1D-GRU model without the BN layer obtains performance similar to that of the GRU-Conv1D model with it. This happens because the BN layer performs better only when it is followed by an activation function. From Table 2, we find that GRU-Conv1D with the BN layer and Conv1D-GRU without the BN layer obtain the better MAE measures. Hence, Fig. 3 compares the Conv1D-GRU model without the BN layer and the GRU-Conv1D model with the BN layer in terms of the predicted time to the next earthquake for the first 50 samples of the testing dataset.

5 Conclusion

In this paper, we have developed different deep learning based models to predict the time to an earthquake from acoustic data obtained from a laboratory setup. To the best of our knowledge, this is the first work to apply deep learning to predicting the time left to an earthquake. The acoustic data are pre-processed and fed into different deep neural network based models, which predict the time left until the earthquake. From the different experiments, we conclude that the GRU combinations perform better than the LSTM combinations while using fewer training parameters. We trained the proposed models with different weight initialization strategies and found that Glorot uniform weight initialization performs better than the others. In order to study the effect of the BN layer, we further trained the proposed models without the BN layer and observed that a BN layer followed by an activation function helps to improve the prediction performance. We also found that the Conv1D-GRU model without the BN layer has a performance similar to that of the GRU-Conv1D model with the BN layer.


Fig. 3 The performance comparison of the Conv1D-GRU and GRU-Conv1D models in terms of the time-to-failure value for the first 50 testing samples: (a) Conv1D-GRU without BN using Glorot uniform; (b) GRU-Conv1D with BN using Glorot uniform


References 1. Lanl earthquake prediction. https://www.kaggle.com/c/LANL-Earthquake-Prediction. 2. Rouet-Leduc, B., Hulbert, C., Lubbers, N., Barros, K., Humphreys, C. J., & Johnson, P. A. (2017). Machine learning predicts laboratory earthquakes. Geophysical Research Letters, 44(18), 9276–9282. 3. Rouet-Leduc, B., Hulbert, C., Bolton, D. C., Ren, C. X., Riviere, J., Marone, C., et al. (2018). Estimating fault friction from seismic signals in the laboratory. Geophysical Research Letters, 45(3), 1321–1329. 4. Rouet-Leduc, B., Hulbert, C., & Johnson, P. A. (2019). Continuous chatter of the cascadia subduction zone revealed by machine learning. Nature Geoscience, 12(1), 75. 5. Hulbert, C., Rouet-Leduc, B., Johnson, P. A., Ren, C. X., Rivière, J., Bolton, D. C., et al. (2019). Similarity of fast and slow earthquakes illuminated by machine learning. Nature Geoscience, 12(1), 69. 6. Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780. 7. Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. 8. Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32. 9. Chen, T., & Guestrin, C. (2016). Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 785–794). ACM. 10. Xingjian, S., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-K., & Woo, W.-c. (2015). Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing Systems (pp. 802–810). 11. Ballas, N., Yao, L., Pal, C., & Courville, A. (2015). Delving deeper into convolutional networks for learning video representations. arXiv preprint arXiv:1511.06432. 12. Zhang, Z., Robinson, D., & Tepper, J. (2018). Detecting hate speech on twitter using a convolution-gru based deep neural network. In European Semantic Web Conference (pp. 745–760). Springer. 13. Graps, A. (1995). An introduction to wavelets. IEEE Computational Science and Engineering, 2(2), 50–61. 14. Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 15. Ketkar, N. (2017). Introduction to keras. In Deep learning with Python (pp. 97–111). Springer. 16. Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. 17. Glorot, X., & Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (pp. 249–256). 18. He, K., Zhang, X., Ren, S., & Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1026–1034).

Fully Informed Grey Wolf Optimizer Algorithm Priyanka Meiwal, Harish Sharma, and Nirmala Sharma

1 Introduction

Nature-inspired algorithms (NIAs) provide reliability in solving optimization problems [2]. NIAs are split into two broad classes: swarm intelligence (SI) based algorithms and evolutionary algorithms (EAs). As the complexity of a problem increases, the nature of its solution also becomes more convoluted [9, 11]. Metaheuristic algorithms such as GWO [8], differential evolution (DE) [10], artificial bee colony (ABC) [1], power law-based local search in spider monkey optimization (PLSMO) [14] and particle swarm optimization (PSO) [3] are becoming powerful methods for solving many tough optimization problems, and the term metaheuristic is often used as a synonym for a search or optimization algorithm [6]. Swarm intelligence is motivated by principles observed in nature. SI-based algorithms employ a group of agents which interact locally with each other as well as with their environment. These agents follow simple rules without any central control of their random local behaviour, and the communication between such agents leads to the emergence of intelligent global behaviour that is unknown to the individual agent [13]. A 'swarm' is a disorganised group of moving individuals that forms a cluster (population); the cluster appears to move together even though each individual seems to move in an irregular direction [4].

P. Meiwal (B) · H. Sharma · N. Sharma Rajasthan Technical University, Kota, India e-mail: [email protected] H. Sharma e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_55


GWO is an SI-based algorithm that models the social hierarchy and hunting behaviour of grey wolves. To incorporate this behaviour, Mirjalili et al. [8] defined the positioning of each wolf by taking the mean of the positions of the alpha, beta and delta wolves, where alpha, beta and delta are the first-, second- and third-best solutions of the search space, respectively. In this paper, an innovative variant of GWO is introduced to enhance the exploration capacity. In the proposed work, the search procedure is guided by an arbitrarily selected solution of the search space along with the first-, second- and third-best solutions. The proposed variant is called the fully informed grey wolf optimizer algorithm (FIGWOA). In FIGWOA, the average of the positions of the whole grey wolf population is calculated to increase the exploration capacity. To validate FIGWOA, experiments are carried out on a set of 14 benchmark functions, and the obtained results are compared with state-of-the-art algorithms; the outcomes demonstrate the competitiveness of the proposed approach. The remainder of the paper is arranged as follows: Sect. 2 introduces the grey wolf optimizer; Sect. 3 introduces the proposed FIGWOA; Sect. 4 examines the performance of the proposed algorithm on the benchmark functions; and finally, the conclusion is given in Sect. 5.

2 Grey Wolf Optimizer

In this section, the inspiration for the method is explained first, and then the mathematical model and the algorithm are illustrated.

2.1 Inspiration

The grey wolf belongs to the Canidae family. Grey wolves are at the top of the food chain and prefer to live in a group; the average group size is 5–12 [8]. They have a very strict social dominance hierarchy, as shown in Fig. 1. The leaders are a male and a female, called alphas, which are responsible for decisions about hunting, resting sites and waking time. The alpha's decisions are followed by the pack, and in gatherings the entire pack acknowledges the alpha by holding their tails down. Interestingly, the alpha is not necessarily the strongest member of the pack, but the best at managing it, which shows that the organization and discipline of a pack matter more than its strength. The second level in the hierarchy of grey wolves is the beta. The betas are subordinate wolves that help the alpha in decision-making and other pack activities. The beta, which can be either male or female, is the second fittest wolf, and if the alpha passes away or becomes very old, the beta is probably the best candidate to become the alpha. It respects the alpha but commands the lower-level wolves, playing the role of an advisor to the alpha and a discipliner for the group.


The beta reinforces the alpha's instructions throughout the group and gives feedback to the alpha. The lowest ranking grey wolf is the omega, which plays the role of scapegoat. Omega wolves always have to submit to all the other dominant wolves and are the last wolves allowed to eat. It may seem that the omega is not an important individual in the pack, but it has been observed that the whole pack faces internal fighting and problems when the omega is lost, because the omegas absorb the venting of violence and frustration of all the wolves. This helps to satisfy the entire pack and maintain the dominance structure. In some cases, the omega also acts as the babysitter of the pack. If a wolf is not an alpha, beta or omega, it is called a subordinate (or delta in some references). Delta wolves have to submit to alphas and betas, but they dominate the omega. Scouts, sentinels, elders, hunters and caretakers belong to this category: scouts are responsible for watching the boundaries of the territory and warning the pack in case of any danger; sentinels protect and guarantee the safety of the pack; elders are experienced wolves who used to be alpha or beta; hunters help the alphas and betas when hunting prey and provide food for the pack; and caretakers are responsible for caring for the weak, ill and wounded wolves in the pack.

2.2 Mathematical Model and Algorithm

In this subsection, the mathematical models of the social hierarchy and of tracking, encircling and attacking prey are provided, and the GWO algorithm is then outlined.

1. Societal ranking: To model the social ranking of wolves, the fittest solution is considered the alpha wolf (α). Consequently, the second and third fittest solutions are classified as the beta wolf (β) and the delta wolf (δ), respectively, and the rest of the candidate solutions are assumed to be omega wolves (ω). In the GWO algorithm, the hunting is guided by α, β and δ, and the omega wolves follow these three wolves (Fig. 1).

2. Encircling prey: As mentioned above, grey wolves encircle the prey during the hunt. In order to model the encircling behaviour mathematically, the following equations are proposed:

\vec{D} = |\vec{C} * \vec{Z}_p(iter) - \vec{Z}_w(iter)|   (1)

\vec{Z}(iter + 1) = \vec{Z}_p(iter) - \vec{A} * \vec{D}   (2)


Fig. 1 Social behaviour of grey wolves

where iter indicates the current iteration, \vec{A} and \vec{C} are coefficient vectors, \vec{Z}_p(iter) is the position vector of the prey, and \vec{Z}_w(iter) indicates the position vector of a grey wolf. As presented in Eq. (2), the wolves decrease their distance from the prey's position. This distance depends on \vec{A} and \vec{D}, in which \vec{A} gradually decreases and \vec{D} is the distance from the location of the prey. Therefore, as the iteration number of the algorithm increases, the wolves get closer and closer to the prey; in other words, they encircle the prey, since their initial locations are determined randomly [5]. The vectors \vec{A} and \vec{C} are calculated as follows:

\vec{A} = 2\vec{a} * \vec{r}_1 - \vec{a}   (3)

\vec{C} = 2 * \vec{r}_2   (4)

where \vec{r}_1 and \vec{r}_2 are random vectors in [0, 1] and the components of \vec{a} decrease from 2 to 0 over the course of the iterations. If |A| > 1, the wolf moves far away from the prey, or one may assume that the alpha wolf is injured or tired. Only two main parameters need to be adjusted, a and C. The range of C is 0 ≤ C ≤ 2: when C > 1, a higher impact of the prey on the wolves' position change is considered, and when C < 1, it forces the wolves, including alpha, beta and delta, to move away from the current prey in the hope of finding a better prey (Fig. 2).

3. Hunting process: Grey wolves have the ability to recognize the location of the prey and encircle it. The hunt is usually guided by the alpha; the beta (β) and delta (δ) might also participate occasionally. However, in an abstract search space we have no idea about the location of the optimum (prey).


Fig. 2 2D position vectors and their possible next locations [8]

In order to mathematically simulate the hunting behaviour of grey wolves, we suppose that the α (first best candidate solution), β and δ have better knowledge about the potential location of the prey. Therefore, we save the first three best candidate solutions obtained so far and oblige the other search agents (including the ω) to update their positions according to the position of the best search agents, i.e. the α, β and δ wolves. The following formulas are proposed in this regard:

\vec{D}_\alpha = |\vec{C}_1 * \vec{Z}_\alpha - \vec{Z}|   (5)

\vec{D}_\beta = |\vec{C}_2 * \vec{Z}_\beta - \vec{Z}|   (6)

\vec{D}_\delta = |\vec{C}_3 * \vec{Z}_\delta - \vec{Z}|   (7)

\vec{Z}_1 = \vec{Z}_\alpha - \vec{A}_1 * \vec{D}_\alpha,  \vec{Z}_2 = \vec{Z}_\beta - \vec{A}_2 * \vec{D}_\beta,  \vec{Z}_3 = \vec{Z}_\delta - \vec{A}_3 * \vec{D}_\delta   (8)

\vec{Z}(iter + 1) = (\vec{Z}_1 + \vec{Z}_2 + \vec{Z}_3) / 3   (9)


Fig. 3 Position updating in GWO [8]

Figure 3 shows how a search agent updates its position according to α, β and δ in a 2D search space [8]. It can be observed that the final position lies at a random place within a circle defined by the positions of α, β and δ: these wolves estimate the position of the prey, and the other wolves update their positions randomly around it.

4. Exploitation process (attacking the prey): As described in the hunting process above, the grey wolves finish the hunt by attacking the prey when it stops moving. In order to mathematically model approaching the prey, we decrease the value of \vec{a}. Notice that the variation range of \vec{A} is also decreased by \vec{a}, because \vec{A} is calculated from \vec{a} by Eq. (3), so the two directly affect each other. In other words, \vec{A} is a random value in the interval [−2a, 2a], where a is decreased from 2 to 0 over the course of the iterations. When the random values of \vec{A} are in [−1, 1], the next position of a search agent can be anywhere between its current position and the position of the prey. When |A| > 1, the wolf moves far away from the prey, or one may assume that the alpha wolf is injured or tired, as shown in Fig. 4.


Fig. 4 Exploiting the prey versus exploring for the prey [8]: (a) if |A| < 1 the wolf attacks the prey; (b) if |A| > 1 the wolf moves far away from the prey

With the operators proposed so far, the GWO algorithm allows its search agents to update their position based on the location of the α, β, δ and attack towards the prey [8]. However, the GWO algorithm is prone to stagnation in local solutions with these operators. It is true that the encircling mechanism proposed shows exploration to some extent, but GWO needs more operators to emphasize exploration.
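For reference, the standard GWO update of Eqs. (3)–(9) can be summarized in a short NumPy sketch. This is not the authors' implementation; the function and variable names are ours, and the population handling is deliberately minimal.

```python
import numpy as np

def gwo_position_update(positions, fitness, a):
    """One GWO position update following Eqs. (3)-(9).

    positions : (N, dim) array of current wolf positions
    fitness   : (N,) array of objective values (lower is better)
    a         : scalar decreased linearly from 2 to 0 over the iterations
    """
    order = np.argsort(fitness)
    z_alpha, z_beta, z_delta = (positions[order[k]] for k in range(3))

    new_positions = np.empty_like(positions)
    for i, z in enumerate(positions):
        candidates = []
        for leader in (z_alpha, z_beta, z_delta):
            r1, r2 = np.random.rand(z.size), np.random.rand(z.size)
            A = 2 * a * r1 - a                 # Eq. (3)
            C = 2 * r2                         # Eq. (4)
            D = np.abs(C * leader - z)         # Eqs. (5)-(7)
            candidates.append(leader - A * D)  # Eq. (8)
        new_positions[i] = np.mean(candidates, axis=0)  # Eq. (9)
    return new_positions
```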

3 Proposed Fully Informed Grey Wolf Optimizer Algorithm (FIGWOA)

One of the main drawbacks of GWO is premature convergence. To reduce such incidents in the group, a different search tactic, fully informed learning [16], is used. In fully informed learning (FIL), each individual (wolf) collects information from the fittest solution and from all other neighbouring solutions to update its position in the search space. Mathematically, FIL is described in Eq. (10). The social behaviour of FIGWOA can be depicted as in Fig. 3.


Algorithm 1: Basic algorithm:

FIL = (Σ population) / (Total population)   (10)

As per the above discussion, the pseudocode of the proposed FIGWOA is shown in Algorithm 2 (see also Fig. 5).

Fig. 5 FIGWO working with the fully informed learning concept


Algorithm 2: Proposed fully informed grey wolf optimizer algorithm
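Algorithm 2 appears only as an image in the source, so its exact steps are not reproduced here. The sketch below is therefore a speculative reading, not the authors' algorithm: it assumes that the fully informed term of Eq. (10) (the mean of the whole population) acts as an additional guide alongside the alpha, beta and delta wolves; all names are ours.

```python
import numpy as np

def figwoa_position_update(positions, fitness, a):
    """Speculative FIGWOA update: alpha, beta, delta plus the population mean (Eq. 10)."""
    order = np.argsort(fitness)                              # lower fitness = better
    guides = [positions[order[k]] for k in range(3)]         # alpha, beta, delta
    guides.append(positions.mean(axis=0))                    # fully informed term of Eq. (10)

    new_positions = np.empty_like(positions)
    for i, z in enumerate(positions):
        moves = []
        for g in guides:
            r1, r2 = np.random.rand(z.size), np.random.rand(z.size)
            A, C = 2 * a * r1 - a, 2 * r2                    # Eqs. (3) and (4)
            moves.append(g - A * np.abs(C * g - z))          # same form as Eqs. (5)-(8)
        new_positions[i] = np.mean(moves, axis=0)            # averaged as in Eq. (9)
    return new_positions
```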

4 Test Outcomes and Analysis The empirical performance evaluation of the proposed FIGWOA in terms of accuracy, efficiency and reliability is discussed in this section.

4.1 Test Enigmas Under Judgment

The effectiveness of the proposed FIGWOA is validated over 14 mathematical optimization problems (f1 to f14, shown in Table 1) of various characteristics and complexities.

4.2 Test Setting The outcomes achieved from the proposed FIGWOA are collected in the form of success rate (SR), average number of function evaluations (AFEs) and mean error (ME). The outcomes for these benchmark test enigmas are also obtained from the GWO [8], DE, LFSMO [15], ABC [1], PLSMO [14] and PSO [3] for the comparison purpose. It requires a number of parameters to be set, specifically, initialize the size of population, number of sites selected for neighbourhood search (out of n visited sites),

Name

Rastrigin

Alpine

Zakharov

Cigar

Brown3

Sum of different powers

Inverted cosine wave

Rotated hyperellipsoid

Beale function

Branins function

S. No.

1

2

3

4

5

6

7

8

9

10

Table 1 Test problems

i x1 2

4

i=1

j=1

 D i

i=1

x 2j 



2

2 f 9 (x) = [1.5 − x1 (1 − x2 )]2 + 2.25 − x1 1 − x22 + [2.625 − x1 (1 − x32 )]2 2 f 10 (x) = a x2 − bx12 + cx1 − d + e(1 − f ) cos x1 + e

f 8 (x) =

f 6 (x) =

|xi |i+1 2 2



 D−1 − xi +xi+1 +0.5xi xi+1 exp × I f 7 (x) = − i=1 8

D

D 2 f 4 (x) = x02 + 100,000 i=1 xi

2  D−1 2(xi+1 ) +1 2x 2 +1 xi f 5 (x) = i=1 + xi+1i

Objective function  D  2 xi − 10 cos(2π xi ) f 1 (x) = 10D + i=1 n f 2 (x) = i=1 |xi sin xi + 0.1xi |  D 2  D i x i 2  D f 3 (x) = i=1 xi + + i=1 2 i=1

[−5, 10], [0, 15]

[−4.5, 4.5]

[−65.536, 65.536]

[−5, 5]

[−1, 1]

[−1, 4]

[−10, 10]

[−5.12, 5.12]

[−10, 10]

[−5.12, 5.12]

Search range

2

2

30

10

30

30

30

30

30

30

Dm

(continued)

1.0E − 05

1.0E − 05

1.0E − 05

1.0e − 5

1.0E − 01

1.0E − 05

1.0E − 05

1.0E − 02

1.0E − 05

1.0E − 05

AE


Name

Six-hump camel back

Hosaki Problem

McCormick

Moved axis parallel hyperellipsoid

S. No.

11

12

13

14

Table 1 (continued)

f 13 (x) = sin(x1 + x2 ) + (x1 − x2 )2 − 23 x1 + 25 x2 + 1 D f 14 (x) = i=1 5i × xi2

Objective function f 11 (x) = 4 − 2.1x12 + x14 /3 x12 + x1 x2 + −4 + 4x22 x22 f 12 (x) = 1 − 8x1 + 7x12 − 7/3x13 + 1/4x14 x22 exp(−x2 ) [−5.12, 5.12]

[−1.5, −3], [3, 4]

[0, 5], [0, 6]

[−5, 5]

Search range

30

30

2

2

Dm

1.0e − 15

1.0E − 04

1.0E − 6

1.0E − 05

AE



maximum number of iterations and the stopping criterion. The following parameter setting is assumed while implementing the proposed algorithm:

– Number of simulations/runs = 30
– Population of grey wolves = 50
– Maximum number of function evaluations (FE max) = 200,000
– Maximum number of iterations = 5000

4.3 Outcomes Evaluation of Tests

Tables 2, 3 and 4 present the statistical outcomes for the benchmark functions of Table 1 under the experimental settings described in Sect. 4. These tables show the outcomes of the proposed and the other examined algorithms in terms of average function evaluations (AFE), success rate (SR) and mean error (ME). Here, SR represents the number of times the algorithm achieved the function optimum within the acceptable error in 30 runs, and AFE is the average number of function evaluations used by the algorithm over the 30 runs to reach the termination criterion. Studying these outcomes, it can be said that FIGWOA outperforms the compared algorithms most of the time in terms of accuracy, reliability and efficiency. Some other statistical tests, such as the Mann–Whitney U rank-sum test, acceleration rate (AR), boxplots and performance indices, have also been carried out in order to analyse the behaviour of the algorithm more intensively.

Table 2 Comparison based on AFEs (TP: test problem)

TP | FIGWOA | DE | GWO | PLSMO | PSO | ABC
fb1 | 2691.66 | 32,693.75 | 2713.33 | 117,022.1 | 0 | 41,050.5
fb2 | 10,296.66 | 63,000 | 21,898.33 | 50,570.86 | 0 | 27,780.06
fb3 | 7290 | 40,496.67 | 7518.33 | 23,150.2 | 69,030 | 34,818.33
fb4 | 5758.33 | 22,465 | 16,120 | 13,073.36 | 34,395 | 20,741.67
fb5 | 3345 | 45,365 | 3366.67 | 23,699.6 | 70,065 | 41,708.33
fb6 | 1261.67 | 22,858.33 | 1270 | 16,922.16 | 39,491.66 | 11,436.73
fb7 | 14,363.33 | 18,285 | 24,970 | 103,017.8 | 69,686.66 | 0
fb8 | 4361.66 | 20,151.66 | 4481.67 | 18,972.56 | 31,930 | 19,420
fb9 | 52,863.33 | 5650 | 93,180 | 19,167.3 | 55,573.33 | 0
fb10 | 75,541.66 | 5777.08 | 114,461.66 | 34,937.66 | 34,936.67 | 114,522.6
fb11 | 3398.33 | 5235 | 3890 | 12,104.83 | 10,001.66 | 164,416.5
fb12 | 63,886.67 | 1423.33 | 92,203.33 | 703.06 | 1531.66 | 1193.43
fb13 | 13,413.33 | 2787.03 | 15,641.66 | 1691.53 | 3586.66 | 27,651.03
fb14 | 8520 | 60,123.33 | 8963.33 | 35,040.03 | 105,511.67 | 63,685


Table 3 Comparison based on SR out of 30 runs (TP: test problem)

TP | FIGWOA | DE | GWO | PLSMO | PSO | ABC
fb1 | 30 | 24 | 30 | 16 | 0 | 30
fb2 | 30 | 2 | 28 | 30 | 0 | 30
fb3 | 30 | 30 | 30 | 30 | 30 | 30
fb4 | 30 | 30 | 29 | 30 | 30 | 30
fb5 | 30 | 30 | 30 | 4 | 30 | 30
fb6 | 30 | 30 | 30 | 0 | 12 | 30
fb7 | 29 | 30 | 29 | 30 | 0 | 30
fb8 | 30 | 30 | 30 | 29 | 30 | 30
fb9 | 30 | 25 | 26 | 30 | 30 | 0
fb10 | 30 | 24 | 23 | 30 | 30 | 5
fb11 | 30 | 30 | 30 | 30 | 30 | 2
fb12 | 30 | 30 | 25 | 30 | 30 | 30
fb13 | 30 | 27 | 30 | 30 | 30 | 30
fb14 | 30 | 30 | 30 | 30 | 30 | 30

Table 4 AR of FIGWOA as compared to GWO, DE, PLSMO, PSO and ABC (TP: test problem)

TP | GWO | DE | PLSMO | PSO | ABC
fb1 | 2 | 12.14 | 43.47 | 0 | 15.25
fb2 | 2.12 | 6.11 | 4.91 | 0 | 2.69
fb3 | 1.03 | 5.55 | 3.17 | 9.46 | 4.77
fb4 | 2.79 | 3.90 | 2.27 | 5.97 | 3.60
fb5 | 1 | 13.56 | 7.08 | 20.94 | 12.46
fb6 | 1 | 18.11 | 13.41 | 31.30 | 9.06
fb7 | 1.73 | 1.27 | 7.17 | 4.85 | 0
fb8 | 1.02 | 4.62 | 4.34 | 7.32 | 4.45
fb9 | 1.76 | 0.10 | 0.36 | 1.05 | 0
fb10 | 1.51 | 0.07 | 0.46 | 1.51 | 0.46
fb11 | 1.14 | 1.54 | 3.56 | 2.94 | 48.38
fb12 | 1.44 | 0.02 | 0.01 | 0.02 | 0.01
fb13 | 1.16 | 0.20 | 0.12 | 0.26 | 2.06
fb14 | 1.05 | 7.05 | 4.11 | 12.38 | 7.47

4.4 Statistical Analysis

The algorithms GWO [8], DE [10], ABC [1], PLSMO [14] and PSO [3] are compared based on SR, AFE and ME. After examining all of the comparisons, it is clearly seen that FIGWOA costs less on the 14 benchmark functions; all of these results are shown in Table 2. Furthermore, we compare the convergence speed of the considered algorithms by measuring the AFEs: a smaller AFE means a higher convergence speed, and a higher AFE means a lower convergence speed. In order to compare convergence speeds, we use the acceleration rate (AR) test, which is defined as follows, based on the AFEs of the two algorithms ALGO and FIGWOA:

AR = AFE_ALGO / AFE_FIGWOA   (11)

where ALGO belongs to {GWO, DE, PLSMO, PSO, ABC} and AR > 1 means that FIGWOA is faster. In order to investigate the AR of the proposed algorithm compared with the considered algorithms, the results of Table 2 are analysed and the AR values are calculated using Eq. (11), giving the comparison between FIGWOA and GWO, DE, PLSMO, PSO and ABC in terms of AR. It is clear from Table 4 that the convergence speed of FIGWOA is better than that of the considered algorithms for most of the functions (Fig. 6). The Mann–Whitney U rank-sum test [7], based on AFE at the 0.05 significance level, is also carried out, as shown in Table 5. This test follows the convention that when no notable variation is seen, the null hypothesis is accepted and an equal sign (=) is reported; when a notable variation is seen, the null hypothesis is rejected and the AFE of FIGWOA is compared with that of the other algorithm, with a '+' sign indicating that FIGWOA performs better and a '−' sign indicating that FIGWOA performs worse.
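As a quick illustration of Eq. (11), the AR values in Table 4 can be reproduced from the AFE values in Table 2; for example, for fb1:

```python
# Acceleration rate (Eq. 11) for test problem fb1, using the AFE values from Table 2
afe_figwoa = 2691.66
afe_others = {"DE": 32693.75, "GWO": 2713.33, "PLSMO": 117022.1, "PSO": 0.0, "ABC": 41050.5}

ar = {algo: afe / afe_figwoa for algo, afe in afe_others.items()}
# ar["DE"] ~ 12.14, ar["PLSMO"] ~ 43.47 and ar["ABC"] ~ 15.25, matching the fb1 row of
# Table 4; AR > 1 means FIGWOA needed fewer function evaluations than the other algorithm.
```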

Fig. 6 Boxplot graph for AFE (FIGWOA, DE, GWO, PSO, PLSMO, ABC)


Table 5 Comparison based on the average function evaluations and the Mann–Whitney U rank-sum test (TP: test problem)

TP | GWO | DE | PLSMO | PSO | ABC
F1 | = | + | + | + | +
F2 | + | + | + | + | +
F3 | + | + | + | + | +
F4 | + | + | + | + | +
F5 | = | + | + | + | +
F6 | = | + | + | + | +
F7 | + | + | + | + | −
F8 | + | + | + | + | +
F9 | + | − | − | − | −
F10 | + | − | − | − | +
F11 | + | + | + | + | +
F12 | + | − | − | − | −
F13 | + | − | − | − | +
F14 | + | + | + | + | +

This proposed algorithm performs outstanding for most of the standard problems as shown in Table 1. In comparison with GWO [8], FIGWOA performs outstanding for 12 standard problems (f b2 to f b4 , f b7 and f b14 ). In case of ABC [17], FIGWOA performs much better for 11 distinct problems (f b1 to f b6 and f b8 , f b10 , f b11 , f b13 and f b14 ). In comparison with DE [12], it gives better performance for 10 standard problems (f b1 to f b8 , f b11 and f b14 ). In case of PLSMO [14], FIGWOA performs much better for 10 distinct problems (f b1 to f b8 , fb11 and f b14 ), while in case of PSO [3], FIGWOA gives better performance for 10 distinct problems (f b1 to f b8 , f b11 and f b14 ).

5 Conclusion

The FIGWOA algorithm is developed to enhance the convergence speed of the GWO algorithm and to overcome drawbacks such as premature convergence and stagnation. The proposed algorithm has been extensively compared with other optimization algorithms, and through these experiments it can be stated that it is a competitive algorithm for solving continuous optimization problems. The empirical outcomes show that the proposed approach performs well: comparing FIGWOA with other well-known algorithms such as DE, GWO, PLSMO, PSO and ABC revealed that FIGWOA has better performance.


References 1. Bansal, N., Kumar, S., & Tripathi, A. (2016). Application of artificial bee colony algorithm using hadoop. In 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom) (pp. 3615–3619). IEEE. 2. Coit, D. W. & Smith, A. E. (1996). Reliability optimization of seriesparallel systems using a genetic algorithm. IEEE Transactions on reliability, 45(2), 254–260. 3. Jeyakumar, D. N., Jayabarathi, T., & Raghunathan, T. (2006). Particle swarm optimization for various types of economic dispatch problems. International Journal of Electrical Power & Energy Systems, 28(1) January 2006. 4. Kennedy, J. (2006). Swarm intelligence. In Handbook of Nature-Inspired and Innovative Computing (pp. 187–219). Springer. 5. Komaki, G. M., & Kayvanfar, V. (2015). Grey wolf optimizer algorithm for the two-stage assembly flow shop scheduling problem with release time. Journal of Computational Science, 8, 109–120. 6. Lones, M. A. (2014). Metaheuristics in nature-inspired algorithms. In Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation (pp. 1419–1422). ACM. 7. McKnight, P. E., & Najab, J. (2010). Mann-whitney u test. The Corsini encyclopedia of psychology (pp. 1–1). 8. Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46–61. 9. Mittal, H., Pal, R., Kulhari, A., Saraswat, M. (2016). Chaotic kbest gravitational search algorithm (ckgsa). In 2016 Ninth International Conference on Contemporary Computing (IC3) (pp. 1–6). IEEE. 10. Neri, F., & Tirronen, V. (2010). Recent advances in differential evolution: A survey and experimental analysis. Artificial Intelligence Review, 33(12), 61–106. 11. Pal, R., Mittal H., Pandey, A., & Saraswat M. (2016). Beecp: Biogeography optimizationbased energy efficient clustering protocol for hwsns. In 2016 Ninth International Conference on Contemporary Computing (IC3) (pp. 1–6). IEEE. 12. Qin, A. K., Huang, V. L., & Suganthan P. N. (2009). Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Transactions on Evolutionary Computation, 13(2), 398–417. 13. Roy, S., Biswas, S., & Chaudhuri, S. S. (2014). Nature-inspired swarm intelligence and its applications. International Journal of Modern Education and Computer Science, 6(12), 55. 14. Sharma, A., Sharma, H., Bhargava, A., & Sharma, N. (2017). Power law-based local search in spider monkey optimisation for lower order system modelling. International Journal of Systems Science, 48(1), 150–160. 15. Sharma, A., Sharma, H., Bhargava, A., Sharma, N., & Bansal, J. C. (2016). Optimal power flow analysis using Lévy flight spider monkey optimisation algorithm. International Journal of Artificial Intelligence and Soft Computing, 5(4), 320–352. 16. Sharma, K., Gupta, P. C., & Sharma, H. (2016). Fully informed artificial bee colony algorithm. Journal of Experimental & Theoretical Artificial Intelligence, 28(1–2), 403–416. 17. TSai, P.-W., Pan, J.-S., Liao, B.-Y., & Chu, S.-C. (2009). Enhanced artificial bee colony optimization. International Journal of Innovative Computing, Information and Control, 5(12), 5081–5092.

A Study to Convert Big Data from Dedicated Server to Virtual Server G. R. Srikrishnan, S. Gopalakrishnan, G. M. Sridhar, and A. Prema

1 Introduction

The data conversion is implemented for an existing company whose data is at present stored on a dedicated server, which leads to a problem of storage capacity. It is high time to decide how to store these data on a large-volume setup called a "virtual server". A dedicated server (DS) is a physical device that is completely dedicated to a single client to cater to its requirements. A DS works effectively, but when large data volumes arise, it should be migrated to a virtual server for better usage. A virtual server shares hardware and software resources with other operating systems or with a dedicated server. Virtual servers are cost-effective and provide faster resource control, so they are preferred in scenarios where large data is handled. A virtual server is capable of running its own operating system and thus enables a business to make the most of its investment in hardware (Fig. 1).

G. R. Srikrishnan · S. Gopalakrishnan (B) · A. Prema Department of Computer Science, School of Computing Sciences, VISTAS, Chennai, India e-mail: [email protected] G. R. Srikrishnan e-mail: [email protected] A. Prema e-mail: [email protected] G. M. Sridhar Department of Computer Science Christ College of Arts and Science, Bangalore, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_56


Fig. 1 Managed dedicated server

2 Literature Review

Existing algorithms relevant to the conversion of DS to VS (Author | Year | Title | Algorithm, followed by a short description):

– Senthil Kumar R, Latha Parthiban | 2018 | Privacy Preservation In Big Data With Encrypted Cloud Data Storage Using Walrus | Attribute-Based Encryption (ABE). Walrus is a storage service in Eucalyptus for storing the data; it stores the data in the cloud in the form of buckets. The encrypted information is stored in the cloud using Walrus and by performing ABE.

– R. Kavitha, E. Shanmugapriya [1] | 2019 | Medical big data analysis: preserving security and privacy with hybrid cloud technology [1] | Bilinear pairing cryptography. The proposed method investigates privacy and security with hybrid cloud computing. It is implemented with a bilinear pairing protocol to analyse the big data and an authenticated key management system, and provides lower computation cost, time consumption and computational complexity compared with the existing method.

– Licheng Wang, Qinlong Huang, and Yixian Yang [2] | 2017 | Secure and Privacy-Preserving Data Sharing and Collaboration in Mobile Healthcare Social Networks of Smart Cities [2] | ABE, IBBE. It focuses on healthcare data and social data sharing and collaboration in smart cities, and is developed based on ABE and IBBE.

– Dr. K. Baskaran and Dr. Ragesh G. K. [20] | 2016 | Cryptographically Enforced Data Access Control in Personal Health Record Systems | Revocable Multi Authority Attribute Set Based Encryption (R-MA-ASBE). The proposed method uses R-MA-ASBE; each patient's data is encrypted prior to uploading it to the cloud server.

– Tejaswini L and Dr. Nagesh H.R. [14] | 2017 | Study on Encryption methods to secure the Privacy of the data and Computation on Encrypted data present at cloud | Homomorphic encryption. It uses homomorphic encryption: the data stored in the cloud can be kept private, and computation on the ciphertext can also be performed. Since healthcare records need to be kept secure and private, this approach can be used.

– Jin Sun, Xiaojing Wang, Shangping Wang, Lili Ren [21] | 2018 | A searchable personal health records framework with fine-grained access control in cloud-fog computing | Search Encryption (SE) Technology and Attribute-Based Encryption (ABE). The article combines ABE and SE to implement a keyword search function with fine-grained access control; if the trapdoor and the keyword both match successfully, the cloud host provider returns the search results to the individual based on the search requirements.

– Ling Liu, Rui Zhang and Rui Xue [23] | 2017 | Searchable Encryption for Healthcare Clouds: A Survey | Searchable Encryption. This paper surveys searchable encryption for healthcare applications.

– Tingting Zhang, Yang Ming [22] | 2018 | Efficient Privacy-Preserving Access Control Scheme in Electronic Health Records System | Privacy-Preserving Access Control (PPAC) Mechanism. This paper defines a new PPAC mechanism for medical records, utilizing the attribute-based signcryption method to signcrypt the data.


3 Methodology

Conversion of a dedicated server to a virtual server follows these steps:

Step 1 The physical server should be capable of meeting the hardware requirements of the hypervisor (the hypervisor is the operating system that manages the virtual servers and requires suitable hardware to install).
Step 2 The data and appliance configuration on the current physical server need to be saved.
Step 3 The hypervisor has to be installed by inserting the installation disc.
Step 4 Boot the hypervisor console and refer to the virtual machine manual.
Step 5 The operating system should be installed using the disk image stored on a hard disk drive and then configured to install the appliance.
Step 6 This process should be repeated for all the appliances to be installed on the virtual server.

To make the conversion more effective, the following points need to be kept in mind:
1. Hypervisors are available as open-source or proprietary versions.
2. Some hypervisors allow memory and CPU power to be scaled up or down.
3. If multiple identical dedicated servers are converted to virtual servers, VMs can be moved from one server to another without any interruption to service.
4. Using a converter tool such as VMware Converter or XenConvert, a dedicated server can be changed to a virtual server (Fig. 2).

Fig. 2 Scalable virtual servers


4 Conclusion This research work aims to ease the work of data storage from dedicated servers to virtual servers. The steps discussed will help to do a faster conversion. The aim of this research work is to design a cost-effective and less equipped instrument for better storage capacity. The data stored or backed up should be recoverable at all stages. Conversion from Virtual servers to Cloud servers to the existing vehicle company is the next task.

References 1. Singh, G., Behal, S., & Taneja, M. (2015). Advanced memory reusing mechanism for virtual machines in cloud computing. In 3rd International Conference on Recent Trends in Computing 2015 published by ELSEVIER, July 2015 (pp. 91–103). 2. Chawda, R. M., & Kale, O. (2013). Virtual machine migration Techniques_in cloud environment_ A Survey. IJSRD. 3. Addawiyah, R., Mat Razali, R., Ab Rahman, N., & Zaini, M. S. (2014). Virtual machine migration implementation in load balancing for cloud computing. In IEEE Conference. 4. Shribman, A., & Hudzia, B. (2013). Pre-copy and post-copy VM live migration for memory intensive applications (pp. 539–547). Springer. 5. Kapil, D., Pilli, E. S., & Joshi, R. C. (2012). Live virtual machine migration Techniques_Survey and research challenges. IEEE. 6. Hines, M. R., Deshpande, U., & Gopalan, K. (2009). Post-copy live migration of virtual machines [online] Available: http://osnet.cs.binghamton.edu/publicationshines09postcopy_ osr.pdf. 7. Hines, M. R., & Gopalan, K. (2013). Post-copy based live virtual machine migration using adaptive pre-paging and dynamic self-ballooning. 8. Soni, G., & Kalra, M. (2013, December) Comparative study of live virtual machine migration techniques in cloud. IJCA, 84(14). 9. Ahmad, R. W., Gani, A., Ab, S. H., Hamid, M., Shiraz, F., Xia, S., & Madani, A. (2015). Virtual machine migraton in cloud data centers_a review taxonomy and open research issues (pp. 2473–2515). Springer. 10. Patel, P. D., Karamta, M., Bhavsar, M. D., & Potdar, M. B. (2014, January). Live virtual machine migration techniques in cloud computing a survey. IJCA, 86(16).

Fuzzy Logic Controller Based Solar Powered Induction Motor Drives for Water Pumping Application

Akshay Singhal and Vikas Kumar Sharma

1 Introduction

Energy is an important part of life, and the demand for energy is increasing day by day. The conventional sources of energy are limited and will be exhausted after some time, and climate change is a major problem, so it is necessary to use renewable energy resources. Solar energy is available everywhere and free of cost. In a developing country like India, people mainly depend on agriculture, so using solar energy in pumping applications for irrigation is an attractive choice. The induction motor drive (IMD) is the most attractive drive for industry: almost 85% of all motors are induction motors (IMs), and they consume more than 60% of the total power [1, 2]. The IM is a simple, rugged and singly excited machine [3]; because it is singly excited, its speed and torque characteristics are interdependent. Whenever the supply voltage changes, transients appear across the motor, which affects its performance. The multilevel inverter (MLI) offers improved power quality with reduced voltage stress across each switch [4]. The irradiation level varies throughout the day, so to extract maximum energy and improve the efficiency of the solar panels a maximum power point tracking (MPPT) technique is necessary [5]. To maintain the DC link voltage, a boost converter with the incremental conductance (InC) MPPT technique is added to the system; the boost converter maintains the DC link voltage despite the highly varying solar panel output. The space vector modulation (SVM) technique is used to drive the inverter; compared with the conventional technique, it provides higher output voltage, improved power quality and reduced harmonics [6].

A. Singhal (B) · V. K. Sharma Global Institute of Technology, Jaipur, Rajasthan, India e-mail: [email protected] V. K. Sharma e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_57


Fig. 1 Block diagram of closed-loop speed controller of IMD

The variable voltage variable frequency (VVVF) scheme, popularly known as V/f control, has a simple structure and is comparatively less costly. In this technique, the working flux density is maintained constant [7]. The actual speed is measured with a sensor and compared with the reference speed, and the error signal is fed to the controller, which may be a PI controller or an FLC. Because of the higher time constant of the PI controller, its response is sluggish, whereas the FLC gives a smoother response [8]. Figure 1 illustrates the block diagram of the closed-loop speed control of the IMD.
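Keeping the flux constant means keeping the voltage-to-frequency ratio roughly constant up to the rated frequency. A minimal sketch of such a V/f command law is given below; the rated values and the low-speed boost voltage are illustrative assumptions, not taken from the paper.

```python
def vf_voltage_command(f_cmd_hz, v_rated=400.0, f_rated=50.0, v_boost=10.0):
    """Constant V/f law: the stator voltage follows the commanded frequency so that
    V/f (and hence the working flux density) stays roughly constant below rated speed."""
    f_cmd_hz = min(abs(f_cmd_hz), f_rated)               # clamp to the rated frequency
    return v_boost + (v_rated - v_boost) * f_cmd_hz / f_rated

# e.g. vf_voltage_command(25.0) -> 205.0 V at half of the rated frequency
```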

2 VVVF or V/f Speed Control

The overall cost of the pumping set should be kept as low as possible. With the advancement of low-cost microcontrollers, it is possible to obtain a smoother speed response. The boost converter changes the output voltage magnitude while the inverter changes the frequency of operation. The SVM-based inverter relies on the Park and Clarke transformations.


Fig. 2 The ML-NPC inverter

2.1 Three-Level Neutral Point SVM Inverter

The ML-NPC inverter, also known as the diode-clamped inverter, is known for improved power quality. SVM has lower switching losses compared with the sinusoidal pulse width modulation (SPWM) technique and is represented by a unique switching sequence. In this technique, the reference voltage is generated in the αβ plane by the αβ transformation and the sector of the reference voltage is determined; with the help of the sector, the switching durations (ON-OFF times) are calculated. Figure 2 illustrates the circuit diagram of the ML-NPC inverter and its switching pattern. A multilevel inverter has n^X switching states, where X is the number of phases and n is the number of output voltage levels, so the three-phase three-level inverter has a total of 3^3 = 27 switching states, some of which are redundant and common [9, 10].
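The 27 switching states follow directly from choosing one of the three output levels for each of the three phases, as the small enumeration below shows (the P/O/N level labels follow the usual NPC convention and are used here only for illustration).

```python
from itertools import product

levels = ("P", "O", "N")                    # positive, neutral (zero) and negative levels
states = list(product(levels, repeat=3))    # one level per phase -> 3**3 combinations
assert len(states) == 27                    # matches the 27 switching states quoted above
```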

2.2 The DC-DC Converter or Boost Converter

The output voltage from the solar panels is lower than required for IMD operation, so it is stepped up using the DC-DC converter, which is an arrangement of an electronic switch and energy-storing elements. Figure 3 illustrates the boost converter.


Fig. 3 Boost converter (inductor, diode, switch T and output capacitor)

v_o = v_pv / (1 − α)   (1)

where v_o is the output voltage of the DC-DC converter, v_pv is the input voltage from the solar panels, and α is the switch duty cycle.
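A small, idealized helper function makes the relation of Eq. (1) concrete; the example voltage and duty cycle values are illustrative only, and converter losses are ignored.

```python
def boost_output_voltage(v_pv, duty_cycle):
    """Ideal (lossless) boost converter output, Eq. (1): v_o = v_pv / (1 - alpha)."""
    if not 0.0 <= duty_cycle < 1.0:
        raise ValueError("duty cycle must lie in [0, 1)")
    return v_pv / (1.0 - duty_cycle)

# e.g. a hypothetical 150 V panel voltage with a duty cycle of 0.75:
# boost_output_voltage(150.0, 0.75) -> 600.0 V at the DC link
```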

3 Simulation Results

The performance of the IMD is good in the steady-state condition, but a sudden change in the irradiation level produces transients across the IMD, which affects the speed response of the machine. Under loaded conditions, the machine shows good performance. Figures 4, 5, 6, 7, 8 and 9 illustrate the performance of the IMD with the FLC and the PI controller. The current THD of the machine is within the acceptable range, and the output voltage level of the inverter is also smooth in the steady state, as illustrated in Fig. 8. The FLC gives a shorter settling time in the speed response; Figs. 4 and 5 illustrate the speed response with the FLC and the PI controller, respectively. The torque ripples can also be reduced by the FLC.

Fig. 4 Speed response with FLC


Fig. 5 Speed response with PI controller

Fig. 6 Torque response with FLC

Fig. 7 Torque response with PI controller


Fig. 8 Stator current response with FLC

Fig. 9 MLI output voltage level and current harmonics with FLC

4 Conclusion

The IMD with closed-loop V/f speed control has good steady-state performance. With the advancement of low-cost microcontrollers, the technique can be implemented in irrigation water pumping applications, where transient performance may not be critical, making it effective in rural areas. The FLC gives a smoother response even when the irradiation level changes. The ML-NPC inverter provides good output voltage levels and less stress across each switch. The MATLAB/Simulink environment is used to evaluate the performance of the IMD. Future work involves the hardware implementation of the Simulink model.


References 1. Reza, C. M. F. S., Islam, M. D., & Mekhilef, S. (2014). A review of reliable and energy efficient direct torque controlled induction motor drives. Renewable and Sustainable Energy Reviews, 37, 919–932. 2. Alsofyani, I. M., & Idris, N. R. N. (2013). A review on sensorless techniques for sustainable reliablity and efficient variable frequency drives of induction motors. Renewable and Sustainable Energy Reviews, 24, 111–121. 3. Hannan, M. A., Ali, J. A., Mohamed, A., & Hussain, A. (2018). Optimization techniques to enhance the performance of induction motor drives: A review. Renewable and Sustainable Energy Reviews, 81, 1611–1626. 4. Giribabu, D., Vardhan, R. H., & Prasad, R. R. (2016). Multi level inverter fed indirect vector control of induction motor using type 2 fuzzy logic controller. In 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT) (pp. 2605–2610). 5. Anurag, A., Bal, S., Sourav, S., & Nanda, M. (2016). A review of maximum power-point tracking techniques for photovoltaic systems. International Journal of Sustainable Energy, 35(5), 478–501. 6. Durgasukumar, G., & Pathak, M. K. (2011). THD reduction in performance of multi-level inverter fed induction motor drive. In India International Conference on Power Electronics 2010 (IICPE2010) (pp. 1–6). 7. Singh, B., & Shukla, S. (2018). Induction motor drive for PV water pumping with reduced sensors. IET Power Electronics, 11(12), 1903–1913. 8. Sun, X., Koh, K., Yu, B., & Matsui, M. (2009). Fuzzy-logic-based $V/f$ control of an induction motor for a DC grid power-leveling system using flywheel energy storage equipment. IEEE Transactions on Industrial Electronics, 56(8), 3161–3168. 9. Kakodia, S. K., & Dyanamina, G. (2019). Field oriented control of three-level neutral point clamped inverter fed IM drive. In 2019 9th Annual Information Technology, Electromechanical Engineering and Microelectronics Conference (IEMECON) (pp. 24–29). https://doi.org/10. 1109/IEMECONX.2019.8877091. 10. Kakodia, S. K., & Dyanamina, G. (2019). Indirect vector control of Im drive fed with three level Dci. In 2019 IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT) (pp. 1–6). https://doi.org/10.1109/ICECCT.2019.8869144.

Identity Recognition Using Same Face in Different Context Manish Mathuria, Nidhi Mishra, and Saroj Agarwal

1 Introduction

Computer science is booming compared with all other industries today. The reason behind this rise is the demand for digital technologies. Computer systems have made human life very easy and controllable. The major roles of computer systems in life are as follows:

• Increasing the transparency rate
• Minimizing the effort required for efficiency
• Enhancing information security
• Easy and fast sharing of digital data.

Among all these application areas of computer science, the most important role is security, because when data is transparent, available online and easy to share, it is also very risky. Although some access privileges are already assigned, there is still a chance of data being leaked or stolen. Many techniques are available to protect data from unwanted access, among which biometrics is in most demand. Biometrics is nothing but the use of human body features to uniquely identify a person from others. The most popular technology is fingerprint recognition, which has been popular since the past century; fingerprints were recorded in the form of ink prints on paper and used as a signature.

M. Mathuria (B) · N. Mishra School of Computer Science & Engineering, Poornima University, Jaipur, Rajasthan, India e-mail: [email protected] N. Mishra e-mail: [email protected] S. Agarwal Department of Computer, Mahaveer College of Commerce, Jaipur, Rajasthan, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_58


Even today, in many places, illiterate people use a thumb impression to agree to an agreement. In India, the government has started to store the biometric features of Indian citizens to provide them with a unique identification under the Aadhar Card; it is the next step into the digital world. In the last 5 years, the acceptance of digital technologies has changed dramatically, and much research has been proposed to enhance efficiency and cope with population-scale issues. Biometric authentication includes not only the prints of the ten fingers but also the face, iris, palm, body structure, voice and DNA. New inventions and research are in progress to bring these into general use [1, 2].

2 Face Recognition for Security

The face is the only attribute of the human body that other humans recognize instantly. Once a face has first been identified, it is stored in human memory, and when the same person is met again, instant recall from memory identifies the person. In the absence of technology, the face was the only method for associating a human with a name, and people also used marks on the body to differentiate twins. In today's digital world, face recognition has become very important for authentication and authorization. UIDAI took a strong decision regarding the security issues in issuing new SIM cards among the telecom companies: fingerprints are no longer used for issuing new SIM cards because of certain discrepancies and legal issues with new SIM card connections, and UIDAI presented live face capture and matching against Aadhar Card IDs as the alternative to stop illegal SIM card issuance. Our face is a gift of God that inherits features from the father and mother, and the facial properties of the parents make their child recognizable among others. Male and female faces have different features, so identifying a female among females or a male among males is quite difficult. To address this, much research has been proposed in which faces are categorized according to their living region on earth, such as Asia, China, Japan, the US, South Africa, etc. [2, 3].

2.1 Face Recognition in AI

Face recognition in artificial intelligence is very helpful for decision-making through machine learning. The analysis of a face based on facial features comes under cognitive vision research. When a machine is artificially intelligent, it can perform an appropriate action according to the situation. Face recognition in AI has the following domains:

1. Facial expression for mood analysis, e.g. happy.
2. Face condition for patient analysis, e.g. good health.
3. Face property for mental analysis, e.g. no depression.
4. Face similarity for crowd analysis, e.g. no terrorist.

Beyond these domains, the main purpose of AI is to provide needed services on time using machine learning. For example, a user may be searching for a smartphone on a site; his face at different points on the site can be analysed to find his interest. This helps both the customer, who gets quicker results, and the seller, who understands the demand [3, 4].

2.2 Difficulties in Face Recognition

The recognition of faces using digital image processing has some limitations, as follows:

1. Camera capability
2. Environmental input, like absence of light
3. Signal noise
4. Face movement
5. Eye glasses
6. Mustache and beard
7. Age effect.

The above limitations of face recognition become problems for recognition, and to overcome them much research has focused on improving the face matching score. In the area of face identification, Dr. Robin Kramer of the University of Lincoln has taken these difficulties as a challenge and has produced many research papers on such identification problems, with years of experience in face recognition research. Regarding the variability of faces on photo ID proofs like the Driving License (DL), Passport, election card, Aadhar, PAN card, etc., he published a paper titled "Variability" [1]. This is a very genuine problem today: for example, an applicant at an exam centre is recognized by their admit card and photo ID card, and in many cases the face of the candidate does not match either the admit card photo or the photo ID card. This is because of the limitations discussed in the previous paragraph (Fig. 1).

530

M. Mathuria et al.

a) Driving License

c) College ID Card

b) Aadhar Card

d) Passport Size Photos

Fig. 1 Variability of photo in ID cards and Passport size photos

and Passport size photos are extracted for processing. From the given images, at first sight, one cannot directly state that all images belong to a single person. But it is true; these all images belong to a single person who is one of the authors of this paper. The aim of this paper is to present such a type of scenario where identification of the face from ID cards is not possible from the human eye at once [5, 6].

3 Functional Model for Face Recognition Figure 2 presenting a Functional Model for Face Recognition which is a convenient system of face processing, and how the different components are thought to relate to each other. Structural encoding produces a set of descriptions of the presented face, which include view-centered descriptions as well as more abstract descriptions both of the global configuration and of features. View-centered descriptions provide information for the analysis of facial speech, and for the analysis of expression. The more abstract, expression-independent descriptions provide information for the face recognition units (Table 1).

4 Result and Discussion After processing all the images extracted from different IDs, it is very clear that face in different contexts has different properties inherited either from capturing the environment (camera, light, photo paper) or age effect. Due to these differences, the face is cropped with the ROI at eyes. The cropped eyes and eyebrows among

Identity Recognition Using Same Face in Different Context

531

EXPRESSION ANALYSIS

VIEW CENTERED DESCRIPTION

FACIAL SPEECH ANALYSIS

EXPRESSION INDEPENDENT DESCRIPTION

STRUCTURAL ENCODING

DIRECTED VISUAL PROCESSING

FACE RECOGNITION UNIT

ACUMEN SYSTEM

PERSON IDENTITY NODES

NAME GENERATION

Fig. 2 Functional model for Face recognition

the images are then processed, taking the age difference into account, for exact matching of the images. In general, the black and white pixel counts are calculated, and their difference is used as the measure of dissimilarity. Some of the images given in the table, such as image no. 7, are strongly affected by the image processing operations because of their bad quality. These kinds of complexities typically arise in image-processing-based recognition. For an exact match, other parts may also be considered, such as the face shape and the spacing between nose, mouth and eyes. As in other recognition approaches, the face can also be recognised using eigenvalues, but image quality will likewise affect the performance of that recognition. To give a reliable answer after recognising a person against his or her ID image, a combination of matching approaches over different ROIs within the image really needs to be processed. A minimal sketch of the eye-ROI cropping and black/white pixel counting is given below.
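The eye-ROI cropping and black/white pixel counting described above were done in MATLAB; the following is only a rough Python/OpenCV sketch of the same idea. The cascade file is the standard one shipped with OpenCV, and the fixed threshold of 128 is an assumption, not a value taken from the paper.

```python
import cv2

def eye_roi_bw_counts(image_path, thresh=128):
    """Crop an eye/eyebrow ROI from a face photo and count black/white
    pixels after binarisation (rough analogue of the MATLAB steps)."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Standard OpenCV Haar cascade for eye detection (ships with OpenCV).
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) == 0:
        raise ValueError("no eye region found")

    # Bounding box around all detected eyes (eyes plus eyebrows ROI).
    x0 = min(x for x, y, w, h in eyes)
    y0 = min(y for x, y, w, h in eyes)
    x1 = max(x + w for x, y, w, h in eyes)
    y1 = max(y + h for x, y, w, h in eyes)
    roi = gray[y0:y1, x0:x1]

    # Binarise and count black/white pixels, as reported in Table 1.
    _, bw = cv2.threshold(roi, thresh, 255, cv2.THRESH_BINARY)
    white = int((bw == 255).sum())
    black = int((bw == 0).sum())
    return white, black
```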

5 Conclusion

The face is a very important property in biometric systems; it internally combines eyes, nose, mouth, eyebrows and face shape, inherited from the parents, to give meaningful identification. It is a genuinely complicated situation when a person's ID photograph does not match the person's actual face. The main objective of this research paper



Table 1 Extracted face image processing in MATLAB (columns in the original: face photo, gray image, B&W image, cropped features, extracted features, edge detection, B&W points; the images themselves are not reproduced here). The B&W point counts per image are:

Image 1: W = 1408, B = 64,128
Image 2: W = 1715, B = 63,821
Image 3: W = 1996, B = 63,540
Image 4: W = 1976, B = 63,560
Image 5: W = 1687, B = 63,849
Image 6: W = 1584, B = 63,952

is to highlight these types of verification problems related to mismatches with ID photos. To present the scenario, we have taken the example of photo IDs processed in MATLAB with an ROI, which does not yield satisfactory recognition owing to the bad quality of the images. Therefore, identification on the basis of face matching with ID cards alone is not guaranteed; in parallel, it requires some other authentication.

References

1. Ritchie, K. L., Kramer, R. S. S., & Burton, A. M. (2018). What makes a face photo a 'good likeness'? Elsevier.
2. Jenkins, R., White, D., Van Montfort, X., & Mike Burton, A. (2011). Variability in photos of the same face. Elsevier.



3. Neil, L., Cappagli, G., Karaminis, T., Jenkins, R., & Pellicano, E. (2016). Recognizing the same face in different contexts: Testing within-person face recognition in typical development and in autism.
4. Redfern, A. S., & Benton, C. P. (2018). Representation of facial identity includes expression variability. Elsevier.
5. Ozbek, M., & Bindemann, M. (2011). Exploring the time course of face matching: Temporal constraints impair unfamiliar face identification under temporally unconstrained viewing. Elsevier.
6. Wieser, M. J., & Brosch, T. (2012). Faces in context: A review and systemization of contextual influences on affective face processing. Frontiers in Psychology.

Using Hybrid Segmentation Method to Diagnosis and Predict Brain Malfunction

K. Dhinakaran and R. A. Karthika

1 Introduction

The hybrid segmentation technique [1], which combines two or more techniques, gives an efficient result that is far better than segmentation algorithms working separately. This applies in the image processing field mostly to the segmentation of medical images. Image segmentation refers to the separation of objects from their background [2] and can be based upon gray scale, texture, colour, motion and depth. In the feature extraction process, statistics are calculated from the gray-level co-occurrence matrix for distinct directions and distances [3]. After feature extraction, the features that are distinctive and useful for classification are selected [4]; thus, image processing is the heart of the classification technique. The proposed system focuses mainly on medical imaging for tumour extraction [5], especially in MRI images [6]. MRI gives accurate positioning of the hard and soft tissues and high resolution, and is most suitable for brain tumour diagnosis; hence this kind of imaging is the most suitable for identifying brain lesions or tumours [7, 8]. A brain tumour is an unusual white tissue that differs from the usual, normal tissues and can be found from the tissue structure: tumours generally contain holes or have the appearance of solid white tissue. Hence, threshold segmentation is combined closely with region growing to improve the result [9]. The random forest (RF) method is applied during the segmentation of the brain




in order to classify the voxels into three components: cerebrum, cerebellum and brain stem [10]. The usual random forest method has two main disadvantages in medical image segmentation: a large feature pool containing many poor features can reduce segmentation accuracy, and equal voting among the trees is not a good way to generate the classification result. Weighted voting and feature selection are applied in this paper to overcome these problems in brain component segmentation [11]. Image noise is an irregular variation of the colour or brightness information in an image and is a form of electronic noise; it can be produced by the hardware and sensor of a scanner or digital camera. Four different kinds of noise are applied to the segmented image [12]: Gaussian noise [13, 14], salt-and-pepper noise, Poisson noise and speckle noise, which can be introduced when the brain image is acquired. They are removed using four types of filters [15]: the median filter, mean filter, Wiener filter and Gaussian filter, in order to test the efficiency of these filters against the various kinds of noise [16]. To estimate the parametric values, the normalized absolute error, mean square error, peak signal-to-noise ratio and normalized correlation are used [17, 18].

2 Materials and Methods

a. BFR Algorithm

One of the emerging clustering methods is the Bradley-Fayyad-Reina (BFR) algorithm [19, 20]. Clustering algorithms for aggregated datasets fall into two classes: (1) hierarchical clustering and (2) point clustering. BFR is a point-clustering algorithm that avoids keeping several copies of the data. It is designed to process a database drawn from Gaussian distributions, normally distributed around the centroids [21]. The main idea of the BFR algorithm is to maintain cluster summaries in main memory. Three sets are involved: the discard set (DS), the compressed set (CS) and the retained set (RS). Values in the discard set belong to a cluster and are saved to disk, while the summary is kept in central memory. Compressed sets are mini-clusters of points that are close to each other but not close to any existing cluster; they do not fit into the big clusters and are kept on disk.

b. SOM Algorithm

Tumour images are identified using a technique named the self-organizing map (SOM) [20]. Random values are used at the beginning for the initial weight vectors. The main advantage of SOM is visualising low-dimensional views of high-dimensional data, so that the dimensionality of an image is reduced and it can be clustered. The SOM algorithm works through the following steps.

Step 1: Processing of the SOM commences by choosing random values for the initial weight vectors.



Step 2: An input vector is grabbed from the input image.
Step 3: Each input sample in the input set is traversed and the winning neuron is found.
Step 4: Finally, drawing vectors from the input image is repeated.

A minimal sketch of this procedure is given below.
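The chapter does not give code for the SOM; the following is a minimal NumPy sketch of steps 1-4 above. The grid size, learning-rate schedule and neighbourhood width are illustrative assumptions, not values from the paper.

```python
import numpy as np

def train_som(data, grid_w=8, grid_h=8, n_iter=1000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map for image feature vectors.

    data: (n_samples, n_features) array, e.g. flattened MRI patches.
    Returns the trained weight grid.
    """
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    # Step 1: random initial weight vectors.
    weights = rng.random((grid_h, grid_w, n_features))
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]   # grid coordinates for neighbourhood

    for t in range(n_iter):
        lr = lr0 * np.exp(-t / n_iter)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)  # shrinking neighbourhood
        # Step 2: grab one input vector.
        x = data[rng.integers(len(data))]
        # Step 3: find the winning neuron (best matching unit).
        dist = np.linalg.norm(weights - x, axis=2)
        by, bx = np.unravel_index(np.argmin(dist), dist.shape)
        # Update the winner and its neighbours.
        influence = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        weights += lr * influence[..., None] * (x - weights)
        # Step 4: repeat with the next drawn vector.
    return weights
```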

3 Proposed Work

The proposed work is a hybrid segmentation and classification process for brain images. The brain is subdivided into three components: cerebrum, cerebellum and brain stem. Segmentation is based on adapting the result to the volume of interest of the brain. The active appearance model (AAM) method is applied to localise the brain. The AAM basically helps in face recognition and in localising organs; however, a typical AAM looks at the whole image, which is not very efficient, especially for large-volume images. Here the AAM works around the centre of gravity of the brain instead of the whole image, which improves its accuracy and efficiency. For segmenting the brain components, the random forest (RF) method is used; during this segmentation it classifies the voxels into three categories [22]: cerebrum, cerebellum and brain stem. The conventional random forest method has major disadvantages for medical image segmentation: a big feature pool containing many poor features affects segmentation accuracy, and producing the classification result by equal voting of each tree is inappropriate. These problems are avoided here through feature selection and weighted voting. In addition, multithreading acts as a catalyst to speed up the segmentation process. A hedged sketch of this voxel classification idea follows.
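A sketch of the voxel classification with feature selection and weighted voting, using scikit-learn. The paper does not say how the tree weights are obtained; weighting each tree by its accuracy on a held-out set is used here purely as an illustration, and the feature-selection method (SelectKBest) is likewise an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

def segment_voxels(X_train, y_train, X_val, y_val, X_test, k_features=50):
    """Classify brain voxels (cerebrum / cerebellum / brain stem) using
    feature selection plus per-tree weighted voting."""
    classes, y_tr = np.unique(y_train, return_inverse=True)
    y_va = np.searchsorted(classes, y_val)

    # Feature selection: drop poor features from the large feature pool.
    sel = SelectKBest(f_classif, k=min(k_features, X_train.shape[1]))
    X_tr = sel.fit_transform(X_train, y_tr)
    X_va = sel.transform(X_val)
    X_te = sel.transform(X_test)

    forest = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    forest.fit(X_tr, y_tr)

    # Weight each tree by its accuracy on held-out voxels (instead of equal voting).
    w = np.array([t.score(X_va, y_va) for t in forest.estimators_])
    w = w / w.sum()

    votes = np.zeros((X_te.shape[0], len(classes)))
    for wi, tree in zip(w, forest.estimators_):
        pred = tree.predict(X_te).astype(int)
        votes[np.arange(len(pred)), pred] += wi
    return classes[votes.argmax(axis=1)]
```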

3.1 Types of Noise

Image noise is a random variation of the brightness or colour information in an image and is usually a form of electronic noise. It can be produced by the hardware and sensor of a digital camera or scanner. Image noise can also originate in film grain and in the unavoidable shot noise of an ideal photon detector. It is an undesirable by-product of image capture that adds spurious and extraneous information [23].

a. Salt and Pepper Noise

A kind of noise sometimes visible on medical images is salt-and-pepper noise. A safe and careful approach to reducing this kind of noise is to use a median or morphological filter; a contra-harmonic mean filter also helps to reduce both the salt and the pepper components. This noise is also described as impulsive, fat-tail distributed or spike noise.



b. Gaussian Noise

A statistical noise whose probability density function (PDF) is that of the normal distribution, also known as the Gaussian distribution. The values that the noise can take are therefore Gaussian distributed.

c. Speckle Noise

A granular noise that occurs inherently in, and degrades the quality of, synthetic aperture radar (SAR), active radar, medical ultrasound and optical coherence tomography images. The vast majority of surfaces, artificial or natural, are extremely rough on the scale of the wavelength, which gives rise to this noise.

d. Poisson Noise

Poisson noise is sometimes called photon noise. It is a basic form of uncertainty associated with the measurement of light, inherent to the quantised nature of light and the independence of photon detections. A short sketch of how these noise types can be added to a test image follows.
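A short sketch of adding the four noise types to a test image with scikit-image's random_noise. The variance and amount values are arbitrary illustrative settings, and the file name is a placeholder.

```python
import numpy as np
from skimage import io, img_as_float
from skimage.util import random_noise

# Load a brain MRI slice and normalise to [0, 1]; the path is a placeholder.
image = img_as_float(io.imread("brain_slice.png", as_gray=True))

noisy = {
    "gaussian":    random_noise(image, mode="gaussian", var=0.01),
    "salt_pepper": random_noise(image, mode="s&p", amount=0.05),
    "poisson":     random_noise(image, mode="poisson"),
    "speckle":     random_noise(image, mode="speckle", var=0.01),
}

for name, img in noisy.items():
    print(name, float(np.abs(img - image).mean()))  # rough distortion check
```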

3.2 Types of Filters

Filtering is used for enhancing or modifying an image; for example, an image can be filtered to emphasise or remove certain features. Image-processing operations implemented with filters include sharpening, smoothing and edge enhancement.

a. Mean Filter

Mean filtering is a simple and intuitive method for smoothing an image, i.e. it decreases the amount of intensity variation between neighbouring pixels. The mean filter reduces the noise introduced when the image is captured.

b. Median Filter

The nonlinear median filter is a digital filtering technique used to remove noise. For noise reduction, median filtering is a typical pre-processing step used to improve the results of later processing (for example, edge detection on an image).

c. Gaussian Filter

A Gaussian filter is a filter whose impulse response is a Gaussian function, used in electronics and signal processing. It has the property of having no overshoot to a step-function input while minimising the rise and fall time. This behaviour is closely connected to the fact that the Gaussian filter has the minimum possible group delay. It is considered the ideal time-domain filter, and these properties are required in areas such as digital telecommunication systems and oscilloscopes.



d. Wiener Filter

The Wiener filter is used in signal processing to produce an estimate of a target random process by linear time-invariant (LTI) filtering of an observed noisy process, assuming known noise spectra, a stationary signal and additive noise. The Wiener filter minimises the mean square error between the desired process and the estimated random process. A sketch applying the four filters to a noisy image is given below.
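A sketch of applying the four filters with OpenCV and SciPy; the kernel sizes are illustrative, and the input file name is a placeholder.

```python
import cv2
import numpy as np
from scipy.signal import wiener

# noisy: 8-bit grayscale brain image (e.g. output of the noise step above).
noisy = cv2.imread("noisy_brain_slice.png", cv2.IMREAD_GRAYSCALE)

filtered = {
    "mean":     cv2.blur(noisy, (3, 3)),
    "median":   cv2.medianBlur(noisy, 3),
    "gaussian": cv2.GaussianBlur(noisy, (3, 3), sigmaX=1.0),
    # SciPy's Wiener filter works on float arrays.
    "wiener":   np.clip(wiener(noisy.astype(np.float64), (3, 3)), 0, 255).astype(np.uint8),
}
```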

4 Experiment and Result

For the segmentation process, a set of different brain images is taken. Each image is first segmented using the AAM and random forest (RF) model. It is then subjected to various kinds of noise, and every noisy image is passed through the various types of filters. The following algorithm describes brain segmentation from the input brain image; a sketch of the pipeline is given below, and the comparison of the original and filtered images follows in Figs. 1-6 and Tables 1-3.

ALGORITHM: Hybrid segmentation method with slider control
INPUT: brain image of size m × n
OUTPUT: segmented brain image

1. Let I be the input image.
2. Convert the RGB image into a grey-scale image.
3. Use the gradient magnitude as the segmentation function.
4. Mark the foreground objects.
5. Compute the background markers.
6. Compute the watershed transform of the segmentation function.
7. Calculate the mean value of the input image, say Tm.
8. Let p0, p1, p2, …, pn be the pixel values, g the grey-scale value and N the maximum pixel value of the image.
9. Assign the threshold value t to the slider control, and calculate the mean pixel value separately for pixels below and above the threshold.
10. Visualise the result.
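A rough Python sketch of the watershed-based pipeline in steps 1-10, using scikit-image. The marker thresholds and the stand-in for the slider control are assumptions, since the chapter's MATLAB implementation is not shown.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2gray
from skimage.filters import sobel, threshold_otsu
from skimage.segmentation import watershed

def hybrid_segment(path, slider_threshold=None):
    """Steps 1-10: gradient as segmentation function, foreground/background
    markers, watershed, then threshold-dependent mean values. The
    'slider_threshold' argument stands in for the GUI slider control."""
    img = io.imread(path)
    gray = rgb2gray(img) if img.ndim == 3 else img / 255.0   # steps 1-2
    gradient = sobel(gray)                                     # step 3

    t = threshold_otsu(gray)
    markers = np.zeros_like(gray, dtype=np.int32)
    markers[gray > t] = 2        # step 4: foreground (brighter tissue)
    markers[gray < 0.2 * t] = 1  # step 5: background markers
    labels = watershed(gradient, markers)                      # step 6

    tm = gray.mean()                                            # step 7
    thr = slider_threshold if slider_threshold is not None else tm  # step 9
    low_mean = gray[gray <= thr].mean()
    high_mean = gray[gray > thr].mean()
    return labels, tm, low_mean, high_mean                      # step 10
```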



Fig. 1 Image quality metrics (NAE, NCC, PSNR, MSQE) for various filters applied in salt and pepper noise

Table 1 Image quality metrics for various filters applied in salt and pepper noise

Filter            MSQE      PSNR     NCC     NAE
Mean filter       139.740   26.6776  0.9745  0.2117
Median filter     23.9538   34.3371  0.9637  0.0615
Wiener filter     321.420   23.0601  0.9570  0.2520
Gaussian filter   123.627   27.2096  0.9694  0.2157

Fig. 2 Performance evaluation of brain image using salt and pepper noise

The original image is compared with the filtered image. Figure 1 shows the image quality metrics of the various filters under salt-and-pepper noise and is the graphical representation of Table 1, while Fig. 2 shows the performance evaluation of the brain image under salt-and-pepper noise. Figure 3 is the graphical representation of Table 2, and Fig. 4 shows the performance evaluation under Gaussian noise. The normalized absolute error (NAE) measures the difference between the de-noised image and the original; a perfect fit has a value of zero, and a large NAE value indicates bad image quality. Figure 5 is the graphical representation of Table 3. It is also observed that the median filter gives the highest PSNR and the lowest MSE and NAE values against salt-and-pepper noise, which establishes that the performance of the median filter is better than that of the others. When the NCC approaches 1, it shows that the corresponding filter is an optimal alternative that effectively removes salt-and-pepper noise (Fig. 6). A sketch of how these quality metrics can be computed is given below.
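A sketch of computing the four reported metrics for 8-bit images. Definitions of NCC and NAE vary in the literature; the normalisations used below are one common choice and may not match the chapter's exact formulas.

```python
import numpy as np

def quality_metrics(original, denoised):
    """Compute MSQE, PSNR, NCC and NAE between two 8-bit images."""
    o = original.astype(np.float64)
    d = denoised.astype(np.float64)

    mse = np.mean((o - d) ** 2)                        # MSQE / mean square error
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    ncc = np.sum(o * d) / np.sum(o * o)                # normalized cross-correlation
    nae = np.sum(np.abs(o - d)) / np.sum(np.abs(o))    # normalized absolute error
    return {"MSQE": mse, "PSNR": psnr, "NCC": ncc, "NAE": nae}
```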


Fig. 3 Image quality metrics (NAE, NCC, PSNR, MSQE) for various filters applied in Gaussian noise

Table 2 Image quality metrics for various filters applied in Gaussian noise

Filter            MSQE       PSNR     NCC     NAE
Mean filter       143.7919   26.5535  0.9769  0.3759
Median filter     104.8239   27.9262  0.9802  0.2398
Wiener filter     162.0831   26.0334  0.9755  0.4040
Gaussian filter   141.4453   26.6249  0.9719  0.3769

Fig. 4 Performance evaluation of brain image using Gaussian noise



Fig. 5 Image quality metrics (NAE, NCC, PSNR, MSQE) for various filters applied in speckle noise

Table 3 Image quality metrics for various filters applied in speckle noise

Filter            MSQE      PSNR     NCC     NAE
Mean filter       39.4629   32.1689  0.9734  0.1001
Median filter     43.0833   31.7877  0.9706  0.1001
Wiener filter     53.6478   30.8353  0.9777  0.1261
Gaussian filter   44.4006   31.6569  0.9682  0.1063

Fig. 6 Performance evaluation of brain image using speckle noise



5 Conclusion

This paper segments the brain from the non-brain portions of the image and compares the performance of four filters: the mean filter, Gaussian filter, median filter and Wiener filter. The filters de-noise images that are corrupted by four sorts of noise: salt-and-pepper, Gaussian, speckle and Poisson noise. From the results shown, it is clear that the median filter gives the best performance against salt-and-pepper noise, while the Wiener filter gives sensible performance against Gaussian, Poisson and speckle noise; the Wiener filter is therefore an appropriate filter to apply to medical images. For the segmentation process, a set of different brain images is taken; each image is first segmented with the AAM and random forest (RF) model and then subjected to the various kinds of noise. The results show that the proposed model works well for brain segmentation.

References

1. Khan, W. (2013). Image segmentation techniques: A survey. Journal of Image and Graphics, 1(4), 166-170.
2. Aparna, M., & Nichat, S. A. (2016, April). Ladhake, brain tumor segmentation and classification using modified FCM and SVM classifier. International Journal of Advanced Research in Computer and Communication Engineering, 5(4).
3. Chevaillier, B., Ponvianne, Y., Collette, J. L., Claudon, M., & Pietquin, O. (2008). Functional semi-automated segmentation of the renal DCE-MRI sequences. In ICASSP (pp. 525-528).
4. Mahindrakar, P., & Hanumanthappa, M. (2013, November-December). Data mining in healthcare: A survey of techniques and algorithms with its limitations and challenges. International Journal of Engineering Research and Applications, 3(6), 937-941, ISSN: 2248-9622.
5. Karthika, R. A., Dhinakaran, K., Poorvaja, D., & Shanbaga Priya, A. V. (2018). Cloud based medical image data analytics in healthcare management. International Journal of Engineering & Technology, 7(3.27), 135-137.
6. Clapp, W. L. (2009). The renal anatomy. In X. J. Zhou, Z. Laszik, T. Nadasdy, V. D. D'Agati, & F. G. Silva (Eds.), Silva's Diagnostic Renal Pathology. New York: Cambridge University Press.
7. Seerha, G. K., & Kaur, R. (2013). Review on recent image segmentation techniques. International Journal on Computer Science and Engineering (IJCSE), 5(02), 109-112.
8. Gupta, B., & Tiwari, S. (2014, April). Brain tumor detection using Curvelet transform and support vector machine. International Journal of Computer Science and Mobile Computing, 3(4), 1259-1264. ISSN 2320-088X.
9. Rouhi, R., & Jafari, M. (2015). Classification of benign and malignant breast tumors based on hybrid level set segmentation. Expert Systems with Applications.
10. Lakshmiprabha, S. (2008). A new method of image denoising based on fuzzy logic. International Journal of Software Computing, 3(1), 74-77.
11. Javed Iqbal, M., Faye, I., & Brahim Belhaouari, S. (2014, June). Efficient feature selection and classification of protein sequence data in bioinformatics. The Scientific World Journal, 2014. Article ID 173869.
12. Murali Mohan Babu, Y., Subramanyam, M. V., & Giri Prasad, M. N. (2012, April). PCA based image de-noising. SIPIJ, 2.
13. Pandey, R. (2008). An improved switching median filter for the uniformly distributed impulse noise removal. WASET, 28, 349-351.



14. Saleh Al-amri, S., Kalyankar, N. V., & Khamitkar, S. D. (2010, January). A comparative study of the removal noise from remote sensing image. IJCSI International Journal of Computer Science Issues, 7(1).
15. Padmavathi, G., Subashini, P., Muthu Kumar, M., & Thakur, S. K. (2010, January). Comparison of the filters used for underwater image-preprocessing. IJCSNS International Journal of Computer Science and Network Security, 10(1).
16. Mélange, T., Nachtegael, M., & Kerre, E. E. (2009). A fuzzy filter for the removal of the Gaussian noise in the colour image sequences, 1474-1479.
17. NandhaGopal, N. (2013). Automatic detection of brain tumor through magnetic resonance image. International Journal of Advanced Research in Computer and Communication Engineering, 2(4).
18. Parveen, & Singh, A. (2015). Detection of brain tumor in MRI images, using combination of fuzzy C-means and SVM. In 2015 2nd International Conference on Signal Processing and Integrated Networks (SPIN), ©2015 IEEE.
19. Platero, C., & Tobar, M. C. (2014). A multiatlas segmentation using graph cuts with applications to liver segmentation in CT scans. Computational and Mathematical Methods in Medicine, 2014. Article ID 182909, 16 p.
20. Pavan Kumar Reddy, Y., & Kesavan, G. (2016). Tumor identification using self organizing map and BFR algorithm. Middle-East Journal of Scientific Research, 24(6), 2110-2115. IDOSI Publications.
21. Vishnuvarthanan, A., Thiagarajan, A., Kannan, M., & Murugan, P. R. Short notes on unsupervised learning method with clustering approach for tumor identification and tissue segmentation in magnetic resonance brain images. Journal of Clinical & Experimental Neuroimmunology, 1, 101.
22. Patidar, P., Gupta, M., Srivastava, S., & Nagawat, A. K. (2010, November). Image de-noising by various filters for different types of noises. International Journal of Computer Applications, 9.
23. Thangavel, K., Manavalan, R., & Laurence Aroquiaraj, I. (2009). Removal of a speckle noise from ultrasound medical image based on the special filters: Comparative study. ICGST-GVIP Journal, 9(3), 25-32.

A Blockchain-Based Access Control System for Cloud Storage

R. A. Karthika and P. Sriramya

1 Introduction

Cloud computing encourages clients to outsource their data to cloud storage. Outsourcing data means that clients lose physical control over their own data, which makes remote data integrity verification an essential challenge for cloud customers. To free customers from the burden of frequent integrity checks, a Third Party Auditor (TPA) is introduced to perform verification on behalf of the customer for data integrity assurance. However, existing public auditing schemes rely on the assumption that the TPA is trusted; consequently, they cannot be directly extended to the outsourced auditing model, in which the TPA may be dishonest and any two of the three involved entities (the client, the TPA and the cloud service provider) may be in collusion. We suggest a dynamic outsourced auditing scheme that not only protects against any dishonest entity and against collusion, but also supports verifiable dynamic updates to the outsourced data. We present a new approach, based on a batch-leaves-authenticated Merkle Hash Tree (MHT), to verify multiple leaf nodes and their own indices all together, which is more suitable for the dynamic outsourced auditing framework than traditional MHT-based dynamic approaches that can only verify leaf nodes one by one; compared with a static outsourced auditing scheme, it also incurs a lower cost. A minimal sketch of the Merkle Hash Tree idea is given below.
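The BLA-MHT construction itself is not given in the chapter; the following is only a plain Merkle Hash Tree sketch in Python, showing how block-level hashes are combined into a root and how a single data block can be authenticated against it.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Root hash of a Merkle Hash Tree built over outsourced data blocks."""
    level = [h(b) for b in blocks]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(blocks, index):
    """Sibling hashes needed to authenticate block `index` against the root."""
    level = [h(b) for b in blocks]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))   # (hash, is_left_sibling)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(block, proof, root):
    node = h(block)
    for sib, is_left in proof:
        node = h(sib + node) if is_left else h(node + sib)
    return node == root
```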




Regarding the broader objective of this paper: cloud computing has become increasingly popular, attracting growing attention in academia and in the IT industry. A wide range of advantages of cloud computing are attractive, such as ubiquitous access to network services, a pay-as-you-go charging model, on-demand provisioning of software and hardware resources, and cost savings on IT infrastructure investment. Notwithstanding these advantages, many potential cloud clients have yet to join the cloud, and many are currently putting only their less sensitive data in the cloud. In fact, the concerns of cloud clients appear well founded: the Cloud Security Alliance (CSA) ranks data loss among the highest priorities in its list of the notorious nine top cloud computing threats, showing how critical and significant it is for a customer to detect data corruption in the cloud in time and then promptly take action to limit the loss. In the existing framework, the cost of initialisation in the current outsourced auditing scheme (Fortification) is high: during the Store protocol (i.e. the data pre-processing step), the client's entire outsourced data must be downloaded by the TPA from the cloud. Given that the TPA will simultaneously provide auditing proxy services to many different cloud clients, and the total size of the outsourced data of all clients in the cloud will be substantial, downloading all outsourced data from the CSP to complete this initialisation for every customer implies a considerable communication cost for the TPA. Practically, to make an outsourced auditing scheme more readily accepted from the point of view of a real TPA, the design choice of forcing the TPA to obtain the entire outsourced data from the CSP is an obstacle that should be avoided. The drawbacks of the existing system are as follows:

• Waste of space.
• Need to buy a large volume of data.

2 Related Work

Revisiting attribute-based encryption with verifiable outsourced decryption: attribute-based encryption (ABE) is a promising framework for fine-grained access control of encrypted data in cloud storage. Nevertheless, the decryption associated with ABE is usually unreasonably expensive for resource-constrained front-end users, which greatly discourages its practical adoption. To reduce the decryption overhead for a user recovering the plaintext, Green et al. proposed outsourcing the bulk of the decryption work without revealing the actual data or private keys. To ensure that the outside service honestly computes the outsourced work, Lai et al. added a requirement of verifiability to the decryption of ABE; however, their scheme increased the size of the underlying ABE ciphertext and the verification costs. In essence, their main idea is to use a parallel encryption structure.



One of the parallel components is used for the correctness check; consequently, the communication and computation overheads are doubled. In contrast, a reasonably efficient and generic construction of ABE with verifiable outsourced decryption can be built from an attribute-based key encapsulation mechanism, a symmetric-key encryption scheme and a commitment scheme; the security and verification soundness of such an ABE scheme can be shown in the standard model, and instantiating it with concrete building blocks reduces the bandwidth and computation costs considerably compared with Lai et al.'s scheme.

An algorithmic approach to improving cloud security (the MIST and Malachi algorithms): cloud computing is steadily gaining prominence in the software engineering field, and because of this increased use the importance of data integrity and active security has become paramount. That work elaborates on security measures and approaches for protecting the cloud, including two new security algorithms. As stated in the Cloud Security Alliance paper "The Notorious Nine: Cloud Computing Top Threats 2013", the nine most serious threats to cloud computing are data breaches, data loss, account or service traffic hijacking, insecure interfaces and Application Programming Interfaces (APIs), denial of service, malicious insiders, abuse of cloud services, insufficient due diligence and, finally, shared technology vulnerabilities. All nine of these issues would be reduced by properly enforcing strict security on cloud systems; a combination of coordinated security measures is the basis of the security improvements described there. The security algorithms introduced in that paper, MIST and Malachi, are two new ways to protect users' data through file security.

Data security in cloud computing and outsourced databases: a model for provable data possession (PDP) allows a client that has stored data at an untrusted server to verify that the server still holds the original data without retrieving it. The model generates probabilistic proofs of possession by sampling random sets of blocks from the server, which drastically reduces I/O costs. The client maintains a constant amount of metadata to verify the proof, and the challenge/response protocol transmits a small, constant amount of data, which minimises network communication. Consequently, the PDP model for remote data checking supports very large data sets in widely distributed storage systems. It is also practical in implementation because it limits the use of expensive public-key cryptography in metadata management. The design and implementation of the various SHAROES components show performance better than other proposals by over 40% on several benchmarks. The proposed system also addresses the drawbacks of private auditing: the Third Party Auditor can repair a corrupted file in the cloud data because two copies of the database are kept at the server; when data in one database has been corrupted, the third-party auditor restores the corrupted data from the intact data in the second database.



The advantages of the proposed system are as follows:

• The absence of the data proprietor is possible.
• The regeneration problem of authenticators is solved.

3 System Architecture

4 Modules

1. UI design
2. Document holder uploading
3. Document requesting
4. Outsider auditor response
5. Document retrieval

5 Modules Description

5.1 UI Design

To interact with the server, users must provide their username and password; only then can they connect to the server. If the user already exists, they can sign in to the server directly, but a new user needs to register their details, such as username, password and mail id, with the server. The server will



create an account for every user in order to maintain the transactions and compute the quota. The name will be set as the user id. Signing in is commonly used to enter a restricted page.

5.2 Document Holder Uploading

This is the module for uploading the holder's files or documents into the main system. These uploads serve a double purpose, as they can represent high-level structures and help with organisational tasks. The user publishes the file to the cloud and assigns the data in order to upload the record or information, given that we rely on network connections for our most security-critical information. A data owner needs to actively set up a relationship with the cloud service, within unit-capacity limits, while acting as a cloud customer.

5.3 Document Requesting

The document is mainly shared for viewing, so the file is shared and, for downloading purposes, a request is circulated to the data holder. The data holder checks the arrangements and, if the requesting user is approved, the data holder replies and issues the key to the user.



5.4 Outsider Auditor Response

The document is mainly shared for viewing, so the file is shared and downloaded on request. The request is sent to the data owner, the data owner checks the arrangements, and the client's request is approved individually by the data owner.

5.5 Document Retrieval

The TPA can audit the integrity of the challenged blocks without retrieving the actual blocks from the cloud. However, the homomorphic tags must be prepared by the client herself to guard against a malicious CSP/TPA. The scheme then establishes the arrangement in which the homomorphic tag of a data block is generated using the corresponding block index. A simplified sketch of per-block tagging and spot-check verification is given below.
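A toy sketch of per-block tagging and a spot-check challenge. Real schemes of this kind use homomorphic authenticators so the server can answer without shipping the blocks; the HMAC tags below are not homomorphic and the proof simply returns the challenged blocks, so this only illustrates the challenge/response flow, not the actual construction.

```python
import hashlib
import hmac
import os
import random

def make_tags(blocks, key):
    """Client-side: one tag per block, bound to the block index."""
    return [hmac.new(key, str(i).encode() + b"|" + blk, hashlib.sha256).digest()
            for i, blk in enumerate(blocks)]

def challenge(num_blocks, sample_size=5):
    """Auditor-side: pick a random subset of block indices to spot-check."""
    return random.sample(range(num_blocks), min(sample_size, num_blocks))

def prove(blocks, tags, challenged):
    """Server-side: return the challenged blocks and their tags."""
    return [(i, blocks[i], tags[i]) for i in challenged]

def verify(proof, key):
    """Auditor-side: recompute each tag and compare."""
    for i, blk, tag in proof:
        expect = hmac.new(key, str(i).encode() + b"|" + blk, hashlib.sha256).digest()
        if not hmac.compare_digest(expect, tag):
            return False
    return True

# Example run with placeholder data.
key = os.urandom(32)
blocks = [f"block-{i}".encode() for i in range(16)]
tags = make_tags(blocks, key)
print(verify(prove(blocks, tags, challenge(len(blocks))), key))  # True
```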


6 Result and Discussion

6.1 UI Design

The client needs to register and must remember the private key.

The client needs to log in to upload or download any documents.




6.2 Document Holder Uploading

The public key is generated while the document is uploaded, and the file is encrypted.

6.3 Document Requesting

The second user sends a request to the admin.



6.4 Outsider Auditor Response

The administrator approves the client's request; until then, the document cannot be opened.

6.5 Document Retrieval

The user can download the file and enter the Private key and Public key to open it.



7 Conclusion

Concerning distributed storage and remote data auditing, how to protect against a dishonest TPA is a basic issue raised by recent research. Compared with traditional public auditing schemes, an outsourced auditing scheme under a stronger threat model aims to protect against any malicious entity and against collusion. In this paper, we propose a new authenticated data structure that is based on the Merkle Hash Tree and referred to as BLA-MHT. By supporting batch attestations over multiple leaf nodes, this novel data structure is more powerful than current MHT-based approaches and is therefore suitable for the dynamic outsourced auditing framework. On the basis of BLA-MHT, we also propose a new scheme to achieve both dynamic updates and outsourced auditing. Compared with the state of the art, the experiments support the practicality of our scheme.

References

1. Wang, C., Chow, S. S. M., Wang, Q., Ren, K., & Lou, W. (2013). Privacy-preserving public auditing for secure cloud storage. IEEE Transactions on Computers, 62(2), 362-375.
2. Zhu, Y., Ahn, G. J., Hu, H., Yau, S. S., An, H. G., & Hu, C. J. (2013, April-June). Dynamic audit services for outsourced storages in clouds. IEEE Transactions on Services Computing, 6(2), 227-238.
3. Wang, Q., Wang, C., Ren, K., Lou, W., & Li, J. (2011). Enabling public auditability and data dynamics for storage security in cloud computing. IEEE Transactions on Parallel and Distributed Systems, 22(5), 847-859.
4. Erway, C. C., Küpçü, A., Papamanthou, C., & Tamassia, R. (2009). Dynamic provable data possession. In Proceedings 16th ACM Conference on Computer and Communications Security (CCS'09) (pp. 213-222).
5. Cash, D., Küpçü, A., & Wichs, D. (2013). Dynamic proofs of retrievability via oblivious RAM. In Proceedings of the 32nd International Conference on the Theory and Applications of Cryptographic Techniques: Advances in Cryptology (EUROCRYPT'13) (pp. 279-295).
6. Chow, R., Golle, P., Jakobsson, M., Shi, E., Staddon, J., Masuoka, R., & Molina, J. Controlling data in the cloud: Outsourcing computation without outsourcing control. In Proceedings of the 2009 ACM Workshop on Cloud Computing Security (CCSW'09) (pp. 85-90).
7. Cloud Security Alliance (CSA). (2013). The notorious nine cloud computing top threats in 2013. https://cloudsecurityalliance.org/download/the-notorious-nine-cloud-computing-top-threats-in-2013, February 2013.
8. Ateniese, G., Burns, R. C., Curtmola, R., Herring, J., Kissner, L., Peterson, Z. N. J., & Song, D. X. (2007). Provable data possession at untrusted stores. In Proceedings of the 14th ACM Conference on Computer and Communications Security (CCS'07) (pp. 598-609).
9. Juels, A., & Kaliski, B. S., Jr. (2007). PORs: Proofs of retrievability for large files. In Proceedings of the 14th ACM Conference on Computer and Communications Security (CCS'07) (pp. 584-597).
10. Shacham, H., & Waters, B. (2008). Compact proofs of retrievability. In Proceedings of the 14th International Conference on the Theory and Application of Cryptology and Information Security: Advances in Cryptology (ASIACRYPT'08) (pp. 90-107).

Cost-Effective Solution for Visually Impaired

Abhinav Sagar, S Ramani, L Ramanathan, and S Rajkumar

1 Introduction

Envision walking into an entirely new railway station. The spots we need to search for, such as the ticket counter, the security check and the gate, are difficult to find even with signs. Think about how much more of a difficulty this is for a person who cannot even see the signs. Even some basic activities can be challenging for a visually impaired person. While shopping malls usually have building maps, these are generally static displays that are useful only when one can find and read them. For a visually impaired individual, the task of finding a route becomes almost impossible. This is where we need advanced technologies that can help in a cost-effective way, as nearly 90% of visually impaired people cannot afford the existing costly solutions.

2 Previous Approaches

In previous approaches, only a colour identification module was used to detect a coloured object and then speak it out to the visually impaired person. OrCam is available, but it is very costly and was not actually aimed at blind people; it acts as a commercial product that can help people read articles and recognise known faces. The Finger Reader is as of now only an idea with a prototype, yet it has potential applications beyond the visually impaired, such as teaching children to read or translating languages. We already have applications capable of doing this on our cell phones, and OCR (optical character recognition) is getting fairly reliable, but the Finger Reader provides a more




natural way of interacting. The Anagraphs project took up the idea and began to work on designs for a device that would use thermo-hydraulic micro-actuation to actuate Braille dots by infrared laser radiation through a micro-mirror scanning system. It is easier to imagine it as a sort of wax material, which can go from solid to liquid with heat and be easily reshaped to make Braille dots. Sadly, the EU funding has run out and the project needs more money to be realised. This concept first surfaced a few years back on Yanko Design: what if there were a device with which blind people could also read digitally, just like a Kindle? Braille education has been in steady decline since the 1960s for various reasons. There is still a debate about the importance of Braille and issues related to talking computers, alongside research revealing a link between Braille literacy and employment. A system named "Roshni" determines the user's position in a building and gives directions by means of audio messages when keys are pressed on a portable unit. It uses sonar technology to recognise where the user is, by mounting ultrasonic modules on the ceiling at regular intervals. This framework is flexible, easy to operate and not affected by environmental changes; nevertheless, it is confined to indoor navigation, since it requires a detailed interior map of the building. An RFID-based map-reading system provides a technical solution allowing the visually impaired to move through public areas easily, using an RFID tag grid, an RFID stick reader, a Bluetooth interface and a personal digital assistant; however, its initial deployment cost is very high, and there are chances of interference in heavy traffic (Figs. 1 and 2). A voice-controlled outdoor navigation application can be created using GPS, voice and ultrasonic sensors. It can report the user's current position and give verbal guidance for travelling to a remote destination, but it fails to provide obstacle detection and alert warnings. Another real-time development alerts the user to the presence of static or dynamic obstacles within a few metres around, works without the need for any smartphone, and uses a camera for motion detection. This framework is robust to complex camera motion and background movement and does not require any prior knowledge of the obstacle size, shape or position. Such a camera-based image processing approach can be a superior choice, but it requires considerable processing power, and hence the system becomes bulky and expensive while still needing to be portable.

Fig. 1 RFID communication



Fig. 2 RFID tag decoding by host

There are mainly three areas where most of the help is needed: navigating to a destination, image-to-speech conversion and currency detection. These three assists can be enough for a visually impaired person to carry out some basic activities.

3 Navigation Assist

A talking location-finding assistance system is proposed for both indoor and outdoor navigation. The system consists of a walking stick with a GSM module to send a message to an authorised person at the time of an emergency, sonar sensors, and an RF transmitter and receiver. RFID is used as the location reference indoors and GPS outdoors; using GPS in the walking stick reduces the cost of installing numerous RFID tags outdoors to recognise the location. This GPS-based approach is "Drishti", which can switch the system from indoor to outdoor mode. To provide a complete navigation framework, the authors extended the indoor version of "Drishti" to an outdoor version for visually impaired pedestrians by adding only two ultrasonic transceivers that are smaller than a credit card and are attached to the user's shoulders. The system gives real-time feedback to the user via an earphone, through which the user can ask for the path, receive voice prompts, and even hear his or her current location in a familiar way. Unfortunately, this framework has two limitations. With only two reference points attached to the user's shoulders, it becomes hard to obtain the height information of the user; the algorithm computes the user's location in two dimensions assuming an average height, which gives a larger error if the user sits or lies down. The other limitation is that, because signals are reflected or blocked by walls and furniture, there are some "dead spots" caused by bad or faulty readings. To plan paths to specific destinations, the handheld unit (the electronic travel aid) must be given the map of the building, which includes the position of every RFID tag. We imagine that this data may be downloaded to the unit at the user's request, or when the presence of Blind-Aid RFID tags is recognised at the entrance



of the building. This last technique can be used for other applications, for example electronic tour guides like the ones already used in several museums. Since this map data is obviously specific to each building in which the system is used, the data must be created for each RFID-equipped building when the installation is first introduced. Path determination is executed using Dijkstra's shortest path algorithm [1] over a graph. The vertices of the graph are the locations of interest for path selection: doorways, intersections and corners. The graph edges are simply the corridors connecting them. Each vertex may have an arbitrary number of RFID tags associated with it; for instance, at least one tag is required at each corner of a four-way intersection. In addition, the graph data files store the (x, y) image coordinates of each vertex to enable plotting a planned path in the graphical user interface; given a scale for the image, this also permits computing the real-world distance between locations on the graph. Once a path has been selected through the graph, it must be converted into voice directions in a sensible way. The vertex coordinates allow the system to determine when a group of vertices is arranged in a straight line in a single corridor; when the direction of travel changes, the system finds the intersection where the change occurred and determines the appropriate turns needed to tell the user where to go. Speech recognition is used for setting destinations. We can use the Microsoft Speech API from its Cognitive Services platform [2], which uses the same technology on which Microsoft's virtual assistant Cortana was built. When a destination is set, it is announced to the user. This is also where we reduce a large part of the project cost: we do not need to train our own speech recognition model or restrict the number of supported languages; using the Speech API greatly reduces costs, gives support for all the major languages, provides an industry-standard speech recognition module, and removes the need to upgrade the speech recognition software over time. When the user arrives at a place, the scanned location is announced; for instance, when the user stops in front of ticket counter II, "Ticket Counter second" is announced. The orientation of the user when scanning a tag is assumed to be facing the door or other object of interest associated with the tag, or facing along the path being announced by the system; the direction that the user should turn towards is likewise announced after the location. When a destination is given, a path is generated using Dijkstra's shortest path algorithm as described above. The selected path is separated into steps, where each step is a leg of the trip down one corridor. For steps that involve simply walking down a corridor and entering another, only the essential directions are given. When the user reaches the area where the destination room is located, directions are given in finer detail, such as how far to go based on the doors they should pass on their left or right. These instructions are easily understandable to a blind person, as blind people are generally trained to trail the walls on either side and their canes allow them to feel for doorways. A sketch of the shortest-path step is given below.
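A minimal sketch of the shortest-path step over the corridor graph. The station graph at the bottom is hypothetical and purely for illustration.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path over a corridor graph.

    graph: dict mapping a vertex (door, intersection, corner) to a list of
    (neighbour, corridor_length_in_metres) pairs.
    Returns the list of vertices from start to goal.
    """
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, length in graph.get(u, []):
            nd = d + length
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(queue, (nd, v))
    # Reconstruct the path for conversion into voice directions.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path))

# Hypothetical station graph: entrance -> junction -> ticket counter II.
graph = {
    "entrance": [("junction", 12.0)],
    "junction": [("entrance", 12.0), ("ticket_counter_2", 8.0), ("gate_1", 20.0)],
    "ticket_counter_2": [("junction", 8.0)],
    "gate_1": [("junction", 20.0)],
}
print(dijkstra(graph, "entrance", "ticket_counter_2"))
```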



4 Currency Detection

The problem of recognising currency notes despite changes in size, orientation, tearing and wear is of keen interest within the field of computer vision. Takeda et al. [3] proposed a currency note recognition technique using neural networks for the development of modern currency note recognition machines. They proposed three core techniques using neural networks. The small-size neuro-recognition system is used as the first method; the second is the mask determination technique using a genetic algorithm; and the neuro-engine technique, which uses a digital signal processor, is applied as the third. The recognition component depends on an opto-electronic device, which produces an image related to the light refracted by the currency notes. Takeda et al. [3] also proposed a method for recognising currency notes in which the statistical properties related to the texture of the notes are analysed: the texture of the currency note images is characterised by building a co-occurrence matrix, and the obtained co-occurrence matrix is then used to extract the features (Fig. 3).

Fig. 3 Currency verification process: input image → PCA algorithm → database of reference images → LBP analysis → decision (real note / fake note)



Principal component analysis (PCA) is a mathematical procedure [4] that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. The number of principal components is less than or equal to the number of original variables. The transformation is defined so that the first principal component has the largest possible variance (that is, it accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest possible variance under the condition that it is orthogonal to the preceding ones. Principal components are guaranteed to be independent if the data set is jointly normally distributed. PCA is sensitive to the relative scaling of the original variables. PCA analyses the relationships among variables and can be used, for example, to reduce the number of variables in regression and clustering [4]. Each principal component in PCA is a combination of the variables giving maximum variance. Let X be a matrix of n observations by p variables, and let the covariance matrix be S. Then, for a linear combination of the variables,

z1 = Σi a1i xi    (1)

where xi is the ith variable and a1i, i = 1, 2, 3, …, p, are the linear combination coefficients for z1; they can be denoted by a column vector a1, normalised so that a1T a1 = 1. The variance of z1 is a1T Sa1. The vector a1 is found by maximising this variance, and z1 is called the first principal component. The second principal component is found in the same way by maximising a2T Sa2 subject to the constraints a2T a2 = 1 and a2T a1 = 0, which gives a second principal component orthogonal to the first. The remaining principal components are derived in a similar way; in fact, the coefficient vectors a1, a2, a3, …, ap can be calculated from the eigenvectors of the matrix S. (Different methods may be used depending on how missing values are excluded.) A minimal PCA sketch is given below. The purpose of the pre-processing module is to reduce or eliminate some of the variations in the image due to illumination; it normalises and enhances the note image to improve the recognition performance of the system. Pre-processing is crucial, as the robustness of a recognition system greatly depends on it. By using normalisation, the system's robustness against scaling, pose, appearance and lighting is increased. Photometric normalisation techniques are used in histogram processing; histogram equalisation is the most common histogram normalisation or grey-level transform, whose purpose is to produce an image with equally distributed brightness levels over the whole brightness scale. Pattern recognition algorithms [5] generally aim to provide a reasonable answer for all possible inputs and to perform "most likely" matching of the inputs, taking into account their statistical variation. This is in contrast to pattern matching algorithms, which look for exact matches of the input against pre-existing patterns. A typical example of a pattern matching algorithm is currency-note template matching, which searches for patterns of a given type in a printed image.
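A minimal PCA sketch following Eq. (1): the components are obtained from the eigenvectors of the covariance matrix S, and the scores z1, z2, … are the projections of each observation.

```python
import numpy as np

def pca_features(X, n_components=10):
    """Project note-image feature vectors onto the top principal components.

    X: (n_samples, p) matrix of observations.
    Returns (scores, components, explained_variance).
    """
    Xc = X - X.mean(axis=0)                  # centre the variables
    S = np.cov(Xc, rowvar=False)             # covariance matrix S
    eigvals, eigvecs = np.linalg.eigh(S)     # eigen-decomposition of S
    order = np.argsort(eigvals)[::-1]        # sort by decreasing variance
    components = eigvecs[:, order[:n_components]]   # columns a1, a2, ...
    scores = Xc @ components                  # z1, z2, ... for each sample
    return scores, components, eigvals[order[:n_components]]
```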



Such template matching is incorporated into the search functions for the various rupee note denominations. Techniques to transform the raw feature vectors (feature extraction) are sometimes used prior to applying the pattern matching algorithm. For example, feature extraction algorithms attempt to reduce a large-dimensionality feature vector into a smaller-dimensionality vector that is easier to work with and encodes less redundancy, using mathematical techniques such as principal component analysis (PCA). The distinction between feature selection and feature extraction is that the features obtained after feature extraction are of a different kind from the original features and may not be easily interpretable, whereas the features left after feature selection are simply a subset of the original features. The currency notes are distinctive in texture, size and colour. The feature values of each currency note extracted by the proposed approach do not overlap [6] with those of the other currency notes; therefore, the extracted features are sufficient to discriminate between different currency notes using the PCA algorithm with local binary pattern (LBP) analysis. A short LBP sketch follows.
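A short sketch of extracting an LBP texture descriptor with scikit-image. The number of points, the radius and the idea of using the histogram as the feature vector are standard choices, not values taken from the chapter.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.io import imread
from skimage.color import rgb2gray

def lbp_histogram(path, points=8, radius=1):
    """Texture descriptor for a currency note image using uniform LBP."""
    gray = rgb2gray(imread(path))
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    n_bins = points + 2                  # uniform patterns + 'non-uniform' bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist                          # feed this (or its PCA) to a matcher
```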

5 Obstacle Detection

Wearable and compact assistive technologies are likewise used for supporting people with disabilities, for instance the visually impaired. Wearable devices allow hands-free interaction, or at least limit the use of the hands while using the device, whereas hand-held assistive devices require constant hand interaction. A wearable obstacle-avoidance hardware device is designed to serve the navigation needs of visually impaired people; the system emphasises qualities such as hands-free and ears-free operation, wearability and ease of use (Fig. 4). An ultrasonic-sensor-based navigation system for visually impaired people is based on a microcontroller with synthesised speech output and a portable device

Fig. 4 Ultrasonic sensor



to guide the user along urban walking routes and to point out what decisions to make. The device uses the principle of reflection of a high-frequency ultrasonic beam to detect obstacles in the path, and the mobility-support instructions are given in vibro-tactile form in order to reduce navigation difficulties. A disadvantage of ultrasound is that walls may reflect or block the ultrasonic signals, which results in less accurate localisation. We need three ultrasonic sensors. For example, suppose an obstacle is right in front of us but slightly towards the right or left: we would have to point a single unidirectional sensor towards the obstacle for it to be detected, since one sensor cannot detect an object to the left or right of the user; moreover, a mere "obstacle detected" announcement is not enough for deciding what the next step should be. The three sensors are therefore placed with one in the centre and one on each side, at an angle of more than 45°. In this way, if an obstacle is detected only by the right sensor, there is an obstacle fully to the right; if both the centre and the right sensors detect an obstacle, it is situated diagonally towards the right. The distance is computed as

Distance = (EPW × V) / 2    (2)

where D is the distance (cm), EPW is the echo pulse width high time (s) and V is the sound velocity (cm/s). Vibration feedback is also incorporated along with voice in this obstacle detection system built with ultrasonic sensors. Since visually impaired people are more sensitive in hearing and have stronger auditory perception than sighted people, giving alerts through vibration and voice is highly effective. The system works both indoors and outdoors, detecting obstacles and warning through vibration and voice responses; depending on the distance between the obstacle and the user, different vibration levels are used (Fig. 5). Another option instead of ultrasonic sensors is the light dependent resistor (LDR). LDRs are resistors whose resistance changes with the intensity of the light incident on them: the resistance is ordinarily high when no light is incident, and it begins to decrease as the light intensity increases.

Fig. 5 Light dependent resistor
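A sketch of Eq. (2) and of combining the three sensor readings into a direction cue. The 100 cm alert distance and the wording of the cues are illustrative assumptions.

```python
SOUND_SPEED_CM_S = 34300.0   # speed of sound in air, cm/s

def distance_cm(echo_pulse_width_s):
    """Eq. (2): distance = (echo pulse width * sound velocity) / 2."""
    return (echo_pulse_width_s * SOUND_SPEED_CM_S) / 2.0

def obstacle_direction(left_cm, centre_cm, right_cm, limit_cm=100.0):
    """Combine the three sensor readings into a spoken/vibration cue."""
    left, centre, right = (d < limit_cm for d in (left_cm, centre_cm, right_cm))
    if centre and right:
        return "obstacle diagonally to the right"
    if centre and left:
        return "obstacle diagonally to the left"
    if right:
        return "obstacle to the far right"
    if left:
        return "obstacle to the far left"
    if centre:
        return "obstacle straight ahead"
    return "path clear"

# Example: a 2.9 ms echo on the centre sensor is roughly 50 cm away.
print(distance_cm(0.0029))                     # ~49.7 cm
print(obstacle_direction(150.0, 49.7, 80.0))   # "obstacle diagonally to the right"
```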


Fig. 6 LDR resistance versus light intensity

The LDR, or photo sensor, is used in various robotics and embedded applications, for instance line-seeking and line-following robots, garage door openers triggered when a car's headlight falls on the sensor, sunlight-based trackers, and so on. It is referred to by many names, for instance LDR, photoresistor and photoconductor. The resistor has a section that is sensitive to light; one of the semiconductor materials used in an LDR is cadmium sulfide (Fig. 6). Since an electrical current involves the movement of electrons, current flows according to the potential difference across the device. An LDR or photoresistor is made of a semiconductor material with a high resistance, offering few free electrons for conduction. When light falls on the semiconductor, photons are absorbed by its lattice. Part of this energy is transferred to the electrons in the lattice, which then have sufficient energy to break free from the lattice and take part in conduction. Therefore, the resistance of the photoresistor decreases with increasing intensity of the incident light. The light intensity is estimated from the measured divider voltage as

$$\text{Light Intensity} = \frac{500}{\dfrac{10.72 \times volt}{5 - volt}} \qquad (3)$$

In this way, we can differentiate the materials in front of the LDR. The material separation is done by estimating the intensity of the light reflected from different materials. After careful observation of the light intensity values, the materials can be grouped using three reference light intensity levels. The major obstacles faced by visually impaired people, such as a concrete wall, stone, cardboard, a mirror and fabrics, were tested. The highest light intensity was reflected by a whiteboard, whereas the lowest was reflected by ceramic tiles. All the materials can thus be categorized into three categories based on the intensity they reflect (a sketch of this grouping follows the list below). After experimenting with some common materials, the following results were observed:

564

A. Sagar et al.

(a) 100 lx intensity: all the reflective materials, such as glass and whiteboard, come under this category.
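A small sketch of Eq. (3) and the three-band grouping is given below. The constant 10.72 and the 5 V supply come from the formula above; the two band thresholds are placeholders, since the exact cut-off values are not stated here.

```python
# Light-intensity estimate from Eq. (3) and a coarse three-band grouping of
# materials by reflected intensity. The band thresholds are placeholders.
def light_intensity_lux(volt: float, vcc: float = 5.0) -> float:
    """Estimate incident light intensity (lux) from the LDR divider voltage."""
    ldr_resistance_kohm = (10.72 * volt) / (vcc - volt)  # divider relation from Eq. (3)
    return 500.0 / ldr_resistance_kohm

def classify_material(lux: float, low: float = 100.0, high: float = 300.0) -> str:
    """Group a reflected-light reading into one of three coarse bands."""
    if lux < low:
        return "low-reflectance (e.g. ceramic tile)"
    if lux < high:
        return "medium-reflectance"
    return "high-reflectance (e.g. glass, whiteboard)"
```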

6 Image to Speech Conversion

Optical character recognition (OCR, via Microsoft Cognitive Services) [7] detects textual information in a picture and extracts the recognized content into a machine-readable character stream used for search and various other purposes, ranging from medical records to security. It also supports multiple languages. OCR saves time and provides convenience to users by letting them simply photograph text instead of transcribing it. At present, 21 languages are supported by OCR. If necessary, OCR rotates the image around the horizontal axis, and it returns the bounding-box coordinates of each word, as shown in Fig. 7. On photographs where text is dominant, false positives may arise from partially recognized words. On some photos, especially those with little content, accuracy can vary a great deal depending on the type of picture; the accuracy of text recognition depends on the quality of the image. There are numerous reasons for inaccurate reading (listed below):

Fig. 7 OCR image correction


• Blurry pictures
• Handwritten or cursive text
• Artistic font styles
• Small text size
• Complex backgrounds, shadows or glare over the text, or perspective distortion
• Oversized letters at the start of words
• Subscript, strikethrough, and superscript text.
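The OCR service referenced above can be called over REST; the sketch below uses the `requests` library. The endpoint host, API version, query parameters and response fields shown are assumptions to be checked against current Microsoft Cognitive Services documentation, not details taken from the paper.

```python
# Hedged sketch of an OCR REST call with `requests`. The endpoint, API version
# and response layout are assumptions; replace the placeholders before use.
import requests

SUBSCRIPTION_KEY = "<your-key>"                                   # placeholder credential
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder host

def ocr_image_url(image_url: str) -> list:
    """Send an image URL to the assumed OCR endpoint and return recognized words."""
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/ocr",                      # assumed path/version
        params={"language": "unk", "detectOrientation": "true"},
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
                 "Content-Type": "application/json"},
        json={"url": image_url},
        timeout=30,
    )
    response.raise_for_status()
    result = response.json()
    words = []
    # Assumed response structure: regions -> lines -> words -> text
    for region in result.get("regions", []):
        for line in region.get("lines", []):
            words.extend(word["text"] for word in line.get("words", []))
    return words
```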

The computer vision algorithm also extracts colours from a picture. The colours are analysed in three distinct contexts, foreground, background and whole image, and are grouped into twelve dominant accent colours (black, blue, brown, grey, green, orange, pink, purple, red, teal, white and yellow). Depending on the colours in a picture, simple black-and-white or accent colours may be returned as hexadecimal colour codes.

7 Conclusion

Using all the effective methods described in this paper, we can build a cost-effective solution for the blind that is also highly efficient. Half of the work of finding the obstacle in front can be done by the LDR, and the rest can be done by the Cognitive Services API. However, this is limited to indoor environments, as outdoors the distance between the visually impaired person and the nearest obstacle in front will be large. This is not a serious drawback, because when the obstacle is far from the user it matters less; in a general context, a person mainly wants to know about the things that are nearer to him or her. Previous solutions did not take advantage of newly emerging cloud platforms such as Microsoft Cognitive Services. These platforms are highly effective, and by using them the burden of building all the image processing techniques in-house can be avoided, allowing more focus on the user experience.

References 1. Dijkstra Edsger, W. (1959). A note on two problems in connexion with graphs. Numerische Mathematik, 1(1), 269–271. 2. Tonstad, K. (2017). Introduction to Cortana Intelligence Suite. 3. Takeda, F., Nishikage, T., & Omatu, S. (1997). Neural network recognition system tuned by GA and design of its hardware by DSP. IFAC Proceedings Volumes, 30(25), 319–324. 4. Chui, C. K., & Lian, J. (1996). A study of orthonormal multi-wavelets. Applied Numerical Mathematics 20(3), 273–298. 5. Zhang, W., Shan, S., Zhang, H., Chen, J., Chen, X., & Gao, W. (2006). Histogram sequence of local Gabor binary pattern for face description and identification. Ruan Jian Xue Bao (Journal of Software), 17(12), 2508–2517. 6. Wang, L., Yan, Z., & Jufu, F. (2005). On the Euclidean distance of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(8), 1334–1339. 7. Del Sole, A. (2017). Microsoft Computer Vision APIs Distilled: Getting Started with Cognitive Services. Apress.

When Sociology Meets Next Generation Mobile Networks Harman Jit Singh, Diljot Singh, Sukhdeep Singh, Bharat J. R. Sahu, and V. Lakshmi Narasimhan

1 Introduction The mobile phone has progressed exponentially from a minimal processing power and a compact screen device to a device with sizeable screen and processing power comparable to laptops. This progression triggered the need for high data rates due to upsurging bandwidth hungry applications. Furthermore, these applications are generating lot of data, which impelled many network operators across the globe to deploy 4G Long Term Evolution (LTE) networks. With arrival of 4G, many diverse network services (like Device to Device (D2D) communications, Big Data, Internet of Things (IoT) and Internet of Vehicles (IoV)) were established. Due to inclusion of high amount of smart devices and users in the network (because of aforesaid network services), some network problems are emerging like low Quality of Service (QoS), less resource availability, high network overload, etc. As a result, dust around LTE is settling, and network researchers are gradually moving towards 5G networks. The interest of network research community is growing, and the industries are funding heavily to deploy 5G networks. Emergence of mm-wave spectrum, diverse network services and hyper-connected vision are triggering the evolution of 5G networks [1]. Figure 1 illustrates the major requirements of 5G networks provided by Group Special Mobile Association (GSMA) and its

H. Jit Singh (B) · D. Singh · S. Singh · B. J. R. Sahu Network Software R&D Team, Samsung R&D India-Bangalore (SRI-B), Bangalore, India e-mail: [email protected] S. Singh e-mail: [email protected] B. J. R. Sahu · V. Lakshmi Narasimhan Kyungpook National University, Daegu, South Korea Computer Science Department, University of Botswana, Gaborone, Botswana © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_62


Fig. 1 Requirements of upcoming 5G networks

partner organizations. GSMA has blended research strategies of different industries and identified eight major requirements for deployment of 5G [1]: 1. Upto 10 Gbps data rate support: Almost 100 times as compared to current 4G LTE/LTE-A networks. 2. About 1ms round trip latency: 10 times lower than LTE/LTE networks. 3. High bandwidth per unit area: To accommodate substantial number of devices in a particular area for longer duration. 4. Increase in number of connected devices: To support growing smart devices in the network. 5. 99.999% perceived availability: To make 5G readily available whenever needed. 6. Energy reduction by 90%: For making 5G, a green network system. 7. Battery life increment by 10 times: To reduce the power consumption and enhance the battery life of smart devices in the network. 8. 100% coverage: For “anytime anywhere” connectivity. Recently, 3GPP identified three important goals of 5G network in 5G discussions1 as illustrated in Fig. 2. Following are the goals discerned by 3GPP, which are likely to be the pillars of 5G networks: 1. High capacity dense area networks: Small cell concept has the high potential to increase the capacity in dense area networks substantially. LTE already put forward this concept in Release 12 and optimized it to the maximum possible 1 “3GPP system standards heading into the 5G era”, [Online]. Available: http://www.3gpp.org/news-

events/.


Fig. 2 Goals of upcoming 5G networks defined by 3GPP

extent. Moreover, optimization of small cell concept for unlicensed spectrum bands has been discussed in Release 13. Small cell concept is identified as one of the key drivers of 5G wireless networks. 2. Ubiquitous Coverage: LTE has nearly reached the feasible limits of efficiency for present available spectrum, technologically. LTE is expected to endure as a baseline technology for extensive coverage of wide area broadband in 5G networks. 3GPP claims to enhance LTE and upcoming 5G networks from radio as well as network services perspective. 3. Virtualization to cut down cost: Present network deployment services brought hardware and software significantly close to each other. Advancement in hardware industry along with the successful execution of virtualization concept through cloud services made network space virtualized. User and control plane separation is appraised as one of the key solutions to attain virtualization in network architectures. According to 3GPP, virtualization has the potential to augment automation, decouple software schemes from resources, furnish network as well as service optimization analysis and enable fast service delivery. 3GPP considers virtualization as one of the strong pillars of 5G networks, which is expected to cut down operational as well as capital expenditure. Motivations: Many industries and research groups are working to commingle the network services under one common network in order to fulfil the above-mentioned requirements and goals [2]. There exists some problems in the current network services, which can hinder their co-existence in future generation networks. At the same time, with the advent of network generation from 4G to 5G, social networks are also expanding exponentially. Due to seamless connectivity and high speeds, it is easier for the smart devices to connect and form social ties and circles. These social ties or social circles formed between smart devices or its users have close relationships and dependencies based on the properties and behaviour of a device or a user. Recently,


research community scrutinized social paradigm as an important key to solve the existing challenges in the network services [2]. Therefore, we believe that social paradigm has the potential to solve the network service challenges, making it easier to inculcate all the network services concurrently in 5G networks. A successful blending of social networks and various network services can solve some challenges of 5G networks like heterogeneity, low latency provision, Quality of Service (QoS) guarantee, fault tolerance, data management, etc. Contributions: In this article, we classify several network services and the associated challenges. We present social networks as a crucial paradigm to solve the existing challenges in the network services. Furthermore, we confer that the co-existence of these network services along with the social networks can mitigate many challenges and open issues in deployment of 5G networks. The combination of network services and social paradigm under the umbrella of 5G can lead to socio-5G networks with differentiated applications. Finally, we identify some of the research challenges these network services will face when they are integrated with social networks. To show the effectiveness of our proposal, we perform simulations on our testbed and demonstrate that the delay experienced by Social IoT devices reduces up to 3.5 times and maximum device support increases by 2 times as compared normal 5G networks. On the other hand, social D2D shows up to 67% increase in data rate while providing coverage for upto 80% devices as compared to normal D2D that provides only 50% coverage, thus, reducing the chances of outage.

2 Present Network Service Challenges to Social Convergence Solution

Key features of 4G LTE/LTE-A, e.g. advanced antenna systems (like Multiple-Input and Multiple-Output (MIMO)), adaptive modulation techniques, IP-based femtocells and IPv6 support, along with peak download rates of 1 Gbps and pervasive connectivity, led to the establishment of diverse network services [1]. These network services include IoT, IoV, D2D communications and Big Data. Network services are the source of substantial benefits but are slowly becoming a problem for telecommunication designers. This is due to saturation of the present network because of the increase in the number of smart devices and their associated data. With the introduction of new network paradigms, these network services are facing many challenges in terms of deployment and implementation. Thus, there is a need to tackle these problems or challenges through a common solution, as these network services promise to be the genesis of future generation wireless networks. It has been scientifically shown that a large number of individuals joined in a social network can provide more precise solutions to complex challenges than a single individual (even a knowledgeable one).² This principle is explored widely by the

² "Social Internet of Things: Turning Smart Objects into Social Objects to Boost the IoT" by L. Atzori et al. in the IEEE Internet of Things newsletter (2014).


research community in different internet domains and in the smooth implementation of network services like IoT, IoV, D2D communications and Big Data. Social approach to these network services offers to meet the needs of network designer, user and developer. Social concept helps to establish social ties or relationships amongst smart devices used by the network services autonomously with respect to the humans [2]. Likewise, social network of smart objects is formed. The social network of smart objects of the network services can shape the network navigability and effectively carry out objects’ discovery. It also assures scalability, same as in human social networks. Models made to investigate social networks can be reused to mitigate the challenges being faced by the current network services [2]. Social objects have higher potential as compared to smart objects. Specific challenges vary according to the network services. Figure 3 illustrates the challenges of each network service along with the social solution. The challenges and their respective social solutions are discussed below.

2.1 Device to Device Communications D2D communications is foreseen as one of the crucial network services of next generation mobile networks. In D2D, a mobile device exchanges data with another mobile device directly, using cellular resources under their eNodeB (eNB). However, D2D faces some problems due to device behaviour, cell interference, network loading and channel coordination. Some of the prominent problems of D2D communications include (1) data/traffic overloading (2) guaranteed QoS (3) data distribution and (4) resource allocation. On the other hand, devices pre-owned by users often form common social groups in a social network space. These social groups can solve the aforesaid problems of D2D. The challenges and the related social solutions are discussed below: • Appropriate co-operative communication, user-centric offloading and relay selection with the help of social relations exhibited by devices and their owners (collected by eNB) can mitigate data/traffic offloading problem. Indian buffet process [3] is the most optimal scheme to deploy aforesaid methods. • QoS of D2D communication can be enhanced with the selection of appropriate link. Social metrics of mobile social network can be utilized to form groups on the basis of common social characteristics or interaction phenomenon. These social groups assist in judgement of appropriate link distance and thus the link selection, which helps in significant enhancement of throughput and spectral efficiency. • Data distribution can be handled by (1) effective multicast schemes and (2) grouping of users based on their locations. Both can be attained by exploring social relationships amongst devices with the help of user categorization and application of Bayesian theory [4].


Fig. 3 From network service challenges to social solution: a path towards socio-5G networks

• If the devices are made to communicate in groups in D2D rather than pairs, the resources can be effectively utilized. Multiple D2D communication pairs can be instigated with the help of social networks by group formation on the basis of common properties, behaviour and characteristics. This further helps in non-orthogonal and orthogonal resource sharing, which in turn maximizes the throughput of the network.


2.2 Big Data Analytics Big Data refers to structured and unstructured data in huge volume, which is difficult to process with traditional mining schemes, database techniques or softwares. Most of the data today is generated by mobile network and related devices. There is possibility of massive interconnection with the increase in volume. This interconnection further affects interpretation and processing methods for extraction of knowledge from the data. Another challenge is to develop some Big Data computing schemes to access, assemble, analyse and act. On the other hand, social network has the potential to connect data resources, workflows, network data, software components and web-based services. Thus, social aspect can play a major role in optimizing analytics. Socially connected people yielding interconnected data can be analysed collectively rather than analysing data of each individual or its device. This further aids in mitigating privacy problems, provides data credibility, eliminates redundancies and saves time, resources as well as cost. Mingling Big Data and social aspect (i.e. Social Big Data) can further solve the significant Big Data problems such as: (1) knowledge sharing, (2) online data management and (3) keyword prediction. These challenges and the related social solutions are discussed below: • Social networks can help to identify useful data out of huge data comparatively easier than finding it individually. Social paradigm can serve as a strong platform for knowledge sharing since the data is enormous. • Social networks can help in filtering the unwanted data and process the required data. Redundancies can further be removed from the required data with the help of social relationships. • Sentiment analysis with the help of social networks can make the keyword predicting task easy. Association networks can further accelerate the sentiment analysis task [5].

2.3 Internet of Things The concept of IoT lies in one of the main objectives of the future Internet that is to interconnect the physical objects and their owners. In this scenario, a single object will communicate with diverse things nearby. Complexity is expected to increase leading to many challenges like (1) self-operation (2) heterogeneity and interoperability (3) relationships and (4) Discovery and interaction. Social aspect can help to overcome these challenges. Blending of social paradigm and IoT is known as Social IoT. Following are various challenges faced by IoT and related solutions: • The elite challenge of IoT is to interconnect heterogeneous devices in an independent environment. Social networks can act as an intersecting point for Web services, people and objects. Exploring social ties between objects based on its owner can help to achieve autonomy. COSMOS proposed by Orfef et al. [6] fur-


nishes a platform for self-operation and decentralization of IoT objects. It makes use of virtual entity, which shares the previous experience knowledge based on learning with the help of communication, individual learning or through knowledge repository. • Web service exploration can support designing of favourable common interface. This can further assist in linking all the heterogeneous devices on a common platform. A common framework should be built taking IoT and social network into consideration. Social relationships or behaviour of things or objects can help in discovering common properties of heterogeneous devices. This has the potential to solve the heterogeneity and interoperability challenge. • Recently, the notion of establishing relationships amongst objects in IoT gained popularity. There are many challenges associated with establishing relationships like privacy, trustworthiness, etc. Use of social paradigm can help to mitigate these challenges. Luigi Atzori et al. [7] have proposed the functionalists and policies to integrate objects in a social network. Different policies have been defined for management and establishment of relationships using social concept. Social relation can help in discovering objects, associated services, sharing of resources and the like. As per SIoT, the things or objects in the network manage and store relationship-related information with the help of search function. Appropriate link selection with the help of social ties can also help in managing the relationship. • Grouping of objects based on decentralized management and common social relationships is an optimal solution to solve the discovery of services challenge. In order to make a decentralized network, social relationships can be used so that the things or objects can retrieve services and behave autonomously at the same time [8]. An optimal architecture can control decentralized management and assist grouping of things or objects.

2.4 Internet of Vehicles The concept of combination of Vehicular Ad-hoc NETworks (VANETS) and IoT is known as IoV. It includes connection of vehicles-to-vehicles and vehicles to road side infrastructure. IoV faces many problems like (1) limited connectivity, (2) heterogeneity, (3) effective content distribution and (4) privacy. Integrating IoV with social network services (known as social IoV) can help in solving the aforesaid challenges as follows: • Investigating social properties has the capability to overcome the limited connectivity problem. Verse application proposed by Luan et al. [9] uses social-aware rate management technique to adapt vehicles’ transmission rate quickly and efficiently with the help of social impact. Pre-existing applications of social network can allow sharing of traffic information easily, thus solving the limited connectivity challenge.


• Optimal grouping of diverse travellers and their devices using social networks can resolve the heterogeneity problem. Social architecture such as drive and share (DaS) [10] can act as a common platform in order to provide traffic-related information to all the groups containing heterogeneous devices. • To attain effective distribution of IoV content (like road side units, on board units, location based units, etc.), separate social cloud can be deployed. These social clouds can effectively distribute the data amongst vehicles avoiding the ambiguity and replication problems. Dedicated Short Range Communication (DSRC) can be used to transfer the social cloud data amongst vehicles. This can also save the unwanted bandwidth consumption problem. • Privacy is another crucial challenge to be taken care-off. Vehicles can establish social relationships autonomously in IoV using social networks. If the relationships are trustworthy, the data is automatically secured. Mainly, privacy of IoV depends on the social relationships, which in turn depends on social properties or behaviour of a vehicle or its drivers. Thus, social concept can play a crucial role in data protection of IoV.

3 Towards Socio-5G Networks Tremendous growth of wireless data, rapid penetration of seamless mobile connectivity and escalation of smart devices enriched with diverse features are mounting the stage for upcoming 5G networks. Next generation 5G network is expected to furnish high data rates, QoS and connectivity. New network services like D2D communications, MCC, Big Data, IoT and IoV will be co-working under the common network. These network services, when amalgamated with sociology aspect, will further help in accomplishing the goals and requirements of upcoming 5G networks. The Social D2D communication, Social MCC, Social Big Data, Social IoT and Social IoV will lay the foundation of optimal and efficient “Socio-5G Networks” as shown in Fig. 4. Table 1 depicts the potential of ’Socio-5G Networks’ to attain the goals and requirements of upcoming 5G networks. We now discuss capability of Social D2D communication, Social MCC, Social Big Data, Social IoT and Social IoV to achieve goals and requirements of 5G networks. Social D2D Communication: D2D communication is considered as a key network service of growing 5G network architecture according to mobile and wireless communication enablers for the Twenty-twenty Information Society (METIS) [11]. METIS is the part of European Union project with the main objective to deploy 5G networks. Social D2D has the potential to meet some of the 5G network requirements. D2D resource allocation enhanced with the help of social networks instigates sharing of spectrum resources amongst D2D and cellular users. This can further enhance the capacity, which is the foremost requirement of 5G networks. QoS guarantee and optimal data distribution in social D2D can help to attain much higher data rates due to favourable propagation and close proximity. Social D2D offers enhanced data/traffic


Fig. 4 A. Connectivity of IoT devices B. Delay experienced by IoT devices C. No. of IoT devices supported without packet drop D. Data rate experienced by UE E. Data offloaded from BS F. UE devices in coverage


Table 1 Network services from social perspective and 5G network goals/requirements

Table 2 Major 5G radio, IoT and D2D simulation parameters

5G radio access network models
  mmWave frequency: 27.925 GHz
  Channel bandwidth: 520 MHz
  5G channel model: Samsung 5G field test channel model
  5G penetration loss (l): 20 dB
  Cell radius: 500 m
  Path loss compensation: 3.8

eNB system models
  eNB's max. Tx power: 20 W
  No. of UE: 90–240
  Mobile's max. Tx power: 100 mW

IoT device parameters
  No. of IoT devices: 200–500 K
  IoT device Tx power: 1 mW
  IoT packet size: 100 byte
  Uplink scheduler: Proportional fair
  Packet generation rate: Exponential (mean 10 s–30 min)

D2D device parameters
  Path loss for D2D link: 31.54 + 40 log2d
  D2D transmit power: 10 dBm

offloading due to which devices can communicate using direct links. This in turn helps in significant reduction of end-to-end network latency. Social Big Data: 5G networks will inculcate massive smart devices producing huge amount of data. Handling velocity, variety and volume of data will be difficult task for mobile network operators (MNO). Recently, Big Data analytics has emerged as the only solution for MNOs to intelligently understand, analyse and process data while offering many smart future network services. However, Big Data cannot provide optimal trade-off between cost and performance enhancement in case of 5G networks [12]. 5G mobile network data will be heterogeneous in nature, huge in number and may be ambiguous in some cases. Social Big Data analytics has the tendency to provide fault tolerant, cost effective and highly scalable analytics for data generated by 5G networks. Correlation between network traffic and user behaviour can assist MNOs to make long-term strategical solution along with optimized resource management in order to minimize operational and deployment cost. Social Big Data with powerful data management capability and effective knowledge sharing can help MNOs to attain effective resource utilization in 5G networks along with personalized Quality of Experience (QoE). This in turn helps in optimization of 5G networks.


Social IoT: Social IoT provides effective data management schemes, efficient device discovery methods, self-operation techniques, relationship establishment capability and heterogeneity support. On the other hand, next generation IoT is expected to provide cognition, virtualization and information centric network support.3 This vision can be achieved through above-mentioned SIoT characteristics. Furthermore, achievement of next generation IoT goals with the help of social aspect can further increase the network capacity, encourage low latency transmissions and provide massive smart device support. Social IoT can also improve spectral efficiency, reliability and throughput per area, thus satisfying major deployment requirements of 5G networks. Social IoV: Indoor and outdoor deployment of 5G networks are expected to vary in terms of requirement and architecture. Social IoV can help to achieve requirements of outdoor environment. It can also assist in building the optimal architecture for dense and sparse areas, thus providing guaranteed QoS and ultra low latency for end-to-end high mobile networks.

4 Performance Evaluation In this section, we first discuss our simulation platform, simulation parameters and assumptions. Thereafter, we evaluate the performance of our simulations.

4.1 Testbed Setup

The testbed consists of a single 5G gNB, implemented on an IBM X3650 server, connected to the backhaul using gigabit interfaces [13]. We have gathered actual IoT traffic data from a globally leading federation for IoT data sets and test beds, i.e. Fiesta, constituting thousands of IoT devices (http://fiesta-iot.eu/fiesta-experiments/ [October 22, 2016]). According to statistical analysis of Fiesta-IoT traffic, inter-arrival rates of packets lie between 0.1 and 1 s with an MTU of 100 bytes. The IXIA traffic generator is used to produce and emulate real IoT traffic (https://www.ixiacom.com/sites/default/files/resources [October 22, 2016]). We have used a dense urban 5G channel model [14] and 5G-specific RF parameters [6]. The major 5G network radio parameters, along with the IoT and D2D parameters used for the testbed and simulation experiments, are highlighted in Table 2. Some IoT devices (about 10%) are considered to be directly unreachable from gNBs due to underground deployment. We consider that these IoT devices are linked to other reachable IoT devices with the help of different social relationships; such IoT devices can connect to other nearby (socially related) IoT devices to reach the gNBs.

³ "Internet of Things and 5G: GISFI IoT WG Activities". Available online: www.gisfi.org.


4.2 Results and Discussions Figure 4a presents the connectivity of IoT devices. With higher number of devices, competition for radio resource increases, and thus, the connectivity decreases. Social IoT provides up to 18% better connectivity. Using social relationships, comparatively more device can avail the optimized radio resource usage and coverage. Using social IoT platform, IoT devices that can not access gNB directly, can do so with the help of trusted devices. Moreover, socially connected devices reduce competition for radio resource, as well as minimize radio resource usage by grouping messages. As shown in Fig. 4b, the delay experienced by IoT device reduces up to 3.5 times. When IoT devices increases, the competition for connectivity causes failure in random access. This increases retransmission rates, and thus, delay increases. Using social concept, the devices tend to minimize the competition by forming group with their ties. Figure 4c shows the number of IoT devices that can be supported by the gNB without increasing the delay budget and connectivity. Using social IoT concept in our simulations, the maximum number of devices that can be supported is double as compared to normal 5G IoT. This is direct consequence of results depicted in Fig. 4a, b. Social relationship in D2D communication is explored to improve the UE performance. One of the practical challenges in D2D is willingness to share. Using social concept, a UE can establish a reliable and trusted D2D connection with a nearby UE. Social and collocation information enables gNB to use multicast to save radio resource. Moreover, nearby UE shares data to offload the gNB. As per Fig. 4d, social D2D simulation shows up to 67% increase in data rate depending upon the number of nearby devices. When number of UEs in the system increases, more trusted and socially connected UEs in the vicinity can share data and also enable multicast transmission. Also, data discovery and sharing between the neighbour socially connected UE devices enable gNB offloading. As shown in Fig. 4e, social D2D can offload upto 50% gNB data depending on collocation and social relation. Figure 4f depicts that connectivity also improves with social D2D. In our simulation setup, we reduce the original gNB coverage to 50% and gradually increased by 10%. The number of UE that can connect to gNB is always more in case of social D2D as compared to normal D2D. Using social relationship, UE helps each other in connecting, thus reducing the outage. With 50% gNB coverage, simple D2D can provide coverage (connectivity) to 50% devices, whereas Social D2D can provide coverage for up to 80% devices. Using the social concept increases chances of relaying and sharing radio resources, thus improving the connectivity.


5 Conclusion In this article, we have provided an overview of emerging network services, related challenges and possible solutions from social perspective. We believe that these network services will co-exist under the umbrella of 5G networks and work together in synchronous with each other. Furthermore, we provided a notion of extending the wings of these network services by integrating them with sociology paradigm to fulfil the goals and requirements of upcoming future 5G networks.

References 1. Agiwal, M., Roy, A., & Saxena, N. (2016). Next generation 5G wireless networks: A comprehensive survey. In IEEE Communications Surveys and Tutorials, 99, 1–40. 2. S. Singh, et al.: A survey on 5G network technologies from social perspective. IETE Technical Review Taylor and Francis, pp. 1–10. https://doi.org/10.1080/02564602.2016.1141077. 3. Zhang, Y., et al. (2015). Social network aware device-to-device communication in wireless networks. IEEE Transactions on Wireless Communications, 14(1), 177–190. 4. Sun, Y., et al. (2014). Efficient resource allocation for mobile social networks in D2D communication underlaying cellular networks. In IEEE International Conference on Communications (ICC). Sydney: NSW. 5. Hamed, A. A., & Wu, X. (2014). Does social media big data make the world smaller? An exploratory analysis of keyword-hashtag networks. In IEEE International Congress on Big Data (BigData Congress). Anchorage, AK. 6. Roh, W., et al. (2014). Millimeter-wave beamforming as an enabling technology for 5G cellular communications: Theoretical feasibility and prototype results. EEE Communications Magazine, 52(2), 106–13. 7. Nitti, M., Atzori, L., & Cvijikj, I. P. (2015). Friendship selection in the Social Internet of Things: Challenges and possible strategies. IEEE Internet of Things Journal, 2(3), 240–247. 8. Atzori, L., Iera, A., & Morabito, G. (2014). From smart objects to social objects: The next evolutionary step of the internet of things. IEEE Communications Magazine, 52(1), 97–105. 9. Luan, T. H., et al. (2015). Feel bored? join verse! engineering vehicular proximity social networks. IEEE Transactions on Vehicular Technology, 64(3), 1120–1131. 10. Lequerica, I., Longaron, M. G., & Ruiz, P. M. (2010). Drive and share: Efficient provisioning of social networks in vehicular scenarios. IEEE Communications Magazine, 48(11), 90–97. 11. Osseiran, A., et al. (2014). Scenarios for 5G mobile and wireless communications: The vision of the METIS project. IEEE Communications Magazine, 52(5), 26–35. 12. Zheng, K., et al. (2016). Big data-driven optimization for mobile networks toward 5G. IEEE Network, 30(1), 44–51. 13. Saxena, N., Roy, A., Sahu, B. J. R., & Kim, H. (2017, February). Efficient IoT gateway over 5G wireless: a new design with prototype and implementation results. IEEE Communication Magazine, 55(2), 97–105. 14. Rappaport, T. S., et al. (2013). Broadband millimeter wave propagation measurements and models using adaptive beam antennas for outdoor Urban cellular communications. IEEE Transactions Antennas and Propagation, 61(4), 1850–59. 15. Choi, S., Chung, K. S., & Yu, H. (2014). Fault tolerance and QoS scheduling using CAN in mobile social cloud computing. Cluster Computing, 17(3), 911–926. 16. Wu, Y., et al. (2013, June) Cloudmov: Cloud-based mobile social tv. IEEE Transactions on Multimedia, 15(4), 821–832.


17. Voutyras, O., et al. (2014, october). An architecture supporting knowledge flow in social internet of things systems. In IEEE 10th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Larnaca. 18. Barbarossa, S., Sardellitti, S., & Di Lorenzo, P. (2014). Communicating while computing: Distributed mobile cloud computing over 5G heterogeneous networks. IEEE Signal Processing Magazine, 31(6), 45–55.

Image Enhancement Performance of Fuzzy Filter and Wiener Filter for Statistical Distortion Pawan Kumar Patidar and Mukesh Kataria

1 Introduction

Image enhancement is a vital image processing [1] task, both as a process in itself and as a component in other processes. There are many ways to enhance an image or a set of data, and many methods exist. The important property of a good image enhancement model is that it should remove distortion as completely as possible while preserving edges. Traditionally, there are two types of models, i.e. linear models and nonlinear models. Generally, linear models are used. The benefit of linear distortion-removal models is their speed; their limitation is that they are not able to preserve the edges of the image in an efficient manner, i.e. the edges, which are recognized as discontinuities in the image, are smeared out. On the other hand, nonlinear models can handle edges much better than linear models. One popular model for nonlinear image enhancement is the Total Variation (TV) filter. We propose to enhance a degraded image I given by I = O + D, where O is the original image and D is additive white statistical distortion with unknown variance [2]. The rest of the paper is organized as follows:

• In the second section, we present the method of the statistically designed filter.
• In the third section, we present the method of the blurred filter.
• In the fourth section, we describe image distortion.
• The simulation results are discussed in the fifth section.
• We conclude and discuss future work in the sixth and seventh sections.

P. K. Patidar (B) Computer Science Department, Poornima Institute of Engineering and Technology, Jaipur, India e-mail: [email protected] M. Kataria Computer Science Department, Poornima College of Engineering, Jaipur, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_63


2 Statistically Designed Filter The goal of the statistically designed filter [3] is to filter out distortion that has corrupted a signal. It is based on a statistical approach. Typical filters are designed for the desired frequency response. The statistically designed filter approaches filtering from a different angle. One is assumed to have knowledge of the spectral properties of the original signal and the distortion, and one seeks the LTI filter whose output would come as close to the original signal as possible. Statistically designed filters are characterized by the following: a. Assumption: signal and (additive) distortion are stationary linear random processes with known spectral characteristics. b. Requirement: the filter must be physically realizable, i.e. causal (this requirement can be dropped, resulting in a non-causal solution). c. Performance criteria: minimum mean square error.

2.1 Statistically Designed Filter in the Fourier Domain

The statistically designed filter is

$$R(x, y) = \frac{K^{*}(x, y)\, Q_{s}(x, y)}{|K(x, y)|^{2}\, Q_{s}(x, y) + Q_{n}(x, y)} \qquad (2.1)$$

Dividing through by Q_s makes its behaviour easier to explain:

$$R(x, y) = \frac{K^{*}(x, y)}{|K(x, y)|^{2} + \dfrac{Q_{n}(x, y)}{Q_{s}(x, y)}} \qquad (2.2)$$

where
K(x, y)    degradation function
K*(x, y)   complex conjugate of the degradation function
Q_n(x, y)  power spectral density of the distortion
Q_s(x, y)  power spectral density of the un-degraded image.

The term Qn/ Qs can be interpreted as the reciprocal of the signal-to-distortion ratio.
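Eqs. (2.1)-(2.2) translate directly into a frequency-domain restoration. The sketch below assumes a constant noise-to-signal ratio Qn/Qs rather than full spectral estimates, which is a common simplification and not necessarily the authors' exact setup.

```python
# Minimal frequency-domain restoration following Eqs. (2.1)-(2.2):
# R = conj(K) / (|K|^2 + Qn/Qs), applied to the FFT of the degraded image.
import numpy as np

def statistically_designed_restore(degraded, psf, nsr=0.01):
    """Restore an image degraded by `psf` plus additive distortion.

    degraded : 2-D float array (observed image)
    psf      : 2-D float array (point spread function, origin at top-left)
    nsr      : assumed constant noise-to-signal power ratio Qn/Qs
    """
    K = np.fft.fft2(psf, s=degraded.shape)     # degradation function in the Fourier domain
    R = np.conj(K) / (np.abs(K) ** 2 + nsr)    # restoration filter of Eq. (2.2)
    restored = np.fft.ifft2(R * np.fft.fft2(degraded))
    return np.real(restored)
```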


3 Blurred Filter

The blurred filter (FF) is based on gray-level mapping into a blurred plane using a membership function [4]. The aim is to generate an image of higher contrast than the original image by giving a larger weight to the gray levels that are closer to the mean gray level of the image than to those that are farther from the mean. An image f of size M × N with L gray levels can be considered as an array of blurred singletons, each having a membership value denoting its degree of brightness relative to some brightness level. For an image t(p, q), we can write in the notation of blurred sets

$$t(p, q) = \bigcup_{p,q} \mu_{pq} / Z_{pq} \qquad (3.1)$$

where
p = 1, 2, …, M
q = 1, 2, …, N
Z_pq  the intensity of the (p, q)th pixel
μ_pq  its membership value.

The membership function characterizes a suitable property of the image, such as darkness, edginess or a textural property, and can be defined globally for the whole image or locally for its segments. The basic principles of blurred enhancement schemes are illustrated in Fig. 1. The blurred filter method for image enhancement is based on blurred set theory. This filter employs blurred rules for deciding the gray level of a pixel within a window in the image; it is a variation of the median and neighborhood-averaging filters with blurred values. The algorithm includes the following steps (a sketch in code follows this list):

1. First, the gray values of the neighborhood pixels (n × n window) are stored in an array and then sorted in ascending or descending order.
2. Then, a blurred membership value is assigned to each neighbor pixel. This step has the following characteristics:
   i. A π-shaped membership function is used.
   ii. The highest and lowest gray values get the membership value 0.
   iii. Membership value 1 is assigned to the mean value of the gray levels of the neighborhood pixels.
3. Now, only 2 × r + 1 pixels (r/2 ≤ n²) of the sorted list are considered: the median gray value and the r previous and r following gray values in the sorted list.
4. The gray value that has the highest membership value is selected and placed as the output.

Fig. 1 The basic principles of blurred enhancement (stages: input image, image fuzzification, membership/defuzzification, enhanced image)
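A simplified sketch of these steps is given below. Border handling, the exact membership shape and the default parameters are assumptions added for illustration; the sketch selects, among the 2r + 1 values around the median, the gray level closest to the window mean, i.e. the one with the highest membership under a peak-at-the-mean membership function.

```python
# Simplified windowed blurred (fuzzy) filter: sort the n x n neighbourhood,
# keep the 2r+1 values around the median, and output the candidate whose gray
# level is closest to the neighbourhood mean.
import numpy as np

def fuzzy_window_filter(image, n=3, r=1):
    """Apply the neighbourhood fuzzy filter to a 2-D grayscale array."""
    pad = n // 2
    padded = np.pad(image.astype(float), pad, mode="edge")  # assumed border handling
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + n, j:j + n].flatten()
            window.sort()                                   # step 1: sort the window
            mean = window.mean()
            mid = window.size // 2
            candidates = window[max(0, mid - r):mid + r + 1]  # step 3: 2r+1 around median
            # steps 2 and 4: highest membership = gray value closest to the mean
            out[i, j] = candidates[np.argmin(np.abs(candidates - mean))]
    return out
```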

4 Image Distortion

Image distortion is the random variation of brightness or colour information in images produced by the sensor and circuitry of a scanner or digital camera. Image distortion can also originate in film grain and in the unavoidable shot noise of an ideal photon detector [5]. Image distortion is generally regarded as an undesirable by-product of image capture. Although these unwanted fluctuations became known as "noise" by analogy with unwanted sound, they are inaudible and are actually beneficial in some applications, such as dithering. The types of distortion are the following:

• Amplifier distortion (statistical distortion)
• Salt and pepper distortion
• Shot distortion (Poisson distortion)
• Speckle distortion.

4.1 Amplifier Distortion (Statistical Distortion) The standard model of amplifier distortion is additive, statistical, independent at each pixel and independent of the signal intensity, caused primarily by Johnson–Nyquist distortion (thermal distortion), including that which comes from the reset distortion of capacitors (“kTC distortion”). In color cameras, where more amplification is used in the blue color channel than in the green or red channel, there can be more distortion in the blue channel. Amplifier distortion is a major part of the “read distortion” of an image sensor, that is, of the constant distortion level in dark areas of the image [6].
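The amplifier (statistical) distortion described here is additive zero-mean noise with a chosen standard deviation. The helper below reproduces that degradation on an 8-bit image, as used in the experiments with standard deviations of 8-14; the clipping to [0, 255] and the optional seed are added assumptions for reproducibility.

```python
# Add zero-mean additive noise with standard deviation `sigma` to an 8-bit image.
import numpy as np

def add_statistical_distortion(image, sigma=8.0, seed=None):
    """Return an 8-bit image corrupted with zero-mean additive noise."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(float) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```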


5 Simulation Results In this section, experimental results are presented which explored the characteristics of the various filters used and tested. The comparative analysis has been presented on the basis of the different standard deviation of distortion for the original image (512 * 512) which is shown in Table 1. The result is taken by comparing the performance of blurred filter and statistically designed filter on the basis of PSNR and MSE value (Figs. 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 and 14). Table 1 Standard deviation, MSE and PSNR values of blurred filter and statistically designed filter

Filter name                      Standard deviation (σ)   MSE value   PSNR value
Blurred filter                   8                        387.4825    22.2483
                                 10                       395.3612    22.1609
                                 12                       402.8893    22.0789
                                 14                       412.4349    21.9772
Statistically designed filter    8                         44.5471    31.6426
                                 10                        62.4977    30.1722
                                 12                        81.4361    29.0226
                                 14                       100.4175    28.1127

Fig. 2 Original image
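The MSE and PSNR values in Table 1 follow the usual definitions; the small helper below can recompute them, assuming 8-bit images with a peak value of 255 (the standard assumption, not stated explicitly in the paper).

```python
# MSE and PSNR between an original and a restored 8-bit image:
# PSNR = 10 * log10(255^2 / MSE).
import numpy as np

def mse_psnr(original, restored):
    """Return (MSE, PSNR in dB) between two same-sized 8-bit images."""
    diff = original.astype(float) - restored.astype(float)
    mse = np.mean(diff ** 2)
    psnr = 10.0 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    return mse, psnr
```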


Fig. 3 Adding statistical distortion with standard deviation (8)

Fig. 4 Adding statistical distortion with standard deviation (10)

6 Conclusion

This paper focuses on the effective algorithms that have been used for image filtering, namely the blurred filter and the statistically designed filter, and shows that the performance of the statistically designed filter is better than that of the blurred filter according to the PSNR values for the different standard deviations. A high PSNR means good quality and a low PSNR means poor quality. PSNR uses the mean square error (MSE) in its denominator, so the lower the error, the higher the PSNR. The performance measure of the blurred filter and the statistically designed filter is shown in Fig. 15.


Fig. 5 Adding statistical distortion with standard deviation (12)

Fig. 6 Adding statistical distortion with standard deviation (14)

7 Scope for Future Works

Future work includes using new concepts to modify the membership value without affecting the performance of blurred filtering, and using adaptive properties for the comparison of all filters. Other blurred filtering methods can be constructed for colour and gray images to filter other types of distortion (salt and pepper, quantization, speckle, etc.). A median filter can be compared with the blurred filter for salt and pepper, Poisson or speckle distortion. New filters can also be compared on different types of distortion and different types of images (MRI images, mammogram images, etc.), which have different pixel value characteristics, and new algorithms can be used to remove distortion from video frames.

Fig. 7 Enhancement by blurred filter, standard deviation = 8

Fig. 8 Enhancement by blurred filter, standard deviation = 10


Fig. 9 Enhancement by blurred filter, standard deviation = 12

Fig. 10 Enhancement by blurred filter, standard deviation = 14


Fig. 11 Enhancement by statistically designed filter, standard deviation = 8

Fig. 12 Enhancement by statistically designed filter, standard deviation = 10


Fig. 13 Enhancement by statistically designed filter, standard deviation = 12

Fig. 14 Enhancement by statistically designed filter, standard deviation = 14


Fig. 15 Performance measure of blurred filter and statistically designed filter

References

1. Gonzalez, R. C., & Woods, R. E. (2008). Digital image processing. Prentice Hall.
2. Khare, C., & Nagwanshi, K. K. (2012, February). Image restoration technique with non linear filter. International Journal of Advanced Science and Technology, 39.
3. Kazubek, M. (2003, November). Wavelet domain image denoising by thresholding and Wiener filtering. IEEE Signal Processing Letters, 10(11), 265.
4. Mozammel Hoque Chowdhury, M., Ezharul Islam, Md., Begum, N., & Al-Amin Bhuiyan, Md. (2007). Digital image enhancement with fuzzy rule-based filtering. IEEE, 1-4244-1551-9/07.
5. Boncelet, C. (2005). Image noise models. In Bovik, A. C. (Ed.), Handbook of image and video processing.
6. Motwani, M. C., Gadiya, M. C., Motwani, R. C., & Harris, F. C., Jr. (2004). Survey of image denoising techniques. In Proceedings of GSPx 2004 (pp. 27–30). Santa Clara, CA.

A Review of Face Recognition Using Feature Optimization and Classification Techniques Apurwa Raikwar and Jitendra Agrawal

1 Introduction

Recently, face recognition has attracted much attention, and its research has been rapidly expanded by specialists as well as neuroscientists, since it has numerous potential applications in computer vision, communication and automatic access control systems. As the early step of automatic face recognition, locating the face is an essential part of face detection. Nonetheless, face detection is not trivial because of the many variations in image appearance, such as pose variation, occlusion, image orientation, illumination conditions and facial expression. To handle all the variations mentioned above, many novel methods have been suggested. For instance, template matching techniques are used for face localization and detection by computing the correlation of an input image with a standard face pattern [2–5]. To convey identity and emotion, our face is the primary focus of attention. The process of feature detection plays an important role in facial recognition; it is designed to find a specific representation of the data that emphasizes the relevant information. This representation can be found by maximizing a criterion or can be a pre-defined representation [10, 11]. Generally, a face image is a systematic collection of pixel values (holistic representation). For situations with varying expressions, the LRC approach has been quite useful even for the harshest expressions, where the state-of-the-art techniques lag behind, indicating consistency for mild and severe changes. In the case of disguise, the modular LRC algorithm using an efficient evidential fusion strategy yields the best reported results in the literature. "Recent research


demonstrated the competency of unorthodox features such as down-sampled images and random projections, indicating a divergence from the conventional ideology” [6, 7]. However, with new LRC approach it has been confirmed that with suitable choice of classifier the down-sampled images can generate better results as compared to the other clichéd approaches [9]. Section II of the following paper discusses the approach of face detection and recognition. In Sect. 3, we discuss the related work. In Sect. 4, the problem formulation and used techniques are described. Finally, we discuss conclusion and future work.

2 Image Based Approach of Face Recognition The evolution of the feature-based technique can be divided into three sectors. Let us consider a common face detection problem of discovering a face in a strewed picture, and pixel features such as gray scale and color are used for the low-level analysis which deals with the segmentation of visual features. Hence, features produced from this analysis are cryptic due to the low-level nature. In feature analysis, visual features are organized into a more global concept of face and facial features using information of face geometry. “Through feature analysis, feature ambiguities are reduced and locations of the face and facial features are determined” [8]. Active shape models are used by the next group. “These models ranging from snakes, recommended in the late 1980s, to the more contemporary point distributed models (PDMs) have been established for the purpose of complex and nonrigid feature extraction such as eye pupil and lip tracking” [11, 12]. • Low-level analysis • Feature analysis • Active shape models.

3 Related Work This section discusses the associated work in the field of face recognition and detection. The process of face recognition uses various transform-based function and classification techniques. Here, some methods described by different authors are discussed. Yu et al. [1] formulated that for reducing the complexity and the interference of background noise, the binarization image denoising method for face image denoising extricates the face value of the peak and valley of two-dimensional features. BP neural network classifier method is used to classify facial features that are constructed to achieve accurate face recognition. This paper introduces a face recognition algorithm based on neural network and studies the optimization problem of face recognition.


Fig. 1 Face detection approaches

Cha Zhang and Zhengyou Zhang et al. [2] discussed a representation of a novel adjacency coefficient which does not only reflect the continuity between similar samples and the similarity between different samples but also capture the category information between different samples. Original data space can be transformed into an uncorrelated discriminant subspace by applying this new adjacency coefficient into the unsupervised discriminant projection. A comprehensive outcome of the discussed BULDP is given based on singular value decomposition. Hyeonseob Nam and Bohyung Han et al. [3] presented a network which can absorb moderate changes known as Gabor feedforward network. Originally, the network produces directionally projected Gabor magnitude features at the hidden layer and works directly on raw face images. Finally, there is a fusion at the output layer, of various orientations and scales that are produced from several sets of magnitude features for final classification decision. Analytical training of the network model is done using a single sample per identity. The resulting solution is always optimal with respect to the total error rate. Their factual analysis conducted on five face datasets (six subsets) from the public domain shows supportive outcomes in terms of identification accuracy and computational efficiency.


Renjie Huang and Xudong Jiang et al. [4] explored an OFIML algorithm. To enhance the metric learning algorithms in feature space, this algorithm uses an offfeature vector that contains accurate information of poses, expressions and occlusions. Comprehensive analysis indicates that intrasubject variations are reduced and therefore certainly complement the face recognition performances of previous metric learning algorithms. This idea can be applied to any metric learning algorithm. Furthermore, better assessment methods of off-feature information can be used in this algorithm. Technologies of 3D face model and deep learning will be improvised in the future to advertise the off-feature estimation method and thus enhance their algorithm. Mohannad A. Abuzneid and Ausif Mahmood et al. [5] declared an upgraded approach for improvisation of human face recognition using a backpropagation neural network (BPNN) and feature extraction in this paper. A new set called the Tdataset is generated from the original training dataset and is used to train the BPNN. The T-dataset is generated using the interaction between the training images and not using the technique of image density. A high distinction layer between the training images is provided by the corresponding T-dataset which assists the BPNN to attain better accuracy and converge fast. Xiao Han and Qingdong Du et al. [6] focused on deep learning in the field of biometrics on the basis of research hotspot of face recognition in combination with the applicable theory and methods of deep learning and face recognition technology. Soula et al. [7] developed incremental nonparametric discriminant analysis which is a novel face recognition technique for classification and face recognition. Experimental results on the two very well-known ORL and Yale Face Databases are provided by them. An adaptive face recognition system was evaluated by them. Furthermore, they contrasted the ILDA method with IDA, batch LDA and batch NDA in face recognition context. The remarkability of the INDA in terms of recognition performance and the success of the face recognition system are clearly shown by the experimental results. Bessaoudi et al. [8] deliberated that 3D information based on high-order tensor representation is used for verifying efficient framework in uncontrolled conditions. The 3D depth images are divided into sub-blocks, and the multi-scale local binarized statistical image features (MSBSIF) + multi-scale local phase quantization (MSLPQ) histograms are extracted and concatenated from each block and organized as a thirdorder tensor. This paper aims to resolve the problem of 3D face authentication in the presence of illuminations, poses and occlusion using high-order representation. The histograms of multi-scale local descriptors BSIF and LPQ are applied for describing the 3D face. Oktay et al. [9] examined that neural network (NN) models can be trained by an image analysis framework that employ autoencoder (AE) and T-L networks. Using this training ethic, neural networks prognose the anatomy of the learnt shape models which are referred as image priors. In cases where the images are corrupted, the experimental results show that the state-of-the-art NN models can benefit from the learnt priors.


Ledig et al. [10] introduced an altered version of the EBGM algorithm for face recognition. Firstly, a fuzzy skin detector based on the RGB color space is used to detect the faces. Then, the grid of points is adjusted to the result of an edge detector, and the fiducial points for the facial graph are extracted automatically. After that, the locality of the nodes, their neighborhood relationship and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is presented afterward. The set of experiments carried out for their SOM-EBGM method shows the efficiency of their proposal in comparison with other state-of-the-art methods. Ma et al. [11] provided a preliminary analysis of face recognition mechanisms. Issues regarding the universal framework for face recognition, such as factors that may affect the performance of the recognizer, and several state-of-the-art face recognition algorithms are identified. Hong et al. [12] summarized the breakthroughs in the research and highlighted future aspects. Methods that extract local features or involve local filtering are robust against illumination variations to an extent, and the effect of illumination is further reduced by performing normalization at pooling. Illumination variations can be problematic for high-level representations that are extracted from raw pixel values. Illumination does not affect shape representations as they ignore pixel intensities; in practice, however, their accuracy still degrades under illumination variations. Narayan T. Deshpande and S. Ravishankar [13] scrutinized a method based on the 3D morphable model (3DMM) that uses 3D data as an intermediary step for RGB facial expression recognition. The 3DMM was used to remove the limitation of pose present in 2D images. A universal 3D face model is altered to match RGB images by illustrating the variation in the texture map of the 3D aligned input and reference images. The input of the 3DMM algorithm is an RGB face image. This descriptor secures the full 3D geometry of the shape and hence frees the recognition process from the need for pose normalization. Experimental results on various demanding benchmarks validate the effectiveness of the discussed facial expression recognition structure. Kar et al. [14] covered almost all the techniques for face recognition approaches, together with a relative analysis between all the approaches that are useful in face recognition. The pros and cons of all the techniques are acknowledged, and the recognition rates of the techniques are also compared. Sushil M. Sakhare and Harish K. Bhangale [15] covered issues such as the generic framework for face recognition. They also studied the factors that may affect the performance of recognizers and several state-of-the-art facial recognition algorithms (Table 1).

Table 1 Associated work of the following researchers

Reference   Researcher                        Methodology              Recognition rate (%)
[16]        Manisha M. Kasar et al.           PCA with ANN             95.3
[17]        Sachin Sudhakar Farfade et al.    DDFD                     95.45
[18]        Tianyi Liu et al.                 CNN                      91.79
[19]        Samiksha Agrawal et al.           B-CNN                    97.56
[20]        Jingtuo Liu et al.                BPN + RBF                85.1
[21]        Rajeev Ranjan et al.              RCNN                     95.3
[22]        Leon A. Gatys et al.              RINN                     85.1
[23]        Denis Tomè et al.                 MRC and MLP NN           97.56
[24]        Ali Mollahosseini et al.          Gabor wavelet with ANN   91.79
[25]        Samil Karahan et al.              WNN                      95.45
[26]        Litong Feng et al.                Gabor wavelet with ANN   88.18
[27]        Yong Tang et al.                  WNN                      86.36
[28]        Gaurav Goswami et al.             Gabor wavelet with ANN   86.36
[29]        Rana Aamir Raza Ashfaq et al.     WNN                      86.36
[30]        André Teixeira Lopes et al.       RCNN                     88.18
[31]        Jun-Cheng Chen et al.             RINN                     84.55
[32]        Mostafa Mehdipour Ghazi et al.    PCA with ANN             85.45
[33]        M. Hassaballah et al.             DDFD                     84.55
[34]        Amin Jourabloo et al.             CNN                      87.27
[35]        Heechul Jung et al.               B-CNN                    88.18
[36]        Iryna Korshunova et al.           BPN + RBF                85.1
[37]        Gil Levi et al.                   PCA with ANN             97.56
[38]        Hayet Boughrara et al.            DDFD                     91.79
[39]        Nian Liu et al.                   CNN                      86.36
[40]        Mahmood Sharif et al.             B-CNN                    88.18

4 Problem Formulation and Used Techniques

We studied various research and journal papers related to face detection based on the feature extraction process. In the feature extraction process, the main problems are the loss of face data and the mismatch of face templates. Some problems found in the survey are given below [4–7].
• The biometric community has long accepted that there is no "template aging effect" for face detection, meaning that once you are registered in a face detection system, your chances of experiencing a false non-match error remain constant over time.
• The false match rate means that even if the images are dissimilar, the system will show them as a match.
• The miss ratio increases.
• The hit ratio decreases.

4.1 Used Techniques

• Nonnegative Matrix Factorization (NMF): In nonnegative matrix factorization, the matrix values are kept strictly nonnegative. It helps in extracting meaningful features using unsupervised learning.
• Support Vector Machine (SVM): This is a supervised machine learning algorithm which can be used for both classification and regression challenges. Each data item is plotted in an n-dimensional space, where the value of each feature is the value of the corresponding coordinate. Classification is performed by identifying the hyperplane that differentiates the two classes.
• Partial Least Square (PLS): This is a statistical model which is an extension of multiple linear regression. In this method, instead of finding hyperplanes of maximum variance, a linear regression model is found by projecting the predicted variables and the observable variables to a new space; hence both the X and Y data are projected to a new space. The method is used to find the basic relation between the two matrices X and Y and is particularly suitable when the matrices have more variables than observations. (A small sketch combining NMF-based feature extraction with an SVM classifier is given below.)
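As an illustration of how two of these techniques can be chained for face recognition, the following is a minimal sketch assuming scikit-learn and the publicly available Olivetti faces dataset as a stand-in; the number of NMF components and the SVM hyperparameters are illustrative choices, not values taken from the surveyed papers.

```python
# A minimal sketch: NMF feature extraction followed by SVM classification.
# The Olivetti faces dataset and all parameter values are illustrative only.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import NMF
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces()                       # 400 images of 40 subjects
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

# NMF keeps all factors nonnegative, so each face is an additive mix of "parts".
model = make_pipeline(
    NMF(n_components=60, init="nndsvda", max_iter=400, random_state=0),
    SVC(kernel="rbf", C=10, gamma="scale"),          # hyperplane in the NMF feature space
)
model.fit(X_train, y_train)
print("Recognition rate: %.2f%%" % (100 * model.score(X_test, y_test)))
```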


• Hidden Markov Model (HMM): A hidden Markov model (HMM) comprises a finite set of states, each affiliated with a probability distribution. Transitions among the states are governed by a set of probabilities known as transition probabilities. In a specific state, an outcome can be generated according to the associated probability distribution. Only the outcome, not the state, is visible to the observer; hence the state is hidden.
• Local Ternary Pattern: LTP is an extension of local binary patterns (LBP). It uses a threshold constant to quantize the pixels of a neighborhood into three values relative to the centre pixel. The thresholded neighbors are combined into a ternary pattern, which is then split into two binary patterns to compute histograms; the histograms are concatenated to generate a descriptor of double the size of LBP. (A short sketch of this computation is given after this list.)
• Booth's Algorithm: Booth's algorithm is used basically for two purposes, i.e., fast multiplication and signed multiplication. Fast multiplication occurs when there are two or more 0's or 1's in consecutive order in the multiplier. Booth's multiplication algorithm multiplies two signed numbers in two's complement notation.
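The splitting of the ternary code into an upper and a lower binary code can be made concrete with a minimal numpy sketch; the threshold value and the neighbor ordering below are illustrative assumptions.

```python
# Minimal sketch of a local ternary pattern (LTP) code for one 3x3 neighbourhood.
# The threshold t and the clockwise neighbour ordering are illustrative choices.
import numpy as np

def ltp_codes(patch, t=5):
    """Return the (upper, lower) binary codes of the centre pixel's LTP."""
    centre = patch[1, 1]
    # 8 neighbours in clockwise order, starting at the top-left pixel.
    neighbours = patch[[0, 0, 0, 1, 2, 2, 2, 1], [0, 1, 2, 2, 2, 1, 0, 0]]
    ternary = np.where(neighbours >= centre + t, 1,
               np.where(neighbours <= centre - t, -1, 0))
    upper = int("".join("1" if v == 1 else "0" for v in ternary), 2)
    lower = int("".join("1" if v == -1 else "0" for v in ternary), 2)
    return upper, lower   # block-wise histograms of both codes are concatenated

patch = np.array([[52, 60, 55],
                  [49, 54, 70],
                  [40, 41, 66]])
print(ltp_codes(patch))
```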

5 Conclusion and Future Scope

In this paper, we presented a review of face recognition methods based on different feature-based and optimization algorithms. The process of feature optimization elevates the recognition rate of face images. The features drawn out from an image represent a face; representative features are eigen-features and edge features. Eigen-faces have been used for face detection and recognition purposes. Face recognition methods have suffered a bottleneck problem of obstruction of the image when a cap, sunglasses or other items are worn on a human face. The obstacles in present feature extraction approaches and their dependency on precise alignment are examined. Ultimately, we were introduced to the use of face-GLOH signatures that are invariant with respect to scale, translation and rotation and therefore do not require properly aligned images.

References

1. Yu, Z., Liu, F., Liao, R., Wang, Y., Feng, H., & Zhu, X. (2018). Improvement of face recognition algorithm based on neural network. Measuring Technology and Mechatronics Automation, 229–234. 2. Zhang, C., & Zhang, Z. (2018). Improving multiview face detection with multi-task deep convolutional neural networks. IEEE, 1–6. 3. Nam, H., & Han, B. (2018). Learning multi-domain convolutional neural networks for visual tracking. IEEE, 4293–4302. 4. Huang, R., & Jiang, X. (2018). Off-feature information incorporated metric learning for face recognition. IEEE, 541–545.


5. Abuzneid, M. A., & Mahmood, A. (2018). Enhanced human face recognition using LBPH descriptor, multi-KNN, and back-propagation neural network. IEEE 20641–20651. 6. Han X., & Du, Q. (2018). Research on face recognition based on deep learning. IEEE 53–58. 7. Soula, A., Salma, B. S., Ksantini, R., & Lachiri, Z. (2018). A novel incremental face recognition method based on nonparametric discriminant model. International Conference on Advanced Technologies for Signal and Image Processing 1–6. 8. Bessaoudi, M., Belahcene, M., Ouamane, A., & Bourennane, S. (2018). A novel approach based on high order tensor and multi-scale locals features for 3D face recognition. International Conference on Advanced Technologies for Signal and Image Processing, 1–5. 9. Oktay, O., Ferrante, E., Kamnitsas, K., Heinrich, M., Bai, W., Caballero, J., Cook, S. A., de Marvao, A., Dawes, T., ORegan, D. P., Kainz, B., Glocker, B., & Rueckert, D. (2018). Anatomically constrained neural networks (ACNNs): Application to cardiac image enhancement and segmentation. IEEE, 384–395. 10. Ledig, C., Theis, L., Husz´ar, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., Shi W. (2017). Photo-realistic single image super-resolution using a generative adversarial network. IEEE 4681–4690. 11. Ma, X., Dai, Z., He, Z., Ma, J., Wang Y., & Wang, Y. (2017). Learning traffic as images: A deep convolutional neural network for large-scale transportation network speed prediction. Sensors 1–16. 12. Hong, H. G., Lee, M. B., & Park, K. R. (2017). Convolutional neural network-based finger-vein recognition using NIR image sensors. Sensors 1–21. 13. Deshpande, N. T., & Ravishankar, S. (2017). Face detection and recognition using Viola-Jones algorithm and fusion of PCA and ANN. Advances in Computational Sciences and Technology 1173–1190. 14. Kar, A., Rai, N., Sikka K., & Sharma G. (2017). AdaScan: Adaptive scan pooling in deep convolutional neural networks for human action recognition in videos. IEEE 3376–3385. 15. Sakhare, S. M., & Bhangale, H. K. (2015). Face recognition with novel self organizing map using neural network. International Journal of Engineering Sciences & Research Technology 479–485. 16. Kasar, M. M., Bhattacharyya, D., & Kim, T. (2016). Face recognition using neural network: A review. International Journal of Security and Its Applications, 10(3), 81–100. 17. Farfade, S. S., Saberian, M. J., & Li, L.-J. (2015). Multi-view face detection using deep convolutional neural networks. In Proceedings of the 5th ACM on International Conference on Multimedia Retrieva, (pp. 643–650). ACM. 18. Liu, T., Fang, S., Zhao, Y., Wang, P., & Zhang, J. (2015). Implementation of training convolutional neural networks. arXiv preprint arXiv:1506.01195. 19. Agrawal, S., & Khatri, P. (2015). Facial expression detection techniques: Based on Viola and Jones algorithm and principal component analysis. In 2015 Fifth International Conference on Advanced Computing & Communication Technologies, pp. 108–112. IEEE. 20. Liu, J., Deng, Y., Bai, T., Wei, Z., & Huang C. (2015). Targeting ultimate accuracy: Face recognition via deep embedding. arXiv preprint arXiv:1506.07310. 21. Ranjan, R., Patel, V. M., & Rama Chellappa (2015). A deep pyramid deformable part model for face detection. In 2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1–8. IEEE, 2015. 22. Gatys, L. A., Ecker, A. S., & Bethge, M. (2015). A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576. 23. 
Tomè, D., Monti, F., Baroffio, L., Bondi, L., Tagliasacchi, M., & Tubaro, S. (2016). Deep convolutional neural networks for pedestrian detection. Signal Processing: Image Communication, 47, 482–489. 24. Mollahosseini, A., Chan, D., & Mahoor, M. H. (2016). Going deeper in facial expression recognition using deep neural networks. In 2016 IEEE Winter conference on applications of computer vision (WACV) (pp. 1–10). IEEE. 25. Karahan, S., Yildirum, M. K., Kirtac, K., Rende, F. S., Butun, G., & Ekenel, H. K. (2016). How image degradations affect deep cnn-based face recognition? In 2016 International Conference of the Biometrics Special Interest Group (BIOSIG), pp. 1–5. IEEE.


26. Feng, L., Po, L.-M., Li, Y., Xu, X., Yuan, F., Cheung, T. C.-H., et al. (2016). Integration of image quality and motion cues for face anti-spoofing: A neural network approach. Journal of Visual Communication and Image Representation, 38, 451–460. 27. Tang, Y., Zhang, C., Gu, R., Li, P., & Yang, B. (2017). Vehicle detection and recognition for intelligent traffic surveillance system. Multimedia Tools and Applications, 76(4), 5817–5832. 28. Goswami, G., Ratha, N., Agarwal, A., Singh, R., & Vatsa, M. (2018). Unravelling robustness of deep learning based face recognition against adversarial attacks. In Thirty-Second AAAI Conference on Artificial Intelligence. 29. Ashfaq, R. A. R., Wang, X.-Z., Huang, J. Z., Abbas, H., & He, Y.-L. (2017). Fuzziness based semi-supervised learning approach for intrusion detection system. Information Sciences, 378, 484–497. 30. Lopes, A. T., de Aguiar, E., De Souza, A. F., & Oliveira-Santos, T. (2017). Facial expression recognition with convolutional neural networks: Coping with few data and the training sample order. Pattern Recognition, 61, 610–628. 31. Chen, J.-C., Ranjan, R., Kumar, A., Chen, C.-H., Patel, V. M., & Chellappa, R. (2015) An end-to-end system for unconstrained face verification with deep convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 118– 126. 32. Ghazi, M. M., & Ekenel, H. K. (2016). A comprehensive analysis of deep learning based representation for face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 34–41. 2016. 33. Hassaballah, M., & Aly. S. (2015). Face recognition: Challenges, achievements and future directions. IET Computer Vision 9(4), 614–626. 34. Jourabloo, A., & Liu, X. (2016). Large-pose face alignment via CNN-based dense 3D model fitting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4188–4196. 35. Jung, H., Lee, S., Yim, J., Park, S., & Kim. J. (2015). Joint fine-tuning in deep neural networks for facial expression recognition. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2983–2991. 36. Korshunova, I., Shi, W., Dambre, J., & Theis. L. (2017). Fast face-swap using convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3677–368. 37. Levi, G., & Hassner, T. (2015). Age and gender classification using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 34–42. 2015. 38. Boughrara, H., Chtourou, M., Amar, C. B., & Chen, L. (2016). Facial expression recognition based on a mlp neural network using constructive training algorithm. Multimedia Tools and Applications, 75(2), 709–731. 39. Liu, N., Han, J., Zhang, D., Wen, S., & Liu, T. (2015). Predicting eye fixations using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 362–370. 40. Sharif, M., Bhagavatula, S., Bauer, L., & Reiter, M. K. (2016). Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528–1540. ACM.

Rapid Eye Movement Monitoring System Using Artificial Intelligence Techniques M. Vergin Raja Sarobin, Sherly Alphonse, Mahima Gupta, and Tushar Joshi

1 Introduction

Sleep is such a vital function of the human daily routine that people spend around thirty percent of their time in it. Quality sleep is the most important factor, and getting enough of it is as important for existence as food and water. Sleep is crucial for many brain functions, including how neurons exchange information with one another. In fact, our brain and body stay astonishingly active while we sleep [1]. Recent research by leading neurologists suggests that sleep plays a vital part in removing toxins from our brain that accumulate while we are awake. Many studies have shown that a chronic lack of sleep, or even poor quality of sleep, significantly increases the risk of disorders such as cardiovascular disease, diabetes, high blood pressure, obesity and even depression [2]. Sleep occurs mainly in five stages, but they are often segregated by consumer-level devices into three different stages, namely light sleep, deep sleep, and REM sleep [3]. The detection of REM sleep with a wearable device or application is virtually impossible without some kind of polysomnography, a test that records the brain waves, blood oxygen level, breathing rate and heart rate, in addition to the eye and leg movements, during the study. However, it involves deploying multiple electrodes on the head and is normally conducted in a clinic for one night. Therefore, knowing that REM sleep cannot be detected using conventional methods and devices, the research interest shifted towards the polysomnography method. This work proposes an efficient system to classify the sleep cycles from EEG data accurately and efficiently. To aid this, the dataset from physionet.org is obtained


that consists of EEG data. This data is split into a training set and a validation set. The training set is used to train the classifier. The EEG data from the user is used as the testing data that is fed to the proposed system to perform predictions and determine the different sleep stages. The validation set is used to check the efficiency of the proposed system. The method of using EEG data achieves good accuracy in distinguishing REM sleep from non-REM sleep [4, 5]. This model also identifies the ideal time to wake the person up from their sleep using REMA.

2 System Architecture

Figure 1 depicts the complete architecture of the proposed work. The proposed system has access to view the sleep data of each individual and has the following functions:
• Maintaining sleep data: accessing and storing the sleep data of each individual for further use.
• Pre-processing the data: filling the holes in the data so that it can be passed on to the classifier.
• Training the classification model: training the model using the training data in the dataset.
• Predicting the sleep stages: after training is done, the trained model is used to predict the different sleep stages. The device will also act as an alarm clock to wake the user at the requested time.
The EEG signal from the user is used for the prediction of the different sleep stages. The validation data from the dataset is used to check the efficiency of the proposed system in this work.

2.1 Rapid Eye Movement Alarm (REMA)

REMA functions as an indicator to the user of the time to wake up. The user can input the time at which they want to wake up. After the user has set the time, REMA continuously displays the different sleep stages of the user. The different sleep stages

Fig. 1 System architecture


Fig. 2 Sleep cycle

used in this system are rapid eye movement (REM) sleep and non-REM sleep. When a person is in REM sleep, there is rapid movement of the eyes under the closed eyelids; in non-REM sleep, there is no such movement. Only if a person has an organized mindset can the person have non-REM sleep. These sleep stages are predicted using a classification algorithm as in Fig. 1. The input to the classification algorithm (CNN) is the EEG signal gathered from the user. A person will have REM sleep initially, then non-REM sleep, and then a cycle of REM and non-REM sleep as in Fig. 2. Once REMA reaches the time at which the user wanted to wake up, it checks whether the user is in REM or non-REM sleep. The alarm will go off only when the person is in REM sleep, since the brain waves of a person in REM sleep are similar to the ones when he is awake. REMA will also give an indication of this information on an LCD display. The visual aid using the LCD display can be used by another individual in the user's vicinity to wake the user up only when the user exits REM sleep. The continuous monitoring of sleep helps to analyze the sleep pattern and identify disorders in persons affected by sleep problems like insomnia. This also helps persons affected by psychological problems to be woken only when they are in their lightest sleep.
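As a rough illustration of the wake-up logic described above, the following is a minimal sketch; the polling interval and the predict_stage and ring_alarm callables are hypothetical stand-ins for the trained classifier and the alarm/LCD hardware.

```python
# Sketch of the REMA decision loop: after the requested wake-up time is reached,
# the alarm fires only while the classifier reports REM (the lightest) sleep.
# predict_stage() and ring_alarm() are hypothetical stand-ins.
import time
from datetime import datetime

def rema_loop(wake_time, predict_stage, ring_alarm, poll_seconds=30):
    while True:
        if datetime.now() >= wake_time:
            stage = predict_stage()          # e.g. "REM" or "non-REM" from live EEG
            if stage == "REM":
                ring_alarm()                 # also indicated on the LCD display
                break
        time.sleep(poll_seconds)
```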

3 Artificial Intelligence Based Classification Models

3.1 Random Forest (RF)

Random forest is an efficient machine learning algorithm which is based on the decision tree algorithm. This classifier generates a set of decision trees from randomly chosen


subsets of the training data set. It then runs a voting process across the various decision trees and selects the final class for the test data set [6].

3.2 Support Vector Machine (SVM)

Support vector machine is a supervised learning algorithm. It is a discriminative classifier formally defined by a separating hyperplane, and it is one of the most advanced and efficient algorithms, giving better accuracy compared to ordinary classification algorithms [7]. In this work, a multiclass SVM is used because there are more than two classes (sleep stages) in the system. A simple SVM would only make use of one hyperplane to divide the data into two main classes, which is not ideal for the datasets in this application.

3.3 Convolutional Neural Network (CNN)

A convolutional neural network (ConvNet/CNN) is a deep learning algorithm that can take in an input, assign weights and biases, and distinguish one class from another. A ConvNet needs less pre-processing than other classification algorithms: whereas in earlier techniques filters are hand-engineered, ConvNets, given the necessary data, can be trained to learn these filters/characteristics. The proposed system is useful for various real-time applications like monitoring premature babies and psychiatric patients [8–10]. A CNN finds its own features from the raw signal, whereas other algorithms need a vector representation of features as input, and it achieves good accuracy within a smaller number of epochs, which is very much needed in real-time applications. Therefore, CNN is an ideal classification algorithm for our system.

4 Implementation Results and Discussion

4.1 Data Pre-processing

The sleep data is downloaded from physionet.org. The data is retrieved as sleep records for 39 patients in multiple EDF files [11]. Since EDF files cannot be handled directly by machine learning classifiers, a converter was used to convert the EDF files into CSV files for ease of use in the classifiers. But the major problem in data handling is the presence of a lot of holes (i.e. missing data) in the input files. This can be very problematic at classification time if the model does not have the appropriate data to help in the classification. Therefore, to handle


the missing data, pre-processing is carried out by filling the holes using the mean of the data present in the columns. For example, the missing value in the m-th row and n-th column is replaced according to Eq. 1:

$D_{mn} = \frac{\sum_{i=1}^{n} D_{mi}}{n}$   (1)

Once the pre-processing is done, the holes in the dataset have been filled and there is no more discrepancy in the data.
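A minimal numpy sketch of this hole-filling step is given below, assuming missing entries are encoded as NaN; each hole is filled with the mean of the observed values in its column, as described in the text, and the toy matrix is illustrative.

```python
# Fill missing values ("holes") with the mean of the observed values in that column.
import numpy as np

D = np.array([[1.0, 2.0, np.nan],
              [4.0, np.nan, 6.0],
              [7.0, 8.0, 9.0]])

col_means = np.nanmean(D, axis=0)            # column-wise mean of the observed values
rows, cols = np.where(np.isnan(D))
D[rows, cols] = col_means[cols]              # D_mn <- mean of column n
print(D)
```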

4.2 CNN Classification Model

ConvNets have been applied in this work to classify the sleep records. Other basic machine learning classification models are also used to train and classify the sleep stages, and the results are analyzed.
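A minimal Keras sketch of a 1D ConvNet over fixed-length EEG epochs is shown below. The epoch length (3000 samples, i.e. 30 s at 100 Hz), the layer sizes and the three output classes are illustrative assumptions, not the exact architecture used in this work.

```python
# Minimal 1D ConvNet for sleep-stage classification of fixed-length EEG epochs.
# Epoch length, layer sizes and the three classes (light / deep / REM) are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(3000, 1)),                    # one EEG channel per epoch
    layers.Conv1D(16, kernel_size=50, strides=6, activation="relu"),
    layers.MaxPooling1D(pool_size=8),
    layers.Conv1D(32, kernel_size=8, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),            # light / deep / REM
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_epochs, train_labels, validation_data=(val_epochs, val_labels), epochs=10)
```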

4.3 Performance Metrics

Accuracy is a measure of the number of correct predictions divided by the total number of predictions made, as shown in Eq. 2:

$\text{Accuracy} = \frac{\text{correct predictions}}{\text{total number of predictions}}$   (2)

Table 1 shows the accuracy of the proposed system analyzed using the validation set and different classifiers. It is observed that the convolutional neural network gives the best accuracy, 91%. The random forest classifier achieves 89% accuracy, which is better than the 81% accuracy achieved by the support vector machine, as shown in Table 1. But when the data becomes sparser due to pre-processing, the system using the random forest classifier gets stuck. The CNN performs well in such stages, which is very much needed in a real-time application such as a sleep monitoring system. That is the reason CNN was used as the primary model for the classification of sleep stages [12].

Table 1 Performance comparison

Algorithms                              Accuracy
Support Vector Machine                  0.81
Random Forest Classifier                0.89
Convolutional Neural Network (CNN)      0.91
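A small sketch of how the baseline comparison could be produced for the SVM and random forest classifiers is given below; the feature matrix is a synthetic stand-in for the pre-processed PhysioNet data, so the printed numbers will not match Table 1.

```python
# Sketch of the baseline comparison: validation accuracy of SVM and random forest.
# The synthetic feature matrix X and labels y stand in for the real EEG features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=40, n_classes=3,
                           n_informative=10, random_state=0)   # stand-in data
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Support Vector Machine", SVC(kernel="rbf")),
                  ("Random Forest Classifier", RandomForestClassifier(n_estimators=200))]:
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_val, clf.predict(X_val))
    print(f"{name}: {acc:.2f}")
```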


5 Conclusion

Presently there is a serious lack of devices/systems that can efficiently predict sleep stages. This work aims to introduce an inexpensive and highly promising technological answer to the field of sleep stage detection. The sleep system not only detects the sleep stages of an individual but also offers a solution for waking an individual in a refreshed state. The REM Sleep Monitoring System (RSMS) designed in this research work using artificial intelligence techniques helps humans with a real-time solution. It detects the different sleep stages that the user is going through and wakes the user in the lightest sleep possible. It also helps in monitoring the sleep patterns of premature babies and patients.

References 1. Stewart, E., Gibb, B., Strauss, G., & Coles, M. (2018). Disruptions in the amount and timing of sleep and repetitive negative thinking in adolescents. In Behavioral Sleep Medicine (pp. 1–9). 2. Berry, R. B., Brooks, R., Gamaldo, C. E., Harding, S. M., Marcus, C., & Vaughn, B. V. (2012). The AASM manual for the scoring of sleep and associated events. Rules, terminology and technical specifications (p. 176). Darien, Illinois: American Academy of Sleep Medicine. 3. Tsinalis, O., Matthews, P. M., & Guo, Y. (2016). Automatic sleep stage scoring using timefrequency analysis and stacked sparse autoencoders. Annals of Biomedical Engineering, 44(5), 1587–1597. 4. Abdulla, S., Diykh, M., Laft, R. L., Saleh, K., & Deo, R. C. (2019). Sleep EEG signal analysis based on correlation graph similarity coupled with an ensemble extreme machine learning algorithm. Expert Systems with Applications, 30(138), 112790. 5. Sharma, R., Pachori, R. B., & Upadhyay, A. (2017). Automatic sleep stages classification based on iterative filtering of electroencephalogram signals. Neural Computing and Applications, 28(10), 2959–2978. 6. Hassan, A. R., & Subasi, A. (2017). A decision support system for automated identification of sleep stages from single-channel EEG signals. Knowledge-Based Systems, 128, 115–124. 7. Lajnef, T., Chaibi, S., Ruby, P., Aguera, P. E., Eichenlaub, J. B., Samet, M., et al. (2015). Learning machines and sleeping brains: automatic sleep stage classification using decision-tree multi-class support vector machines. Journal of Neuroscience Methods, 250, 94–105. 8. Marandi, R.Z., & Gazerani, P. (2019). Aging and eye tracking: in the quest for objective biomarkers. Future Neurology. 2019 Oct 9(0):FNL33. 9. Pillay, K. Quantifying brain maturation in the preterm baby from EEG sleep analyses (Doctoral dissertation, University of Oxford). 10. Stefani, A., Holzknecht, E., & Högl, B. (2019, Jan 1). Clinical neurophysiology of REM parasomnias. In Handbook of Clinical Neurology (Vol. 161, pp. 381–396). Elsevier. 11. Goldberger, A.L., Amaral, L.A.N., Glass, L., Hausdorff, J.M., Ivanov, P.Ch., & Mark, R.G., et al. (2003). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation, 101(23), e215–e220. 12. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning (p. 326). MIT Press.

Analysis of Process Mining in Audit Trails of Organization Swati Srivastava, Gaurav Srivastava, and Roheet Bhatnagar

1 Introduction

Auditing is a standard business practice with many applications. The first thing that comes to mind when thinking about auditing is a financial setting, aimed for instance at analysing the compliance of a business with tax rules and laws. This is just one of its many uses. Today, one can audit maintenance engineering practices, health and safety issues, ethical conduct, and in fact a wide variety of IT-related practices such as information systems security, access controls and business processes. Conducting a process audit is effort-consuming, as human input is needed for all aspects mentioned in the generic audit definition: acquisition of audit evidence, examination of evidence against audit criteria, and reporting of audit results. To alleviate this effort, conducting an audit can be supported with methods and tools. With the introduction of the research field of process mining, the extraction of data from event logs of information systems provided the opportunity to discover processes, to check the conformance of processes against a predefined model and to improve models. Auditing a process can be viewed as a form of conformance checking, used by the auditors to find out if the process as implemented in the information system obeys the same business rules as the reference model. By using process-mining tools that can automate one or more steps of process mining, auditors can put more focus


on analysing conformance instead of data gathering and taking samples. Recently, the necessary techniques have made an appearance in an ever-increasing number of commercial process analysis tools, such as Fluxicon Disco. However, process-mining-supported auditing of business processes is still not widely applied in practice. In the literature part of this paper, the necessary elements of such an approach are investigated and an extended approach is formulated. This approach is evaluated in the experimental part of this study. Two cases of process audits are conducted, using a structured process-mining approach for support. The limitations related to the application of process mining are mapped to the various steps of the approach. Finally, the limitations identified are used to identify areas for future research [4].

1.1 Relevance

Theoretical: The application of process mining depends on the availability of event data in the information system. It is not a trivial challenge to develop standard methods that enable non-process-mining experts (auditors) to use the information found in event logs in their day-to-day work. This thesis adds knowledge to the present practice of business process auditing and extends the field of process mining with a methodology for the auditing of processes [4].
Business context: Traditionally, auditing is conducted by selecting random cases from an information system and comparing them with a formal process model, in order to verify a number of auditing criteria (such as separation of concerns for some tasks in the process). This takes time and effort, which could possibly be reduced by automating it. It could also help in improving the consistency and comparability of audits that are conducted in multiple time periods. Finally, it enables auditors to audit all instances of a process instead of taking a sample, increasing the validity of the findings [5, 7].

2 Need of Process Mining in Audit

In order to address the problem stated in the prior paragraph, the research goal is to understand what factors limit the application of process mining as support for process auditing, and to develop an approach that addresses the limiting factors. Consequently, we evaluate this approach in the empirical part of this research. The main research question can be formulated as:

In what way can a process audit be conducted, so that it is supported by process mining techniques and tools?

This question can be answered by addressing a number of subquestions. The first four questions will be reviewed in the literature research part of this thesis:
1. What is the place of process auditing among other types of auditing?


2. What are the elements of process mining aimed at auditing?
3. How can the use of process mining to support the auditing of business processes be approached?
4. What are the limiting factors in the use of process mining to audit business processes?
In the empirical section of this research, the following subquestion is answered:
5. What factors limit the structured approach of a business process audit supported by process mining?

3 Literature Background

The outcome will act as the theoretical foundation for the modelling and empirical parts of the research. In the following sections, the results of the literature research are summarized for all subquestions.

3.1 What Is the Place of Process Auditing Among Other Types of Auditing?

Audits can be applied to evaluate business processes. As per Russell, business process audits may achieve a number of objectives:
• measuring the conformity of the product delivered through the process with the standards and requirements;
• measuring the effectiveness of the process and the instructions that deliver the product.
Auditing a process can be seen as a form of conformance checking that is used by auditors to find out if the process as implemented in the information system obeys the same business rules as the reference model. By using process-mining tools that can automate one or more steps of process mining, auditors can put more focus on analysing conformance instead of data gathering and taking samples. Recently, the necessary techniques have made an appearance in an ever-increasing number of commercial process analysis tools. However, the literature research did not yield a tested approach for conducting business process audits using process-mining techniques [9]. To identify the elements that are needed for such an approach, we first examine the structural elements of audits in general. Karapetrovic and Willborn [1], discussing various definitions, concepts and the basic principles and practices of auditing, note that a variety of audit definitions exists, applied to a range of audit topics. The auditing of a business process is an activity that can be applied in different business contexts; this is why a generic definition of auditing and related terms is needed.


A process audit is one of many audit types, designed to audit a specified business process against documented procedures. In combination with the generic audit definition, we can define the process audit independently as 'an independent and documented process for obtaining and verifying evidence concerning a business process, against documented procedures that constitute the audit criteria, and reporting the process audit findings, while considering audit risk and materiality' [2, 3]. In addition to this definition, the fundamental structural elements that will be used in this thesis are control objectives, the audit process, audit criteria, audit evidence, and audit findings. The most important concepts defining process auditing are:

Audit process
The audit process is the interrelated set of activities that is performed with the goal of transforming audit evidence into audit findings. For the scope of this thesis, this includes all activities of process mining that are a part of the transformation.

Audit criteria
To evaluate control objectives that are formulated in a generic manner, business-process-specific audit criteria are needed that are used to check conformance. In this thesis, audit criteria are central to the auditing approach supported by process mining, as they provide a link between the domain of auditing and the domain of process mining. To be used as such a link, audit criteria must be formulated in a way that can be evaluated by using process-mining techniques.

Audit evidence
The sources of audit evidence are relevant to this thesis, as they provide the information that is used for mining the process and verifying the audit criteria. In the traditional approach to business process auditing, audit evidence consists primarily of samples of process output that are manually evaluated for conformance to a normative process model. For use in a process-mining approach, the audit evidence has to be available in the form of an event log of the information system that is used to support the business process.

Audit findings
As a result of the process of auditing, the audit findings consist of the verification of all audit criteria against the audit evidence. Non-conformity to the audit criteria is reported and can be used for analysis of the underlying factors.


3.2 What Are the Elements of Process Mining Aimed at Auditing?

To answer this subquestion, the elements of process mining and the application of business rules to compose audit criteria are explored.

Process mining: general principles
Organizations execute business processes to achieve their business goals. Nowadays, most business processes are supported by information systems (IS) that help users of the IS within the organization to complete the business process in an efficient manner. In most modern businesses, process auditing is conducted mainly by inspecting the information systems that collect the state and data of cases flowing through a business process. An information system can support a business process by automating a number of activities. To do this, a workflow of the supported tasks is presented for the user to follow. This workflow can change when the user of the IS makes one or more decisions regarding the preferred order of the activities. The resulting ordering of activities is called the control flow and can be analysed using process-mining techniques [8].
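To make the control-flow idea concrete, the following is a minimal, self-contained sketch of the simplest form of process discovery: ordering each case's events by timestamp and counting directly-follows relations between activities. The toy event log and the activity names are illustrative, not taken from the case studies.

```python
# Toy sketch of control-flow discovery: order each case's events by timestamp and
# count which activity directly follows which. The event log below is illustrative.
from collections import Counter, defaultdict

event_log = [  # (case_id, activity, timestamp)
    ("c1", "Create order", 1), ("c1", "Approve", 2), ("c1", "Pay", 3),
    ("c2", "Create order", 1), ("c2", "Pay", 2), ("c2", "Approve", 3),
]

traces = defaultdict(list)
for case, activity, ts in sorted(event_log, key=lambda e: (e[0], e[2])):
    traces[case].append(activity)

directly_follows = Counter()
for activities in traces.values():
    for a, b in zip(activities, activities[1:]):
        directly_follows[(a, b)] += 1

for (a, b), n in directly_follows.items():
    print(f"{a} -> {b}: {n}")
```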

3.3 How Can the Use of Process Mining to Support the Auditing of Business Processes Be Approached?

Transforming control objectives to audit criteria
Now that both the elements of process audits and the rule-based composition of audit criteria have a theoretical basis, the final element is the connection of both components using a structured transformation approach. In an overall methodology for the modelling of control objectives for business process compliance, Sadiq et al. [6] propose an approach that we can use as a basis for the transformation, which contains the following steps:
1. Translate the control objective to an internal control. The control objective is a generic statement that is applicable to the entire domain of the law or regulation. This objective needs to be formulated in terms of the specific requirements that are applicable to the business process that is to be audited. We have found no control framework that provides a straightforward transformation of the generic control objectives to internal controls.
2. Model the internal controls. In their study, Sadiq et al. [6] use Formal Contract Language (FCL) as a formalism to express normative specifications. While FCL has a high internal consistency that makes it well suited for application in the context of a run-time environment of systems as the basis of an automated internal control


verification mechanism, the modelling of rules in FCL requires significant expertise from a modelling expert. In our thesis, we will use business rules expressed in controlled natural language for the modelling purposes, as these are more user-friendly to specify, and system-wide consistency is less of an issue in our narrower context. This step results in audit criteria that are expressed as business rules.
3. Interconnect with the process model. This final step maps the audit criteria to activities in the process model so that the two can interconnect. Sadiq et al. [6] use control tags to categorize the types of FCL rules, which correspond to types of business rules.
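A small sketch of how an order-oriented audit criterion expressed as a business rule could be checked against case traces is given below; the rule ("Approve must occur before Pay") and the traces are illustrative and not drawn from the paper's case studies.

```python
# Sketch of an order-oriented audit criterion checked against case traces:
# "Approve must occur before Pay". The traces and the rule are illustrative.
def violates_order_rule(trace, before="Approve", after="Pay"):
    """True if 'after' occurs although 'before' has not occurred yet."""
    seen_before = False
    for activity in trace:
        if activity == before:
            seen_before = True
        if activity == after and not seen_before:
            return True
    return False

traces = {
    "c1": ["Create order", "Approve", "Pay"],
    "c2": ["Create order", "Pay", "Approve"],   # non-conforming case
}
non_conforming = [case for case, t in traces.items() if violates_order_rule(t)]
print("Non-conforming cases:", non_conforming)   # reported as audit findings
```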

3.4 What Are Limiting Factors in the Use of Process Mining for Auditing Business Processes?

Now that the required elements of both business process auditing and process mining are known from answering subquestions 1, 2, and 3, we identify the limiting factors by examining the elements independently.

Normative process model
By examining the definition of conformance checking, we have found that the availability of a normative process model that describes acceptable behaviour is assumed. It is needed so that audit criteria can be formulated that are used to test real-world behaviour against the normative model. As can be seen in Fig. 1, the methodology presented assumes the availability of a process model; Sadiq et al. [6] do not report how the process model was obtained. Process-mining tools that support the auditing process provide a way to discover the process model. For a tool to be useful, at least the following functionalities are needed:
1. Conversion of an imported event log to a visual process model (process discovery). This is needed to be able to verify the process model with business experts in a

Fig. 1 An audit process approach (control objective → internal control → audit criterion as a business rule → interconnection with the process model)

Fig. 2 Preliminary audit process approach (as Fig. 1, with the process model obtained by process discovery from real-world event data)

non-technical manner, and to help identify the relevant activities in the event log as a basis for the business rules.
2. Filtering of the model based on activities. This provides insight into the conformance of the cases in the event log by showing only the non-conforming cases. For optimal results, filters for the three types of business rules (order-, value-, and resource-oriented) are needed.
Two tools, Disco and PROM, were evaluated on their suitability for auditing based on the functionality of both process discovery and filtering. Although PROM has a large and advanced library of process discovery algorithms, its filtering functionality is lacking. We have found that Disco has a single process discovery mechanism and advanced filtering capabilities. As both functionalities are usable, Disco is therefore best suited as the supporting tool for the empirical part of our research. Therefore, with regard to the acquisition of process models from logs, we see a gap in the framework by Sadiq et al. [6].

4 Towards Auditing Supported by Process Mining

4.1 Design Goal
In the method for checking control objectives against audit criteria, the goal can be verified against the business process model for compliance. However, the method of obtaining the business process model itself is not provided.

5 Advantages of Introducing Process Mining in Business Processes
• Helping safeguard assets and decrease the opportunity for fraud
• Enhancing efficiency in operations


• Increasing financial reliability and integrity
• Ensuring compliance with laws and statutory guidelines
• Setting up tracking procedures.

6 Conclusion

The literature suggests that process-mining techniques and tools can assist the conducting of business process audits, but the adoption of process mining as support for business process auditing is still limited in real-life business environments. Using process mining in audits will therefore lead to more accurate and fair conclusions in business audit analysis. The focus of this study is on financial auditing. This entails obtaining a thorough understanding of the internal control of all company processes and procedures which lead up to financial reporting, and, besides this, also includes obtaining an understanding of the relevant information systems of the company (Akkerman et al. 2006). The connection between the financial statement and the company's processes is that the financial statement is a (re)production of the processes which the company has in place. This means that in order for the auditor to get assurance on the financial statement of a company, the processes which lead up to financial reporting need to be trustworthy too. For this reason, it is important to depict the processes which are in scope for the financial audit.

References 1. Karapetrovic, S., & Willborn, W. (2018). Generic audit of management systems: fundamentals. Managerial Auditing Journal, 15(6), 279–294. doi:10.1108/02686900010344287. 2. Ridley, G., Young, J., & Carroll, P. (2014). COBIT and its utilization: A framework from the literature. In Proceedings of the 37th Annual Hawaii International Conference on Paper Presented at the System Sciences, 2004. 3. Roubtsova, E.E. (2014). Property specification for coloured petri nets. In IEEE International Conference on Paper Presented at the Systems, Man and Cybernetics, 2004. 4. Roubtsova, E.E. (2015, 24-5-2015). A property specification language for workflow diagnostics. Paper Presented at the International Conference on Enterprise Information Systems, Miami. 5. Russell, J. (2016). Process auditing and techniques. Quality Progress, 39(6), 71–74. 6. Sadiq, S., Governatori, G., & Namiri, K. (2007). Modeling control objectives for business process compliance. Paper Presented at the International Conference on Business Process Management. 7. Spreeuwenberg, S., & Healy, K.A. (2015). SBVR’s approach to controlled natural language (Vol. 5972, pp. 155–169). Heidelberg: Springer. 8. Tuttle, B., & Vandervelde, S. D. (2007). An empirical examination of CobiT as an internal control framework for information technology. International Journal of Accounting Information Systems, 8(4), 240–263. https://doi.org/10.1016/j.accinf.2007.09.001. 9. van der Aalst, W. (2012). Process mining: Overview and opportunities. ACM Transactions on Management formation Systems (TMIS), 3(2), 7.

Modern Approach for the Significance Role of Decision Support System in Solid Waste Management System (SWMS) Narendra Sharma, Ratnesh Litoriya, Harsh Pratap Singh, and Deepika Sharma

1 Introduction

Developing countries have been facing the problem of sustainable SWM for a long time due to a lack of sufficient resources and government policies. Improper solid waste management directly affects environmental factors and human beings: people suffer from many types of diseases and health problems due to improper waste management. Our government makes many policies regarding proper MSWM, but local authorities are not able to work according to these policies due to a lack of technical experts and other resources. However, the government is now focusing on dealing with this problem by working with new tools and techniques in collaboration with waste management models. An advanced DSS, a technology-based modern system, can be very helpful for any type of organization in improving its decision-making capabilities. We know very well that correct decisions are very beneficial for the growth of any organization. A well-designed DSS works to analyze data collected from various sources and predict valuable suggestions [1, 2].


Many researchers describe a DSS as a computer-oriented information model containing data analysis tools, specially designed for decision-makers or support managers to choose the best among the many solutions available for a problem. Moore and Chang defined the decision support system as "a modern technology-based system specially designed to analyze data sets and provide help in decision-making activities" [3]. Also, Carlson and Sprague describe it as "a specific well-designed system that can help decision-makers solve different types of environmental problems". It helps organizations to increase their market share, reduce costs, increase throughput, and improve quality. In the decision-making process, the nature of the problem itself plays a key role. I have used two different data mining tools for analysis purposes. In this paper, I have worked on waste management data and on how it can be valuable for a DSS solving waste management problems [4]. The DSS models should also work in an integrated form with other advanced technologies, such as a geographical information system, for improving collection and processing; Fig. 1 shows the basic diagram of a DSS. The diagram shows the working process of a DSS. In the first phase of the DSS, raw data is stored in a DBMS. The analyst takes the data sets from these database management systems for analysis purposes. In the second phase, the analyst applies some algorithms or processes the data in specific data analysis tools and gathers some useful results [5]. A well-designed DSS is very beneficial for any organization; the main purpose of designing a useful DSS for supporting the decision-making process of organizations is only to support the decision-making, not to play the role of the decision-makers. A good DSS has the following characteristics [6]:

Fig. 1 A basic architecture of the decision support system (data in a DBMS and a model base, with a model-based management component and a user management component supporting the decision-maker/planner)

Table 1 Solid waste data set [5]

Organic        Carbon   Hydrogen   Oxygen   Nitrogen   Sulfur   Ash
Paper          43.5     6          44       0.3        0.2      6
Cardboard      44       5.9        44.6     0.3        0.2      5
Plastics       60       7.2        22.8     0          0        10
Textiles       55       6.6        31.2     4.6        0.15     2.5
Rubber         78       10         0        2          0        10
Leather        60       8          11.6     10         0.4      10
Yard wastes    49.5     6          42.7     0.2        0.1      1.5
Glass          0.5      0.1        0.4      0.1        0        98.9
Metals         4.5      0.6        4.3      0.1        0        90.5
Dirt           26.3     3          2        0.5        0.2      68

• The functionality of the DSS should be fully user-friendly; it should be able to solve the unstructured and semi-structured problems that upper-level managers can face.
• The DSS should be a combination of models or advanced techniques with basic data access and retrieval functions.
• The DSS particularly focuses on features that make it easy to use by non-technical people who are not well versed in computer technology.
• The DSS highlights adaptability and flexibility in changing problem situations and provides decision-makers with many solutions so that they can take appropriate decisions.

Significance of the decision support system in solid waste management: For a sustainable solid waste management framework, DSS tools very effectively support increasing the efficiency of the waste management process. They not only form an integral part of proper planning but also address other important factors like collection, monitoring, simulation, disposal of waste and evaluation [7] (Table 1).

To stop the blowout of contagious diseases. For proper monitoring of all types of pollutions. To provide facility recycling of harmful waste products for further production. To provide continuous monitoring of our environmental resources.

For the implementation of a well-functional solid waste management plan, an adaptable, integrated framework plays a very important role [8]. A well-designed E-municipal SWM model cover all the important factors of waste management like (i) (ii) (iii) (iv) (v) (vi)

Composting Segregation Drainage Collection of waste Dumping Handling of seepages before discharge.

622

N. Sharma et al.

A well-designed DSS framework can be very useful in the appropriate managing of solid waste; it includes the estimate quantity and quality of generated municipal waste, provides technology-based assessment for collection, treatment, and disposal of waste, estimates the cost of transportation and other expenditure planned solutions, and predicts volumes and time of investment associated with waste management [9, 10].

2 Literature Review Emadoddin Livani and Raymond Nguyen Jörg have presented a research work; in this research, they merge two and more clustering and classification methods that predict more accurate results than the result of the any specific techniques, mainly for large data sets. In this paper, the researchers proposed a hybrid estimation technique to merging weighted k-means clustering and linear regression. We have used weighted k-means for making cluster in data set. Then, linear regression analysis is to build the final result. The projected method has been used to solve the problem of MSW and estimate within a large data set containing 63,000 records. To get the results of combining approach is more predictable in place of any specific methods [11]. Barbara Białecka and Karolina J˛aderko-Skub had presented a research work in area of usefulness of DSS in field of solid waste management. Decision-making is led by the nature of the problem, collects relevant data, compiles skilled knowledge, and finally predicts the solutions which will allow us to select the best selections as far as estimation and collection are concerned. Now in the current environment business exercise has initiated computerization of all these accomplishments, for designing a DSS for all divisions of enterprises’ activity. Waste management due to its multilayered nature also goes through other DSSs which will allow to apply operational decision-making processes [12]. Oyelade et al. have been doing the work to predict the student’s academic performance using the K-means algorithm in 2010. Their research is based on cluster analysis of educational data sets of students. In this paper, they have implemented the k-means clustering algorithm for investigating students’ results data. The model was joined with the deterministic model to dissect the understudies’ aftereffects of a private institution in %criteria, which is a decent benchmark to screen the movement of scholastic execution of understudies in higher institution to settle on a viable choice by the scholarly organizers. To develop a suitable method for mobile app development, considerable work has been carried out [13]. Yadav and Sharma presented the review of the performance of the K-means algorithm. In this paper, they work on the performance of three different modified k-means algorithms and try to eliminate the limitation of the k-means algorithm and increase the productivity of the k-means algorithm [14]. Ali and Kadhum had been done the research work of pattern recognition using K-means algorithms. Firstly they have focused on the primary working of clustering algorithms on spatial data analysis and image processing. The paper comprises the algorithm and its implementation and


Their paper comprises the algorithm and its implementation and focuses on how to use data mining algorithms in pattern recognition. The literature survey reveals that data mining is a useful technique for many applications [15].
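As a rough illustration of the hybrid estimation idea reported in [11] (weighted k-means clustering followed by per-cluster linear regression), the following Python sketch shows one possible arrangement using scikit-learn. The feature matrix X, target y, and record weights are assumed inputs; this is not the authors' actual implementation.

```python
# Sketch of the hybrid idea in [11]: weighted k-means clustering, then one
# linear regression per cluster.  X, y and weights are assumed inputs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def fit_hybrid(X, y, weights, n_clusters=5, seed=0):
    """Cluster the records with sample-weighted k-means, then fit a regressor per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(X, sample_weight=weights)
    models = {c: LinearRegression().fit(X[labels == c], y[labels == c])
              for c in range(n_clusters) if np.any(labels == c)}
    return km, models

def predict_hybrid(km, models, X_new):
    """Route each new record to its nearest cluster and apply that cluster's regressor."""
    labels = km.predict(X_new)
    return np.array([models[c].predict(row.reshape(1, -1))[0]
                     for c, row in zip(labels, X_new)])
```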

3 Proposed Methodology

The proposed DSS design is incorporated into knowledge-based expert frameworks. A well-structured DSS helps decision makers gather valuable information from a combination of raw data, personal knowledge, and documents or other sources in order to classify and resolve issues and to support correct decisions. Over the last few decades, organizations have used advanced technologies for business improvement and take decisions to increase their organizational effectiveness within the running structure; a good decision support system is very useful for achieving targets and organizational growth. For the development of a DSS framework, it is necessary to work with new data mining techniques together with other supportive AI algorithms such as machine learning and clustering, which helps in designing a reliable DSS architecture. In this paper, I have studied an integrated DSS framework that combines data mining technology with a waste management case study. It facilitates various types of calculations, and I have tried to include all the important factors that are directly part of the MSW system. A well-designed DSS helps in the design, evaluation, and monitoring of landfill sites; it is also useful for analyzing and evaluating raw data and for identifying opportunities for solid waste disposal (Fig. 2).

4 Analysis and Result

Based on data mining techniques, we have proposed an integrated DSS for solid waste management that is able to work with various data mining algorithms. The framework works in two phases: in the first phase, raw data from all the important modules, which are further subdivided into submodules, is stored in a specific database; in the second phase, the analyst uses these data sets according to his or her needs with different analysis tools. Data mining incorporates various types of analysis, such as pattern recognition, statistical analysis, and advanced machine learning algorithms, and extends the prospects of discovering information, trends, and patterns by using decision rules and decision trees beyond what traditional statistical methods offer. For this paper, we have taken data sets from the government site (www.mygov.in) and analyzed them with two different tools. First, I have used these data sets in MS Excel, which provides the facility of classifying the data in various ways with the help of different charts; many researchers use this tool to represent data in charts, as shown in Figs. 3 and 4.


Fig. 2 Advanced DSS framework

I have also used another analysis tool, the SPSS Modeler, a data mining tool that provides various algorithms for data analysis. In MS Excel, I analyzed the data containing the organic percentage of the various materials; the same data set was then loaded into the SPSS Modeler, where clustering algorithms were applied to identify similar groups in the data (Fig. 5). The biggest challenge in developing a DSS is identifying the essential data sets, analyzing their contents, and understanding how they relate to other data sets. Data analysis here focuses on business analysis rather than the system analysis performed in traditional methodologies, and it leads to a data cleaning activity. For this purpose, we propose a new integrated framework based on data mining technology that covers all solid waste management techniques [9, 10]. The analysis tools used are MS Excel, Weka, and the SPSS Modeler. Data mining encompasses statistical, pattern recognition, and machine learning tools and extends the possibilities of discovering information, trends, and patterns by using richer model representations (e.g., decision rules and decision trees) than the usual statistical methods, which makes the results more comprehensible.
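A rough Python analogue of the clustering step performed in the SPSS Modeler is sketched below; the category names and percentage values are illustrative only, not the exact figures used in the study.

```python
# Rough analogue of the SPSS Modeler clustering step: group waste categories
# with similar organic-content percentages.  Values are illustrative only.
import pandas as pd
from sklearn.cluster import KMeans

composition = pd.DataFrame({
    "category": ["Food wastes", "Paper", "Cardboard", "Plastics",
                 "Yard wastes", "Wood", "Glass", "Metals"],
    "organic_percent": [78.0, 60.0, 55.0, 49.5, 60.0, 48.0, 0.5, 4.5],
})

km = KMeans(n_clusters=3, n_init=10, random_state=0)
composition["cluster"] = km.fit_predict(composition[["organic_percent"]])
print(composition.sort_values("cluster"))   # categories grouped by similar organic share
```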



Fig. 3 Analysis of organic compounds (percentage shares of food wastes, paper, cardboard, plastics, textiles, rubber, leather, yard wastes, wood, inorganic material, glass, metals, and dirt/ash)


Fig. 4 Analysis of organic compound data in graph

Fig. 5 Clustering performed in the SPSS Modeler


5 Conclusion and Future Work

In the SWM process, planners focus on various factors including land use, recycling rates, labor needs, energy use, financial costs, and pollution generation; these factors play a key role in decision-making and involve a large amount of data and information that must be considered. Generally, most MSW planners do not have the resources required to analyze all of this information and concentrate only on the financial cost of the MSW process. To improve MSW decision-making, an integrated DSS is required that captures the geographical nature and multi-attribute character of solid waste systems. The whole process is supported by analytical tools for developing and evaluating materials management and disposal policies. The DSS incorporates expert systems and model-management capabilities to organize and analyze the relevant information, and since this information can be distributed over a large geographic area, a GIS is incorporated to let the planner see how a specific strategy may affect the public and the environment. A well-developed decision support system plays a very important role in any organization's decision-making process. The SPSS Modeler and other data mining tools offer a variety of modelling methods drawn from artificial intelligence, statistics, and machine learning; the techniques available on the modelling palette enable us to derive new information from our data and to create analytical models, and each method has certain strengths and is best suited for particular types of problems. The real promise of future DSSs therefore lies in systems that can autonomously and continuously improve decision-making within a changing business environment, rather than tools that merely produce ever more detailed reports based on static standards of quality and performance. The future of DSS can be visualized as systems that guide and deliver increasingly smart decisions in a volatile and uncertain environment, using prediction and optimization techniques to create self-learning decision systems.

References

1. Rupnik, R., Kukar, M., & Krisper, M. (2007). Integrating data mining and decision support through data mining based decision support system. Journal of Computer Information Systems, 47(3), 89–104.
2. De Kock, E. (2003). Models for knowledge management. Decision Support Systems, 35(1), 103–112.
3. Sharma, N., Bajpai, A., & Litoriya, R. (2012). Comparison the various clustering algorithms of weka tools. Int. J. Emerg. Technol. Adv. Eng., 2(5), 73–80.


4. Sharma, N., & Litoriya, R., et al. (2019). Designing a decision support framework for municipal solid waste management. Int. J. Emerg. Technol., 10(4), 374–379.
5. Anagnostopoulos, T., & Zaslavsky, A. (2017). Challenges and opportunities of waste management in IoT-enabled smart cities: A survey. IEEE Transactions on Sustainable Computing, 2(3), July–September.
6. Caruso, G., & Gattone, S. A. (2019). Waste management analysis in developing countries through unsupervised classification of mixed data. Soc. Sci., 8, 186. https://doi.org/10.3390/socsci8060186.
7. Song, J., Liao, Y., He, J., Yang, J., & Xiang, B. (2014). Analyzing complexity of municipal solid waste stations using approximate entropy and spatial clustering. Journal of Applied Science and Engineering, 17(2), 185–192.
8. Kohansal, M. R., & Firoozzarev, A. (2015). Data mining and analysis of the citizens' behavior towards the source separation of waste project by applying C4.5 algorithm of decision tree. Journal of Geography and Regional Development, 13(1–S.N.24).
9. Prasanna, A., & Vikash Kaushal, S. (2018). Survey on identification and classification of waste for efficient disposal and recycling. International Journal of Engineering & Technology, 7(2.8), 520–523.
10. Han, J., & Kamber, M. (2006). Data mining, concepts and techniques (2nd ed.). Morgan Kaufmann.
11. Livani, E., & Jörg, R. N. (2013). A hybrid machine learning method and its application in municipal waste prediction. In ICDM 2013: Advances in Data Mining. Applications and Theoretical Aspects (pp. 166–180).
12. Białecka, B., & Jąderko-Skubis, K. (2015). Decision support systems in waste management—A review of selected tools. https://www.researchgate.net/publication/281625305.
13. Rybnytska, O., Burstein, F., Rybin, A. V., & Zaslavsky, A. (2018). Decision support for optimizing waste management. Journal ISSN: 1246-0125 (Print) 2116-7052 (Online).
14. Yadav, M., & Sharma, J. (2013). A review of K-mean algorithm. Int. J. Eng. Trends Technol., 4(7).
15. Oyelade, O. O. O., & Oladipupo, O. J. (2010). Application of k-means clustering algorithm for prediction of students' academic performance. Int. J. Comput. Sci. Inf. Secur., 7(1), 292–295.

Integration of Basic Descriptors for Image Retrieval Vaishali Puranik and A. Sharmila

1 Introduction

The development of technology over time has led to a large volume of digital images, and content-based retrieval plays an important role in retrieving images from a database. Recently, much active research has been devoted to retrieving images efficiently. Manual text annotation is a search method for indexing and retrieving image data indirectly: the image is retrieved using keywords related to it. In general, an image cannot be described by a limited set of keywords, which leads to low retrieval accuracy as well as ambiguity. These limitations motivate automatic and efficient techniques to improve the accuracy of image retrieval from large databases, so image retrieval requires more extensive and sophisticated solutions to handle the large volume of images in a database. Recently, many research works have been carried out to improve image retrieval accuracy. A content-based image retrieval (CBIR) system is an efficient system for searching a database for images that match a query image. CBIR is a technology that retrieves digital images based on their visual features; it relies on low-level descriptors such as color, edge, texture, and shape for image retrieval and ranks (indexes) the images by accuracy [1]. It is based on computer vision principles and addresses the limitations of existing image retrieval approaches. Humans can recognize images with the help of a significant descriptor: color. With a color-based descriptor alone, it is difficult to distinguish two dissimilar objects of the same color, which are grouped into the same cluster [2]; for example, it is impossible to differentiate a yellow banana from a yellow basket.


Another important feature of an image is texture. Many research works have used texture to measure the similarity of images [3]: once segmentation of the images is done, the regions are compared region by region. Texture is difficult to define in a retrieval system, but the texture feature plays an important role in region-based retrieval [3], and textures are generally homogeneous across various parts of an image. Shape is another important feature for distinguishing between two different objects in an image. Shape-based image retrieval is limited to a single object in the database, and it is very difficult to segment dissimilar objects in an image [4]. Recently, many research works have been carried out based on segmentation [5], object-based retrieval for digital image libraries using a diffusion model [6], and feature extraction based on the wavelet transform [7]. Wavelet transform techniques are efficient multi-scale approaches that do not need a segmentation step to retrieve images based on texture features. The drawback of shape-based image retrieval is that it is difficult to identify shapes whose position varies. This paper extends previous work by the authors and focuses on providing an efficient method to retrieve images. Its main contribution is to overcome the limitations of individual features by proposing several combinations of the basic descriptors, providing a better feature set for image retrieval.

2 Proposed Method

The color histogram uses 64 bins to store the color components of the RGB image. The wavelet transform gives a multi-resolution description of an image, which is very effective for analyzing its information content; it is implemented here using Daubechies-4 coefficients. The original image is decomposed into different frequency bands while maintaining the image size, which is an important characteristic of the wavelet transform. We have implemented the wavelet transform up to 5 levels, so that we obtain twenty featured sub-images. The Canny edge detector used for shape provides good detection and good localization. Feature extraction is followed by measuring the similarity distance using the Euclidean distance metric and displaying the relevant images. Observations are carried out over images for color, texture, and shape, with different thresholds and a database of 100 images each, and show the relation between recall and precision. This paper compares several combinations of the color, texture, and shape features and shows that they enhance the retrieval accuracy; the experimental results demonstrate the efficiency of the method.
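A minimal sketch of the three descriptors described above is given below, assuming OpenCV and PyWavelets; the exact bin layout, normalization, and feature counts used in the paper may differ.

```python
# Sketch of the three basic descriptors (color histogram, Daubechies-4 wavelet
# texture, Canny edge shape), assuming OpenCV and PyWavelets are available.
import cv2
import numpy as np
import pywt

def color_histogram(img_bgr, bins_per_channel=4):
    """64-bin RGB histogram (4 x 4 x 4 bins), normalized to sum to one."""
    hist = cv2.calcHist([img_bgr], [0, 1, 2], None,
                        [bins_per_channel] * 3, [0, 256] * 3).flatten()
    return hist / (hist.sum() + 1e-9)

def wavelet_texture(img_bgr, levels=5):
    """Mean absolute coefficient of each sub-band of a 5-level db4 decomposition."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(float)
    coeffs = pywt.wavedec2(gray, "db4", level=levels)
    feats = [np.mean(np.abs(coeffs[0]))]                      # approximation band
    for cH, cV, cD in coeffs[1:]:                             # detail bands per level
        feats += [np.mean(np.abs(cH)), np.mean(np.abs(cV)), np.mean(np.abs(cD))]
    return np.array(feats)

def shape_descriptor(img_bgr, grid=4):
    """Edge density of a Canny edge map over a coarse grid (simple shape cue)."""
    edges = cv2.Canny(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY), 100, 200)
    blocks = [b for row in np.array_split(edges, grid, axis=0)
                for b in np.array_split(row, grid, axis=1)]
    return np.array([b.mean() / 255.0 for b in blocks])
```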

2.1 Integration of Basic Descriptors

This section compares the related work with the proposed work.


Recent work on image retrieval from databases has focused on individual concise features such as color, shape, or texture. Although color appears to be a highly reliable feature for image retrieval, there are situations in which texture and/or shape features must be used because color information is not available in an image. Image retrieval based on a single attribute might not provide the information required to discriminate between images and might not accommodate large scale and orientation changes [8]. In recent times, numerous works have been carried out on combinations of various features for effective and efficient query-by-image-content retrieval. In this paper, an effort has been made to integrate image representations based on the basic descriptors to enhance the retrieval performance.

2.2 Integrated Color and Texture Feature Extraction

To characterize image regions by both color and texture attributes, we obtain integrated feature maps by exploiting the capabilities of integrating the color and texture information. Integrating color and texture better characterizes real-world objects within images and better captures and indexes color patterns, thereby increasing the retrieval rate. The color feature of the image is extracted using the color histogram, and the texture feature is extracted using the wavelet transform. To obtain a finer result, the individual methods are combined using the additive and sequential approaches described in Sect. 2.1.

2.3 Integrated Color and Shape Feature Extraction

It becomes important to use additional features in the presence of similarly colored images, in the presence of color-distorting factors, or in the absence of color information. Shape is another vital descriptor for image classification and perceptual object recognition, and for indexing and retrieval it is often used in conjunction with color and other basic descriptors [9–11]. In the proposed method, shape is used in addition to color information to achieve efficient and effective retrieval performance. The color feature of the image is extracted using the color histogram, and the shape features are extracted using the Canny edge detector. For this purpose, we integrate the extracted color features (method described in Sect. 2.1) and shape features (method described in Sect. 2.1) by using the additive property.


2.4 Integrated Shape and Texture Feature Extraction

For representing visual data, the effectiveness of textual description is limited to a very narrow context; in general, items retrieved through a textual query may not be relevant to the query image at all. Expressing textures in words is almost impossible, and it is difficult to sketch them as well. In the proposed technique, the shape feature is used in addition to texture information to achieve efficient and effective retrieval performance: the shape feature of the image is extracted using the Canny edge detector, and the texture features are extracted using the wavelet transform. For this purpose, we integrate the extracted texture and shape features explained in Sect. 2.1 by using the additive approach. To compute the distance between the feature vectors, the Euclidean distance is used as the metric.
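The additive integration and Euclidean ranking described in Sects. 2.2–2.4 can be sketched as follows; the per-image descriptor vectors are assumed to have been extracted already (for example with helpers like those in the earlier sketch), and the normalization used here is only one plausible choice.

```python
# Sketch of additive feature integration and Euclidean-distance retrieval.
# Each image is represented by a list of descriptor vectors extracted beforehand.
import numpy as np

def additive_features(descriptors):
    """Normalize each descriptor separately, then concatenate them into one vector."""
    return np.concatenate([d / (np.linalg.norm(d) + 1e-9) for d in descriptors])

def retrieve(query_descriptors, database, top_k=10):
    """Rank database entries by Euclidean distance to the query's combined vector."""
    q = additive_features(query_descriptors)
    ranked = sorted(database.items(),
                    key=lambda item: np.linalg.norm(q - additive_features(item[1])))
    return [name for name, _ in ranked[:top_k]]      # names of the top-k matches
```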

3 Experimental Results

In this section, the performance of the integrated image features is analyzed with respect to retrieval accuracy. The performance is analyzed in MATLAB 9.4. Observations are carried out over the images for the individual and combined (additive and sequential) approaches, i.e., color, texture, and shape, color and shape, color and texture, and texture and shape, at different thresholds and with a database of 150 images of various categories. Figure 2 plots recall and precision against the retrieval method at different thresholds (92–98). For the individual approach, the graphs show that as the threshold decreases, precision increases and vice versa; for the selected database, threshold 90 gives the better recall and precision, and it is also where the retrieved images are most similar to the query image. For the additive approach, recall gradually decreases from threshold 92 to 98, and precision increases slightly up to threshold 96 and then increases sharply; threshold 97 gives the better recall and precision (values in the range 0.5–0.8). For the sequential approach, the recall and precision obtained are almost the same for all thresholds (90–98), with the exception of some cases where recall gradually decreases from threshold 94 to 97; threshold 97 gives the better recall and precision (values in the range 0.4–0.5). From the graphs (refer to Figs. 1 and 2), we observe that for image = 104 (refer to Fig. 1), color, additive (color + shape), and sequential (color + shape, color + texture) are the best retrieval techniques for threshold = 98.


Fig. 1 Image = 104.jpg

Fig. 2 Recall and precision for different retrieval methods, threshold (92–98), for Image = 104.jpg. Legend: 1-color, 2-Texture, 3-Shape, 4-Add_C+S, 5-Add_C+T, 6-Add_T+S, 7-Seq_C+S, 8-Seq_C+T, 9-Seq_T+S


4 Conclusion

In this paper, the individual approaches to retrieving images using a single basic descriptor have been reviewed. The accuracy of image retrieval using an individual approach is much lower than that of the proposed approach. The proposed methods use additive and sequential combinations of multiple basic descriptors to compare and retrieve relevant images, which improves the accuracy of image retrieval. To achieve this, the integration of the extracted color, texture, and shape features using the additive property is proposed. The results show that better retrieval accuracy is achieved by selecting appropriate threshold values that increase relevance and suppress irrelevant retrievals. From the test cases, the additive approach provides better performance than the sequential approach.

References

1. Mistry, Y., & Ingole, M. D. (2019). Content based image retrieval using hybrid features and various distance metric. Journal of Electrical Systems and Information Technology, (3), 874–888.
2. Wang, X.-Y., Zhang, B.-B., & Yang, H.-Y. (2014). Content-based image retrieval by integrating color and texture features. Journal Multimedia Tools and Applications Archive, (3), 545–569.
3. Haji, M. S., Alkawaz, M. H., Rehman, A., & Saba, T. (2019). Content-based image retrieval: A deep look at features prospectus. International Journal of Computational Vision and Robotics, 9(1), 14–38.
4. Hiremath, P. S., & Jagadeesh, P. (2007). Content based image retrieval using color, texture and shape features. In 15th International Conference on Advanced Computing and Communications (pp. 780–784). IEEE Computer Society.
5. Smith, J. R., & Chang, S.-F. (1996). Automated image retrieval using color and texture. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Special Issue on Digital Libraries: Representation and Retrieval.
6. Avula, S. R., Tang, J., & Acton, S. T. (2006). An object-based image retrieval system for digital libraries. Multimedia Systems, 11(3), 260–270. Springer, Berlin.
7. Avula, S. R., Tang, J., & Acton, S. T. (2003). Image retrieval using segmentation. In Proceedings of the 2003 Systems and Information Engineering Design Symposium.
8. Canny, J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6), 679–698.
9. Kuticsts, A., Nakajima, M., Ieki, T., & Mukawa, N. (1999). An object-based image retrieval system using an inhomogeneous diffusion model. Proceedings of International Conference on Image Processing, 2, 590.
10. Katare, A., Mitra, S. K., & Banerjee, A. (2007). Content based image retrieval system for multi object images using combined features. In Proceedings of the International Conference on Computing: Theory and Applications (ICCTA'07), IEEE Computer Society, Jan 2007.
11. Wang, S. (2001). A robust CBIR approach using local color histograms. Department of Computer Science, University of Alberta, Edmonton, Alberta, Canada, Tech. Rep. TR 01-13, October 2001.

Bitcoin Exchange Rate Price Prediction Using Machine Learning Techniques: A Review Anirudhi Thanvi, Raghav Sharma, Bhanvi Menghani, Manish Kumar, and Sunil Kumar Jangir

1 Introduction

Bitcoin is a digital currency and the first example of a cryptocurrency. Unlike conventional paper money, it has attracted a lot of attention worldwide. It uses advanced computer software that solves complex mathematical problems, which helps to keep transactions simple; in fact, Bitcoin has become a medium of exchange that is based on an extensive mathematical approach. Before the arrival of the cryptocurrency Bitcoin, various digital money initiatives had been started following the work of David Chaum and Stefan Brands [1].


Building on this technology with new ideas, a pseudonymous creator, Satoshi Nakamoto, released open-source software in 2009 that brought into the market the digital currency known as a cryptocurrency, so called because it makes use of cryptography to manage the creation and transfer of money [2]. Bitcoin is probably the most widely accepted digital currency in the world. Over the last couple of years, Blockchain technology and the Bitcoin cryptocurrency, which underpin Bitcoin prices, have witnessed a flood of attention [3]. Users send payments by broadcasting digitally signed messages to the network. Bitcoin is open source and designed for public use; nobody owns or controls it, i.e., Bitcoin does not require any organization or central bank to issue and control it. Participants known as miners timestamp and verify transactions into the blockchain, a common open database, for which they are rewarded with transaction fees and newly minted bitcoins. Its exchanges are evaluated and stored in a peer-to-peer format in the system itself, using cryptographic models [2, 4]. Moreover, compared to traditional credit cards, cryptocurrencies are anonymous, quicker, and simpler, which makes them a very attractive mode of payment [5]. However, the valuation of Bitcoin is quite uncertain, and various mathematical and machine learning techniques have been presented by researchers to estimate it [6]; this requires numerous interpretations, so computer algorithms are needed to identify patterns in Bitcoin prices [7]. As a currency, Bitcoin offers a novel opportunity for price prediction because of its relatively young age and resulting volatility, which is far greater than that of fiat currencies, since the Bitcoin value varies just like any other stock market value. The present study is centered on understanding and identifying the everyday exchange rate behaviour of the Bitcoin market, considering the direction, maxima and minima, and closing prices. Thus, the goal of this paper is to study prediction techniques for forecasting the daily price change with utmost accuracy. To accomplish our research objective, we systematically gathered published academic blockchain and machine learning papers and reviewed them with respect to related research on bitcoin price prediction and the relevant technologies used.

2 Literature Review

D. Shah and Zhang experimentally investigated a latent source model, as developed by G. H. Chen and S. Nikolov et al., to predict the price of Bitcoin and reported a Sharpe ratio of 4.1 with an 89% return over 50 days [8, 9]. Work has also been done using text data from many sources, including social media platforms, to predict Bitcoin prices. M. Matta and I. Lunesu et al. examined the connection between the price, tweets, and Google Trends views for bitcoin [10]. However, one shortcoming of these kinds of studies is the frequently small sample size and the predisposition for misinformation to spread through a variety of media channels, for example Twitter handles or message boards such as Reddit, which artificially increase or decrease the price of Bitcoin [11]. There is also considerably limited liquidity in bitcoin exchanges; consequently, the market suffers from a more serious risk of manipulation, and therefore reactions from social media are not considered or trusted further.


I. Madan and S. Saluja et al. analyzed blockchain data using a binomial generalized linear model (GLM), SVM, and random forests, with a reported prediction accuracy of 97%; on the other hand, not cross-validating their models leaves their outcomes unreliable [12]. Wavelets have also been used to predict bitcoin prices, with R. Delfin Vidal and L. Kristoufek noting a positive relationship between network hash rate, mining difficulty, and search engine views and the Bitcoin price [13, 14]. A. Greaves studied the blockchain to forecast bitcoin prices using an artificial neural network (ANN) and SVM, reporting a price direction accuracy of 55% with the ANN model [15]; they also concluded that blockchain data alone had limited predictability. Edwin Sin and Lipo Wang explored the association between the next-day variation in Bitcoin prices and the features of Bitcoin using an ANN ensemble approach, a Genetic Algorithm based Selective Neural Network Ensemble, and obtained 64% precision from backtesting, with 32 correct forecasts out of 50 days [16]. For daily currency market data, P. Wang concluded that support vector regression (SVR) improved the forecasting precision around 2009 [17] (Table 1).

Table 1 Summary of previous research on bitcoin exchange rate price prediction

No.  Study                                         Year  Algorithm                                                   Accuracy (%)
1    Khashei and Bijari [18]                       2010  ANN and traditional statistical models                      72
2    Enrique Zárate and Andrade de Oliveira [19]   2011  ANN                                                         79
3    Shah and Zhang [8]                            2014  SVM                                                         85
4    Shah et al. [8]                               2014  BNN                                                         89
5    Greaves and Au [15]                           2015  SVM, ANN                                                    55
6    Madan et al. [12]                             2015  Binomial GLM, SVM and Random Forests                        97
7    McNally [20]                                  2016  RNN and AutoRegression Integrated Moving Average (ARIMA)    52.78
8    Edwin Sin [16]                                2017  ANN and Ensemble                                            64
9    Evita Stenqvist and Jacob Lonno [21]          2017  Naive prediction model, SVM                                 83
10   Mallqui et al. [22]                           2018  Ensemble of neural network and SVM                          62.91
11   Amitha Raghava-Raju [23]                      2018  K-Nearest Neighbours (KNN)                                  98.77


3 Methodology

Most of the previous studies were interested in modelling bitcoin prices. We focus on time series models for the prediction of bitcoin prices using linear and nonlinear methods such as ARIMA, recurrent neural networks, Bayesian neural networks, and others. We compared multiple models to determine the best choice for further optimization, considering linear and nonlinear models with both dense and convolutional architectures. All models were checked on how well they performed on the task, and their outcomes were analyzed. To predict changes in Bitcoin prices, multiple models are assessed: models based on regression algorithms such as ARIMA, LR, SVM, RNN, Random Forest, ANN, and Bayesian regression were implemented and tested. The methods underlying these models and their assumptions are briefly summarized below.

A. Logistic Regression

It is a statistical model used for investigating a dataset in which one or more distinct variables determine an outcome. The outcome is calculated as a probability, with an odds variable, in which only two kinds of outcome are possible. Given a set of independent variables, it is used to predict a binary outcome (1/0, yes/no, True/False) [24]. To estimate the probability with which logistic regression assigns a particular class, it uses maximum likelihood estimation.

$$h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}} \qquad (1)$$

where $\theta$ is the parameter that must be learned and $x$ is the input.

B. Support Vector Machine (SVM)

This algorithm yields a binary classification model while making very few assumptions about the dataset [25]. The classifier is obtained by solving

$$\min_{\gamma,\,\omega,\,b}\ \frac{1}{2}\lVert\omega\rVert^{2} \qquad (2)$$

$$\text{s.t.}\quad y^{(i)}\left(\omega^{T} x^{(i)} + b\right) \ge 1,\quad i = 1,\ldots,m \qquad (3)$$

where $x$ is the input and $\omega$, $b$ are the parameters.

C. AutoRegression Integrated Moving Average (ARIMA)

This model representation is used for forecasting and time series analysis [26]. It is applied to time series data that has been transformed into a stationary series; the linear regression forecast features are obtained after including moving averages and time differences [27].

$$\left(1 - \sum_{k=1}^{p} \alpha_k L^{k}\right)(1 - L)^{d} X_t = \left(1 - \sum_{k=1}^{q} \beta_k L^{k}\right)\varepsilon_t \qquad (4)$$

D. Recurrent Neural Network (RNN)

It is a class of artificial neural network (ANN) structured like a multi-layer perceptron (MLP), with the exception that signals do not flow in only one direction, i.e., they flow both forward and backward in a recurrent manner [28]. During the run-time phase, an RNN outperforms an ANN in total calculation time; its mapping power from the parameter values is very good and results in quality estimates.

E. Bayesian Neural Network (BNN)

A BNN is a nonlinear edition of ridge regression that relies on the Bayesian hypothesis for neural systems; it is a modified form of the multi-layer perceptron (MLP). A BNN is built from processing units organized into three classes: an input layer, an output layer, and one or more hidden layers. Such networks have been successful in many applications, such as financial time series, image recognition, natural language processing, and pattern recognition. A BNN exploits estimation through the use of Bayes' theorem.

$$E_B = \frac{\beta}{2}\sum_{n=1}^{N}\sum_{k=1}^{K}\left(t_{nk} - O_{nk}\right)^{2} + \frac{\alpha}{2}\, w_B^{T} w_B \qquad (5)$$

where $E_B$ is the summation of the errors, $K$ is the size of the output layer, $N$ is the number of training examples, and $w_B$ is the weight vector of the Bayesian neural network.
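To make the model families above concrete, the following minimal sketch fits two of them on a daily closing-price series: ARIMA for a one-step-ahead price forecast and logistic regression for next-day up/down direction. The column layout, lag count, and train/test split are assumptions for illustration, not the setup of any of the reviewed papers.

```python
# Minimal sketch: ARIMA for next-day price, logistic regression for direction.
# `close` is assumed to be a daily closing-price pandas Series.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from statsmodels.tsa.arima.model import ARIMA

def arima_next_day(close: pd.Series, order=(1, 1, 1)) -> float:
    """One-step-ahead forecast of the closing price."""
    fit = ARIMA(close, order=order).fit()
    return float(np.asarray(fit.forecast(steps=1))[0])

def direction_accuracy(close: pd.Series, lags: int = 5) -> float:
    """Hold-out accuracy of predicting next-day up/down moves from lagged returns."""
    returns = close.pct_change().dropna()
    X = np.column_stack([returns.shift(i) for i in range(1, lags + 1)])[lags:]
    y = (returns.values[lags:] > 0).astype(int)
    split = int(0.8 * len(y))                       # simple 80/20 chronological split
    clf = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
    return clf.score(X[split:], y[split:])
```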

4 Result

We considered historical Bitcoin transactions in which the bitcoin price and timestamps were the attributes used to predict the future bitcoin price. From the studies reviewed, we learned that data was collected from different communities, where the number of transactions and the daily prices of each cryptocurrency were crawled, especially from CoinDesk [29]. We took five methods for bitcoin price prediction, namely SVM, LR, ARIMA, RNN, and Bayesian regression, and compared their accuracy on the basis of different features such as the number of transactions, the ups and downs in a day, the block size, and the total number of bitcoins mined; the features are explained in detail in Table 2. Among all methods, ARIMA and Bayesian regression perform well for next-day predictions, but ARIMA performs poorly over longer horizons, such as predicting the next week's prices from the last few days' prices. RNN performs consistently for up to 6 days. If a separable hyperplane exists, logistic regression is able to classify accurately. Our study of this review is summarized by the graph shown in Fig. 1: it shows that the SVM model and Bayesian regression performed well, with aggregate accuracies of 97% and 89%, respectively, whereas logistic regression, the ARIMA model,


Table 2 Features of Bitcoin

S. No.  Features                                 Equations/Definitions
1       The total number of transactions made    The no. of distinct Bitcoin transactions made every day
2       High and Low of the Day                  High and low values of the different days
3       Size of Block                            Size of the average block in Megabytes
4       Total number of Bitcoins                 Number of Bitcoins mined in total
5       Volume of Trade                          Trade volumes in USD from the top exchange

Fig. 1 Accuracy of Bitcoin price predictor model

and the recurrent neural network lag behind, with accuracies of 47%, 52.7%, and 50%, respectively.
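As a hedged illustration of how the features in Table 2 could be assembled for such models, the sketch below builds one feature row per day; the raw column names are assumptions, not the schema of the crawled data.

```python
# Sketch of turning the Table 2 features into a model-ready daily frame.
# The raw column names ("txn_count", "high", "low", ...) are assumptions.
import pandas as pd

def build_feature_frame(raw: pd.DataFrame) -> pd.DataFrame:
    feats = pd.DataFrame(index=raw.index)
    feats["n_transactions"] = raw["txn_count"]          # distinct transactions per day
    feats["daily_range"] = raw["high"] - raw["low"]     # high/low spread of the day
    feats["avg_block_size_mb"] = raw["block_size_mb"]   # average block size
    feats["total_btc_mined"] = raw["total_btc"]         # bitcoins mined in total
    feats["trade_volume_usd"] = raw["volume_usd"]       # USD trade volume, top exchange
    next_ret = raw["close"].pct_change().shift(-1)      # next-day return as the target
    feats["target_up"] = (next_ret > 0).astype(int)
    return feats[next_ret.notna()]                      # drop the last day (no target)
```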

5 Conclusion

Blockchain technology is used to maintain a ledger of all bitcoin transactions and to dynamically track and analyze them at the micro-economic level. The Bitcoin digital currency has evolved from a novel financial experiment to a major currency with exchanges all over the world. By examining the sequential order and analyzing the time series of the Bitcoin pricing process, our review finds that BNN reaches an aggregate accuracy of 87%. Based on the recent studies on predicting Bitcoin prices using machine learning, the SVM model performs well with 97% accuracy. This study opens various possibilities for future work related to Bitcoin prediction. Work should be performed to improve the performance of regression models by introducing an optimized trading strategy and additional hyperparameters in correlation with the prediction. Furthermore, wavelet coherence can


be used to study the movement of digital currencies alongside related factors and to explore the relationship between different cryptocurrencies. Future work should use deep learning techniques, particularly a convolutional neural network capable of generating a predictive model of the Bitcoin price that generalizes reasonably well to unseen market conditions. The model could also incorporate RNN elements into the CNN model to introduce dynamics directly into the network.

References

1. Chaum, D. (1983). Blind signatures for untraceable payments. In Advances in cryptology (pp. 199–203). Boston, MA: Springer US.
2. Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system. Charles Sturt Univ. (p. 9).
3. Mangla, N., Bhat, A., Avabratha, G., & Bhat, N. (2019). Bitcoin price prediction using machine learning. Researchgate, 6(5), 318–320.
4. Javier Iglesias de Ussel, T.-J. (2015). Bitcoin: A new way to understand payment systems. MIT Sloan School of Management thesis, Massachusetts Institute of Technology (pp. 1–68).
5. Cocco, L., Concas, G., & Marchesi, M. (2017). Using an artificial financial market for studying a cryptocurrency market. J. Econ. Interact. Coord., 12(2), 345–365.
6. Kumar, S., Lokesh Soni, J., & Goswami, A. (2019). Machine translation: A brief overview. Journal of Analysis and Computation (JAC), 1–4.
7. Babel, V., Jangir, S. K., & Singh, B. K. (2018). Evaluation methods for machine learning. J. Anal. Comput., XI(I), 1–6.
8. Shah, D., & Zhang, K. (2014). Bayesian regression and Bitcoin.
9. Chen, G., Nikolov, S., & Shah, D. (2013). A latent source model for nonparametric time series classification. Adv. Neural Inf. Process. Syst., 1088–1096.
10. Matta, M., Lunesu, I., & Marchesi, M. (2015). Bitcoin spread prediction using social and web search media. In UMAP Workshops, Proceedings of DeCAT.
11. Gu, B., Konana, P., Liu, A., Rajagopalan, B., & Ghosh, J. (2006). Identifying information in stock message boards and its implications for stock market efficiency. In Workshop on Information Systems and Economics, Los Angeles (pp. 1–5).
12. Madan, I., Saluja, S., & Zhao, A. (2015). Automated Bitcoin trading via machine learning algorithms (Vol. 20, pp. 1–5). http://cs229.stanford.edu/proj2014/Isaac%20Madan.
13. Delfin-Vidal, R., & Romero-Meléndez, G. (2016). The fractal nature of bitcoin: Evidence from wavelet power spectra. In Trends in mathematical economics: Dialogues between Southern Europe and Latin America (pp. 73–98). Springer International Publishing.
14. Kristoufek, L. (2015). What are the main drivers of the bitcoin price? Evidence from wavelet coherence analysis. PLoS One, 10(4).
15. Greaves, A., & Au, B. (2015). Using the Bitcoin transaction graph to predict the price of Bitcoin.
16. Sin, E., & Wang, L. (2018). Bitcoin price prediction using ensembles of neural networks. In ICNC-FSKD 2017—13th Int. Conf. Nat. Comput. Fuzzy Syst. Knowl. Discov. (pp. 666–671).
17. Wang, P. (2011). Pricing currency options with support vector regression and stochastic volatility model with jumps. Expert Systems with Applications, 38(1), 1–7.
18. Khashei, M., & Bijari, M. (2010). An artificial neural network (p, d, q) model for time series forecasting. Expert Systems with Applications, 37, 479–489.


19. Andrade de Oliveira, F., Enrique Zárate, L., de Azevedo Reis, M., & Neri Nobre, C. (2011). The use of artificial neural networks in the analysis and prediction of stock prices. J. Behav. Financ., 18(1), 54–64.
20. McNally, S., Roche, J., & Caton, S. (2018). Predicting the price of Bitcoin using machine learning. In Proc.—26th Euromicro Int. Conf. Parallel, Distrib. Network-Based Process. PDP 2018 (pp. 339–343).
21. Stenqvist, E., & Lönnö, J. (2017). Predicting Bitcoin price fluctuation with Twitter sentiment analysis. Diva, 37.
22. Mallqui, D. C. A., & Fernandes, R. A. S. (2019). Predicting the direction, maximum, minimum and closing prices of daily Bitcoin exchange rate using machine learning techniques. Appl. Soft Comput. J., 75, 596–606.
23. Raghava-Raju, A. (2018). A machine learning approach to forecast Bitcoin prices. International Journal of Computers and Applications, 182(24), 39–46.
24. Mangla, N. (2018). Unstructured data analysis and processing using big data tool-hive and machine learning algorithm linear regression. Int. J. Comput. Eng. Technol., 9(2), 61–73.
25. Huang, W., Nakamori, Y., & Wang, S.-Y. (2005). Forecasting stock market movement direction with support vector machine. Computers & Operations Research, 32(10), 2513–2522.
26. Chen, N. Y., & Chung, S. M. (2006). Forecasting enrollments using high order fuzzy time series and genetic algorithms. Int. Intell. Syst., 21, 485–501.
27. Wei, L. Y. (2016). A hybrid ANFIS model based on empirical mode decomposition for stock time series forecasting. Appl. Soft Comput. J., 42, 368–376.
28. Laboissiere, L. A., Fernandes, R. A. S., & Lage, G. G. (2015). Maximum and minimum stock price forecasting of Brazilian power distribution companies based on artificial neural network. Applied Soft Computing, 35, 66–74.
29. Bitcoin Forum [Internet]: Simple Machines. Available: https://scholar.google.co.in/scholar?b.

A Critical Review on Security Issues in Cloud Computing Priyanka Trikha

1 Introduction

Cloud computing is a programming paradigm that is still evolving; it offers the opportunity to consume various computing services through the network of networks. It should be noted that a programming paradigm constitutes a technological solution whose essential objective is to solve one or several previously defined problems [1]. In this model, as discussed, the information is stored permanently on various Internet servers. Data and applications that live somewhere on the Internet are, with remarkable frequency, represented as a cloud; hence, precisely, the term "cloud computing." Cloud computing [1] is a paradigm that stores information on Internet servers and provides clients with temporary storage on their own devices, including desktops, tablets, and laptops. It is a model that allows the user to access standardized services that respond to his or her needs in an adaptive, fast, and flexible way, paying only for the consumption actually realized. The concept can be approached from three perspectives. First, cloud computing is the result of the evolution of a set of technologies that have been consolidated over several years. Second, it is a technological trend that enjoys great popularity and is widely adopted. Lastly, in the area of cloud-oriented development, the same process methodologies as for traditional development are still used [2]. With the growth of mobile applications, the NC has been integrated with a wide variety of services for mobile users, which has resulted in the emergence of the mobile computing cloud (NCM). The NCM is seen as the new generation of computing infrastructure [3].


Fig. 1 Analysis of cloud computing

However, current challenges in mobile communications such as bandwidth limitations, latency, user mobility, propagation channel effects, and traffic load variations make it difficult to deliver NCM services [4]. This technology [5] refers to an infrastructure in which both the storage and the processing of data take place outside the user's own device (Fig. 1).

1.1 Cloud Access Security Broker

The use of a cloud access security broker (CASB) is one of the solutions available for using the cloud safely. A CASB is software specially designed to protect and control access to the cloud. This relatively new solution for cloud security is considered an external control method, as it sits between the cloud service and the user and controls their communication. In addition, a CASB offers many more functions that serve as monitoring and management tools within the cloud, report irregular processes, and establish the actions to be taken in the event of a security alert. If the CASB is implemented as a gateway, the software sits between the user and the cloud service, that is, directly in the information stream, which allows it to block unwanted actions directly; however, cloud performance may be reduced as the amount of work increases. API-based solutions are more suitable for companies with a larger number of employees [6]: in these cases, the CASB is outside the direct communication between user and cloud, so it cannot directly intervene in these actions, but it also does not reduce the performance of the cloud service.


2 Literature Review

S. Zissis and D. Lekkas et al. (2015) proposed that cloud computing will become the infrastructure for a future generation of computing. Cloud computing is defined as a pool of computing resources that can be accessed via the Internet [1]; usually, users store their data in the cloud with a firewall and other security services to keep the data safe from third parties or other intruders. Armbrust, Fox et al. (2014) note that virtualization is a crucial term in cloud computing: its virtual switching enables the user to spawn a network topology as virtual machines and connect them with software. Since the performance of the data or software depends on the load of the server, it is very difficult to test the virtual circuit switching capability under high bandwidth. Ashraf I. et al. proposed a study on testing as a service in the cloud. Software testing is a process used for evaluating an attribute or capability of a program, and it also checks whether the program meets the user's requirements. In the present scenario, testing has become an important activity in terms of security, performance, and usability. If a user performs testing on his own, the security of the data becomes too expensive for the user; cloud computing can provide anything as a service, so a user can use testing services without maintaining and upgrading the system, which is more cost-effective [5, 7].

3 Security in Cloud Computing

3.1 The Major Security Considerations Taken into Account

A recurring question when a user consumes the cloud, whether in the form of applications (SaaS), a development platform (PaaS), or hardware resources (IaaS), is what security guarantees the service providers give and whether they are more or less secure than traditional solutions [8]. The simple fact that applications, hardware platforms, or resources are offered as a service through the cloud makes them neither more nor less secure. The only difference [9, 10, 15] is that the service provider is responsible for managing certain aspects of security that would otherwise be the user's responsibility.

3.2 Security Issues in Cloud Computing

Security is one of the most concerning issues in information technology, and keeping organizational or user data safe is the primary concern. If an organization's data is not safe in the cloud, then there is no point in shifting from the old technology to cloud technology.


3.2.1 Misuse of Cloud

Some malicious persons can use cloud computing for illegal and criminal activities; the use of the cloud for illegal purposes is called cloud abuse. A malicious user can use the cloud to host harmful code or to provide pirated data to a large number of users. Often a malicious user creates an advertisement in the cloud that attracts users and asks for their personal information, which is then used for illegal tasks or for sending spam to the users' addresses. This kind of advertisement in the cloud is a form of cloud abuse [11].

3.2.2 Weak and Insecure Application Programmable Interface

Data and services in the cloud are accessed via application programmable interfaces (APIs). Design errors in a weak API tend to expose services and data to unwanted users; for example, a vulnerability in the Apache Web Server can give a user access to the full server. Sometimes, in shared cloud services, data can be shared between various users because of a malfunction in the API, and sometimes because users' privacy settings are overwritten [12]. Most of the time, a weak API is the result of designing the API without all the necessary security measures, but sometimes a weak API is designed intentionally for malicious activities.

3.2.3 Insider Theft

Although users can trust the CSP, they cannot trust all of its employees, as not everyone is good. A malicious employee of the company can inspect individual data or steal it for some illegal purpose [13], and some CSP companies may even provide our data to another company to earn money.

3.2.4 Security Issues in Shared Cloud

Issues due to virtualization: with a virtualization architecture, users of an IaaS service have the ability to create many virtual machines on the same server. Attacks on such machines have been demonstrated by describing attacks on Amazon EC2 [14, 15]. According to that work, users can gain access to the internal machine and learn how many virtual machines are running on the same server; secondly, attackers can place their own virtual machine on the same server, and once it is installed there, they are able to mount many attacks and can easily find out information about keystroke timings, CPU cache use, and network traffic rates.


3.2.5 Combined Services

Some services in the cloud (public or community) are based on, and dependent on, other services. In such composed services, the user's information is shared among all the services that make up the service the user is using; this leads the user to share personal information with service providers he or she did not intend to, and the back-end services can misuse that personal information without the user knowing.

3.2.6 Data Damage and Loss

Data is the most important part of cloud services, so great attention should be given to its security. Data damage and loss can happen in two ways: due to natural causes or due to man-made problems. Data loss due to natural causes occurs, for example, when servers are damaged by an earthquake at the server location, by a fire in the server room, by physical damage to the server, or by hardware problems from which data cannot be recovered; this type of problem can be mitigated using replica servers, but that raises new security concerns and doubles the cost. Data damage due to man-made problems is an attack by a malicious user, who might delete, overwrite, or move all the data. A malicious user can also change the access permissions of the data so that the original users cannot access it, which is equivalent to data damage, or can alter the services so that the data a user intended to store in cloud storage is captured instead. The solution to such problems is regular checking of the APIs of cloud services [16, 17] and a secure access control list that is not easily available to malicious users.

3.2.7 Eavesdropping

According to the dictionary, eavesdropping means listening to others' private conversations without their knowledge. In computing, eavesdropping means listening to others' chats, messages, and telephone audio without the users' knowledge. Eavesdropping is also known as a man-in-the-middle attack (MIMA): a malicious person establishes separate connections to a cloud user that look like the original connection between the cloud and the user, and thereby redirects all of the conversation and data to his own storage devices. It is similar to the phishing of Web sites, in which the user is shown a Web page that looks like the original one, so that the user enters his data and the data goes to the malicious user's storage devices [18].

4 Result and Finding

As users take the question of where the server is hosted seriously, American companies increasingly keep European customer data on European servers. However, this is not the only element that should be considered, because the location of the head office is also


very important in determining the security of the data: even when the server of a US-based company is located in Spain, the US offices may have access to the data, despite all the voices against this and the judicial litigation.

4.1 Web Provider Data Protection Policy

If you want to know how the different cloud services handle the data stored in them, you have to read their data protection provisions, not forgetting that the main providers such as Google or Apple in particular do not obtain most of their economic benefit by charging their users progressive fees, but precisely by using their data. Due to a rather lax data protection policy, continually criticized by those struggling to protect information, there is a certain margin that these companies use for their own benefit.

4.2 Data Encryption

Those who store data online should always encrypt it, as this method is of great help in ensuring its protection. Although there are many different encryption processes, they tend to present some technical complexity; hence, most users rely on the encryption methods of the cloud providers themselves. These require less knowledge, but they also prevent the user from checking whether the measures are sufficient, especially in the case of public clouds that offer little transparency about how the information is processed.
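As a minimal sketch of client-side encryption before upload, assuming the Python cryptography package, the snippet below uses the Fernet recipe; key storage and rotation, which are the hard part in practice, are out of scope here.

```python
# Minimal sketch: encrypt locally, upload only ciphertext.  Uses the Fernet
# recipe from the "cryptography" package; key management is out of scope.
from cryptography.fernet import Fernet

def encrypt_file(path: str, key: bytes) -> bytes:
    """Return the encrypted bytes of a local file."""
    with open(path, "rb") as f:
        return Fernet(key).encrypt(f.read())

def decrypt_blob(ciphertext: bytes, key: bytes) -> bytes:
    """Decrypt data downloaded from the cloud back into plaintext."""
    return Fernet(key).decrypt(ciphertext)

# Example (hypothetical file name):
# key = Fernet.generate_key()               # keep this key away from the cloud provider
# blob = encrypt_file("report.pdf", key)    # upload `blob` instead of the raw file
```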

4.3 Two or Multiple Factor Authentication

While complex CASB solutions are specifically designed to ensure greater cloud security, the different authentication methods are considered their most important additional components and have become a decisive method of cloud protection. These systems are responsible for controlling access to a cloud service and regulating who can use it. Companies are increasingly turning to an external authentication service (identity provider), a third competent authority between the cloud provider and the user. If a user wants to use a company's computing service, he is first redirected to an authentication system in which he must normally identify himself with a password [19]. The identity provider, or the authentication method chosen, is decisive for the use of a secure cloud storage service. An authentication method is especially secure if at least one additional parameter is required to unlock it; this is referred to as two-factor or multi-factor authentication, understood as the most effective measure to guarantee the security of access to the cloud.
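A second factor based on time-based one-time passwords can be sketched as follows, assuming the pyotp package; in a real deployment the identity provider would handle provisioning and verification.

```python
# Sketch of a TOTP second factor using the "pyotp" package; a real deployment
# would delegate this to the identity provider.
import pyotp

secret = pyotp.random_base32()            # provisioned once per user, stored server-side
totp = pyotp.TOTP(secret)

# URI that an authenticator app can scan (hypothetical account and issuer names):
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCloud"))

def second_factor_ok(submitted_code: str) -> bool:
    """Accept the login only if the password check passed AND the TOTP code matches."""
    return totp.verify(submitted_code, valid_window=1)   # tolerate one 30-second step of drift
```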


4.4 Dynamic Authorization Cloud rights management refers not only to the identification of users, but also to the authorization of permissions. The word authorization describes the set of rights that each user enjoys within the cloud, rights that one or more administrators individually assign to each employee in the multi-user scope of a company. These authorizations regulate who can modify settings, who has access to certain subfolders or only for a limited time, as well as which employees can view a document but cannot modify it. To use cloud services securely, companies must have dynamic authorization processes.

4.5 Protect the Corporate Network To provide the necessary security, companies must efficiently protect not only the individual services but also the surrounding structure, that is, the company's own network. This is especially important for those who work with cloud services, since user credentials can also be stolen through an insufficiently secured business network; in fact, this is one of the most common methods of gaining unauthorized access to cloud services. Large companies that operate more complex networks should consider protecting internal networks with dedicated security devices such as firewalls or antivirus software. An external firewall, also known as a hardware firewall, offers the advantage of being built to control the connection between two networks and prevent unwanted network access.

4.6 Centrally Manage Identity Securely Cloud security is a real challenge, especially for large companies that have users in different cities or countries. If they intend to use the cloud to improve the pace of work, they need to match and centralize the heterogeneous identities of their users. While small businesses resort to central identity management through external services, larger companies build their own networks; and while start-up companies adopt cloud computing from the beginning, older companies are forced to integrate it later. In short, secure and centralized identity management is also an important task for ensuring the security of cloud services. For these changes to work, and for the identities of the workers to be safely integrated into a centralized structure, an integration layer is required in the network system architecture that concentrates information on employee identities and allows their centralized management. By implementing this type of integration layer, companies are also able to manage cloud services securely.


4.7 Analysis of Security and Privacy in the Cloud Several studies affirm the importance of security and privacy in the adoption of the cloud in SMEs (Gupta et al. 2013; Lian et al. 2014). 75% of the ICT [16] directors of organizations were concerned about cloud security and argued that Google does not encrypt data on its servers. However, it is observed that at the user level 66% of USB [17, 20] drives are lost; seen in that light, the cloud is safer (Sultan 2011). Security has no limits, though, and requires continuous security updates that improve the effectiveness of the technology; these updates, in turn, involve considerable periods of time until all users or organizations are updated.

4.8 Recommendations for Cloud Computing Security Cloud computing gives establishments must-have tools that allow them to control administration, accounting, reservations, and customer management. Cloud computing applications are offered mostly under the SaaS (Software as a Service) modality [19], and its solutions are used in the management of accommodation. One of its applications is in property management systems (PMS), which are especially relevant as the central core of establishment management.

4.9 Privacy in Cloud Computing Finally, we must not forget privacy, or the level of protection given to a user in the cloud. Here it is especially recommended to follow the guidelines of Data Protection and the Organic Law on Data Protection, as well as the Law on Services of the Information Society. As the organization grows, it must keep in mind that when its needs in the cloud grow, the cloud becomes a very critical part of the global security infrastructure [20]. The cloud does not have a delimited security zone. All the benefits and cost savings that come with it are achieved by moving the servers outside the traditional security zone of the organization, which therefore delegates security to the cloud provider, with whom a contractual relationship regulated by a contract is maintained.

5 Conclusions The fact that cloud environments proliferate exponentially forces potential users to better understand these environments and their main problems. When choosing cloud services, it is important to be clear about the type of infrastructure that supports them and the type of service offered. After the analysis carried out in this report, a global vision of this problem is obtained and conclusions common to all points of view are drawn. Data security and privacy are among the key aspects. If we add to this the problem of data security after the adoption and incorporation of cloud technologies by the sector, the initial question of this research work can be answered: the sector is not sufficiently prepared. We must unite to deal with the lack of knowledge among users and organizations and move toward planning the use of the cloud in terms of privacy, generation of user confidence, and confidentiality of the information that is managed from tourist establishments.

References
1. Zissis, S., & Lekkas, D. (2012). Addressing cloud computing security issues. In Proceedings of Future Generation Computer Systems, March 2012.
2. Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., et al. (2010). A view of cloud computing. Communications of the ACM Magazine, 53, 50–58.
3. Ashraf, I. (2014). An overview of service model of cloud computing. International Journal of Multidisciplinary and Current Research, 2, 779–783.
4. Bala Narayada Reddy, G. (2013). Cloud computing—types of cloud. Retrieved from http://bigdatariding.blogspot.my/2013/10/cloud-computing-types-of-cloud.html.
5. Christina, A. A. (2015). Proactive measures on account hijacking in cloud computing network. Asian Journal of Computer Science and Technology, 4, 31–34.
6. Choubey, R., Dubey, R., & Bhattacharjee, J. (2011). A survey on cloud computing security challenges and threats. International Journal on Computer Science and Engineering (IJCSE), 3, 1227–1231.
7. Cloud Security Alliance. (2013). The notorious nine: Cloud computing top threats in 2013. Retrieved from https://downloads.cloudsecurityalliance.org/initiatives/top_threats/The_Notorious_Nine_Cloud_Computing_Top_Threats_in_2013.pdf.
8. Dinesha, H. A., & Agrawal, V. K. (2012). Multi-level authentication technique for accessing cloud services. International Journal on Cloud Computing: Services and Architecture (IJCCSA), 2, 31–39.
9. Doelitzscher, F., Sulistio, A., Reich, C., Kuijs, H., & Wolf, D. (2011). Private cloud for collaboration and e-Learning services: From IaaS to SaaS. Journal of Computing—Cloud Computing, 91, 23–42.
10. Hamlen, K., Kantarcioglu, M., Khan, L., & Thuraisingham, B. (2012). Security issues for cloud computing. Optimizing Information Security and Advancing Privacy Assurance: New Technologies, 8, 150–162.
11. Jain, S., Kumar, R., Kumawat, S., & Jangir, S. K. (2014). An analysis of security and privacy issues, challenges with possible solution in cloud computing. In Proceedings of the National Conference on Computational and Mathematical Sciences (COMPUTATIA-IV) (pp. 1–7).
12. Kandias, M., Virvilis, N., & Gritzalis, D. (2011). The insider threat in cloud computing. In Proceedings of 6th International Conference on Critical Infrastructure Security (pp. 95–106).
13. Khoshkholghi, M. A., Abdullah, A., Latip, R., Subramaniam, S., & Othman, M. (2014). Disaster recovery in cloud computing: A survey. Computer and Information Science, 7, 39–54.
14. Khurana, S., & Verma, A. G. (2013). Comparisons of cloud computing service model: SaaS, PaaS, IaaS. International Journal of Electronics & Communication Technology (IJECT), 4, 29–32.
15. Kiblin, T. (2011). How to use cloud computing for disaster recovery. Retrieved from http://www.crn.com/blogs-op-ed/channel-voices/230700011/how-to-use-cloud-computing-for-disaster-recovery.htm.
16. Kill, A. (2013). Cloud computing risk: Due diligence and insurance. Retrieved from http://www.metrocorpcounsel.com/articles/17928/cloud-computing-risks-due-diligence-and-insurance.
17. King, N. J., & Raja, V. T. (2012). Protecting the privacy and security of sensitive customer data in the cloud. Computer Law & Security Review, 28, 308–319.
18. Kuyoro, S. O., Ibikunie, F., & Awodele, O. (2011). Cloud computing security issues and challenges. International Journal of Computer Networks (IJCN), 3, 247–255.
19. Li, A., Yang, X., Kandula, S., & Zhang, M. (2010). CloudCmp: Comparing public cloud providers. In Proceedings of the 10th ACM SIGCOMM Conference on Internet Measurements (pp. 1–14).
20. Malimi, N. (2014). Cloud computing. Retrieved from http://ngeleki.blogspot.my/2014/03/what-is-cloud-computing.html.

Smart Traveler—For Visually Impaired People Amrita Rai, Aryan Maurya, Akriti, Aditya Ranjan, and Rishabh Gupta

1 Introduction With the advancement of modern technologies, Android-based applications are being used all over the world. We are working to help visually impaired people travel safely and comfortably. At present, a number of applications such as SIMPLEEYE and BLIND SQUARE are being used by blind people. SIMPLEEYE is based on the Braille script, and BLIND SQUARE is used only for navigation, not for sending emergency alerts or detecting obstacles [1, 2]. Thus, the present systems used for navigating blind people are not complete solutions, whereas the smart traveler is a better option than the products currently in use. It implements both hardware and software: in hardware, we have used a microcontroller and various sensors to design an electronic device which acts as an obstacle detector.

A. Rai · Akriti · A. Ranjan · R. Gupta ECE Department, GL Bajaj Institute of Technology and Management, Greater Noida, UP, India e-mail: [email protected] Akriti e-mail: [email protected] A. Ranjan e-mail: [email protected] R. Gupta e-mail: [email protected] A. Maurya (B) CSE Department, GL Bajaj Institute of Technology and Management, Greater Noida, UP, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_71


An Android application also accompanies the device; it alerts the traveler to upcoming obstacles and potholes on the basis of data received from the obstacle detector [3]. Apart from sending alerts, the application also provides a number of useful features to make traveling easier.

2 Related Works 2.1 Be My Eyes Be the eyes for a blind person in need of help remotely through a live video connection if you are sighted or be assisted by the network of sighted users if you are blind [4].

2.2 Ariadne GPS Ariadne GPS is a new app that brilliantly meets the needs of the blind in an easy to use interface. Talking maps allow you to explore the world around you by moving your finger around the map. While exploring, crossing a street is signaled by vibration. Rotating maps keep you centered, with territory behind the user on the bottom of the screen and what is ahead on the top portion. Available in multiple languages, Ariadne GPS works anywhere Google Maps are available [5].

2.3 Navigon Mobile Navigator North America: NAVIGON’s Mobile Navigator North America transforms the iPhone into a fully functional mobile navigation system that uses the latest NAVTEQ map material. The app offers text-to-speech voice guidance, enhanced pedestrian navigation, a turn-by-turn Route List, location sharing via email, and a Take Me Home function. It also provides direct access and navigation to iPhone address book contacts. Navigation is automatically resumed after an incoming phone call [6].

2.4 Wayfindr This organization has been using wireless beacons to provide the same service as the iconic signs that sighted people use—as the company’s name suggests, beacons are a wayfinding tool that don’t rely on vision. They can be placed anywhere—on


a ceiling or a wall, along escalators or on platforms, and are fully programmable by the installer. They work by sending out information via a Bluetooth signal to a smartphone that has the accompanying app installed. The app then provides audio descriptions and directions (e.g. turn left and then walk forward until you reach the escalator) to the user, so that they can navigate their way safely around the station [7].

3 Need and Significance of This Paper This paper elaborates an innovative work for helping visually impaired people, as the use of an Android application integrated with a hardware device (the obstacle detector) stands out as a very different and user-friendly approach to helping visually impaired people navigate. The system is completely based on voice commands and is quite customizable. The working prototype (shown in Fig. 1) gives a decent level of accuracy and works quite well. The system also integrates all the features necessary for navigating visually impaired people. Therefore, it becomes vital to pen down this ingenious work in the form of a paper. The paper describes the functioning and features of the smart traveler system and will be quite resourceful as a future reference for any work related to navigation for visually impaired people. The smart traveler system benefits society by reducing the number of road accidents that occur when visually impaired people commute on the streets without any support. It can also be a lifesaver, as it can send alerts via voice commands if the user is trapped in a critical situation. The smart traveler makes visually impaired people more self-dependent and is a small step towards making their lives easier. Fig. 1 Working prototype of obstacle detector device


4 Obstacle Detector and Device The obstacle detector device is a sensor-based device for detecting and warning the blind person about any hindrance or obstacle which may come in the path while walking. The real-time data it collects about the obstacles surrounding the blind person is transferred to the mobile application developed for the obstacle detector, which gives information through voice commands to the blind person so that he can stay alert and avoid colliding with obstacles or falling into open manholes on the streets [8–11]. It can also detect variations in ground level, such as upcoming stairs. The obstacle detector consists of two sensors: the first detects obstacles in front of the person, and the second detects obstacles on the ground such as potholes and stairs. The obstacle detection device is attached to the belt of the visually impaired person, or it can be mounted on the person's shirt or shoes.

5 The Major Components of Obstacle Detector Device and Its Working 5.1 Arduino Uno The Arduino Uno is a microcontroller board based on the ATmega328 AVR microcontroller. It can be powered by an AC-to-DC adapter or by a USB cable, and it can be programmed and reprogrammed according to the needs of the user to perform any specific task. The specification of the Arduino Uno is given in Table 1.
Table 1 Specification of Arduino Uno

S. No. | Component Name | Specification
1 | Microcontroller | ATmega328P
2 | Operating Voltage | 5 V
3 | Input Voltage (recommended) | 7–12 V
4 | PWM Digital I/O Pins | 6
5 | Analog Input Pins | 6
6 | Flash Memory | 32 KB (0.5 KB used by bootloader)
7 | Clock Speed | 16 MHz


The Arduino board connects to the sensor devices and provides connectivity and hardware control; it is the platform on which the hardware implementation of the system is realized.

5.2 HC-SR04 Ultrasonic Range Finder The ultrasonic sensor basically works on the SONAR principle: it finds distance by using ultrasonic waves. The sensor first emits ultrasonic waves; these waves propagate forward, and if any obstacle is encountered in the path of a wave, the wave collides with the obstacle and returns to the receiver of the sensor. The journey time of the wave is recorded, and the distance is calculated by multiplying the sonic speed by half the time taken for propagation of the wave. These sensors keep emitting ultrasonic waves and thus give continuous obstacle distance values [3, 11–13]. The distance can be calculated by the following formula:
Distance = 1/2 × Time (for emission and reception) × Speed of ultrasound wave
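To illustrate the formula, the following is a minimal Arduino-style snippet (illustrative only, not the authors' code; the trigger and echo pin numbers are assumptions) that measures the echo time and converts it to a distance:

```cpp
// Illustrative HC-SR04 reading; TRIG on pin 9 and ECHO on pin 10 are assumed.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  // A 10 µs trigger pulse starts one measurement cycle.
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  // Echo pulse width = time for emission and reception, in microseconds.
  long timeUs = pulseIn(ECHO_PIN, HIGH);

  // Distance = 1/2 x time x speed of ultrasound (~0.0343 cm/µs).
  float distanceCm = 0.5 * timeUs * 0.0343;
  Serial.println(distanceCm);
  delay(100);
}
```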

5.3 HC-05 Bluetooth Module The HC-05 is a communication device that is used for transferring data. It has a 9600 baud rate and can be interfaced with any microcontroller that supports USART. Here it transfers the obstacle distance data collected from the sensor to the mobile application. HC-05 technical specifications [14]:
• Operating Voltage: 3.3–5.0 V
• Operating Current: 30 mA
• Works with serial communication (USART) and is TTL compatible
• Follows the IEEE 802.15.1 standardized protocol
• Uses Frequency-Hopping Spread Spectrum (FHSS)
• Supported baud rates: 9600, 19,200, 38,400, 57,600, 115,200, 230,400, 460,800

The HC-05 Bluetooth module has some default values: its default password is 1234 or 0000, its default communication role is Slave, and its default mode is Data Mode. Its communication range is up to 10 meters. It has six pins: EN, VCC, GND, TXD, RXD, and STATE. A red LED indicates whether the Bluetooth module is connected; its blinking rate shows the connection status. To set up connectivity with the Android application, first search for the device with the name "HC-05" and then pair the smartphone with it.


6 Android Application Details The obstacle detector is accompanied by an Android application. As the whole project is for blind people, the app is completely voice and gesture controlled. The application can perform various tasks such as calling a person, sending an emergency alert, and navigating to a place with just voice commands. When the smartphone is paired with the obstacle detector's Bluetooth, the app starts alerting the person in real time if any obstacle or pothole comes in their way while walking on the street.
Minimum requirements for the application:
1. Android version 5.1 Android Lollipop (API 22)
2. Internet connection
3. Bluetooth
4. GPS enabled
Permissions required:
1. To access the microphone
2. To send SMS
3. To use the telephone

6.1 Features of Application The app has a basic layout; the whole screen is divided into two parts: (1) Upper Half (Tap) and (2) Lower Half (Hold). Upper Half—The upper part is gesture based. The gestures are Single Tap, Double Tap, and Triple Tap; the actions performed on the various taps are listed in Table 2. Lower Half—The lower half of the screen acts like a walkie-talkie: the person needs to hold the screen, speak the command, and then release. The application detects the command and acts accordingly, just like an assistant; machine learning is used for decoding the sentence and performing the corresponding action. A screenshot of the application is shown in Fig. 2. As this project is under development, the app can currently detect only a small number of commands, but support for more commands and features is being added constantly. Since the app is paired with the obstacle detector, the app starts issuing warnings based on the data received from the sensors as soon as the person starts walking on the street (Table 3).

Table 2 Functions performed on various taps
Taps | Action performed
Single Tap | Tells current time
Double Tap | Sends a message to an emergency contact including current location
Triple Tap | Starts navigation to nearest bus stand

Fig. 2 Screenshot of application with specification

Table 3 Various voice commands and actions performed
Example commands | Actions performed
Where am I? / Tell me about my current location / Where am I standing? | Speaks the current location of the person (the phone's GPS is used for finding the location)
This is an emergency / Send alert | Calls emergency contact
Call police | Calls nearest police station
Call ambulance | Calls ambulance
Call fire brigade | Calls fire brigade
Start navigation to *destination | Starts navigation to the destination using Google Maps
Call a cab / Book a cab | Calls the specified cab service
Unrecognized command | Responds "Unrecognized command, please try again"
*destination is any place or address spoken by the user


7 Actual Implementation Results and Simulation The block diagram of the proposed system is shown in Fig. 3 and basically consists of four modules. The first module is the real-time ultrasonic sensor for sensing obstacles; the second is the Arduino Uno microcontroller, which controls the entire system; finally, through the Bluetooth module, the data is received and transmitted to the Android app. Figures 1 and 4 show the working prototype of the obstacle detector device. A minimum of two ultrasonic sensors is required for detecting obstacles: one sensor detects obstacles in front of the person and the other detects obstacles on the ground. The ultrasonic sensor gives the distance of the obstacle from the person. This data is passed to the Arduino board and then shown on an LCD or signaled by a buzzer to alert the user. This data is also transferred to the Android application through the Bluetooth module so that the application can give voice commands for navigating the user by warning about the distance of the obstacles [15]. The Android application works in combination with the obstacle detector device. The obstacle detector device is mounted on the buckle of the waist belt of the person and is interfaced with the smartphone, which keeps giving voice commands to navigate the person. The hardware is powered by a CMOS battery which can be either placed in the pocket of the person or mounted on the belt itself. Thus the smart traveler system consists of a wearable belt and a smartphone, making it very handy and easy to use. Using a self-designed PCB, it will become a cheaper and more lightweight device compared to other available devices.
Fig. 3 Block diagram of proposed system
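To make the described data flow concrete, here is a minimal Arduino-style sketch (illustrative only, not the authors' code) combining the two HC-SR04 sensors, the buzzer, and the HC-05 link; the pin assignments and the 50 cm/30 cm warning thresholds are assumptions made for the sketch.

```cpp
#include <SoftwareSerial.h>

// Illustrative pin assignments (not taken from the paper).
const int FRONT_TRIG = 2, FRONT_ECHO = 3;   // sensor facing forward
const int GROUND_TRIG = 4, GROUND_ECHO = 5; // sensor facing the ground
const int BUZZER_PIN = 8;
SoftwareSerial bt(10, 11);                  // HC-05 in data mode: RX, TX

long readDistanceCm(int trigPin, int echoPin) {
  digitalWrite(trigPin, LOW);  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH); delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH, 30000UL); // time of flight in microseconds
  return (duration * 0.0343) / 2;                  // half the round trip times speed of sound
}

void setup() {
  pinMode(FRONT_TRIG, OUTPUT);  pinMode(FRONT_ECHO, INPUT);
  pinMode(GROUND_TRIG, OUTPUT); pinMode(GROUND_ECHO, INPUT);
  pinMode(BUZZER_PIN, OUTPUT);
  bt.begin(9600);  // default HC-05 baud rate
}

void loop() {
  long front = readDistanceCm(FRONT_TRIG, FRONT_ECHO);
  long ground = readDistanceCm(GROUND_TRIG, GROUND_ECHO);

  // Assumed thresholds: obstacle closer than 50 cm, or a drop deeper than 30 cm.
  bool obstacle = (front > 0 && front < 50) || (ground > 30);
  digitalWrite(BUZZER_PIN, obstacle ? HIGH : LOW);

  // Forward both readings to the Android app, which speaks the warning.
  bt.print("F:");  bt.print(front);
  bt.print(",G:"); bt.println(ground);
  delay(200);
}
```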

Fig. 4 Obstacle detector display and Bluetooth module for data transmission

Fig. 5 Time of flight versus measured distance of ultrasonic sensor (sensor output in ms plotted against distance in cm)

The ultrasonic sensor (HC-SR04) was verified separately as well as in the integrated system. The working of the ultrasonic sensor depends upon the echo reflected from different objects, and it generates a signal according to the obstacle. The ultrasonic sensor output uses the time-of-flight (TOF) factor for detection of an obstacle and varies with distance as shown in Fig. 5. To evaluate the performance of the smart traveler prototype, the system was tested in real-world conditions in our lab with blindfolded participants. The experiments were carried out using 10 obstacles. The experiment was conducted with six blindfolded participants, of which two used no system, two used the prototype system without training, and the rest used it with training. During the experiment, each participant walked through a testing area of about 16 × 15 m containing different obstacles. The users' walking speeds were recorded and are depicted in Fig. 6. The difference between using the smart traveler with training, using it without training, and not using it is clearly visible in Fig. 6; the ratio is 4:3:2, which reflects the performance of the prototype. If the device is fabricated into the belt of a blind person and training is provided, it can enable a better traveling speed and also increase the user's confidence in avoiding obstacles along a free path.

Fig. 6 Performance and accuracy of smart traveler (travel speed in seconds plotted per user, for 10 obstacles)


8 Conclusion In this paper, an Android-based mobile application combined with an obstacle detector device using an Arduino microcontroller is proposed and implemented. The proposed system aims to ensure ease of traveling and efficient daily life for the intended audience. It will help them travel freely and safely, thus making them more self-dependent and giving them complete control over their commute. The proposed system can reduce the number of road accidents and increase users' morale. In the future, the proposed system can also include digital image processing to further reduce human effort and help visually impaired people travel without harassment.

References
1. https://simpleeye.com.
2. https://www.blindsquare.com.
3. Dimitrov, A., & Minchev, D. (2016, 29 May–1 June). Ultrasonic sensor explorer. In 2016 19th International Symposium on Electrical Apparatus and Technologies (SIELA). Available at https://ieeexplore.ieee.org/document/7542987/authors#authors.
4. https://www.bemyeyes.com/.
5. http://www.ariadnegps.eu/.
6. https://www.imore.com/app-review-navigon-mobilenavigator-north-america-iphone.
7. https://btplc.com/inclusion/NewsAndEvents/LookWhosTalking/wayfindr/index.htm.
8. Nahar, V. V., Nikam, J. L., & Deore, P. K. (2016, April). Smart blind walking stick. International Journal of Modern Trends in Engineering and Research (IJMTER).
9. Sukhija, N., Taksali, S., Jain, M., & Kumawat, R. (2014). Smart stick for blind man. International Journal of Electronic and Electrical Engineering, 7(6).
10. Nada, A., Mashelly, S., Fakhr, M. A., & Seddik, A. F. (2015, April). Effective fast response smart stick for blind people.
11. Adhe, S., Kunthewad, S., Shinde, P., & Kulkarni, V. S. (2015). Ultrasonic smart stick for visually impaired people. IOSR Journal of Electronics and Communication Engineering (IOSR-JECE).
12. Paulet, M. V., Salceanu, A., & Neacsu, O. M. (2016, 20–22 October). Ultrasonic radar. In 2016 International Conference and Exposition on Electrical and Power Engineering (EPE). Available at https://ieeexplore.ieee.org/document/7781400.
13. Hoomod, H. K., & Al-Chalabi, S. M. M. (2017, June). Objects detection and angles effectiveness by ultrasonic sensors HC-SR04. International Journal of Science and Research (IJSR), 6(6).
14. https://components101.com/wireless/hc-05-bluetooth-module for HC-05 Bluetooth module features and specifications.
15. Al-Fahoum, A. S., Al-Hmoud, H. B., & Al-Fraihat, A. A. (2013). A smart infrared microcontroller-based blind guidance system. Active and Passive Electronic Components.

Comparative Study of Stability Based AOMDV and AOMDV Routing Protocol for MANETs Polina Krukovich, Sunil Pathak, and Narendra Singh Yadav

1 Introduction A mobile ad hoc network (MANET) is a group of wireless nodes which are joined with the support of wireless connections. Such a network may be created dynamically without the requirement of any fixed infrastructure. Because of the dynamic nature of the nodes, each mobile node is free to move to any location, and this movement requires extra effort to maintain the routing information [1]. Routing is a key issue in ad hoc networking because of the dynamic changes in the topology. There are certain on-demand routing protocols which build routes based on request; the route discovery process takes place when there is a need to find a route. The information of currently used routes is always maintained to minimize control overhead and routing load. Existing routing algorithms always use a single route between sender and receiver in MANETs. Because of device mobility, link failure, and the inability of a mobile node, the existing route may become unavailable. In the presence of node mobility and route failure, additional packet exchanges are required to form a new route, which increases packet delivery delay and overhead [2].

P. Krukovich Faculty of Computer Systems and Networks, Belarusian State University of Informatics and Radioelectronics, Minsk, Belarus e-mail: [email protected] S. Pathak (B) Amity University, Jaipur, Rajasthan, India e-mail: [email protected] N. S. Yadav Department of Information Technology, Manipal University, Jaipur, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_72


2 Related Work The primary principle of re-active routing algorithms is known as on-demand route discovery, where a route is formed based on requirement. AODV and AOMDV are discussed in this related work section.

2.1 Ad Hoc On-Demand Distance Vector Routing Protocol (AODV) AODV is a re-active routing protocol that determines routes when required, based on an on-demand route discovery mechanism. AODV maintains a routing table with the next hop for reaching destinations. If a mobile device needs to forward a data packet and it does not know the required path toward the target, it starts the route discovery process by broadcasting an RREQ message. All neighboring devices store information about where the message was received from and resend it to their neighboring nodes until the message is delivered to the endpoint node. The endpoint node replies with an RREP, which travels back to the source on the reverse path along which the RREQ came. An intermediate node can also send an RREP if it knows a route to the destination. When the RREP arrives at the source, communication between the source and the destination can begin [3].
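As a rough illustration of this mechanism (not the protocol's reference implementation; the structure names, table layout, and omitted fields such as sequence numbers are assumptions), the core of RREQ handling can be sketched as follows:

```cpp
#include <cstdint>
#include <map>
#include <set>
#include <utility>

// Minimal illustrative structures; a real AODV implementation also tracks
// sequence numbers, route lifetimes, and precursor lists.
struct RouteEntry { uint32_t nextHop; uint32_t hopCount; };

std::map<uint32_t, RouteEntry> routingTable;        // destination -> next hop
std::set<std::pair<uint32_t, uint32_t>> seenRreqs;  // (source, broadcast id) already handled

void sendRrep(uint32_t src, uint32_t dst, uint32_t via) { /* unicast RREP along reverse path (omitted) */ }
void rebroadcastRreq(uint32_t src, uint32_t dst, uint32_t id, uint32_t hops) { /* flood to neighbors (omitted) */ }

void handleRreq(uint32_t src, uint32_t dst, uint32_t broadcastId,
                uint32_t prevHop, uint32_t hopCount, uint32_t myAddr) {
  // Drop duplicates of an RREQ that was already processed.
  if (!seenRreqs.insert({src, broadcastId}).second) return;

  // Remember where the RREQ came from: this is the reverse path the RREP will follow.
  routingTable[src] = {prevHop, hopCount + 1};

  if (dst == myAddr || routingTable.count(dst)) {
    sendRrep(src, dst, prevHop);   // destination, or a node that knows a route, answers
  } else {
    rebroadcastRreq(src, dst, broadcastId, hopCount + 1);  // keep flooding toward dst
  }
}
```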

2.2 Ad Hoc on-Demand Multipath Distance Vector Routing Protocol (AOMDV) Ad hoc On-demand Multipath Distance Vector Routing (AOMDV) protocol is an extension to the AODV protocol. It finds various paths such that they are loop-free and link or node disconnect. There may be much option in next hops for destination with similar sequence number, which assistances in care track of a route. AOMDV maintains a publicized hop count for all destinations. [4]. Various algorithms [5– 11] has already implemented for 1Hop communication in clustering algorithm in MANETS.

2.3 Stability Based AOMDV Routing Protocol AOMDV is an on-demand routing algorithm which broadcasts an RREQ (route request) packet to all neighbors in order to determine the route. Intermediate nodes check the destination address and forward the RREQ to their neighbors, so RREQ packets traversing different paths arrive at the destination. An RREP (route reply) packet is sent from the receiver upon receiving an RREQ. AOMDV finds multiple paths for a single source–destination pair. Those routes might include weak links, which may lead to numerous route failures. Numerous route failures increase the number of route discoveries and in turn increase the routing overhead of the network. Minimizing the routing overhead of the network is the primary focus of the Stability based AOMDV protocol. When a node receives an RREQ packet, it determines the signal strength of the link and forwards the RREQ packet only if the link has sufficient received signal strength. Therefore, links with lower received signal strength do not participate in the formation of a route. A predetermined threshold is used to determine whether the link has enough received signal strength: if the received signal strength of the RREQ is greater than the threshold, the RREQ packet is processed; otherwise, it is discarded. The algorithm for processing an RREQ is shown in Fig. 1.
Fig. 1 Processing RREQ in stability based AOMDV
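As an illustration only (the stability check is implemented inside the simulator's AOMDV code, which is not reproduced here), the following C++-style sketch shows the received-signal-strength gate described above; the threshold value and structure/function names are assumptions for the sketch.

```cpp
#include <cstdint>

// Illustrative RREQ representation; a real packet carries many more fields.
struct RreqPacket {
  uint32_t sourceAddr;
  uint32_t destAddr;
  uint32_t broadcastId;
  double   rxSignalStrength;  // received signal strength measured at the physical layer
};

// Predetermined threshold below which a link is considered too weak to use
// (assumed value, purely illustrative).
static const double RX_THRESHOLD = 3.652e-10;

bool shouldProcessRreq(const RreqPacket& rreq) {
  // Stability based AOMDV: only links with sufficient received signal
  // strength take part in route formation; weaker links drop the RREQ.
  return rreq.rxSignalStrength > RX_THRESHOLD;
}

void onReceiveRreq(const RreqPacket& rreq) {
  if (!shouldProcessRreq(rreq)) {
    return;  // discard the RREQ: the link is too weak to form a stable route
  }
  // ... otherwise continue with normal AOMDV handling: update the reverse
  // path, reply with an RREP if this node is the destination (or knows a
  // valid route), or rebroadcast the RREQ.
}
```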

3 Performance Metrics and Simulation Results NS2 [12] is the simulation tool that has been used to simulate both Stability Based AOMDV and the existing AOMDV algorithm; the simulation parameters are listed in Table 1. The packet delivery fraction (PDF) is a function of the number of packets received and the number of packets sent. Figure 2 displays the packet delivery fraction for Stability based AOMDV and AOMDV. Stability based AOMDV appears to accomplish better performance than AOMDV in terms of packet delivery fraction. From Fig. 2, it can be seen that the PDF changes as the mobile node speed increases. The total data transmitted per second is known as throughput. As shown in Fig. 3, the throughput obtained from the simulation of Stability Based AOMDV is compared with that of AOMDV by varying the mobile node speed from 0 to 20 m/s. Stability based AOMDV achieves higher throughput than AOMDV.
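For clarity, a small helper (illustrative only; in practice these counts are extracted from NS2 trace files) showing how the two metrics are computed:

```cpp
#include <cstdint>

// Packet delivery fraction: received packets as a percentage of sent packets.
double packetDeliveryFraction(uint64_t packetsReceived, uint64_t packetsSent) {
  if (packetsSent == 0) return 0.0;
  return 100.0 * static_cast<double>(packetsReceived) / packetsSent;
}

// Throughput: total data delivered per second of simulated time (e.g. bits/s).
double throughput(uint64_t bitsDelivered, double simulationSeconds) {
  if (simulationSeconds <= 0.0) return 0.0;
  return bitsDelivered / simulationSeconds;
}
```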


Fig. 2 Packet delivery fraction against nodes speed

Fig. 3 Throughput against node speed

The number of packets dropped by Stability based AOMDV is low compared to AOMDV. With increasing node speed, the number of packets dropped increases for both protocols, but Stability based AOMDV drops fewer packets than AOMDV (Fig. 4). The average end-to-end delay is the average of the delays incurred in transmitting each data packet. Figure 5 shows the simulation results for the average end-to-end delay obtained for Stability based AOMDV and AOMDV against various node speeds from 0 to 20 m/s. Stability based AOMDV has a lower average end-to-end delay compared to AOMDV.

Fig. 4 Packets dropped against nodes speed

Fig. 5 Average end-to-end delay against nodes speed

Table 1 Simulation parameters
Parameter | Value
Number of nodes | 16
Mobility model | Random waypoint
Propagation model | Shadowing
MAC layer | 802.11
Simulation time | 2060 s
Transmission protocol | UDP
Routing protocol | AOMDV, Stability based AOMDV


4 Conclusions In this manuscript, the performance of the Stability Based AOMDV and AOMDV protocols is compared. The results suggest that Stability based AOMDV has a lower routing load compared to the AOMDV protocol. The provision of stable paths in Stability based AOMDV reduces the number of route failures, which in turn decreases the number of route discoveries. In Stability Based AOMDV, RREQ packets arriving over weak links are discarded, which also reduces the routing load of the network. Because of the reduced routing load, Stability Based AOMDV achieves a better packet delivery fraction, end-to-end delay, and throughput compared with AOMDV.

References
1. Trung, H. D., Benjapolakul, W., & Duc, P. M. (2007). Performance evaluation and comparison of different ad hoc routing protocols. Bangkok, Thailand: Department of Electrical Engineering, Chulalongkorn University.
2. Perkins, C. E., Belding-Royer, E. M., & Das, S. R. (2003, February). Ad hoc on-demand distance vector. Mobile Ad Hoc Networking Working Group, Internet Draft.
3. Marina, M. K., & Das, S. R. (2001). On-demand multipath distance vector routing in ad hoc networks. In International Conference on Network Protocols (ICNP).
4. Sargolzaey, H., Moghanjoughi, A. A., & Khatun, S. (2009, January). A review and comparison of reliable unicast routing protocols for mobile ad hoc networks. International Journal of Computer Science and Network Security.
5. Pathak, S., & Jain, S. (2013). A survey: On unicast routing protocols for mobile ad hoc network. International Journal of Emerging Technology and Advanced Engineering, 3(1), 204–210.
6. Pathak, S., Dutta, N., & Jain, S. (2014). An improved cluster maintenance scheme for mobile ad hoc networks. In IEEE 2014 International Conference on Advances in Computing, Communications and Informatics (pp. 2117–2121). https://doi.org/10.1109/icacci.2014.6968281.
7. Pathak, S., & Jain, S. (2016). A novel weight based clustering algorithm for routing in MANET. Wireless Networks, 22(8), 2695–2704. https://doi.org/10.1007/s11276-015-1124-8.
8. Pathak, S., & Jain, S. (2017). An optimized stable clustering algorithm for mobile ad hoc networks. EURASIP Journal on Wireless Communications and Networking, (1), 1–11. https://doi.org/10.1186/s13638-017-0832-4.
9. Pathak, S., & Jain, S. (2019). A priority-based weighted clustering algorithm for mobile ad hoc network. International Journal of Communication Networks and Distributed Systems, 22(3), 313–328. https://doi.org/10.1504/IJCNDS.2019.098872.
10. Pathak, S., & Jain, S. (2019). Comparative study of clustering algorithms for MANETs. Journal of Statistics and Management Systems (JSMS), 22(4), 653–664. https://doi.org/10.1080/09720510.2019.1609723 (Taylor & Francis).
11. Pathak, S., Dutta, N., & Jain, S. (2014). An improved cluster maintenance scheme for mobile ad hoc networks. In IEEE International Conference on Advances in Computing, Communications and Informatics (ICACCI) (pp. 2117–2121).
12. Fall, K., & Vardhan, K. The Network Simulator (ns-2). Available: http://www.isi.edu/nsnam/ns.

IoT-Based Automatic Irrigation System Using Robotic Vehicle Sakshi Gupta, Sharmila, and Hari Mohan Rai

1 Introduction The plant is an indispensable part of life. The presence of plants in nature ensures the presence of oxygen in the surroundings. The government and many NGOs organize programs and seminars to educate people about the usefulness of the plants around us, thus motivating them to take part in plantation drives. The Forest Departments [1] of the various states reserve the medians between highways for plantation. These plants act as blinders to block light and distractions when vehicles move fast in close proximity in opposite directions. Along with plantation, however, it is also necessary to water the planted plants regularly for their survival. In general, it is easy to water the plants present in our house or organization, but the presence of a human is necessary. Watering the plants present in the medians of highways is a tedious job, and the watering techniques used to date are either stationary or not portable, and they are very costly [2]. The proposed model addresses this need by providing a cost-effective solution to the problem. In this system, the robot receives the command to water a plant from the soil moisture sensor placed in each plant individually. The plant pots are kept at predefined locations. Each moisture sensor sends a signal indicating the need for water to the Arduino Uno placed near the plants, and this Arduino Uno sends the signal to the Arduino Uno of the robot using Bluetooth technology. According to the signal received, the robot proceeds to each plant to provide water.
S. Gupta (B) · Sharmila · H. M. Rai, Krishna Engineering College, Ghaziabad, UP, India. e-mail: [email protected]; Sharmila e-mail: [email protected]; H. M. Rai e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021. D. Goyal et al. (eds.), Information Management and Machine Intelligence, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-4936-6_73


But before proceeding, the robot checks for obstacles in its way using the ultrasonic sensor; in case of an obstacle, the robot triggers the buzzer. Using the Android application "Arduino Bluetooth Control" designed by Broxcode, France [3], the movement of the robot can also be controlled remotely using the mobile phone's Bluetooth. After watering the plants needing moisture, the robot automatically returns to its initial position. This robot can also be used to sprinkle pesticides on indicated plants.

1.1 Literature Survey A system named "A Robotic Plant Care System" was designed by Kevin Sikorski from the University of Washington [4]. This system used laser light to locate the plant pot and to avoid obstacles, which increased the cost of the model to about $9000. Another system, "IoT based autonomous percipient irrigation system using Raspberry Pi," was developed by Ahmed Imteaj et al. [5]. This system uses a Raspberry Pi to control the system; along with the basic goal of providing water to the plants, it notifies the user about shortages in the water supply. However, it has no provision for avoiding obstacles in the path of the robot, and the use of the Raspberry Pi makes the system costly. Additional work in this field has been done by Ankita Patil, Mayur Beldar, Akshay Naik, and Sachin Deshpande, published under the name "Smart farming using Arduino and data mining" [6, 7]. They introduced an Arduino-based plant watering system with an Android application which helps them control the Arduino via the Internet. The main contribution of this paper is to overcome the shortcomings of the above three models; we have designed an improved, cost-effective prototype with the additional facility of avoiding obstacles and controlling the robot through voice commands.

2 Proposed Prototype The complete system is divided into two modules: the Robot Unit and the Irrigation Unit. The components used in designing the whole system are two Arduino Unos (one for each module), moisture sensors (one for each plant pot), one rain sensor, one ultrasonic sensor, a motor driver, a relay, a DC motor, and some consumables, shown in Fig. 1. In this prototype, three plants are considered, and thus three moisture sensor signals are monitored. To program the Arduino Uno, the Arduino IDE software downloaded from the official website [2] is used. The Android application "Arduino Bluetooth Control" is used to control the robot's movement remotely.


Fig. 1 Components required

2.1 Irrigation Unit A moisture sensor is inserted in the soil of each plant of the system; it senses the moisture content of the soil. The moisture sensor has two legs connected to the positive and negative terminals, respectively. The moisture present in the soil completes the circuit, and the feedback signal is sent to the ATmega328 microcontroller on the Arduino Uno. The rain sensor works on the same principle. Both the moisture sensor and the rain sensor provide digital data to the Arduino Uno: in the presence of moisture, logic 1 is obtained; otherwise, the default signal is logic 0. The working of the Irrigation Unit is summarized in the flowchart shown in Fig. 2. The value of the rain sensor is checked first. If the value of the rain sensor is logic 0, it is raining outside and the plants will get natural rainwater if needed; the Arduino keeps checking the rain sensor until its value becomes logic 1, which means it is not raining. The Arduino then checks the values of all the moisture sensors, and the corresponding signal is sent through the Bluetooth module HC-05 to the Robot Unit. For example, if moisture sensor 1 (MS1) gives logic 1 and the other two moisture sensors (MS2 and MS3) give logic 0, the signal sent over Bluetooth is '5'. Details of the signal combinations are given in Table 1. In the circuit of the irrigation side shown in Fig. 3, the Bluetooth module is connected in data mode with the Arduino Uno: the receiver pin of the Bluetooth module is connected to the transmitter pin of the Arduino Uno (here pin D11), and the transmitter pin of the Bluetooth module is connected to the receiver pin of the Arduino Uno (here pin D10). The moisture sensors MS1, MS2, MS3 and the rain sensor are connected to the digital pins D3, D4, D5, and D6 of the Arduino Uno, respectively. Depending upon the different values of the three moisture sensors and the rain sensor, eight combinations are formed, as shown in Table 1. Accordingly, the Arduino Uno of the Irrigation Unit sends one of the eight different signals through Bluetooth to the Robot Unit [8].

Fig. 2 Flowchart of Irrigation Unit

Table 1 Bluetooth signals decoding
Rain sensor | MS1 | MS2 | MS3 | Bluetooth signal
1 | 0 | 0 | 0 | 1
1 | 0 | 0 | 1 | 2
1 | 0 | 1 | 0 | 3
1 | 0 | 1 | 1 | 4
1 | 1 | 0 | 0 | 5
1 | 1 | 0 | 1 | 6
1 | 1 | 1 | 0 | 7
1 | 1 | 1 | 1 | 8
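A minimal Arduino-style sketch of the Irrigation Unit logic described above is given below (illustrative, not the authors' code). It assumes the stated pin mapping (MS1–MS3 on D3–D5, rain sensor on D6, HC-05 on D10/D11) and encodes the readings into the codes 1–8 of Table 1; the polling delays are assumptions.

```cpp
#include <SoftwareSerial.h>

// Pin mapping from the circuit description: MS1=D3, MS2=D4, MS3=D5, rain sensor=D6.
const int MS1_PIN = 3, MS2_PIN = 4, MS3_PIN = 5, RAIN_PIN = 6;
// HC-05 in data mode: Arduino RX on D10 (to HC-05 TXD), Arduino TX on D11 (to HC-05 RXD).
SoftwareSerial bt(10, 11);

void setup() {
  pinMode(MS1_PIN, INPUT);
  pinMode(MS2_PIN, INPUT);
  pinMode(MS3_PIN, INPUT);
  pinMode(RAIN_PIN, INPUT);
  bt.begin(9600);  // default HC-05 baud rate
}

void loop() {
  // While the rain sensor reads logic 0 it is raining; plants get natural water.
  if (digitalRead(RAIN_PIN) == LOW) {
    delay(1000);
    return;
  }

  int ms1 = digitalRead(MS1_PIN);
  int ms2 = digitalRead(MS2_PIN);
  int ms3 = digitalRead(MS3_PIN);

  // Encode the three sensor readings into the codes 1..8 of Table 1
  // (e.g. MS1=1, MS2=0, MS3=0 with no rain gives code 5).
  int code = 4 * ms1 + 2 * ms2 + ms3 + 1;
  bt.println(code);  // send the code to the Robot Unit over Bluetooth

  delay(5000);       // re-check the sensors periodically
}
```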

Fig. 3 Circuit Diagram of Irrigation Unit

2.2 Robot Unit The Robot Unit has a Bluetooth module HC-05 that receives the signal from the Irrigation Unit. This unit includes a motor driver to drive the robot, an ultrasonic sensor to sense any obstacle in the path, a buzzer to notify about the presence of an obstacle, a water container, and a relay to switch on the pump to provide water. The working of the Robot Unit is summarized in the flowchart shown in Fig. 4. The received Bluetooth signal is decoded using Table 1, and the values of the three variables MS1, MS2, and MS3 are obtained accordingly; these variables represent the digital values obtained from the moisture sensors. The robot then checks the value of the obstacle sensor. In case of any obstacle, the Arduino triggers the buzzer and keeps checking until the obstacle is cleared, after which the robot moves forward. It then checks the value of moisture sensor 1: if the soil is dry, the relay is switched on and the pump starts. If the value of moisture sensor 1 is '0', the robot again checks for obstacles on the way to the next plant, and the cycle repeats for all three plants. After all three plants are watered, the robot goes back to its initial position. The circuit diagram of the Robot Unit is shown in Fig. 5. The Bluetooth module HC-05 can work in Master or Slave mode. For two HC-05 modules to communicate, one of them needs to be configured as Master and the other as Slave; by default the HC-05 works in Slave mode, and AT commands are used to configure it in Master mode. One Master can communicate with more than one Slave module. The only difference in the working of Master and Slave is that communication between the two is always initiated by the Master. Here the HC-05 of the robot is the Master and that of the Irrigation Unit is the Slave.
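The following is an illustrative Arduino-style sketch (not the authors' code) of the Robot Unit's decision loop for one watering round; the pin numbers, the 30 cm obstacle threshold, the watering duration, and the commented-out movement helpers are assumptions.

```cpp
#include <SoftwareSerial.h>

// Illustrative pin assignments for the Robot Unit (not taken from the paper).
const int TRIG_PIN = 2, ECHO_PIN = 3;  // ultrasonic obstacle sensor
const int BUZZER_PIN = 8;
const int RELAY_PIN = 7;               // relay driving the water pump
SoftwareSerial bt(10, 11);             // HC-05 configured as Master (RX, TX)

int ms[3] = {0, 0, 0};                 // decoded states of MS1..MS3

long obstacleDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long t = pulseIn(ECHO_PIN, HIGH, 30000UL);
  return (t * 0.0343) / 2;
}

// Decode a received code 1..8 back into MS1..MS3 (inverse of Table 1).
void decodeSignal(int code) {
  int v = code - 1;
  ms[0] = (v >> 2) & 1;  // MS1
  ms[1] = (v >> 1) & 1;  // MS2
  ms[2] = v & 1;         // MS3
}

void waterPlant(int index) {
  // Sound the buzzer and wait while an obstacle is closer than ~30 cm (assumed threshold).
  long d = obstacleDistanceCm();
  while (d > 0 && d < 30) {
    digitalWrite(BUZZER_PIN, HIGH);
    delay(200);
    d = obstacleDistanceCm();
  }
  digitalWrite(BUZZER_PIN, LOW);
  // driveToPlant(index);            // motor-driver movement, omitted in this sketch
  if (ms[index] == 1) {              // plant reported as needing water
    digitalWrite(RELAY_PIN, HIGH);   // relay on -> pump runs
    delay(3000);
    digitalWrite(RELAY_PIN, LOW);
  }
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);   pinMode(ECHO_PIN, INPUT);
  pinMode(BUZZER_PIN, OUTPUT); pinMode(RELAY_PIN, OUTPUT);
  bt.begin(9600);
}

void loop() {
  if (bt.available()) {
    int code = bt.parseInt();        // code 1..8 sent by the Irrigation Unit
    if (code >= 1 && code <= 8) {
      decodeSignal(code);
      for (int i = 0; i < 3; i++) waterPlant(i);
      // returnToStart();            // drive back to the initial position, omitted
    }
  }
}
```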


Fig. 4 Flow Chart of Robot Unit

2.3 Voice Control If an obstacle is present in the path of the robot, the Arduino triggers the buzzer and the master Bluetooth module on the robot sends an initiating signal to the Bluetooth device in the user's mobile phone. The user then uses the Android application "Arduino Bluetooth Control" to control the movement of the robot.


Fig. 5 Circuit Diagram of Robot Unit

3 Result and Discussion The complete autonomous system has been developed which can be used to water the plant placed indoor or outdoor. Figure 6 shows the image of the irrigation unit. Figure 7 shows the image of the Robot Unit. The Robot is capable of performing three main functions, i.e. getting the information of the plants about the need water, positioning the plant, and providing water to the plant. Moreover, the Robot is having an advanced feature of getting controlled remotely by a human in case of any mishap. Figure 8 shows the image of the complete model. To increase the number of moisture sensors, an Encoder can be connected between the Moisture Sensors and Arduino Uno. Ten digital I/O pins are available for connecting moisture sensors. The use of Encoder will allow us to interface 210 (=1024) Moisture Sensors with Arduino Uno. Fig. 6 Proposed Irrigation Unit

Fig. 7 Proposed robot unit

Fig. 8 Proposed system


4 Conclusion The automated irrigation system proposed in this paper takes care of plants, which indirectly increases productivity. The design of the proposed system can also be made simple and inexpensive. Due to the development of sensor technology, the proposed system will be more efficient and beneficial for agriculture. In conclusion, the proposed system is completely automated, helps to use water wisely, and requires no human intervention. The proposed model will help in modernizing agriculture by reducing expenditure, manpower, and water, thus leading to enhanced profit.

References
1. Naik, P., Kumbi, A., Vishwanath, H., Chaitra, N. K., Pavitra, H. K., Sushma, B. S., et al. (2017). Arduino based automatic irrigation system using IoT. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 881–886.
2. https://www.arduino.cc/en/Main/Donate.
3. https://play.google.com/store/apps/details?id=com.broxcode.arduinobluetoothfree&hl=en.
4. Sikorski, K. (2003). Thesis—A robotic plant care system. University of Washington, Intel Research.
5. Imteaj, A., Rahman, T., Hossain, M. K., & Zaman, S. (2016, 18–20 December). IoT based autonomous percipient irrigation system using Raspberry Pi. In 2016 19th International Conference on Computer and Information Technology (ICCIT) (pp. 563–568). https://doi.org/10.1109/iccitechn.2016.7860260.
6. Patil, A., Beldar, M., Naik, A., & Deshpande, S. (2016). Smart farming using Arduino and data mining. In 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom) (pp. 1913–1917).
7. Ogidan, O., Onile, A., & Adegboro, O. G. (2019). Smart irrigation system: A water management procedure. Agricultural Sciences Journal, 10(01), 25–31.
8. Ashwini, et al. (2018). Automatic irrigation system using Arduino. International Research Journal of Engineering and Technology (IRJET), 5(10).